Merged
Changes from 1 commit
@@ -19,7 +19,6 @@
 import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.IOException;
-import java.nio.charset.StandardCharsets;


/**
@@ -31,20 +30,21 @@ public class TextSerializer {
* Serialize a String using the same logic as a Hadoop Text object
*/
     public static void writeStringAsText(DataOutput stream, String str) throws IOException {
-        byte[] utf8Encoded = str.getBytes(StandardCharsets.UTF_8);
-        writeVLong(stream, utf8Encoded.length);
-        stream.write(utf8Encoded);
+        writeVLong(stream, str.length());
+        stream.writeBytes(str);
Comment on lines +39 to +40
Copilot AI Sep 1, 2025


Using str.length() for the length will cause deserialization errors for multi-byte UTF-8 characters. The length should represent the number of bytes, not the number of characters. Multi-byte UTF-8 characters will have different byte lengths than character counts.

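The divergence is easy to reproduce. The sketch below (class name `Utf8LengthDemo` is illustrative, not part of this PR) compares `String.length()` with the UTF-8 byte count that the length prefix actually needs to describe:

```java
import java.nio.charset.StandardCharsets;

public class Utf8LengthDemo {
    public static void main(String[] args) {
        String ascii = "abc";
        String accented = "café"; // 'é' encodes to 2 bytes in UTF-8

        // For pure ASCII, char count and UTF-8 byte count agree.
        System.out.println(ascii.length());                                    // 3
        System.out.println(ascii.getBytes(StandardCharsets.UTF_8).length);     // 3

        // For non-ASCII text they diverge, so a reader that trusts
        // str.length() as the byte count will under-read the payload.
        System.out.println(accented.length());                                 // 4
        System.out.println(accented.getBytes(StandardCharsets.UTF_8).length);  // 5
    }
}
```

Any string with a character above U+007F makes the serialized length prefix smaller than the payload that follows.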


Copilot AI Sep 1, 2025


DataOutput.writeBytes() only writes the low 8 bits of each character, which will corrupt any characters outside the ASCII range (0-127). This breaks Unicode support that was previously handled by UTF-8 encoding.

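To make the truncation concrete, a small sketch (class name `WriteBytesDemo` is hypothetical) shows that `DataOutputStream.writeBytes` emits one byte per char, discarding the high byte, whereas UTF-8 encoding produces the correct two-byte sequence for 'é':

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class WriteBytesDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);

        // DataOutput.writeBytes keeps only the low 8 bits of each char:
        // 'é' (U+00E9) is written as the single byte 0xE9, not as the
        // two-byte UTF-8 sequence 0xC3 0xA9.
        out.writeBytes("é");
        byte[] written = bytes.toByteArray();
        System.out.println(written.length);        // 1
        System.out.println(written[0] & 0xFF);     // 233 (0xE9)

        byte[] utf8 = "é".getBytes(StandardCharsets.UTF_8);
        System.out.println(utf8.length);           // 2
    }
}
```

A lone 0xE9 byte is not even a valid UTF-8 sequence, so any UTF-8-aware reader on the other side will reject or mangle it.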

Member


I think this is a good suggestion - https://www.cs.helsinki.fi/group/boi2016/doc/java/api/java/io/DataOutput.html#writeBytes-java.lang.String-
Should we have some handling for this as well? @thisisArjit

for (int i = 0; i < str.length(); i++) {
    if (str.charAt(i) > 0x7F) {
        throw new IllegalArgumentException("Non-ASCII character detected.");
    }
}
writeVLong(stream, str.length());
stream.writeBytes(str); // writes 1 byte per character
}
Copilot AI Sep 1, 2025


The method name suggests Hadoop Text compatibility, but the implementation is no longer compatible with Hadoop's Text serialization format, which uses UTF-8 byte encoding. This could break interoperability with Hadoop systems.

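For reference, the removed UTF-8-based writer can be sketched in isolation (class name `TextCompatSketch` and the simplified `writeVLong` stub are illustrative; the project's real vlong helper is assumed to follow Hadoop's `WritableUtils` encoding, where values 0..127 happen to fit in a single byte):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class TextCompatSketch {
    // Stand-in for the project's writeVLong: Hadoop's vlong encoding is a
    // single byte for small values, which is all this demo needs.
    static void writeVLong(DataOutput out, long v) throws IOException {
        if (v < 0 || v > 127) throw new IllegalArgumentException("demo supports 0..127 only");
        out.writeByte((int) v);
    }

    // UTF-8 based writer, as in the removed code: the length prefix is
    // the BYTE count of the encoded payload, not the char count.
    static void writeStringAsText(DataOutput stream, String str) throws IOException {
        byte[] utf8Encoded = str.getBytes(StandardCharsets.UTF_8);
        writeVLong(stream, utf8Encoded.length);
        stream.write(utf8Encoded);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        writeStringAsText(new DataOutputStream(bytes), "café");
        byte[] out = bytes.toByteArray();
        System.out.println(out[0]);      // 5: byte length, not char count (4)
        System.out.println(out.length);  // 6: 1 length byte + 5 payload bytes
    }
}
```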


/**
* Deserialize a Hadoop Text object into a String
*/
public static String readTextAsString(DataInput in) throws IOException {
-        int bufLen = (int)readVLong(in);
-        byte[] buf = new byte[bufLen];
-        in.readFully(buf);
+        int bufLen = (int) readVLong(in);
+        StringBuilder sb = new StringBuilder();
-        return new String(buf, StandardCharsets.UTF_8);
+        for (int i = 0; i < bufLen; i++) {
+            sb.append((char) in.readByte());
Copilot AI Sep 1, 2025


Casting a byte directly to char will produce incorrect results for multi-byte UTF-8 characters. This approach only works correctly for ASCII characters (0-127) and will corrupt Unicode text.


+        }
+        return sb.toString();
}
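Putting both halves of the review together, the sketch below (class name `DecodeDemo` is illustrative, not from the PR) round-trips a UTF-8 payload with each strategy: the per-byte char cast from the new loop sign-extends negative bytes and corrupts the text, while whole-buffer UTF-8 decoding, as in the removed code, recovers it exactly:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class DecodeDemo {
    public static void main(String[] args) throws IOException {
        byte[] payload = "café".getBytes(StandardCharsets.UTF_8); // 5 bytes

        // Per-byte char casts corrupt multi-byte sequences: the cast
        // sign-extends, so (byte) 0xC3 becomes the char U+FFC3.
        StringBuilder sb = new StringBuilder();
        DataInput in = new DataInputStream(new ByteArrayInputStream(payload));
        for (int i = 0; i < payload.length; i++) {
            sb.append((char) in.readByte());
        }
        System.out.println(sb.toString().equals("café")); // false

        // Decoding the whole buffer as UTF-8 round-trips correctly.
        String decoded = new String(payload, StandardCharsets.UTF_8);
        System.out.println(decoded.equals("café")); // true
    }
}
```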

/**