[GOBBLIN-2223] Optimise writing of serialised Work Unit to File system #4133
Changes from 1 commit
```diff
@@ -19,7 +19,6 @@
 import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.IOException;
-import java.nio.charset.StandardCharsets;


 /**
```
```diff
@@ -31,20 +30,21 @@ public class TextSerializer {
   * Serialize a String using the same logic as a Hadoop Text object
   */
  public static void writeStringAsText(DataOutput stream, String str) throws IOException {
-    byte[] utf8Encoded = str.getBytes(StandardCharsets.UTF_8);
-    writeVLong(stream, utf8Encoded.length);
-    stream.write(utf8Encoded);
+    writeVLong(stream, str.length());
+    stream.writeBytes(str);
  }
```

**Review comment** on `stream.writeBytes(str)`: I think this is a good suggestion - https://www.cs.helsinki.fi/group/boi2016/doc/java/api/java/io/DataOutput.html#writeBytes-java.lang.String-
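Worth noting what `DataOutput#writeBytes(String)` actually does: per its javadoc, it writes only the low-order byte of each char, silently dropping the high byte. A minimal standalone sketch of that behavior (the class name `WriteBytesDemo` is purely illustrative):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class WriteBytesDemo {
  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    // 'é' (U+00E9) and '世' (U+4E16): one char each, but multi-byte in UTF-8.
    new DataOutputStream(buf).writeBytes("\u00e9\u4e16");

    // writeBytes keeps only the low-order byte of each char:
    // 0x00E9 -> 0xE9, 0x4E16 -> 0x16. The high bytes are discarded.
    for (byte b : buf.toByteArray()) {
      System.out.printf("%02X ", b);  // prints: E9 16
    }
  }
}
```

So the optimization is only lossless for strings whose chars all fit in a single byte; anything beyond that is truncated on the write path before the read path ever sees it.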
**Review comment (Copilot)** on `writeStringAsText`: The method name suggests Hadoop Text compatibility, but the implementation is no longer compatible with Hadoop's Text serialization format, which uses UTF-8 byte encoding. This could break interoperability with Hadoop systems.
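For reference, Hadoop's Text wire format is a vint byte count followed by the UTF-8 bytes. A quick comparison sketch, assuming `hadoop-common` is on the classpath (the class name `TextCompatCheck` and the use of `writeByte` as a stand-in for `writeVLong` are illustrative assumptions):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import org.apache.hadoop.io.Text;

public class TextCompatCheck {
  public static void main(String[] args) throws IOException {
    String s = "caf\u00e9";  // "café": 4 chars, 5 UTF-8 bytes

    // What Hadoop itself writes: a vint byte count plus the UTF-8 bytes.
    ByteArrayOutputStream hadoop = new ByteArrayOutputStream();
    Text.writeString(new DataOutputStream(hadoop), s);

    // What the patched writeStringAsText writes: a char count plus the
    // low byte of each char. (writeByte stands in for writeVLong; both
    // emit a single byte for small non-negative values.)
    ByteArrayOutputStream patched = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(patched);
    out.writeByte(s.length());
    out.writeBytes(s);

    System.out.println(hadoop.size());   // 6 = 1 length byte + 5 payload bytes
    System.out.println(patched.size());  // 5 = 1 length byte + 4 payload bytes
  }
}
```

The two encodings diverge as soon as the string contains any non-ASCII character, so a Hadoop `Text` reader can no longer decode what this writer produces.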
```diff
  /**
   * Deserialize a Hadoop Text object into a String
   */
  public static String readTextAsString(DataInput in) throws IOException {
-    int bufLen = (int)readVLong(in);
-    byte[] buf = new byte[bufLen];
-    in.readFully(buf);
-    return new String(buf, StandardCharsets.UTF_8);
+    int bufLen = (int) readVLong(in);
+    StringBuilder sb = new StringBuilder();
+    for (int i = 0; i < bufLen; i++) {
+      sb.append((char) in.readByte());
+    }
+    return sb.toString();
  }

  /**
```

**Review comment (Copilot)** on `sb.append((char) in.readByte())`: Casting a byte directly to char will produce incorrect results for multi-byte UTF-8 characters. This approach only works correctly for ASCII characters (0-127) and will corrupt Unicode text.
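To make the corruption concrete, here is a small self-contained reproduction of the byte-to-char cast (hypothetical `ByteCastDemo` class), fed the UTF-8 bytes a format-compliant writer would have produced:

```java
import java.nio.charset.StandardCharsets;

public class ByteCastDemo {
  public static void main(String[] args) {
    String original = "\u00e9";  // "é": 1 char, 2 UTF-8 bytes (0xC3, 0xA9)
    byte[] utf8 = original.getBytes(StandardCharsets.UTF_8);

    // Rebuild the string the way the patched readTextAsString does:
    // one (char) cast per byte, no UTF-8 decoding.
    StringBuilder sb = new StringBuilder();
    for (byte b : utf8) {
      // Sign extension makes this worse than plain mojibake:
      // (byte) 0xC3 is -61, which casts to the char U+FFC3.
      sb.append((char) b);
    }

    System.out.println(sb.toString().equals(original));  // false
    System.out.println(sb.length());                     // 2 chars instead of 1
  }
}
```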
**Review comment (Copilot)** on `writeVLong(stream, str.length())`: Using `str.length()` for the length will cause deserialization errors for multi-byte UTF-8 characters. The length should represent the number of bytes, not the number of characters. Multi-byte UTF-8 characters will have different byte lengths than character counts.
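The character-count versus byte-count mismatch is easy to demonstrate (hypothetical `LengthMismatch` class); any non-ASCII payload desynchronizes a reader that expects Hadoop's byte-count prefix:

```java
import java.nio.charset.StandardCharsets;

public class LengthMismatch {
  public static void main(String[] args) {
    String s = "d\u00e9j\u00e0 vu";  // "déjà vu"

    int charCount = s.length();                                 // 7
    int byteCount = s.getBytes(StandardCharsets.UTF_8).length;  // 9

    // A Hadoop Text reader treats the prefix as a byte count, so a writer
    // that records 7 leaves 2 payload bytes unread, and every subsequent
    // field in the stream is then read from the wrong offset.
    System.out.println(charCount + " chars vs " + byteCount + " bytes");
  }
}
```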