fix image cache overhead #930
Changes from 1 commit
@@ -5,17 +5,19 @@

def tensor2bytes(t: torch.Tensor):
-    # t = t.cpu().numpy().tobytes()
-    # return t
-    buf = BytesIO()
-    torch.save(t.detach().cpu(), buf)
-    buf.seek(0)
-    return buf.read()
-
-
-def bytes2tensor(b):
-    # return torch.from_numpy(np.frombuffer(b, dtype=np.float16)).cuda()
-    return torch.load(BytesIO(b))
+    if t.dtype == torch.float32:
+        t = t.cpu().numpy().tobytes()
+    else:
+        t = t.cpu().to(torch.uint16).numpy().tobytes()
+    return t
+
+
+def bytes2tensor(b, torch_dtype=torch.bfloat16):
+    if torch_dtype == torch.float32:
+        arr_loaded = np.frombuffer(b, dtype=np.float32)
+    else:
+        arr_loaded = np.frombuffer(b, dtype=np.uint16)
+    return torch.from_numpy(arr_loaded).to(torch_dtype)
Review comment on bytes2tensor: the suggested body reinterprets the uint16 buffer bit-wise with view(torch_dtype) instead of value-casting it with to(torch_dtype):

    if torch_dtype == torch.float32:
        arr_loaded = np.frombuffer(b, dtype=np.float32)
        return torch.from_numpy(arr_loaded)
    elif torch_dtype == torch.float16 or torch_dtype == torch.bfloat16:
        arr_loaded_uint16 = np.frombuffer(b, dtype=np.uint16)
        return torch.from_numpy(arr_loaded_uint16.copy()).view(torch_dtype)
    else:
        raise TypeError(f"Unsupported torch_dtype for bytes2tensor: {torch_dtype}. This function is optimized for float32, float16, bfloat16.")


def create_shm(name, data):
Review comment on tensor2bytes: for non-float32 types, the tensor2bytes function uses t.cpu().to(torch.uint16), which performs a value cast instead of a bit-wise reinterpretation, leading to data loss. Use t.cpu().contiguous().view(torch.uint16) to preserve the bit representation. Explicitly check for torch.float16 and torch.bfloat16.
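A sketch of tensor2bytes along the lines of this comment (illustrative only, not the committed fix):

import torch

def tensor2bytes(t: torch.Tensor) -> bytes:
    t = t.detach().cpu().contiguous()
    if t.dtype == torch.float32:
        return t.numpy().tobytes()
    elif t.dtype in (torch.float16, torch.bfloat16):
        # view() preserves the bit pattern; to(torch.uint16) would truncate the float values to integers
        return t.view(torch.uint16).numpy().tobytes()
    else:
        raise TypeError(f"Unsupported dtype for tensor2bytes: {t.dtype}")

Paired with the view()-based bytes2tensor suggested above, bytes2tensor(tensor2bytes(t), t.dtype) would reproduce t exactly for float32, float16, and bfloat16 inputs.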