tinycompress: fix compress_read API #28
Conversation
In blocking mode: read blocks until it has read some bytes. In non-blocking mode: read as many bytes as are available, if any.
Is this really fixing anything? It looks like this just changes the behaviour from blocking until `size` bytes are read to blocking until at least `fragment_size` bytes are read, and it doesn't really explain why the new behaviour is superior. In many ways the old behaviour feels more useful; you can always set `size = fragment_size` when you call the function if you want to emulate this new behaviour.
In the current implementation, compress_read returns a stream of compressed data, which may include a partial compressed frame. This behaviour makes it difficult for the client to determine the boundaries of individual compressed frames, which can lead to issues in processing or decoding the data correctly. It may be necessary to update the API documentation to clearly reflect this behaviour and its implications for clients relying on frame boundary detection.
I mean I really don't think one should be using the amount of bytes read to determine the boundaries of compressed frames, that seems super fragile.
I think the goal of the API is to read a single compressed frame per read call. However, with the existing …
The API wasn't really implemented with the intention of pulling a single frame per call; the expectation was really that frames would be reconstructable from the data format. Relying on the amount of data read to work out what constitutes a frame seems like bad system design. However, if you guys require such functionality, I think I would vote for adding a new read-single-frame function for this purpose. That makes it clearer what is happening and doesn't impact systems using the existing functionality.
Looking again, perhaps an additional mode-setting call like compress_nonblock would make more sense than a separate read function.
@plbossart |
In blocking mode:
read blocks until it has read some bytes.
In non-blocking mode:
read as many bytes as are available, if any.