tinycompress: fix compress_read API #28


Open: wants to merge 2 commits into master

Conversation

quic-ajender

In blocking mode:
  read is blocked until it reads some bytes.
In non-blocking mode:
  read returns as many bytes as are available, if any.
@charleskeepax
Contributor

Is this really fixing anything? It looks like this just changes the behaviour from blocking until size bytes are read to blocking until >fragment_size bytes are read, and it doesn't really explain why the new behaviour is superior. In many ways the old behaviour feels more useful; you can always set size = fragment_size when you call the function if you want to emulate this new behaviour.

@quic-ajender
Author

quic-ajender commented Jun 19, 2025

The compress_read API reads compressed audio data, specifically one compressed frame at a time (although not stated explicitly). Typically, a fixed number of PCM samples X are compressed into a variable number of bytes Y, forming a single compressed frame. However, the size of the resulting compressed frame is not predictable by the client.

In the current implementation, compress_read reads a stream of compressed data, which may include a partial compressed frame. This behavior makes it difficult for the client to determine the boundaries of individual compressed frames, which can lead to issues in processing or decoding the data correctly.

It may be necessary to update the API documentation to clearly reflect this behavior and its implications for clients relying on frame boundary detection.

@charleskeepax
Contributor

I mean I really don't think one should be using the number of bytes read to determine the boundaries of compressed frames; that seems super fragile.

@quic-ajender
Author

I think the goal of the API is to read a single compressed frame per read call. However, with the existing compress_read implementation, it tries to read a given number of bytes (size). How can the user predict the size of a compressed frame?

@charleskeepax
Contributor

The API wasn't really implemented with the intention of pulling a single frame per call; the expectation was that frames would be reconstructable from the data format. Relying on the amount of data read to work out what constitutes a frame seems like bad system design. However, if you guys require such functionality, I would vote for adding a new read-single-frame function for this purpose. That makes it clearer what is happening and doesn't impact systems using the existing functionality.

@charleskeepax
Contributor

Looking again, perhaps an additional mode-setting call like compress_nonblock would make more sense than a separate read function.

@vinodkoul
Contributor

> The API wasn't really implemented with the intention of pulling a single frame per call; the expectation was that frames would be reconstructable from the data format. Relying on the amount of data read to work out what constitutes a frame seems like bad system design. However, if you guys require such functionality, I would vote for adding a new read-single-frame function for this purpose. That makes it clearer what is happening and doesn't impact systems using the existing functionality.

@plbossart
Well, at least the initial design was to have compress reads be performed at frame boundaries; that way, framing information can be captured and stored in the file.
For a different usage where we don't actually record compressed audio but record debug data, this may not be required...

3 participants