
S3 put_object should accept a block to facilitate chunked writes #3142

Open
@ezekg

Description


Describe the feature

After using get_object's chunked read, I assumed put_object similarly supported chunked writing:

client.put_object(bucket: blob.bucket, key: blob.key) do |buffer|
  while chunk = blob.read(16 * 256)
    buffer << chunk
  end
end

For reference, get_object supports this:

client.get_object(bucket: blob.bucket, key: blob.key) do |chunk|
  buffer << chunk
end

But this isn't currently supported and results in an empty object, since the block is ignored.

Use Case

I want to write an IO to S3 while maintaining a low memory footprint, with explicit control over how much I read for each chunk. I do not want to rely on S3 internals to choose how large my chunks should be.
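Until something like this lands, the explicit-chunk loop from the example above can already be written against the SDK's existing streaming helper, `Aws::S3::Object#upload_stream` from aws-sdk-s3, which yields a writable stream and performs a multipart upload (with parts of at least 5 MiB, so S3 still controls part size, though not read size). A minimal sketch of the loop, with a `StringIO` standing in for both `blob` and the yielded write stream so it runs without AWS credentials:

```ruby
require "stringio"

# Stand-in for the issue's `blob`; in real use this is any readable IO.
blob = StringIO.new("x" * 10_000)

# With aws-sdk-s3 the surrounding call would be:
#   obj = Aws::S3::Resource.new.bucket(bucket_name).object(key)
#   obj.upload_stream do |write_stream|
#     ... the same loop as below ...
#   end
# Here a StringIO stands in for `write_stream` so the loop is runnable.
write_stream = StringIO.new

# Explicit 4 KiB reads, mirroring the issue's example.
while (chunk = blob.read(16 * 256))
  write_stream << chunk
end

write_stream.rewind
```

`upload_stream` spools writes into multipart parts internally, so memory use stays bounded by the part size rather than the object size.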

Proposed Solution

As with get_object, allow put_object to accept a block, yielding the internal request body.
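One possible shape for this, purely illustrative and not SDK code: put_object could yield a writable buffer to the block and then use it as the request body. The sketch below buffers in memory for simplicity (`put_object_with_block` is a hypothetical helper; a real implementation would stream rather than buffer to preserve the low-memory goal):

```ruby
require "stringio"

# Hypothetical sketch of what the SDK might do internally.
def put_object_with_block(params)
  buffer = StringIO.new
  yield buffer               # caller writes chunks, as in the issue's example
  buffer.rewind
  params.merge(body: buffer) # body would then be sent as the request payload
end

blob = StringIO.new("y" * 8_192)
req = put_object_with_block(bucket: "b", key: "k") do |buf|
  while (chunk = blob.read(16 * 256))
    buf << chunk
  end
end
```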

Other Information

No response

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

SDK version used

1.113.0

Environment details (OS name and version, etc.)

Linux 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Labels

feature-request (A feature should be added or improved) · needs-major-version (Can only be considered for the next major release) · p2 (This is a standard priority issue)
