Conversation
Pull request overview
This pull request replaces the external dusty HTTP library with a custom HTTP/1.1 client implementation that communicates directly with the Docker Engine over Unix domain sockets using std.net.connectUnixSocket. This eliminates both the dusty and zio dependencies.
Changes:

- Implemented a custom HTTP/1.1 client in `docker_client.zig` with buffered stream reading, header parsing, and chunked transfer encoding support
- Removed all `zio.Runtime` initialization code from tests, examples, and documentation
- Updated `wait.zig` to use `std.http.Client` for the HTTP wait strategy instead of dusty
- Removed external dependencies from `build.zig.zon` and `build.zig`
Reviewed changes
Copilot reviewed 14 out of 14 changed files in this pull request and generated 9 comments.
Show a summary per file
| File | Description |
|---|---|
| src/docker_client.zig | Added custom HTTP/1.1 implementation with HttpReader, request builder, response parser, and chunked encoding decoder; removed dusty client dependency |
| src/wait.zig | Replaced dusty client with std.http.Client for HTTP wait strategy |
| src/integration_test.zig | Removed zio runtime initialization from all tests |
| src/root.zig | Updated documentation to reflect removal of zio dependency requirement |
| examples/basic.zig | Removed zio runtime initialization; updated to use std.http.Client |
| build.zig.zon | Removed dusty dependency |
| build.zig | Removed dusty and zio imports from all build targets |
| README.md | Removed zio-related documentation |
| QUICKSTART.md | Removed zio runtime initialization instructions |
| PROJECT_SUMMARY.md | Updated dependency information |
| IMPLEMENTATION_GUIDE.md | Updated HTTP client documentation |
| ARCHITECTURE.md | Updated architecture diagrams and dependency information |
| AGENTS.md | Updated architecture rules |
src/docker_client.zig
Outdated
```zig
var hdr_buf: [4096]u8 = undefined;
var fbs = std.io.fixedBufferStream(&hdr_buf);
const w = fbs.writer();

try w.print("{s} {s} HTTP/1.1\r\n", .{ method.name(), path });
try w.print("Host: localhost\r\n", .{});
if (content_type) |ct| {
    try w.print("Content-Type: {s}\r\n", .{ct});
}
if (body) |b| {
    try w.print("Content-Length: {d}\r\n", .{b.len});
}
try w.print("Connection: close\r\n\r\n", .{});

try stream.writeAll(fbs.getWritten());
if (body) |b| {
    try stream.writeAll(b);
}
```
Potential buffer overflow: If the formatted HTTP request header exceeds 4096 bytes, the fixed buffer will overflow. Consider checking fbs.pos against hdr_buf.len before each write operation, or use an ArrayList to allow dynamic growth. Large Content-Type values or very long paths could trigger this.
@copilot open a new pull request to apply changes based on this feedback
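One hedged way to act on this feedback, sketched below: build the request head in a growable buffer instead of the fixed array, so long paths or Content-Type values cannot overflow. The `buildRequestHead` name and signature are hypothetical, not from the PR, and the sketch assumes a Zig version where the unmanaged `std.ArrayList(u8).writer(allocator)` API is available (the same API style the PR's `readChunkedBody` uses).

```zig
const std = @import("std");

// Hypothetical variant of the header-building code: a growable buffer
// replaces the fixed 4096-byte array. The caller owns and frees the
// returned slice, then writes it to the stream followed by the body.
fn buildRequestHead(
    allocator: std.mem.Allocator,
    method_name: []const u8,
    path: []const u8,
    content_type: ?[]const u8,
    body_len: ?usize,
) ![]u8 {
    var head: std.ArrayList(u8) = .empty;
    errdefer head.deinit(allocator);
    const w = head.writer(allocator);

    try w.print("{s} {s} HTTP/1.1\r\n", .{ method_name, path });
    try w.writeAll("Host: localhost\r\n");
    if (content_type) |ct| try w.print("Content-Type: {s}\r\n", .{ct});
    if (body_len) |n| try w.print("Content-Length: {d}\r\n", .{n});
    try w.writeAll("Connection: close\r\n\r\n");

    return head.toOwnedSlice(allocator);
}
```

Alternatively, the fixed buffer could stay and the fixed-buffer writer's `error.NoSpaceLeft` could be mapped to a descriptive `error.RequestHeaderTooLong`, which avoids any allocation at the cost of a hard size limit.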
```zig
fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
    if (meta.content_length) |cl| {
        if (cl == 0) return allocator.dupe(u8, "");
```
Unnecessary allocation: When content_length is 0, calling allocator.dupe(u8, "") creates an allocation for an empty string. This allocates memory that must be freed by the caller. Consider using a compile-time constant empty string slice instead, or document that callers must always free the returned slice even when empty.
```diff
+/// For a zero-length body, returns a shared empty slice that must not be freed.
 fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
     if (meta.content_length) |cl| {
-        if (cl == 0) return allocator.dupe(u8, "");
+        if (cl == 0) return &[_]u8{};
```
```zig
fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
    if (meta.content_length) |cl| {
        if (cl == 0) return allocator.dupe(u8, "");
        const body_buf = try allocator.alloc(u8, cl);
        errdefer allocator.free(body_buf);
        try reader.readExact(body_buf);
        return body_buf;
    }

    if (meta.chunked) {
        return readChunkedBody(reader, allocator);
    }

    // No Content-Length and not chunked — read until connection close.
    return readUntilClose(reader, allocator);
}
```
HTTP spec violation: When both Content-Length and Transfer-Encoding: chunked headers are present, the code prioritizes Content-Length (line 186). According to RFC 7230 Section 3.3.3, Transfer-Encoding takes precedence and Content-Length must be ignored. This could cause incorrect parsing if a malicious or buggy server sends both headers. The condition should check for chunked encoding first.
@copilot open a new pull request to apply changes based on this feedback
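A minimal reordering that implements the RFC 7230 precedence rule is sketched below. This is one possible shape of the fix, applied to the function shown above, not the merged change:

```zig
fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
    // Per RFC 7230 §3.3.3, Transfer-Encoding takes precedence over
    // Content-Length, so check the chunked flag first: a bogus
    // Content-Length sent alongside "chunked" is then ignored.
    if (meta.chunked) {
        return readChunkedBody(reader, allocator);
    }
    if (meta.content_length) |cl| {
        if (cl == 0) return allocator.dupe(u8, "");
        const body_buf = try allocator.alloc(u8, cl);
        errdefer allocator.free(body_buf);
        try reader.readExact(body_buf);
        return body_buf;
    }
    // Neither header present: read until connection close.
    return readUntilClose(reader, allocator);
}
```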
src/docker_client.zig
Outdated
```diff
-// imagePull returns a streaming JSON progress response.
-// We use a streaming reader to drain it without buffering the whole body,
-// which avoids hitting max_response_size for large image pulls.
-var resp = try self.client.fetch(url, .{
-    .method = .post,
-    .unix_socket_path = self.socket_path,
-    .decompress = false,
-});
-defer resp.deinit();
-
-const sc: u16 = @as(u16, @intCast(@intFromEnum(resp.status())));
-if (sc != 200) return DockerClientError.ApiError;
+// We open a dedicated connection and drain the stream to wait for
+// completion without buffering the entire (potentially huge) body.
+const stream = try std.net.connectUnixSocket(self.socket_path);
+defer stream.close();
+
+try sendHttpRequest(stream, .post, api_path, null, null);
+
+var reader: HttpReader = .{ .stream = stream };
+const meta = try parseResponseHead(&reader);
+if (meta.status_code != 200) return DockerClientError.ApiError;

 // Drain the progress stream to wait for completion.
-const r = resp.reader();
-var buf: [4096]u8 = undefined;
-while (true) {
-    const n = r.readSliceShort(&buf) catch break;
-    if (n == 0) break;
-}
+reader.drain();
```
Error handling issue: imagePull drains the response stream without parsing it for errors. The Docker API returns 200 OK even for image pull failures, embedding error information in the JSON stream. Silently draining means pull failures (e.g., image not found, authentication required) will not be detected. Consider parsing at least the last JSON object from the stream or buffering the last few bytes to check for error messages.
@copilot open a new pull request to apply changes based on this feedback
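One hedged way to act on this: scan each progress line for an error object while draining, instead of discarding everything. The sketch below assumes the pull stream is newline-delimited JSON (which is what `/images/create` emits); the `drainPullStream` helper and `ImagePullFailed` error name are illustrative, not part of the PR. A robust version would run the chunked decoder first and JSON-parse each line rather than substring-matching.

```zig
// Illustrative sketch: drain the pull progress stream line by line and
// fail if any progress object carries an "error" key. Uses the PR's
// HttpReader.readLine, which returns null at EOF.
fn drainPullStream(reader: *HttpReader) !void {
    var line_buf: [8192]u8 = undefined;
    while (try reader.readLine(&line_buf)) |line| {
        if (std.mem.indexOf(u8, line, "\"error\"") != null) {
            return error.ImagePullFailed;
        }
    }
}
```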
```zig
/// Buffered reader over a raw `std.net.Stream` for incremental HTTP
/// response parsing. Avoids one-byte syscalls by reading into an
/// internal 8 KiB buffer.
const HttpReader = struct {
    stream: std.net.Stream,
    buf: [8192]u8 = undefined,
    pos: usize = 0,
    len: usize = 0,

    /// Read a single byte from the buffered stream.
    fn readByte(self: *HttpReader) !u8 {
        if (self.pos >= self.len) {
            self.len = try self.stream.read(&self.buf);
            self.pos = 0;
            if (self.len == 0) return error.EndOfStream;
        }
        const b = self.buf[self.pos];
        self.pos += 1;
        return b;
    }

    /// Read exactly `dest.len` bytes from the buffered stream.
    fn readExact(self: *HttpReader, dest: []u8) !void {
        var written: usize = 0;
        while (written < dest.len) {
            if (self.pos >= self.len) {
                self.len = try self.stream.read(&self.buf);
                self.pos = 0;
                if (self.len == 0) return error.EndOfStream;
            }
            const available = self.len - self.pos;
            const needed = dest.len - written;
            const to_copy = @min(available, needed);
            @memcpy(dest[written .. written + to_copy], self.buf[self.pos .. self.pos + to_copy]);
            written += to_copy;
            self.pos += to_copy;
        }
    }

    /// Read a line terminated by `\n`. Returns the line contents without
    /// the trailing `\r\n`. Returns `null` when EOF is reached with no data.
    fn readLine(self: *HttpReader, out: []u8) !?[]const u8 {
        var i: usize = 0;
        while (i < out.len) {
            const b = self.readByte() catch |err| {
                if (err == error.EndOfStream) {
                    return if (i == 0) null else out[0..i];
                }
                return err;
            };
            if (b == '\n') {
                const end = if (i > 0 and out[i - 1] == '\r') i - 1 else i;
                return out[0..end];
            }
            out[i] = b;
            i += 1;
        }
        return error.HttpHeaderTooLong;
    }

    /// Discard all remaining data until EOF.
    fn drain(self: *HttpReader) void {
        self.pos = self.len;
        while (true) {
            self.len = self.stream.read(&self.buf) catch return;
            if (self.len == 0) return;
        }
    }
};

/// Send an HTTP/1.1 request over a raw stream.
fn sendHttpRequest(
    stream: std.net.Stream,
    method: Method,
    path: []const u8,
    content_type: ?[]const u8,
    body: ?[]const u8,
) !void {
    var hdr_buf: [4096]u8 = undefined;
    var fbs = std.io.fixedBufferStream(&hdr_buf);
    const w = fbs.writer();

    try w.print("{s} {s} HTTP/1.1\r\n", .{ method.name(), path });
    try w.print("Host: localhost\r\n", .{});
    if (content_type) |ct| {
        try w.print("Content-Type: {s}\r\n", .{ct});
    }
    if (body) |b| {
        try w.print("Content-Length: {d}\r\n", .{b.len});
    }
    try w.print("Connection: close\r\n\r\n", .{});

    try stream.writeAll(fbs.getWritten());
    if (body) |b| {
        try stream.writeAll(b);
    }
}

/// Parse the HTTP response status line and headers.
fn parseResponseHead(reader: *HttpReader) !ResponseMeta {
    var line_buf: [8192]u8 = undefined;

    // Status line: "HTTP/1.x NNN ..."
    const status_line = try reader.readLine(&line_buf) orelse return error.InvalidResponse;
    const first_space = std.mem.indexOfScalar(u8, status_line, ' ') orelse return error.InvalidResponse;
    const after_space = status_line[first_space + 1 ..];
    if (after_space.len < 3) return error.InvalidResponse;
    const status_code = std.fmt.parseInt(u16, after_space[0..3], 10) catch return error.InvalidResponse;

    var content_length: ?usize = null;
    var chunked = false;

    while (true) {
        const header_line = try reader.readLine(&line_buf) orelse break;
        if (header_line.len == 0) break;

        const colon = std.mem.indexOfScalar(u8, header_line, ':') orelse continue;
        const hdr_name = std.mem.trim(u8, header_line[0..colon], " ");
        const hdr_value = std.mem.trim(u8, header_line[colon + 1 ..], " ");

        if (std.ascii.eqlIgnoreCase(hdr_name, "content-length")) {
            content_length = std.fmt.parseInt(usize, hdr_value, 10) catch null;
        } else if (std.ascii.eqlIgnoreCase(hdr_name, "transfer-encoding")) {
            if (std.ascii.eqlIgnoreCase(hdr_value, "chunked")) {
                chunked = true;
            }
        }
    }

    return .{
        .status_code = status_code,
        .content_length = content_length,
        .chunked = chunked,
    };
}

/// Read the full response body according to the parsed metadata.
fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
    if (meta.content_length) |cl| {
        if (cl == 0) return allocator.dupe(u8, "");
        const body_buf = try allocator.alloc(u8, cl);
        errdefer allocator.free(body_buf);
        try reader.readExact(body_buf);
        return body_buf;
    }

    if (meta.chunked) {
        return readChunkedBody(reader, allocator);
    }

    // No Content-Length and not chunked — read until connection close.
    return readUntilClose(reader, allocator);
}

/// Decode an HTTP chunked transfer-encoded body.
fn readChunkedBody(reader: *HttpReader, allocator: std.mem.Allocator) ![]const u8 {
    var body: std.ArrayList(u8) = .empty;
    errdefer body.deinit(allocator);

    var line_buf: [128]u8 = undefined;

    while (true) {
        const size_line = try reader.readLine(&line_buf) orelse break;
        const semi = std.mem.indexOfScalar(u8, size_line, ';') orelse size_line.len;
        const trimmed = std.mem.trim(u8, size_line[0..semi], " ");
        const chunk_size = std.fmt.parseInt(usize, trimmed, 16) catch break;

        if (chunk_size == 0) {
            // Drain optional trailers.
            while (true) {
                const trailer = try reader.readLine(&line_buf) orelse break;
                if (trailer.len == 0) break;
            }
            break;
        }

        const old_len = body.items.len;
        try body.ensureTotalCapacity(allocator, old_len + chunk_size);
        body.items.len = old_len + chunk_size;
        try reader.readExact(body.items[old_len..]);

        // Consume trailing \r\n after chunk data.
        _ = try reader.readLine(&line_buf);
    }

    return body.toOwnedSlice(allocator);
}

/// Read until the peer closes the connection.
fn readUntilClose(reader: *HttpReader, allocator: std.mem.Allocator) ![]const u8 {
    var body: std.ArrayList(u8) = .empty;
    errdefer body.deinit(allocator);

    // Flush anything the HttpReader already buffered.
    if (reader.pos < reader.len) {
        try body.appendSlice(allocator, reader.buf[reader.pos..reader.len]);
        reader.pos = reader.len;
    }

    var tmp: [8192]u8 = undefined;
    while (true) {
        const n = reader.stream.read(&tmp) catch break;
        if (n == 0) break;
        try body.appendSlice(allocator, tmp[0..n]);
    }

    return body.toOwnedSlice(allocator);
}
```
Missing test coverage: The custom HTTP/1.1 implementation (HttpReader, sendHttpRequest, parseResponseHead, readChunkedBody, readUntilClose) lacks unit tests. These functions handle complex parsing logic with multiple edge cases (chunked encoding, connection close, header parsing) that should be tested independently. Consider adding unit tests that mock socket streams to verify correct handling of various HTTP responses, including malformed inputs, edge cases like empty bodies, large chunks, invalid chunk sizes, and header line limits.
@copilot open a new pull request to apply changes based on this feedback
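As a starting point, the pipe trick already used by the `sendHttpRequest` tests also works for the parsing side. Below is a sketch of one such unit test, assuming `HttpReader` and `parseResponseHead` are in scope as in this file; the test name and the canned response are illustrative.

```zig
test "parseResponseHead: parses status, Content-Length, and chunked flag" {
    const fds = try std.posix.pipe();
    defer std.posix.close(fds[0]);

    // Write a canned response into the pipe, then close the write end so
    // the reader observes EOF after the body.
    const response =
        "HTTP/1.1 200 OK\r\n" ++
        "Content-Length: 2\r\n" ++
        "\r\n" ++
        "OK";
    _ = try std.posix.write(fds[1], response);
    std.posix.close(fds[1]);

    var reader: HttpReader = .{ .stream = .{ .handle = fds[0] } };
    const meta = try parseResponseHead(&reader);
    try std.testing.expectEqual(@as(u16, 200), meta.status_code);
    try std.testing.expectEqual(@as(?usize, 2), meta.content_length);
    try std.testing.expect(!meta.chunked);
}
```

The same canned-response pattern extends naturally to chunked bodies, malformed status lines, and over-long header lines.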
📝 Walkthrough

Replaces the external dusty HTTP client and zio runtime with a built-in HTTP/1.1 client over Docker Unix domain sockets; implements low-level request/response parsing and streaming, updates examples, tests, build, and docs accordingly, and adds a macOS Colima Docker setup step to CI.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as "DockerClient (std)"
    participant Socket as "Unix Socket\n(std.net)"
    participant Daemon as "Docker Engine\n(daemon)"
    Client->>Socket: connectUnixSocket(path)
    Note right of Socket: stream established
    Client->>Socket: sendHttpRequest(method, path, headers, body)
    Socket->>Daemon: deliver HTTP/1.1 bytes
    Daemon-->>Socket: respond HTTP/1.1 head + body (chunked/length/close)
    Socket-->>Client: stream bytes to HttpReader
    Client->>Client: parseResponseHead()\nreadResponseBody()/readChunkedBody()
    Client-->>Client: return response bytes / metadata
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (inconclusive)
Actionable comments posted: 4
🧹 Nitpick comments (2)
src/wait.zig (1)
215-217: Hoist `std.http.Client` out of the polling loop to avoid per-iteration allocation

`var client` is declared inside the `while` body, so the entire client — including its connection pool — is allocated and torn down on every poll iteration. No connections are ever reused, and the allocator is hit once per retry.

Move the client to the scope of `waitHttp` so the pool is created once and connections are reused across retries:

♻️ Proposed refactor

```diff
+    var client: std.http.Client = .{ .allocator = alloc };
+    defer client.deinit();
+
     while (std.time.nanoTimestamp() < deadline) {
-        var client: std.http.Client = .{ .allocator = alloc };
-        defer client.deinit();
-
         const result = client.fetch(.{
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/wait.zig` around lines 215-217: the std.http.Client instance is being allocated inside the polling loop in waitHttp (var client = .{ .allocator = alloc }; with defer client.deinit()), causing the connection pool to be recreated each iteration; move the client declaration and initialization out of the while loop into the outer scope of waitHttp so the pool is created once, reuse that client for all retries, and call client.deinit() (or otherwise clean it up) after the loop completes rather than deferring inside the loop; update references to the client variable in the loop body accordingly.

.github/workflows/ci.yml (1)
25-26: `colima start` has no timeout; consider caching Homebrew packages

Two operational concerns with the macOS Docker bootstrap step:

- `colima start --memory 4 --cpu 2` has no explicit timeout. If the VM provisioning hangs (e.g. first-run download of the VM image), the step will block indefinitely and the job will only be cancelled when GitHub Actions hits its job-level timeout (6 h by default). Pass `--timeout <minutes>` to bound the failure window.
- `brew install docker docker-compose colima` re-installs from scratch on every CI run. Each run on a fresh macOS runner pays the full download+install cost. Consider caching via `actions/cache` on the Homebrew cache directory (`~/Library/Caches/Homebrew`) to speed up subsequent runs.

🛠️ Suggested hardening for the colima start line

```diff
-          colima start --memory 4 --cpu 2
+          colima start --memory 4 --cpu 2 --timeout 10
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci.yml around lines 25 - 26, The workflow currently runs "colima start --memory 4 --cpu 2" with no timeout and always runs "brew install docker docker-compose colima"; update the CI step to add an explicit timeout flag to the colima invocation (e.g. "--timeout <minutes>") so provisioning cannot hang indefinitely, and add an actions/cache entry for Homebrew cache (~/Library/Caches/Homebrew) so the "brew install docker docker-compose colima" step uses cached bottles between runs to avoid re-downloading and speed CI.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/docker_client.zig`:
- Around line 184-199: The function readResponseBody currently prefers
meta.content_length over meta.chunked which violates HTTP/1.1; update
readResponseBody so that if meta.chunked is true it calls readChunkedBody
immediately (ignoring content_length), otherwise fall back to content_length
handling or readUntilClose; adjust logic around meta.content_length,
meta.chunked, and calls to readChunkedBody/readUntilClose in the
readResponseBody function to ensure chunked decoding wins.
- Around line 1189-1216: The tests for sendHttpRequest read the pipe only once
which can return a partial read; update both tests ("sendHttpRequest: writes a
valid GET request" and "sendHttpRequest: writes Content-Type and Content-Length
for POST with body") to read from the read-end until EOF instead of a single
std.posix.read call: add a small helper (e.g., readAllFromFd or readUntilEof)
that repeatedly reads into a buffer/appends into a slice until read returns 0,
then use that helper to produce the complete req slice checked by the
assertions; reference the existing tests/sendHttpRequest usage to locate where
to replace the single-read logic.
- Around line 209-214: readChunkedBody currently treats parse failures as a
normal loop break and returns partial data; instead detect invalid chunk-size
lines and return an explicit error so callers can detect protocol corruption. In
the loop that uses reader.readLine(&line_buf) to produce size_line and computes
semi, trimmed and chunk_size (the std.fmt.parseInt call), replace the "catch
break" behavior with returning a descriptive error (e.g., error.InvalidChunkSize
or a new error variant) that includes context (the offending size_line or
trimmed slice); likewise if std.mem.indexOfScalar or trimming yields unexpected
results, return an error rather than silently breaking so the caller of
readChunkedBody can fail-fast on malformed chunked bodies.
In `@src/wait.zig`:
- Line 221: The current assignment .method = if (std.mem.eql(u8, s.method,
"POST")) .POST else .GET silently treats any non-"POST" HttpStrategy.method
string as GET; update the handling in src/wait.zig by either (A) validating
s.method against known verbs (e.g., "GET","POST","PUT","DELETE","PATCH","HEAD",
etc.) and returning an error for unrecognized values instead of defaulting to
.GET, or (B) change HttpStrategy.method to type std.http.Method and adjust
callers so you can assign the enum directly (removing the string comparisons);
locate references to HttpStrategy.method and the expression using
std.mem.eql(u8, s.method, "POST") to implement the chosen fix and ensure callers
are updated to propagate errors or use std.http.Method.
---
Nitpick comments:
In @.github/workflows/ci.yml:
- Around line 25-26: The workflow currently runs "colima start --memory 4 --cpu
2" with no timeout and always runs "brew install docker docker-compose colima";
update the CI step to add an explicit timeout flag to the colima invocation
(e.g. "--timeout <minutes>") so provisioning cannot hang indefinitely, and add
an actions/cache entry for Homebrew cache (~/Library/Caches/Homebrew) so the
"brew install docker docker-compose colima" step uses cached bottles between
runs to avoid re-downloading and speed CI.
In `@src/wait.zig`:
- Around line 215-217: The std.http.Client instance is being allocated inside
the polling loop in waitHttp (var client = .{ .allocator = alloc }; with defer
client.deinit()), causing the connection pool to be recreated each iteration;
move the client declaration and initialization out of the while loop into the
outer scope of waitHttp so the pool is created once, reuse that client for all
retries, and call client.deinit() (or otherwise clean it up) after the loop
completes rather than deferring inside the loop; update references to the client
variable in the loop body accordingly.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- .github/workflows/ci.yml
- AGENTS.md
- ARCHITECTURE.md
- IMPLEMENTATION_GUIDE.md
- PROJECT_SUMMARY.md
- QUICKSTART.md
- README.md
- build.zig
- build.zig.zon
- examples/basic.zig
- src/docker_client.zig
- src/integration_test.zig
- src/root.zig
- src/wait.zig
💤 Files with no reviewable changes (4)
- build.zig
- QUICKSTART.md
- README.md
- src/integration_test.zig
```zig
test "sendHttpRequest: writes a valid GET request" {
    const fds = try std.posix.pipe();
    defer std.posix.close(fds[0]);
    const write_stream = std.net.Stream{ .handle = fds[1] };
    try sendHttpRequest(write_stream, .get, "/v1.46/_ping", null, null);
    std.posix.close(fds[1]);
    var buf: [4096]u8 = undefined;
    const n = try std.posix.read(fds[0], &buf);
    const req = buf[0..n];
    try std.testing.expect(std.mem.startsWith(u8, req, "GET /v1.46/_ping HTTP/1.1\r\n"));
    try std.testing.expect(std.mem.indexOf(u8, req, "Host: localhost\r\n") != null);
    try std.testing.expect(std.mem.indexOf(u8, req, "Connection: close\r\n") != null);
}

test "sendHttpRequest: writes Content-Type and Content-Length for POST with body" {
    const fds = try std.posix.pipe();
    defer std.posix.close(fds[0]);
    const write_stream = std.net.Stream{ .handle = fds[1] };
    const body = "{\"Image\":\"alpine\"}";
    try sendHttpRequest(write_stream, .post, "/v1.46/containers/create", "application/json", body);
    std.posix.close(fds[1]);
    var buf: [4096]u8 = undefined;
    const n = try std.posix.read(fds[0], &buf);
    const req = buf[0..n];
    try std.testing.expect(std.mem.startsWith(u8, req, "POST /v1.46/containers/create HTTP/1.1\r\n"));
    try std.testing.expect(std.mem.indexOf(u8, req, "Content-Type: application/json\r\n") != null);
    try std.testing.expect(std.mem.indexOf(u8, req, "Content-Length: 18\r\n") != null);
    try std.testing.expect(std.mem.endsWith(u8, req, body));
}
```
CI failure: pipe reads can be partial—read until EOF in sendHttpRequest tests.
The failing test reads the pipe once; a short read can miss bytes and cause assertions to inspect incomplete data. This aligns with the CI failure reported for docker_client.test.sendHttpRequest. Reading until EOF will make the test deterministic.
🧪 Suggested fix (add helper + use it in both sendHttpRequest tests)
```diff
+fn readAllFromFd(fd: std.posix.fd_t, allocator: std.mem.Allocator) ![]u8 {
+    var out: std.ArrayList(u8) = .empty;
+    errdefer out.deinit(allocator);
+    var buf: [4096]u8 = undefined;
+    while (true) {
+        const n = try std.posix.read(fd, &buf);
+        if (n == 0) break;
+        try out.appendSlice(allocator, buf[0..n]);
+    }
+    return out.toOwnedSlice(allocator);
+}
+
 test "sendHttpRequest: writes a valid GET request" {
     const fds = try std.posix.pipe();
     defer std.posix.close(fds[0]);
     const write_stream = std.net.Stream{ .handle = fds[1] };
     try sendHttpRequest(write_stream, .get, "/v1.46/_ping", null, null);
     std.posix.close(fds[1]);
-    var buf: [4096]u8 = undefined;
-    const n = try std.posix.read(fds[0], &buf);
-    const req = buf[0..n];
+    const req = try readAllFromFd(fds[0], std.testing.allocator);
+    defer std.testing.allocator.free(req);
     try std.testing.expect(std.mem.startsWith(u8, req, "GET /v1.46/_ping HTTP/1.1\r\n"));
     try std.testing.expect(std.mem.indexOf(u8, req, "Host: localhost\r\n") != null);
     try std.testing.expect(std.mem.indexOf(u8, req, "Connection: close\r\n") != null);
 }

 test "sendHttpRequest: writes Content-Type and Content-Length for POST with body" {
     const fds = try std.posix.pipe();
     defer std.posix.close(fds[0]);
     const write_stream = std.net.Stream{ .handle = fds[1] };
     const body = "{\"Image\":\"alpine\"}";
     try sendHttpRequest(write_stream, .post, "/v1.46/containers/create", "application/json", body);
     std.posix.close(fds[1]);
-    var buf: [4096]u8 = undefined;
-    const n = try std.posix.read(fds[0], &buf);
-    const req = buf[0..n];
+    const req = try readAllFromFd(fds[0], std.testing.allocator);
+    defer std.testing.allocator.free(req);
     try std.testing.expect(std.mem.startsWith(u8, req, "POST /v1.46/containers/create HTTP/1.1\r\n"));
     try std.testing.expect(std.mem.indexOf(u8, req, "Content-Type: application/json\r\n") != null);
     try std.testing.expect(std.mem.indexOf(u8, req, "Content-Length: 18\r\n") != null);
     try std.testing.expect(std.mem.endsWith(u8, req, body));
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/docker_client.zig` around lines 1189 - 1216, The tests for
sendHttpRequest read the pipe only once which can return a partial read; update
both tests ("sendHttpRequest: writes a valid GET request" and "sendHttpRequest:
writes Content-Type and Content-Length for POST with body") to read from the
read-end until EOF instead of a single std.posix.read call: add a small helper
(e.g., readAllFromFd or readUntilEof) that repeatedly reads into a
buffer/appends into a slice until read returns 0, then use that helper to
produce the complete req slice checked by the assertions; reference the existing
tests/sendHttpRequest usage to locate where to replace the single-read logic.
Co-authored-by: dragosv <422243+dragosv@users.noreply.github.com>
♻️ Duplicate comments (1)
src/wait.zig (1)
221-221: ⚠️ Potential issue | 🟠 Major

Non-`POST` methods are still silently coerced to `GET`.

Line 221 keeps the same behavior: any method other than `"POST"` is sent as `GET`, which breaks valid values like `"HEAD"`/`"PUT"`/`"DELETE"`.

🐛 Proposed fix

```diff
 fn waitHttp(s: HttpStrategy, target: StrategyTarget, alloc: std.mem.Allocator) !void {
 @@
     const scheme = if (s.use_tls) "https" else "http";
     const url = try std.fmt.allocPrint(alloc, "{s}://{s}:{d}{s}", .{ scheme, host, mapped_port, s.path });
     defer alloc.free(url);
+    const method = std.meta.stringToEnum(std.http.Method, s.method) orelse
+        return error.InvalidHttpMethod;
 @@
     const result = client.fetch(.{
         .location = .{ .url = url },
-        .method = if (std.mem.eql(u8, s.method, "POST")) .POST else .GET,
+        .method = method,
     }) catch {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/wait.zig` at line 221, The current assignment for .method coerces any non-"POST" value to GET, losing valid methods like HEAD/PUT/DELETE; update the mapping logic that assigns .method (using the s.method string) to explicitly handle all expected HTTP verbs (e.g., "GET","POST","HEAD","PUT","DELETE","PATCH","OPTIONS", etc.) or convert the input string into the library's HTTP method enum instead of defaulting to GET; locate the code that reads s.method and change the conditional to a proper switch/lookup that preserves the provided method (or uses a generic/custom enum variant) so non-POST methods are not silently turned into GET.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/wait.zig`:
- Line 221: The current assignment for .method coerces any non-"POST" value to
GET, losing valid methods like HEAD/PUT/DELETE; update the mapping logic that
assigns .method (using the s.method string) to explicitly handle all expected
HTTP verbs (e.g., "GET","POST","HEAD","PUT","DELETE","PATCH","OPTIONS", etc.) or
convert the input string into the library's HTTP method enum instead of
defaulting to GET; locate the code that reads s.method and change the
conditional to a proper switch/lookup that preserves the provided method (or
uses a generic/custom enum variant) so non-POST methods are not silently turned
into GET.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
- .github/workflows/ci.yml
- AGENTS.md
- ARCHITECTURE.md
- IMPLEMENTATION_GUIDE.md
- PROJECT_SUMMARY.md
- QUICKSTART.md
- README.md
- build.zig
- build.zig.zon
- examples/basic.zig
- src/docker_client.zig
- src/integration_test.zig
- src/root.zig
- src/wait.zig
💤 Files with no reviewable changes (4)
- src/integration_test.zig
- QUICKSTART.md
- README.md
- build.zig
Actionable comments posted: 2
♻️ Duplicate comments (3)
src/docker_client.zig (3)
185-196: ⚠️ Potential issue | 🟠 Major
Prioritize chunked decoding over Content-Length.

`readResponseBody` still checks `content_length` first. If both headers are present, chunked must win.

Suggested fix
```diff
 fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
+    if (meta.chunked) {
+        return readChunkedBody(reader, allocator);
+    }
     if (meta.content_length) |cl| {
         if (cl == 0) return allocator.dupe(u8, "");
         const body_buf = try allocator.alloc(u8, cl);
         errdefer allocator.free(body_buf);
         try reader.readExact(body_buf);
         return body_buf;
     }
-
-    if (meta.chunked) {
-        return readChunkedBody(reader, allocator);
-    }
     // No Content-Length and not chunked — read until connection close.
     return readUntilClose(reader, allocator);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/docker_client.zig` around lines 185 - 196, The function readResponseBody currently checks meta.content_length before meta.chunked; change the logic to prefer chunked encoding when meta.chunked is true even if content_length is present by checking meta.chunked first (call readChunkedBody when meta.chunked) and only falling back to the content_length branch afterwards; update the control flow in readResponseBody to call readChunkedBody when meta.chunked and retain the existing content_length handling (including allocator behavior) as the fallback.
1189-1216: ⚠️ Potential issue | 🟡 Minor
`sendHttpRequest` tests still use single-shot pipe reads (flaky).

Lines [1196] and [1211] read only once from the pipe; short reads can truncate `req` and make assertions nondeterministic.

```bash
#!/bin/bash
set -euo pipefail
# Verify both sendHttpRequest tests still use one-shot pipe reads.
rg -n -C3 'test "sendHttpRequest:' src/docker_client.zig
rg -n -C2 'const n = try std.posix.read\(fds\[0\], &buf\);' src/docker_client.zig
```
Verify each finding against the current code and only fix it if needed. In `@src/docker_client.zig` around lines 1189 - 1216, The tests for sendHttpRequest use a single one-shot std.posix.read into buf (variables buf, n, req) which can produce short reads and flaky assertions; change both tests to repeatedly read in a loop until read returns 0 (EOF) or an error, appending each chunk into a dynamic buffer (e.g., std.ArrayList(u8) or a growable slice) and only then form req from the accumulated bytes before making assertions; update the read logic in the tests that call sendHttpRequest (the blocks that currently do "const n = try std.posix.read(fds[0], &buf);" and then "const req = buf[0..n];") to this looped-read-and-accumulate approach so the assertions always see the full request.
210-214: ⚠️ Potential issue | 🟠 Major
Do not treat invalid chunk-size lines as normal EOF.

On malformed chunk metadata, the function currently `break`s and returns partial data. This should fail fast with an explicit protocol error.

Suggested fix

```diff
 while (true) {
-    const size_line = try reader.readLine(&line_buf) orelse break;
+    const size_line = try reader.readLine(&line_buf) orelse return error.InvalidResponse;
     const semi = std.mem.indexOfScalar(u8, size_line, ';') orelse size_line.len;
     const trimmed = std.mem.trim(u8, size_line[0..semi], " ");
-    const chunk_size = std.fmt.parseInt(usize, trimmed, 16) catch break;
+    const chunk_size = std.fmt.parseInt(usize, trimmed, 16) catch return error.InvalidResponse;
```

(Downstream: the test around Lines [1146-1154] should then assert an error instead of partial success.)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/docker_client.zig` around lines 210 - 214, The code currently treats malformed or prematurely-terminated chunk-size lines as normal EOF by using "orelse break" and "catch break", returning partial data; change this to fail fast with an explicit protocol error: when reader.readLine(&line_buf) yields none, when std.mem.indexOfScalar/trim results are invalid, or when std.fmt.parseInt(usize, trimmed, 16) fails, return a specific error (e.g., ProtocolError or UnexpectedChunkSize) instead of breaking; update the logic around size_line, semi, trimmed, and chunk_size to propagate that error so callers see a clear protocol failure rather than partial success.
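To make the strict-parsing behaviour discussed above concrete, here is a small self-contained sketch of a chunk-size line parser that rejects malformed input instead of treating it as EOF. The helper name `parseChunkSizeLine` and the error name are illustrative only, not the PR's actual code; per RFC 7230 a chunk-size line is a hex size optionally followed by `;`-prefixed extensions:

```zig
const std = @import("std");

/// Parse one chunk-size line ("1a3f" or "1a3f;ext=val"), returning an
/// explicit error on malformed input instead of silently treating it as EOF.
fn parseChunkSizeLine(line: []const u8) error{InvalidResponse}!usize {
    // Chunk extensions after ';' are permitted by RFC 7230 and ignored here.
    const semi = std.mem.indexOfScalar(u8, line, ';') orelse line.len;
    const trimmed = std.mem.trim(u8, line[0..semi], " \t");
    if (trimmed.len == 0) return error.InvalidResponse;
    return std.fmt.parseInt(usize, trimmed, 16) catch error.InvalidResponse;
}

test "parseChunkSizeLine rejects garbage instead of truncating" {
    try std.testing.expectEqual(@as(usize, 0x1a3f), try parseChunkSizeLine("1a3f"));
    try std.testing.expectEqual(@as(usize, 16), try parseChunkSizeLine("10;ext=val"));
    try std.testing.expectError(error.InvalidResponse, parseChunkSizeLine("xyz"));
    try std.testing.expectError(error.InvalidResponse, parseChunkSizeLine(""));
}
```

With this shape, a truncated or corrupted stream surfaces as `error.InvalidResponse` to the caller rather than as a silently short body.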
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/docker_client.zig`:
- Around line 109-114: The drain function currently swallows all read errors
(using "catch return") so imagePull treats a broken stream as success; change
HttpReader.drain to return an error outcome (e.g., error!void or any suitable
Error type) and propagate read errors instead of suppressing them, update the
read loop in drain (remove "catch return" and return the caught error) and
adjust callers (notably imagePull and the other drain occurrence around the
later block) to handle/propagate the error so transport failures surface to the
caller.
In `@src/root.zig`:
- Around line 87-92: The code currently accepts any DOCKER_HOST value and
assigns it to socket, which breaks connectUnixSocket when DOCKER_HOST uses
non-unix schemes; update the getenv handling (std.posix.getenv and
std.mem.startsWith usage) to explicitly allow only values that start with
"unix://" and strip that prefix into socket, and otherwise return or log a clear
error and abort (reject schemes like "tcp://" or "ssh://"); ensure the variable
socket is only set for a valid filesystem path and add a guard before calling
connectUnixSocket to avoid passing URIs.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8ad204a3-9140-4435-bafa-fc6fc8871d12
📒 Files selected for processing (2)
- src/docker_client.zig
- src/root.zig
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
…7230 §3.3.3 Co-authored-by: dragosv <422243+dragosv@users.noreply.github.com>
Parse Docker pull stream for embedded errors in imagePull
Replace fixed header buffer with dynamic ArrayList in sendHttpRequest
Fix RFC 7230 §3.3.3 violation: Transfer-Encoding takes precedence over Content-Length
@copilot remove line 206 from docker_client.zig, line that fails the build
There was a problem hiding this comment.
Actionable comments posted: 2
♻️ Duplicate comments (2)
src/docker_client.zig (1)
1238-1255: ⚠️ Potential issue | 🟠 Major
Pipe reads in `sendHttpRequest` tests are still single-shot and flaky.

A single `std.posix.read` may return partial request bytes, so these assertions can intermittently inspect truncated data. This was already flagged earlier and remains unresolved.

🔧 Proposed fix
```diff
+fn readAllFromFd(fd: std.posix.fd_t, allocator: std.mem.Allocator) ![]u8 {
+    var out: std.ArrayList(u8) = .empty;
+    errdefer out.deinit(allocator);
+    var buf: [4096]u8 = undefined;
+    while (true) {
+        const n = try std.posix.read(fd, &buf);
+        if (n == 0) break;
+        try out.appendSlice(allocator, buf[0..n]);
+    }
+    return out.toOwnedSlice(allocator);
+}
@@
-    var buf: [4096]u8 = undefined;
-    const n = try std.posix.read(fds[0], &buf);
-    const req = buf[0..n];
+    const req = try readAllFromFd(fds[0], std.testing.allocator);
+    defer std.testing.allocator.free(req);
@@
-    var buf: [4096]u8 = undefined;
-    const n = try std.posix.read(fds[0], &buf);
-    const req = buf[0..n];
+    const req = try readAllFromFd(fds[0], std.testing.allocator);
+    defer std.testing.allocator.free(req);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/docker_client.zig` around lines 1238 - 1255, The tests for sendHttpRequest use a single std.posix.read which can return partial data; update the test logic in both tests (the "sendHttpRequest: writes Content-Type and Content-Length for POST with body" and the earlier GET/_ping test) to repeatedly read from fds[0] into a growing buffer until read returns 0 (EOF) or an error, accumulating bytes before computing req and running std.testing.expect checks; ensure the write end (fds[1]) is closed before finishing reads so the loop will see EOF, and reference sendHttpRequest, the test names, and the fds/posix.read usage when making the change.

src/wait.zig (1)
221-225: ⚠️ Potential issue | 🟡 Minor
Unknown/unsupported HTTP methods still silently fall back to GET.

This still converts invalid method strings into a different verb instead of failing fast, which can hide configuration mistakes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/wait.zig` around lines 221 - 225, The current mapping of s.method to the HTTP verb silently falls back to .GET when the method string is unrecognized; change the mapping so unknown methods do not default but instead return or propagate an explicit error (e.g., error.InvalidMethod) from the function that builds the request. Concretely, replace the final "else .GET" in the .method assignment for s.method with an "else return error.InvalidMethod" (or set to a sentinel like .INVALID and immediately return error.InvalidMethod) and update the caller to handle/propagate that error so invalid method strings fail fast instead of becoming GET.
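As a minimal sketch of the fail-fast mapping the reviewers converge on, the following standalone helper uses `std.meta.stringToEnum` so that every tag of `std.http.Method` (GET, HEAD, POST, PUT, DELETE, PATCH, OPTIONS, ...) resolves exactly, and anything else becomes an explicit error. The helper name `methodFromString` and the error name are illustrative, not the PR's code:

```zig
const std = @import("std");

fn methodFromString(s: []const u8) error{InvalidHttpMethod}!std.http.Method {
    // stringToEnum matches enum tag names exactly, so any standard verb
    // resolves; an unrecognized string fails fast instead of degrading to GET.
    return std.meta.stringToEnum(std.http.Method, s) orelse error.InvalidHttpMethod;
}

test "methodFromString preserves the requested verb" {
    try std.testing.expectEqual(std.http.Method.HEAD, try methodFromString("HEAD"));
    try std.testing.expectEqual(std.http.Method.POST, try methodFromString("POST"));
    try std.testing.expectError(error.InvalidHttpMethod, methodFromString("FETCH"));
}
```

The caller would then propagate `error.InvalidHttpMethod` so a misconfigured wait strategy is reported rather than silently polled with GET.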
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/docker_client.zig`:
- Around line 386-401: The image pull code currently calls readResponseBody and
stores pull_body, buffering the entire progress stream; instead, modify
imagePull to consume the HttpReader stream incrementally after parseResponseHead
(i.e., after sendHttpRequest, parseResponseHead, and meta handling) by reading
from the reader/stream in chunks or lines (using HttpReader/stream APIs) and
scanning each newline-delimited JSON object for an "error" field, freeing
per-line allocations and returning immediately on the first found error; remove
the readResponseBody allocation/pull_body usage and ensure stream.close/defer
semantics remain intact so large pulls don't allocate the whole body.
- Around line 130-132: In sendHttpRequest, replace the outdated ArrayList init
with the Zig 0.15.2 pattern: declare hdr_buf as std.ArrayList(u8).empty and
ensure you call hdr_buf.deinit(allocator) and use hdr_buf.writer(allocator)
(matching other instances at lines using .empty); update the hdr_buf variable,
its deinit call, and the writer() invocation accordingly so the ArrayList
follows the .empty + deinit(allocator)/writer(allocator) API (symbols: hdr_buf,
sendHttpRequest, hdr_buf.deinit, hdr_buf.writer).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6c7d9f3f-d285-4661-a7a9-202f70a9a9f8
📒 Files selected for processing (3)
- examples/basic.zig
- src/docker_client.zig
- src/wait.zig
src/docker_client.zig
Outdated
```zig
var hdr_buf = std.ArrayList(u8).init(allocator);
defer hdr_buf.deinit();
const w = hdr_buf.writer();
```
🧩 Analysis chain
🏁 Script executed:

```bash
# First, let's look at the specific file and lines in question
cd /tmp && find . -name "src" -type d 2>/dev/null | head -5
```

Repository: dragosv/testcontainers-zig
Length of output: 52

🏁 Script executed:

```bash
# Check if we're in the right repo structure
ls -la | head -20
```

Repository: dragosv/testcontainers-zig
Length of output: 1186

🏁 Script executed:

```bash
# Look for the docker_client.zig file
find . -name "docker_client.zig" -type f 2>/dev/null
```

Repository: dragosv/testcontainers-zig
Length of output: 93

🏁 Script executed:

```bash
# Let's examine the actual file content around lines 130-132
cat -n src/docker_client.zig | sed -n '120,145p'
```

Repository: dragosv/testcontainers-zig
Length of output: 991

🏁 Script executed:

```bash
# Check the broader context of sendHttpRequest function
grep -n "fn sendHttpRequest" src/docker_client.zig
```

Repository: dragosv/testcontainers-zig
Length of output: 93

🏁 Script executed:

```bash
# Look for other ArrayList usage patterns in the file
rg "ArrayList" src/docker_client.zig -n -C 2
```

Repository: dragosv/testcontainers-zig
Length of output: 1160

🏁 Script executed:

```bash
# Check for .zig.zon or build.zig to understand Zig version
cat build.zig.zon 2>/dev/null || cat build.zig 2>/dev/null | head -30
```

Repository: dragosv/testcontainers-zig
Length of output: 312

🏁 Script executed:

```bash
# Check how writer() is called on ArrayList instances
rg "\.writer\(" src/docker_client.zig -n -B 2 -A 1
```

Repository: dragosv/testcontainers-zig
Length of output: 199

🏁 Script executed:

```bash
# Look at the full usage of hdr_buf in sendHttpRequest
cat -n src/docker_client.zig | sed -n '130,150p'
```

Repository: dragosv/testcontainers-zig
Length of output: 820
Update ArrayList initialization in sendHttpRequest to match Zig 0.15.2 API used elsewhere in the file.
The current code at lines 130-132 uses an outdated initialization pattern. All other ArrayList instances in the file (lines 210, 245, 547, 974) use the .empty pattern with allocator passed to deinit() and writer(). This inconsistency breaks compilation with Zig 0.15.2.
🔧 Proposed fix

```diff
-var hdr_buf = std.ArrayList(u8).init(allocator);
-defer hdr_buf.deinit();
-const w = hdr_buf.writer();
+var hdr_buf: std.ArrayList(u8) = .empty;
+defer hdr_buf.deinit(allocator);
+var w = hdr_buf.writer(allocator);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-var hdr_buf = std.ArrayList(u8).init(allocator);
-defer hdr_buf.deinit();
-const w = hdr_buf.writer();
+var hdr_buf: std.ArrayList(u8) = .empty;
+defer hdr_buf.deinit(allocator);
+var w = hdr_buf.writer(allocator);
```
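For reviewers unfamiliar with the newer Zig unmanaged-container API, a tiny standalone sketch of the `.empty` plus `deinit(allocator)`/`writer(allocator)` pattern the suggestion adopts (the request line written here is only sample data):

```zig
const std = @import("std");

test "unmanaged ArrayList writer pattern" {
    const allocator = std.testing.allocator;
    var buf: std.ArrayList(u8) = .empty; // no allocator stored in the list
    defer buf.deinit(allocator); // the allocator is passed at each call instead
    const w = buf.writer(allocator);
    try w.print("GET {s} HTTP/1.1\r\n", .{"/_ping"});
    try std.testing.expect(std.mem.startsWith(u8, buf.items, "GET /_ping"));
}
```

The design shift is that the list no longer captures an allocator at init time; every allocating operation receives it explicitly, which is why the old `init(allocator)`/`deinit()` calls fail to compile.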
🧰 Tools
🪛 GitHub Actions: CI/CD
[error] 130-130: Zig compile error: struct 'array_list.Aligned(u8,null)' has no member named 'init'. This occurs during the 'zig build' step.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/docker_client.zig` around lines 130 - 132, In sendHttpRequest, replace
the outdated ArrayList init with the Zig 0.15.2 pattern: declare hdr_buf as
std.ArrayList(u8).empty and ensure you call hdr_buf.deinit(allocator) and use
hdr_buf.writer(allocator) (matching other instances at lines using .empty);
update the hdr_buf variable, its deinit call, and the writer() invocation
accordingly so the ArrayList follows the .empty +
deinit(allocator)/writer(allocator) API (symbols: hdr_buf, sendHttpRequest,
hdr_buf.deinit, hdr_buf.writer).
src/docker_client.zig
Outdated
```zig
// We open a dedicated connection and drain the stream to wait for
// completion without buffering the entire (potentially huge) body.
const stream = try std.net.connectUnixSocket(self.socket_path);
defer stream.close();

try sendHttpRequest(self.allocator, stream, .post, api_path, null, null);

var reader: HttpReader = .{ .stream = stream };
const meta = try parseResponseHead(&reader);
if (meta.status_code != 200) return DockerClientError.ApiError;

// Docker returns 200 OK even for pull failures; errors are embedded as
// {"error":"..."} JSON objects in the newline-delimited progress stream.
// Read the full stream and scan each line for an error field.
const pull_body = try readResponseBody(&reader, meta, self.allocator);
defer self.allocator.free(pull_body);
```
imagePull still buffers the entire progress stream despite comment saying otherwise.
This path allocates the full pull stream (readResponseBody) before scanning for errors, which can become very large and cause avoidable memory pressure on big image pulls.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/docker_client.zig` around lines 386 - 401, The image pull code currently
calls readResponseBody and stores pull_body, buffering the entire progress
stream; instead, modify imagePull to consume the HttpReader stream incrementally
after parseResponseHead (i.e., after sendHttpRequest, parseResponseHead, and
meta handling) by reading from the reader/stream in chunks or lines (using
HttpReader/stream APIs) and scanning each newline-delimited JSON object for an
"error" field, freeing per-line allocations and returning immediately on the
first found error; remove the readResponseBody allocation/pull_body usage and
ensure stream.close/defer semantics remain intact so large pulls don't allocate
the whole body.
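The error-scanning half of that suggestion can be sketched in isolation. This hypothetical helper (`checkPullLines` is not the PR's code) shows the per-line probe; in the real fix the same check would run on each line as it comes off `HttpReader.readLine`, so no full-body buffer is ever allocated:

```zig
const std = @import("std");

/// Scan newline-delimited JSON pull progress for an embedded Docker error.
/// A cheap substring probe is enough to flag {"error":"..."} objects; a real
/// implementation might JSON-parse the line to extract the error message.
fn checkPullLines(progress: []const u8) error{ImagePullFailed}!void {
    var it = std.mem.splitScalar(u8, progress, '\n');
    while (it.next()) |line| {
        if (std.mem.indexOf(u8, line, "\"error\"") != null) return error.ImagePullFailed;
    }
}

test "checkPullLines flags embedded pull errors" {
    try checkPullLines("{\"status\":\"Pulling\"}\n{\"status\":\"Downloading\"}\n");
    try std.testing.expectError(
        error.ImagePullFailed,
        checkPullLines("{\"status\":\"Pulling\"}\n{\"error\":\"manifest unknown\"}\n"),
    );
}
```

Returning on the first error line also lets the caller abort the pull early instead of draining the remainder of the stream.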
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Remove `dusty`/`zio` dependencies — use built-in HTTP/1.1 client over Unix socket

Summary

This PR replaces the external `dusty` HTTP client (and its transitive `zio` async I/O runtime dependency) with a zero-dependency, built-in HTTP/1.1 client that communicates directly with the Docker Engine over its Unix domain socket via `std.net.connectUnixSocket`.

Callers no longer need to initialise a `zio.Runtime` before using the library. The public API surface is simpler and the project has no external dependencies — only the Zig standard library.

Motivation

- `dusty` and `zio` were the only third-party packages. Removing them makes the library fully self-contained and easier to vendor or integrate.
- Callers no longer have to set up a `zio.Runtime`; that boilerplate is now gone.
- Image pull failures now surface as `ImagePullFailed` explicitly.

What changed
- Replaced `dusty.Client` with a hand-rolled HTTP/1.1 request writer + response parser over `std.net.connectUnixSocket`
- Added `readChunkedBody`, `readResponseBody`, and `readUntilClose` following RFC 7230 §3.3.3 (chunked takes precedence over Content-Length)
- Added `checkPullStreamErrors` to scan newline-delimited JSON progress streams for `{"error":"..."}` objects
- Switched the wait strategy from `dusty.Client` to `std.http.Client` (stdlib) for health-check polling
- `DOCKER_HOST` support: `DockerProvider.init` now reads `DOCKER_HOST` and extracts the socket path from `unix://` URIs
- Removed `dusty` and `zio` imports from `build.zig`; cleared `.dependencies` in `build.zig.zon`
- Removed `initRuntimeOrSkip`/`zio.Runtime` boilerplate from 20+ test functions
- Added unit tests for `HttpReader`, `parseResponseHead`, `readChunkedBody`, `readUntilClose`, `sendHttpRequest`, and `checkPullStreamErrors` using pipe-backed mock streams
- Updated `ARCHITECTURE.md`, `PROJECT_SUMMARY.md`, `IMPLEMENTATION_GUIDE.md`, `QUICKSTART.md`, `README.md`, and `AGENTS.md` to reflect the removal of external dependencies

Key new internal components
All located in `src/docker_client.zig`:

- `HttpReader` — buffered byte reader over `std.net.Stream` with `readByte`, `readExact`, and `readLine`
- `ResponseMeta` — parsed status code, Content-Length, and chunked flag from response headers
- `parseResponseHead` — parses the HTTP/1.1 status line and headers
- `sendHttpRequest` — writes a well-formed HTTP/1.1 request (with optional body + Content-Type) to a stream
- `checkPullStreamErrors` — detects Docker pull failures embedded in streaming JSON responses

Testing
- `zig build test --summary all` — all new unit tests pass (no Docker required)
- `zig build integration-test --summary all` — full integration suite passes against a local Docker daemon

Summary by CodeRabbit
Documentation
Refactor
Tests
Chores