
Using unix sockets#1

Merged
dragosv merged 29 commits into main from unixsocket on Mar 7, 2026
Conversation

dragosv (Owner) commented Feb 23, 2026

Remove dusty/zio dependencies — use built-in HTTP/1.1 client over Unix socket

Summary

This PR replaces the external dusty HTTP client (and its transitive zio async I/O runtime dependency) with a zero-dependency, built-in HTTP/1.1 client that communicates directly with the Docker Engine over its Unix domain socket via std.net.connectUnixSocket.

Callers no longer need to initialise a zio.Runtime before using the library. The public API surface is simpler and the project has no external dependencies — only the Zig standard library.

Motivation

  • Eliminate external dependencies — dusty and zio were the only third-party packages. Removing them makes the library fully self-contained and easier to vendor or integrate.
  • Simplify caller setup — every user previously had to create and manage a zio.Runtime; that boilerplate is now gone.
  • Better error detection for image pulls — Docker returns HTTP 200 even when a pull fails, embedding errors as JSON objects in the streaming body. The new client parses this stream and surfaces ImagePullFailed explicitly.
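
The error detection described above boils down to scanning each line of the progress stream for a JSON object carrying an "error" key. A minimal Python sketch of the idea (not the Zig implementation; `scan_pull_stream` is a hypothetical name):

```python
import json

def scan_pull_stream(stream_bytes: bytes) -> None:
    """Raise if any line of a Docker pull progress stream carries an error.

    Docker streams newline-delimited JSON objects; a failed pull still
    returns HTTP 200 but embeds {"error": "..."} in the stream body.
    """
    for line in stream_bytes.splitlines():
        line = line.strip()
        if not line:
            continue
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue  # partial or non-JSON line: ignore it
        if isinstance(obj, dict) and "error" in obj:
            raise RuntimeError(f"image pull failed: {obj['error']}")

# A failing stream mixes progress objects with a trailing error object:
stream = b'{"status":"Pulling from library/alpine"}\n{"error":"manifest unknown"}\n'
```

The Zig `checkPullStreamErrors` applies the same per-line scan while draining the socket, so failures surface as `ImagePullFailed` instead of being silently discarded.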

What changed

Transport layer: Replaced dusty.Client with a hand-rolled HTTP/1.1 request writer and response parser over std.net.connectUnixSocket.
Chunked transfer decoding: Added readChunkedBody, readResponseBody, and readUntilClose following RFC 7230 §3.3.3 (chunked takes precedence over Content-Length).
Image pull error detection: Added checkPullStreamErrors to scan newline-delimited JSON progress streams for {"error":"..."} objects.
HTTP wait strategy: Switched from dusty.Client to std.http.Client (stdlib) for health-check polling.
DOCKER_HOST support: DockerProvider.init now reads DOCKER_HOST and extracts the socket path from unix:// URIs.
Build system: Removed dusty and zio imports from build.zig; cleared .dependencies in build.zig.zon.
Integration tests: Removed all initRuntimeOrSkip / zio.Runtime boilerplate from 20+ test functions.
Unit tests: Added ~30 new unit tests for HttpReader, parseResponseHead, readChunkedBody, readUntilClose, sendHttpRequest, and checkPullStreamErrors using pipe-backed mock streams.
Documentation: Updated ARCHITECTURE.md, PROJECT_SUMMARY.md, IMPLEMENTATION_GUIDE.md, QUICKSTART.md, README.md, and AGENTS.md to reflect the removal of external dependencies.

Key new internal components

All located in src/docker_client.zig:

  • HttpReader — buffered byte reader over std.net.Stream with readByte, readExact, and readLine
  • ResponseMeta — parsed status code, Content-Length, and chunked flag from response headers
  • parseResponseHead — parses the HTTP/1.1 status line and headers
  • sendHttpRequest — writes a well-formed HTTP/1.1 request (with optional body + Content-Type) to a stream
  • checkPullStreamErrors — detects Docker pull failures embedded in streaming JSON responses

Testing

  • zig build test --summary all — all new unit tests pass (no Docker required)
  • zig build integration-test --summary all — full integration suite passes against a local Docker daemon
  • CI workflow updated for both Linux and macOS

Summary by CodeRabbit

  • Documentation

    • Updated docs, guides, README and examples to describe a built-in HTTP/1.1 client over Unix domain sockets and simplified quick-start snippets.
  • Refactor

    • Removed external HTTP and async runtime dependencies; core networking and I/O now use the standard library and a socket-based HTTP flow (blocking behavior).
  • Tests

    • Simplified integration tests by removing external runtime setup; tests now rely on Docker availability checks.
  • Chores

    • Adjusted CI macOS handling (added macOS Docker install/start step; matrix now targets ubuntu), and removed external dependency entries from build configuration.

Copilot AI (Contributor) left a comment

Pull request overview

This pull request replaces the external dusty HTTP library with a custom HTTP/1.1 client implementation that communicates directly with the Docker Engine over Unix domain sockets using std.net.connectUnixSocket. This eliminates both the dusty and zio dependencies.

Changes:

  • Implemented a custom HTTP/1.1 client in docker_client.zig with buffered stream reading, header parsing, and chunked transfer encoding support
  • Removed all zio.Runtime initialization code from tests, examples, and documentation
  • Updated wait.zig to use std.http.Client for the HTTP wait strategy instead of dusty
  • Removed external dependencies from build.zig.zon and build.zig

Reviewed changes

Copilot reviewed 14 out of 14 changed files in this pull request and generated 9 comments.

Show a summary per file

src/docker_client.zig: Added custom HTTP/1.1 implementation with HttpReader, request builder, response parser, and chunked encoding decoder; removed dusty client dependency.
src/wait.zig: Replaced dusty client with std.http.Client for the HTTP wait strategy.
src/integration_test.zig: Removed zio runtime initialization from all tests.
src/root.zig: Updated documentation to reflect removal of the zio dependency requirement.
examples/basic.zig: Removed zio runtime initialization; updated to use std.http.Client.
build.zig.zon: Removed the dusty dependency.
build.zig: Removed dusty and zio imports from all build targets.
README.md: Removed zio-related documentation.
QUICKSTART.md: Removed zio runtime initialization instructions.
PROJECT_SUMMARY.md: Updated dependency information.
IMPLEMENTATION_GUIDE.md: Updated HTTP client documentation.
ARCHITECTURE.md: Updated architecture diagrams and dependency information.
AGENTS.md: Updated architecture rules.


Comment on lines +126 to +143
var hdr_buf: [4096]u8 = undefined;
var fbs = std.io.fixedBufferStream(&hdr_buf);
const w = fbs.writer();

try w.print("{s} {s} HTTP/1.1\r\n", .{ method.name(), path });
try w.print("Host: localhost\r\n", .{});
if (content_type) |ct| {
    try w.print("Content-Type: {s}\r\n", .{ct});
}
if (body) |b| {
    try w.print("Content-Length: {d}\r\n", .{b.len});
}
try w.print("Connection: close\r\n\r\n", .{});

try stream.writeAll(fbs.getWritten());
if (body) |b| {
    try stream.writeAll(b);
}
Copilot AI (Feb 23, 2026):

Potential buffer overflow: If the formatted HTTP request header exceeds 4096 bytes, the fixed buffer will overflow. Consider checking fbs.pos against hdr_buf.len before each write operation, or use an ArrayList to allow dynamic growth. Large Content-Type values or very long paths could trigger this.

dragosv (Owner, Author):
@copilot open a new pull request to apply changes based on this feedback

Comment on lines +185 to +187
fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
    if (meta.content_length) |cl| {
        if (cl == 0) return allocator.dupe(u8, "");
Copilot AI (Feb 23, 2026):

Unnecessary allocation: When content_length is 0, calling allocator.dupe(u8, "") creates an allocation for an empty string. This allocates memory that must be freed by the caller. Consider using a compile-time constant empty string slice instead, or document that callers must always free the returned slice even when empty.

Suggested change

-fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
-    if (meta.content_length) |cl| {
-        if (cl == 0) return allocator.dupe(u8, "");
+/// For a zero-length body, returns a shared empty slice that must not be freed.
+fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
+    if (meta.content_length) |cl| {
+        if (cl == 0) return &[_]u8{};
Comment on lines +185 to +200
fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
    if (meta.content_length) |cl| {
        if (cl == 0) return allocator.dupe(u8, "");
        const body_buf = try allocator.alloc(u8, cl);
        errdefer allocator.free(body_buf);
        try reader.readExact(body_buf);
        return body_buf;
    }

    if (meta.chunked) {
        return readChunkedBody(reader, allocator);
    }

    // No Content-Length and not chunked — read until connection close.
    return readUntilClose(reader, allocator);
}
Copilot AI (Feb 23, 2026):

HTTP spec violation: When both Content-Length and Transfer-Encoding: chunked headers are present, the code prioritizes Content-Length (line 186). According to RFC 7230 Section 3.3.3, Transfer-Encoding takes precedence and Content-Length must be ignored. This could cause incorrect parsing if a malicious or buggy server sends both headers. The condition should check for chunked encoding first.
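
The RFC-compliant ordering checks for chunked encoding before consulting Content-Length. A Python sketch of the decoding rules over an in-memory buffer (function names are hypothetical; this mirrors the protocol, not the Zig code):

```python
import io

def read_body(r, content_length, chunked: bool) -> bytes:
    # RFC 7230 §3.3.3: Transfer-Encoding wins over Content-Length,
    # so the chunked check must come first.
    if chunked:
        return read_chunked(r)
    if content_length is not None:
        return r.read(content_length)
    return r.read()  # no framing headers: read until close

def read_chunked(r) -> bytes:
    out = b""
    while True:
        size_line = r.readline().strip()
        # Chunk size is hex; a ";" may introduce chunk extensions.
        size = int(size_line.split(b";")[0], 16)
        if size == 0:
            while r.readline().strip():  # drain optional trailers
                pass
            return out
        out += r.read(size)
        r.readline()  # consume the CRLF that follows chunk data
```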

dragosv (Owner, Author):

@copilot open a new pull request to apply changes based on this feedback

Comment on lines +345 to +358
-// imagePull returns a streaming JSON progress response.
-// We use a streaming reader to drain it without buffering the whole body,
-// which avoids hitting max_response_size for large image pulls.
-var resp = try self.client.fetch(url, .{
-    .method = .post,
-    .unix_socket_path = self.socket_path,
-    .decompress = false,
-});
-defer resp.deinit();
-
-const sc: u16 = @as(u16, @intCast(@intFromEnum(resp.status())));
-if (sc != 200) return DockerClientError.ApiError;
+// We open a dedicated connection and drain the stream to wait for
+// completion without buffering the entire (potentially huge) body.
+const stream = try std.net.connectUnixSocket(self.socket_path);
+defer stream.close();
+
+try sendHttpRequest(stream, .post, api_path, null, null);
+
+var reader: HttpReader = .{ .stream = stream };
+const meta = try parseResponseHead(&reader);
+if (meta.status_code != 200) return DockerClientError.ApiError;

 // Drain the progress stream to wait for completion.
-const r = resp.reader();
-var buf: [4096]u8 = undefined;
-while (true) {
-    const n = r.readSliceShort(&buf) catch break;
-    if (n == 0) break;
-}
+reader.drain();
Copilot AI (Feb 23, 2026):

Error handling issue: imagePull drains the response stream without parsing it for errors. The Docker API returns 200 OK even for image pull failures, embedding error information in the JSON stream. Silently draining means pull failures (e.g., image not found, authentication required) will not be detected. Consider parsing at least the last JSON object from the stream or buffering the last few bytes to check for error messages.

dragosv (Owner, Author):

@copilot open a new pull request to apply changes based on this feedback

Comment on lines +48 to +255
/// Buffered reader over a raw `std.net.Stream` for incremental HTTP
/// response parsing. Avoids one-byte syscalls by reading into an
/// internal 8 KiB buffer.
const HttpReader = struct {
    stream: std.net.Stream,
    buf: [8192]u8 = undefined,
    pos: usize = 0,
    len: usize = 0,

    /// Read a single byte from the buffered stream.
    fn readByte(self: *HttpReader) !u8 {
        if (self.pos >= self.len) {
            self.len = try self.stream.read(&self.buf);
            self.pos = 0;
            if (self.len == 0) return error.EndOfStream;
        }
        const b = self.buf[self.pos];
        self.pos += 1;
        return b;
    }

    /// Read exactly `dest.len` bytes from the buffered stream.
    fn readExact(self: *HttpReader, dest: []u8) !void {
        var written: usize = 0;
        while (written < dest.len) {
            if (self.pos >= self.len) {
                self.len = try self.stream.read(&self.buf);
                self.pos = 0;
                if (self.len == 0) return error.EndOfStream;
            }
            const available = self.len - self.pos;
            const needed = dest.len - written;
            const to_copy = @min(available, needed);
            @memcpy(dest[written .. written + to_copy], self.buf[self.pos .. self.pos + to_copy]);
            written += to_copy;
            self.pos += to_copy;
        }
    }

    /// Read a line terminated by `\n`. Returns the line contents without
    /// the trailing `\r\n`. Returns `null` when EOF is reached with no data.
    fn readLine(self: *HttpReader, out: []u8) !?[]const u8 {
        var i: usize = 0;
        while (i < out.len) {
            const b = self.readByte() catch |err| {
                if (err == error.EndOfStream) {
                    return if (i == 0) null else out[0..i];
                }
                return err;
            };
            if (b == '\n') {
                const end = if (i > 0 and out[i - 1] == '\r') i - 1 else i;
                return out[0..end];
            }
            out[i] = b;
            i += 1;
        }
        return error.HttpHeaderTooLong;
    }

    /// Discard all remaining data until EOF.
    fn drain(self: *HttpReader) void {
        self.pos = self.len;
        while (true) {
            self.len = self.stream.read(&self.buf) catch return;
            if (self.len == 0) return;
        }
    }
};

/// Send an HTTP/1.1 request over a raw stream.
fn sendHttpRequest(
    stream: std.net.Stream,
    method: Method,
    path: []const u8,
    content_type: ?[]const u8,
    body: ?[]const u8,
) !void {
    var hdr_buf: [4096]u8 = undefined;
    var fbs = std.io.fixedBufferStream(&hdr_buf);
    const w = fbs.writer();

    try w.print("{s} {s} HTTP/1.1\r\n", .{ method.name(), path });
    try w.print("Host: localhost\r\n", .{});
    if (content_type) |ct| {
        try w.print("Content-Type: {s}\r\n", .{ct});
    }
    if (body) |b| {
        try w.print("Content-Length: {d}\r\n", .{b.len});
    }
    try w.print("Connection: close\r\n\r\n", .{});

    try stream.writeAll(fbs.getWritten());
    if (body) |b| {
        try stream.writeAll(b);
    }
}

/// Parse the HTTP response status line and headers.
fn parseResponseHead(reader: *HttpReader) !ResponseMeta {
    var line_buf: [8192]u8 = undefined;

    // Status line: "HTTP/1.x NNN ..."
    const status_line = try reader.readLine(&line_buf) orelse return error.InvalidResponse;
    const first_space = std.mem.indexOfScalar(u8, status_line, ' ') orelse return error.InvalidResponse;
    const after_space = status_line[first_space + 1 ..];
    if (after_space.len < 3) return error.InvalidResponse;
    const status_code = std.fmt.parseInt(u16, after_space[0..3], 10) catch return error.InvalidResponse;

    var content_length: ?usize = null;
    var chunked = false;

    while (true) {
        const header_line = try reader.readLine(&line_buf) orelse break;
        if (header_line.len == 0) break;

        const colon = std.mem.indexOfScalar(u8, header_line, ':') orelse continue;
        const hdr_name = std.mem.trim(u8, header_line[0..colon], " ");
        const hdr_value = std.mem.trim(u8, header_line[colon + 1 ..], " ");

        if (std.ascii.eqlIgnoreCase(hdr_name, "content-length")) {
            content_length = std.fmt.parseInt(usize, hdr_value, 10) catch null;
        } else if (std.ascii.eqlIgnoreCase(hdr_name, "transfer-encoding")) {
            if (std.ascii.eqlIgnoreCase(hdr_value, "chunked")) {
                chunked = true;
            }
        }
    }

    return .{
        .status_code = status_code,
        .content_length = content_length,
        .chunked = chunked,
    };
}

/// Read the full response body according to the parsed metadata.
fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
    if (meta.content_length) |cl| {
        if (cl == 0) return allocator.dupe(u8, "");
        const body_buf = try allocator.alloc(u8, cl);
        errdefer allocator.free(body_buf);
        try reader.readExact(body_buf);
        return body_buf;
    }

    if (meta.chunked) {
        return readChunkedBody(reader, allocator);
    }

    // No Content-Length and not chunked — read until connection close.
    return readUntilClose(reader, allocator);
}

/// Decode an HTTP chunked transfer-encoded body.
fn readChunkedBody(reader: *HttpReader, allocator: std.mem.Allocator) ![]const u8 {
    var body: std.ArrayList(u8) = .empty;
    errdefer body.deinit(allocator);

    var line_buf: [128]u8 = undefined;

    while (true) {
        const size_line = try reader.readLine(&line_buf) orelse break;
        const semi = std.mem.indexOfScalar(u8, size_line, ';') orelse size_line.len;
        const trimmed = std.mem.trim(u8, size_line[0..semi], " ");
        const chunk_size = std.fmt.parseInt(usize, trimmed, 16) catch break;

        if (chunk_size == 0) {
            // Drain optional trailers.
            while (true) {
                const trailer = try reader.readLine(&line_buf) orelse break;
                if (trailer.len == 0) break;
            }
            break;
        }

        const old_len = body.items.len;
        try body.ensureTotalCapacity(allocator, old_len + chunk_size);
        body.items.len = old_len + chunk_size;
        try reader.readExact(body.items[old_len..]);

        // Consume trailing \r\n after chunk data.
        _ = try reader.readLine(&line_buf);
    }

    return body.toOwnedSlice(allocator);
}

/// Read until the peer closes the connection.
fn readUntilClose(reader: *HttpReader, allocator: std.mem.Allocator) ![]const u8 {
    var body: std.ArrayList(u8) = .empty;
    errdefer body.deinit(allocator);

    // Flush anything the HttpReader already buffered.
    if (reader.pos < reader.len) {
        try body.appendSlice(allocator, reader.buf[reader.pos..reader.len]);
        reader.pos = reader.len;
    }

    var tmp: [8192]u8 = undefined;
    while (true) {
        const n = reader.stream.read(&tmp) catch break;
        if (n == 0) break;
        try body.appendSlice(allocator, tmp[0..n]);
    }

    return body.toOwnedSlice(allocator);
}
Copilot AI (Feb 23, 2026):

Missing test coverage: The custom HTTP/1.1 implementation (HttpReader, sendHttpRequest, parseResponseHead, readChunkedBody, readUntilClose) lacks unit tests. These functions handle complex parsing logic with multiple edge cases (chunked encoding, connection close, header parsing) that should be tested independently. Consider adding unit tests that mock socket streams to verify correct handling of various HTTP responses, including malformed inputs, edge cases like empty bodies, large chunks, invalid chunk sizes, and header line limits.
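
One way to exercise such parsing logic without real sockets is to feed in-memory buffers. A Python sketch of that testing style, using a toy status-line parser that loosely mirrors the shape of parseResponseHead (the function is hypothetical, not one of the actual Zig tests):

```python
import io

def parse_status_line(r) -> int:
    """Parse 'HTTP/1.x NNN ...' from a readable buffer and return the
    three-digit status code; raise on malformed input."""
    line = r.readline().rstrip(b"\r\n")
    parts = line.split(b" ", 2)
    if len(parts) < 2 or len(parts[1]) != 3 or not parts[1].isdigit():
        raise ValueError(f"invalid status line: {line!r}")
    return int(parts[1])
```

The same pattern extends to chunked bodies and header limits: construct the exact byte sequence (valid, truncated, or malformed) in memory and assert on the parser's result or error, no daemon required.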

dragosv (Owner, Author):

@copilot open a new pull request to apply changes based on this feedback

Copilot AI (Contributor) commented Feb 24, 2026

@dragosv I've opened a new pull request, #2, to work on those changes. Once the pull request is ready, I'll request review from you.

coderabbitai bot (Contributor) commented Feb 24, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Replaces external dusty HTTP client and zio runtime with a built-in HTTP/1.1 client over Docker Unix domain sockets; implements low-level request/response parsing and streaming, updates examples/tests/build/docs accordingly, and adds a macOS Colima Docker setup step to CI.

Changes

Cohort / File(s) Summary
CI / Build
/.github/workflows/ci.yml, build.zig, build.zig.zon
Removed dusty/zio from build config and Zon; added macOS-specific Colima/Docker install & start step guarded by if: runner.os == 'macOS'.
Docs & Guides
AGENTS.md, ARCHITECTURE.md, IMPLEMENTATION_GUIDE.md, PROJECT_SUMMARY.md, QUICKSTART.md, README.md
Removed references to dusty and zio; rewrote descriptions, diagrams, and examples to document a stdlib-based HTTP/1.1 client over Unix sockets and zero external dependencies.
Core HTTP Implementation
src/docker_client.zig
Replaced dusty-based client with raw Unix-socket HTTP implementation: added Method, ResponseMeta, HttpReader, send/parse/read functions, streaming/chunked handling, updated DockerClient API and error mapping; removed dusty/apiUrl usage; added related tests.
Wait & Polling
src/wait.zig
Switched from dusty client to std.http.Client-style fetch logic and adapted retry/status handling to stdlib fetch results.
Examples & Tests
examples/basic.zig, src/integration_test.zig
Removed zio runtime initialization and dusty usage; replaced with stdlib/socket HTTP flows and simplified test runtime wiring; retained Docker availability guards.
Runtime & Root
src/root.zig
DockerProvider.init now derives/normalizes Unix socket path from DOCKER_HOST (supports unix://), forwards to socket-based init; docs updated to reflect socket transport and no external runtime.
Misc Source Updates
src/...
Minor comment/source updates across repo to remove external dependency references and adjust public surface where dusty/zio were previously exposed.

Sequence Diagram(s)

sequenceDiagram
  participant Client as "DockerClient (std)"
  participant Socket as "Unix Socket\n(std.net)"
  participant Daemon as "Docker Engine\n(daemon)"
  Client->>Socket: connectUnixSocket(path)
  Note right of Socket: stream established
  Client->>Socket: sendHttpRequest(method, path, headers, body)
  Socket->>Daemon: deliver HTTP/1.1 bytes
  Daemon-->>Socket: respond HTTP/1.1 head + body (chunked/length/close)
  Socket-->>Client: stream bytes to HttpReader
  Client->>Client: parseResponseHead()\nreadResponseBody()/readChunkedBody()
  Client-->>Client: return response bytes / metadata

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Poem

🐰 I nibble sockets, stitch each byte,
Dusty gone, the net hums light,
No runtime bells — just steady hops,
Chunked replies and tiny clops,
A rabbit cheers — the build delights! 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

Title check: ❓ Inconclusive. The title is vague and generic, using non-descriptive terms that don't clearly convey the main purpose of the changeset. Resolution: expand the title to be more specific about the refactoring scope, e.g., 'Remove dusty dependency and use Unix sockets for Docker communication' or 'Refactor to use native HTTP client over Unix sockets'.
✅ Passed checks (2 passed)
Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping the check.
Description check: ✅ Passed. The pull request description is comprehensive and well-structured, covering objectives, motivation, changes summary, key components, and testing approach.


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 4

🧹 Nitpick comments (2)
src/wait.zig (1)

215-217: Hoist std.http.Client out of the polling loop to avoid per-iteration allocation

var client is declared inside the while body, so the entire client — including its connection pool — is allocated and torn down on every poll iteration. No connections are ever reused, and the allocator is hit once per retry.

Move the client to the scope of waitHttp so the pool is created once and connections are reused across retries:

♻️ Proposed refactor
+    var client: std.http.Client = .{ .allocator = alloc };
+    defer client.deinit();
+
     while (std.time.nanoTimestamp() < deadline) {
-        var client: std.http.Client = .{ .allocator = alloc };
-        defer client.deinit();
-
         const result = client.fetch(.{
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/wait.zig` around lines 215 - 217, The std.http.Client instance is being
allocated inside the polling loop in waitHttp (var client = .{ .allocator =
alloc }; with defer client.deinit()), causing the connection pool to be
recreated each iteration; move the client declaration and initialization out of
the while loop into the outer scope of waitHttp so the pool is created once,
reuse that client for all retries, and call client.deinit() (or otherwise clean
it up) after the loop completes rather than deferring inside the loop; update
references to the client variable in the loop body accordingly.
.github/workflows/ci.yml (1)

25-26: colima start has no timeout; consider caching Homebrew packages

Two operational concerns with the macOS Docker bootstrap step:

  1. colima start --memory 4 --cpu 2 has no explicit timeout. If the VM provisioning hangs (e.g. first-run download of the VM image), the step will block indefinitely and the job will only be cancelled when GitHub Actions hits its job-level timeout (6 h by default). Pass --timeout <minutes> to bound the failure window.

  2. brew install docker docker-compose colima re-installs from scratch on every CI run. Each run on a fresh macOS runner pays the full download+install cost. Consider caching via actions/cache on the Homebrew cache directory (~/Library/Caches/Homebrew) to speed up subsequent runs.
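
A sketch of what the suggested cache step might look like (the step name and cache key below are illustrative assumptions, not the reviewed workflow's contents):

```yaml
- name: Cache Homebrew downloads
  if: runner.os == 'macOS'
  uses: actions/cache@v4
  with:
    path: ~/Library/Caches/Homebrew
    key: brew-${{ runner.os }}-docker-colima
```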

🛠️ Suggested hardening for the colima start line
-          colima start --memory 4 --cpu 2
+          colima start --memory 4 --cpu 2 --timeout 10
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 25 - 26, The workflow currently runs
"colima start --memory 4 --cpu 2" with no timeout and always runs "brew install
docker docker-compose colima"; update the CI step to add an explicit timeout
flag to the colima invocation (e.g. "--timeout <minutes>") so provisioning
cannot hang indefinitely, and add an actions/cache entry for Homebrew cache
(~/Library/Caches/Homebrew) so the "brew install docker docker-compose colima"
step uses cached bottles between runs to avoid re-downloading and speed CI.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/docker_client.zig`:
- Around line 184-199: The function readResponseBody currently prefers
meta.content_length over meta.chunked which violates HTTP/1.1; update
readResponseBody so that if meta.chunked is true it calls readChunkedBody
immediately (ignoring content_length), otherwise fall back to content_length
handling or readUntilClose; adjust logic around meta.content_length,
meta.chunked, and calls to readChunkedBody/readUntilClose in the
readResponseBody function to ensure chunked decoding wins.
- Around line 1189-1216: The tests for sendHttpRequest read the pipe only once
which can return a partial read; update both tests ("sendHttpRequest: writes a
valid GET request" and "sendHttpRequest: writes Content-Type and Content-Length
for POST with body") to read from the read-end until EOF instead of a single
std.posix.read call: add a small helper (e.g., readAllFromFd or readUntilEof)
that repeatedly reads into a buffer/appends into a slice until read returns 0,
then use that helper to produce the complete req slice checked by the
assertions; reference the existing tests/sendHttpRequest usage to locate where
to replace the single-read logic.
- Around line 209-214: readChunkedBody currently treats parse failures as a
normal loop break and returns partial data; instead detect invalid chunk-size
lines and return an explicit error so callers can detect protocol corruption. In
the loop that uses reader.readLine(&line_buf) to produce size_line and computes
semi, trimmed and chunk_size (the std.fmt.parseInt call), replace the "catch
break" behavior with returning a descriptive error (e.g., error.InvalidChunkSize
or a new error variant) that includes context (the offending size_line or
trimmed slice); likewise if std.mem.indexOfScalar or trimming yields unexpected
results, return an error rather than silently breaking so the caller of
readChunkedBody can fail-fast on malformed chunked bodies.

In `@src/wait.zig`:
- Line 221: The current assignment .method = if (std.mem.eql(u8, s.method,
"POST")) .POST else .GET silently treats any non-"POST" HttpStrategy.method
string as GET; update the handling in src/wait.zig by either (A) validating
s.method against known verbs (e.g., "GET","POST","PUT","DELETE","PATCH","HEAD",
etc.) and returning an error for unrecognized values instead of defaulting to
.GET, or (B) change HttpStrategy.method to type std.http.Method and adjust
callers so you can assign the enum directly (removing the string comparisons);
locate references to HttpStrategy.method and the expression using
std.mem.eql(u8, s.method, "POST") to implement the chosen fix and ensure callers
are updated to propagate errors or use std.http.Method.

---

Nitpick comments:
In @.github/workflows/ci.yml:
- Around line 25-26: The workflow currently runs "colima start --memory 4 --cpu
2" with no timeout and always runs "brew install docker docker-compose colima";
update the CI step to add an explicit timeout flag to the colima invocation
(e.g. "--timeout <minutes>") so provisioning cannot hang indefinitely, and add
an actions/cache entry for Homebrew cache (~/Library/Caches/Homebrew) so the
"brew install docker docker-compose colima" step uses cached bottles between
runs to avoid re-downloading and speed CI.

In `@src/wait.zig`:
- Around line 215-217: The std.http.Client instance is being allocated inside
the polling loop in waitHttp (var client = .{ .allocator = alloc }; with defer
client.deinit()), causing the connection pool to be recreated each iteration;
move the client declaration and initialization out of the while loop into the
outer scope of waitHttp so the pool is created once, reuse that client for all
retries, and call client.deinit() (or otherwise clean it up) after the loop
completes rather than deferring inside the loop; update references to the client
variable in the loop body accordingly.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a2c27d6 and 89adad0.

📒 Files selected for processing (14)
  • .github/workflows/ci.yml
  • AGENTS.md
  • ARCHITECTURE.md
  • IMPLEMENTATION_GUIDE.md
  • PROJECT_SUMMARY.md
  • QUICKSTART.md
  • README.md
  • build.zig
  • build.zig.zon
  • examples/basic.zig
  • src/docker_client.zig
  • src/integration_test.zig
  • src/root.zig
  • src/wait.zig
💤 Files with no reviewable changes (4)
  • build.zig
  • QUICKSTART.md
  • README.md
  • src/integration_test.zig

Comment on lines +1189 to +1216
test "sendHttpRequest: writes a valid GET request" {
const fds = try std.posix.pipe();
defer std.posix.close(fds[0]);
const write_stream = std.net.Stream{ .handle = fds[1] };
try sendHttpRequest(write_stream, .get, "/v1.46/_ping", null, null);
std.posix.close(fds[1]);
var buf: [4096]u8 = undefined;
const n = try std.posix.read(fds[0], &buf);
const req = buf[0..n];
try std.testing.expect(std.mem.startsWith(u8, req, "GET /v1.46/_ping HTTP/1.1\r\n"));
try std.testing.expect(std.mem.indexOf(u8, req, "Host: localhost\r\n") != null);
try std.testing.expect(std.mem.indexOf(u8, req, "Connection: close\r\n") != null);
}

test "sendHttpRequest: writes Content-Type and Content-Length for POST with body" {
const fds = try std.posix.pipe();
defer std.posix.close(fds[0]);
const write_stream = std.net.Stream{ .handle = fds[1] };
const body = "{\"Image\":\"alpine\"}";
try sendHttpRequest(write_stream, .post, "/v1.46/containers/create", "application/json", body);
std.posix.close(fds[1]);
var buf: [4096]u8 = undefined;
const n = try std.posix.read(fds[0], &buf);
const req = buf[0..n];
try std.testing.expect(std.mem.startsWith(u8, req, "POST /v1.46/containers/create HTTP/1.1\r\n"));
try std.testing.expect(std.mem.indexOf(u8, req, "Content-Type: application/json\r\n") != null);
try std.testing.expect(std.mem.indexOf(u8, req, "Content-Length: 18\r\n") != null);
try std.testing.expect(std.mem.endsWith(u8, req, body));
}

⚠️ Potential issue | 🟠 Major

CI failure: pipe reads can be partial—read until EOF in sendHttpRequest tests.

The failing test reads the pipe once; a short read can miss bytes and cause assertions to inspect incomplete data. This aligns with the CI failure reported for docker_client.test.sendHttpRequest. Reading until EOF will make the test deterministic.
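The read-until-EOF pattern the fix proposes is language-neutral; here is a minimal sketch of the same idea in Python (function and buffer names are illustrative, not part of the project):

```python
import os

def read_all_from_fd(fd: int, bufsize: int = 4096) -> bytes:
    # Accumulate reads until os.read returns b"" (EOF), so a short
    # read from the pipe can never truncate the result.
    chunks = []
    while True:
        chunk = os.read(fd, bufsize)
        if not chunk:
            break
        chunks.append(chunk)
    return b"".join(chunks)

# Demo: write a request, close the write end, then drain the read end.
r, w = os.pipe()
os.write(w, b"GET /v1.46/_ping HTTP/1.1\r\nHost: localhost\r\n\r\n")
os.close(w)
req = read_all_from_fd(r)
os.close(r)
assert req.endswith(b"\r\n\r\n")
```

Closing the write end before draining is essential; otherwise the loop never sees EOF and blocks.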

🧪 Suggested fix (add helper + use it in both sendHttpRequest tests)
+fn readAllFromFd(fd: std.posix.fd_t, allocator: std.mem.Allocator) ![]u8 {
+    var out: std.ArrayList(u8) = .empty;
+    errdefer out.deinit(allocator);
+    var buf: [4096]u8 = undefined;
+    while (true) {
+        const n = try std.posix.read(fd, &buf);
+        if (n == 0) break;
+        try out.appendSlice(allocator, buf[0..n]);
+    }
+    return out.toOwnedSlice(allocator);
+}
+
 test "sendHttpRequest: writes a valid GET request" {
     const fds = try std.posix.pipe();
     defer std.posix.close(fds[0]);
     const write_stream = std.net.Stream{ .handle = fds[1] };
     try sendHttpRequest(write_stream, .get, "/v1.46/_ping", null, null);
     std.posix.close(fds[1]);
-    var buf: [4096]u8 = undefined;
-    const n = try std.posix.read(fds[0], &buf);
-    const req = buf[0..n];
+    const req = try readAllFromFd(fds[0], std.testing.allocator);
+    defer std.testing.allocator.free(req);
     try std.testing.expect(std.mem.startsWith(u8, req, "GET /v1.46/_ping HTTP/1.1\r\n"));
     try std.testing.expect(std.mem.indexOf(u8, req, "Host: localhost\r\n") != null);
     try std.testing.expect(std.mem.indexOf(u8, req, "Connection: close\r\n") != null);
 }
 
 test "sendHttpRequest: writes Content-Type and Content-Length for POST with body" {
     const fds = try std.posix.pipe();
     defer std.posix.close(fds[0]);
     const write_stream = std.net.Stream{ .handle = fds[1] };
     const body = "{\"Image\":\"alpine\"}";
     try sendHttpRequest(write_stream, .post, "/v1.46/containers/create", "application/json", body);
     std.posix.close(fds[1]);
-    var buf: [4096]u8 = undefined;
-    const n = try std.posix.read(fds[0], &buf);
-    const req = buf[0..n];
+    const req = try readAllFromFd(fds[0], std.testing.allocator);
+    defer std.testing.allocator.free(req);
     try std.testing.expect(std.mem.startsWith(u8, req, "POST /v1.46/containers/create HTTP/1.1\r\n"));
     try std.testing.expect(std.mem.indexOf(u8, req, "Content-Type: application/json\r\n") != null);
     try std.testing.expect(std.mem.indexOf(u8, req, "Content-Length: 18\r\n") != null);
     try std.testing.expect(std.mem.endsWith(u8, req, body));
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/docker_client.zig` around lines 1189 - 1216, The tests for
sendHttpRequest read the pipe only once which can return a partial read; update
both tests ("sendHttpRequest: writes a valid GET request" and "sendHttpRequest:
writes Content-Type and Content-Length for POST with body") to read from the
read-end until EOF instead of a single std.posix.read call: add a small helper
(e.g., readAllFromFd or readUntilEof) that repeatedly reads into a
buffer/appends into a slice until read returns 0, then use that helper to
produce the complete req slice checked by the assertions; reference the existing
tests/sendHttpRequest usage to locate where to replace the single-read logic.

@coderabbitai coderabbitai bot left a comment

♻️ Duplicate comments (1)
src/wait.zig (1)

221-221: ⚠️ Potential issue | 🟠 Major

Non-POST methods are still silently coerced to GET.

Line 221 keeps the same behavior: any method other than "POST" is sent as GET, which breaks valid values like "HEAD"/"PUT"/"DELETE".
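The same fail-fast mapping can be sketched in Python for illustration (the allowed-method set and the `ValueError` are assumptions, not the library's API):

```python
ALLOWED_METHODS = {"GET", "POST", "HEAD", "PUT", "DELETE", "PATCH", "OPTIONS"}

def parse_method(method: str) -> str:
    # Reject unknown strings instead of silently coercing them to GET.
    m = method.upper()
    if m not in ALLOWED_METHODS:
        raise ValueError(f"invalid HTTP method: {method!r}")
    return m

assert parse_method("HEAD") == "HEAD"  # no longer coerced to GET
```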

🐛 Proposed fix
 fn waitHttp(s: HttpStrategy, target: StrategyTarget, alloc: std.mem.Allocator) !void {
@@
     const scheme = if (s.use_tls) "https" else "http";
     const url = try std.fmt.allocPrint(alloc, "{s}://{s}:{d}{s}", .{ scheme, host, mapped_port, s.path });
     defer alloc.free(url);
+    const method = std.meta.stringToEnum(std.http.Method, s.method) orelse
+        return error.InvalidHttpMethod;
@@
         const result = client.fetch(.{
             .location = .{ .url = url },
-            .method = if (std.mem.eql(u8, s.method, "POST")) .POST else .GET,
+            .method = method,
         }) catch {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/wait.zig` at line 221, The current assignment for .method coerces any
non-"POST" value to GET, losing valid methods like HEAD/PUT/DELETE; update the
mapping logic that assigns .method (using the s.method string) to explicitly
handle all expected HTTP verbs (e.g.,
"GET","POST","HEAD","PUT","DELETE","PATCH","OPTIONS", etc.) or convert the input
string into the library's HTTP method enum instead of defaulting to GET; locate
the code that reads s.method and change the conditional to a proper
switch/lookup that preserves the provided method (or uses a generic/custom enum
variant) so non-POST methods are not silently turned into GET.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 89adad0 and ee5865c.

📒 Files selected for processing (14)
  • .github/workflows/ci.yml
  • AGENTS.md
  • ARCHITECTURE.md
  • IMPLEMENTATION_GUIDE.md
  • PROJECT_SUMMARY.md
  • QUICKSTART.md
  • README.md
  • build.zig
  • build.zig.zon
  • examples/basic.zig
  • src/docker_client.zig
  • src/integration_test.zig
  • src/root.zig
  • src/wait.zig
💤 Files with no reviewable changes (4)
  • src/integration_test.zig
  • QUICKSTART.md
  • README.md
  • build.zig

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (3)
src/docker_client.zig (3)

185-196: ⚠️ Potential issue | 🟠 Major

Prioritize chunked decoding over Content-Length.

readResponseBody still checks content_length first. If both headers are present, chunked must win.

Suggested fix
 fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
+    if (meta.chunked) {
+        return readChunkedBody(reader, allocator);
+    }
     if (meta.content_length) |cl| {
         if (cl == 0) return allocator.dupe(u8, "");
         const body_buf = try allocator.alloc(u8, cl);
         errdefer allocator.free(body_buf);
         try reader.readExact(body_buf);
         return body_buf;
     }
-
-    if (meta.chunked) {
-        return readChunkedBody(reader, allocator);
-    }
 
     // No Content-Length and not chunked — read until connection close.
     return readUntilClose(reader, allocator);
 }
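The RFC 7230 §3.3.3 ordering is straightforward to demonstrate outside Zig; a simplified Python sketch of a body reader that lets chunked framing win (names and stream handling are illustrative):

```python
import io

def read_chunked_body(stream) -> bytes:
    body = b""
    while True:
        size_line = stream.readline()
        size = int(size_line.split(b";")[0].strip(), 16)  # chunk-ext ignored
        if size == 0:
            stream.readline()  # CRLF after the terminating 0-size chunk
            return body
        body += stream.read(size)
        stream.readline()  # CRLF terminating the chunk data

def read_response_body(stream, content_length, chunked) -> bytes:
    # RFC 7230 §3.3.3: Transfer-Encoding takes precedence, so check
    # chunked first even when a Content-Length header is also present.
    if chunked:
        return read_chunked_body(stream)
    if content_length is not None:
        return stream.read(content_length)
    return stream.read()  # neither header: read until connection close

raw = io.BytesIO(b"5\r\nhello\r\n0\r\n\r\n")
# A (bogus) Content-Length is ignored because the body is chunked.
assert read_response_body(raw, content_length=3, chunked=True) == b"hello"
```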
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/docker_client.zig` around lines 185 - 196, The function readResponseBody
currently checks meta.content_length before meta.chunked; change the logic to
prefer chunked encoding when meta.chunked is true even if content_length is
present by checking meta.chunked first (call readChunkedBody when meta.chunked)
and only falling back to the content_length branch afterwards; update the
control flow in readResponseBody to call readChunkedBody when meta.chunked and
retain the existing content_length handling (including allocator behavior) as
the fallback.

1189-1216: ⚠️ Potential issue | 🟡 Minor

sendHttpRequest tests still use single-shot pipe reads (flaky).

Lines [1196] and [1211] read only once from the pipe; short reads can truncate req and make assertions nondeterministic.

#!/bin/bash
set -euo pipefail

# Verify both sendHttpRequest tests still use one-shot pipe reads.
rg -n -C3 'test "sendHttpRequest:' src/docker_client.zig
rg -n -C2 'const n = try std.posix.read\(fds\[0\], &buf\);' src/docker_client.zig
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/docker_client.zig` around lines 1189 - 1216, The tests for
sendHttpRequest use a single one-shot std.posix.read into buf (variables buf, n,
req) which can produce short reads and flaky assertions; change both tests to
repeatedly read in a loop until read returns 0 (EOF) or an error, appending each
chunk into a dynamic buffer (e.g., std.ArrayList(u8) or a growable slice) and
only then form req from the accumulated bytes before making assertions; update
the read logic in the tests that call sendHttpRequest (the blocks that currently
do "const n = try std.posix.read(fds[0], &buf);" and then "const req =
buf[0..n];") to this looped-read-and-accumulate approach so the assertions
always see the full request.

210-214: ⚠️ Potential issue | 🟠 Major

Do not treat invalid chunk-size lines as normal EOF.

On malformed chunk metadata, the function currently breaks and returns partial data. This should fail fast with an explicit protocol error.

Suggested fix
     while (true) {
-        const size_line = try reader.readLine(&line_buf) orelse break;
+        const size_line = try reader.readLine(&line_buf) orelse return error.InvalidResponse;
         const semi = std.mem.indexOfScalar(u8, size_line, ';') orelse size_line.len;
         const trimmed = std.mem.trim(u8, size_line[0..semi], " ");
-        const chunk_size = std.fmt.parseInt(usize, trimmed, 16) catch break;
+        const chunk_size = std.fmt.parseInt(usize, trimmed, 16) catch return error.InvalidResponse;

(Downstream: the test around Lines [1146-1154] should then assert an error instead of partial success.)
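The strict-parsing idea, sketched in Python (the `ProtocolError` name is a hypothetical stand-in for whatever error the project's error set would use):

```python
class ProtocolError(Exception):
    pass

def parse_chunk_size(size_line: bytes) -> int:
    # Malformed hex in a chunk-size line is a protocol violation,
    # never a normal end of body.
    field = size_line.split(b";")[0].strip()
    try:
        return int(field, 16)
    except ValueError:
        raise ProtocolError(f"invalid chunk-size line: {size_line!r}") from None

assert parse_chunk_size(b"1a;name=value\r\n") == 26
```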

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/docker_client.zig` around lines 210 - 214, The code currently treats
malformed or prematurely-terminated chunk-size lines as normal EOF by using
"orelse break" and "catch break", returning partial data; change this to fail
fast with an explicit protocol error: when reader.readLine(&line_buf) yields
none, when std.mem.indexOfScalar/trim results are invalid, or when
std.fmt.parseInt(usize, trimmed, 16) fails, return a specific error (e.g.,
ProtocolError or UnexpectedChunkSize) instead of breaking; update the logic
around size_line, semi, trimmed, and chunk_size to propagate that error so
callers see a clear protocol failure rather than partial success.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/docker_client.zig`:
- Around line 109-114: The drain function currently swallows all read errors
(using "catch return") so imagePull treats a broken stream as success; change
HttpReader.drain to return an error outcome (e.g., error!void or any suitable
Error type) and propagate read errors instead of suppressing them, update the
read loop in drain (remove "catch return" and return the caught error) and
adjust callers (notably imagePull and the other drain occurrence around the
later block) to handle/propagate the error so transport failures surface to the
caller.

In `@src/root.zig`:
- Around line 87-92: The code currently accepts any DOCKER_HOST value and
assigns it to socket, which breaks connectUnixSocket when DOCKER_HOST uses
non-unix schemes; update the getenv handling (std.posix.getenv and
std.mem.startsWith usage) to explicitly allow only values that start with
"unix://" and strip that prefix into socket, and otherwise return or log a clear
error and abort (reject schemes like "tcp://" or "ssh://"); ensure the variable
socket is only set for a valid filesystem path and add a guard before calling
connectUnixSocket to avoid passing URIs.
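The scheme guard described above, sketched in Python (the function name and the default socket path are assumptions for illustration):

```python
def socket_path_from_docker_host(value: str) -> str:
    # Accept only unix:// values; reject tcp://, ssh://, etc. rather
    # than handing a URI to a unix-socket connect call.
    default = "/var/run/docker.sock"
    if not value:
        return default
    if value.startswith("unix://"):
        return value[len("unix://"):]
    raise ValueError(f"unsupported DOCKER_HOST scheme: {value!r}")

assert socket_path_from_docker_host("unix:///var/run/docker.sock") == "/var/run/docker.sock"
```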


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 8ad204a3-9140-4435-bafa-fc6fc8871d12

📥 Commits

Reviewing files that changed from the base of the PR and between ee5865c and 04f599e.

📒 Files selected for processing (2)
  • src/docker_client.zig
  • src/root.zig

dragosv and others added 3 commits March 5, 2026 08:17
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

Copilot AI commented Mar 5, 2026

@dragosv I've opened a new pull request, #4, to work on those changes. Once the pull request is ready, I'll request review from you.

dragosv and others added 2 commits March 5, 2026 08:18
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

Copilot AI commented Mar 5, 2026

@dragosv I've opened a new pull request, #5, to work on those changes. Once the pull request is ready, I'll request review from you.


Copilot AI commented Mar 5, 2026

@dragosv I've opened a new pull request, #6, to work on those changes. Once the pull request is ready, I'll request review from you.

dragosv and others added 2 commits March 5, 2026 08:19
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
dragosv and others added 9 commits March 5, 2026 08:20
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
…7230 §3.3.3

Co-authored-by: dragosv <422243+dragosv@users.noreply.github.com>
Co-authored-by: dragosv <422243+dragosv@users.noreply.github.com>
Co-authored-by: dragosv <422243+dragosv@users.noreply.github.com>
Parse Docker pull stream for embedded errors in imagePull
Replace fixed header buffer with dynamic ArrayList in sendHttpRequest
Fix RFC 7230 §3.3.3 violation: Transfer-Encoding takes precedence over Content-Length
@dragosv
Owner Author

dragosv commented Mar 5, 2026

@copilot remove line 206 from docker_client.zig, line that fails the build

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (2)
src/docker_client.zig (1)

1238-1255: ⚠️ Potential issue | 🟠 Major

Pipe reads in sendHttpRequest tests are still single-shot and flaky.

A single std.posix.read may return partial request bytes, so these assertions can intermittently inspect truncated data. This was already flagged earlier and remains unresolved.

🔧 Proposed fix
+fn readAllFromFd(fd: std.posix.fd_t, allocator: std.mem.Allocator) ![]u8 {
+    var out: std.ArrayList(u8) = .empty;
+    errdefer out.deinit(allocator);
+    var buf: [4096]u8 = undefined;
+    while (true) {
+        const n = try std.posix.read(fd, &buf);
+        if (n == 0) break;
+        try out.appendSlice(allocator, buf[0..n]);
+    }
+    return out.toOwnedSlice(allocator);
+}
@@
-    var buf: [4096]u8 = undefined;
-    const n = try std.posix.read(fds[0], &buf);
-    const req = buf[0..n];
+    const req = try readAllFromFd(fds[0], std.testing.allocator);
+    defer std.testing.allocator.free(req);
@@
-    var buf: [4096]u8 = undefined;
-    const n = try std.posix.read(fds[0], &buf);
-    const req = buf[0..n];
+    const req = try readAllFromFd(fds[0], std.testing.allocator);
+    defer std.testing.allocator.free(req);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/docker_client.zig` around lines 1238 - 1255, The tests for
sendHttpRequest use a single std.posix.read which can return partial data;
update the test logic in both tests (the "sendHttpRequest: writes Content-Type
and Content-Length for POST with body" and the earlier GET/_ping test) to
repeatedly read from fds[0] into a growing buffer until read returns 0 (EOF) or
an error, accumulating bytes before computing req and running std.testing.expect
checks; ensure the write end (fds[1]) is closed before finishing reads so the
loop will see EOF, and reference sendHttpRequest, the test names, and the
fds/posix.read usage when making the change.
src/wait.zig (1)

221-225: ⚠️ Potential issue | 🟡 Minor

Unknown/unsupported HTTP methods still silently fall back to GET.

This still converts invalid method strings into a different verb instead of failing fast, which can hide configuration mistakes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/wait.zig` around lines 221 - 225, The current mapping of s.method to the
HTTP verb silently falls back to .GET when the method string is unrecognized;
change the mapping so unknown methods do not default but instead return or
propagate an explicit error (e.g., error.InvalidMethod) from the function that
builds the request. Concretely, replace the final "else .GET" in the .method
assignment for s.method with an "else return error.InvalidMethod" (or set to a
sentinel like .INVALID and immediately return error.InvalidMethod) and update
the caller to handle/propagate that error so invalid method strings fail fast
instead of becoming GET.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6c7d9f3f-d285-4661-a7a9-202f70a9a9f8

📥 Commits

Reviewing files that changed from the base of the PR and between cf42c82 and 4e22390.

📒 Files selected for processing (3)
  • examples/basic.zig
  • src/docker_client.zig
  • src/wait.zig

Comment on lines +130 to +132
var hdr_buf = std.ArrayList(u8).init(allocator);
defer hdr_buf.deinit();
const w = hdr_buf.writer();

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Scripts executed (repository: dragosv/testcontainers-zig):

# First, let's look at the specific file and lines in question
cd /tmp && find . -name "src" -type d 2>/dev/null | head -5

# Check if we're in the right repo structure
ls -la | head -20

# Look for the docker_client.zig file
find . -name "docker_client.zig" -type f 2>/dev/null

# Let's examine the actual file content around lines 130-132
cat -n src/docker_client.zig | sed -n '120,145p'

# Check the broader context of sendHttpRequest function
grep -n "fn sendHttpRequest" src/docker_client.zig

# Look for other ArrayList usage patterns in the file
rg "ArrayList" src/docker_client.zig -n -C 2

# Check for .zig.zon or build.zig to understand Zig version
cat build.zig.zon 2>/dev/null || cat build.zig 2>/dev/null | head -30

# Check how writer() is called on ArrayList instances
rg "\.writer\(" src/docker_client.zig -n -B 2 -A 1

# Look at the full usage of hdr_buf in sendHttpRequest
cat -n src/docker_client.zig | sed -n '130,150p'


Update ArrayList initialization in sendHttpRequest to match Zig 0.15.2 API used elsewhere in the file.

The current code at lines 130-132 uses an outdated initialization pattern. All other ArrayList instances in the file (lines 210, 245, 547, 974) use the .empty pattern with allocator passed to deinit() and writer(). This inconsistency breaks compilation with Zig 0.15.2.

🔧 Proposed fix
-    var hdr_buf = std.ArrayList(u8).init(allocator);
-    defer hdr_buf.deinit();
-    const w = hdr_buf.writer();
+    var hdr_buf: std.ArrayList(u8) = .empty;
+    defer hdr_buf.deinit(allocator);
+    var w = hdr_buf.writer(allocator);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-    var hdr_buf = std.ArrayList(u8).init(allocator);
-    defer hdr_buf.deinit();
-    const w = hdr_buf.writer();
+    var hdr_buf: std.ArrayList(u8) = .empty;
+    defer hdr_buf.deinit(allocator);
+    var w = hdr_buf.writer(allocator);
🧰 Tools
🪛 GitHub Actions: CI/CD

[error] 130-130: Zig compile error: struct 'array_list.Aligned(u8,null)' has no member named 'init'. This occurs during the 'zig build' step.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/docker_client.zig` around lines 130 - 132, In sendHttpRequest, replace
the outdated ArrayList init with the Zig 0.15.2 pattern: declare hdr_buf as
std.ArrayList(u8).empty and ensure you call hdr_buf.deinit(allocator) and use
hdr_buf.writer(allocator) (matching other instances at lines using .empty);
update the hdr_buf variable, its deinit call, and the writer() invocation
accordingly so the ArrayList follows the .empty +
deinit(allocator)/writer(allocator) API (symbols: hdr_buf, sendHttpRequest,
hdr_buf.deinit, hdr_buf.writer).

Comment on lines +386 to +401
// We open a dedicated connection and drain the stream to wait for
// completion without buffering the entire (potentially huge) body.
const stream = try std.net.connectUnixSocket(self.socket_path);
defer stream.close();

try sendHttpRequest(self.allocator, stream, .post, api_path, null, null);

var reader: HttpReader = .{ .stream = stream };
const meta = try parseResponseHead(&reader);
if (meta.status_code != 200) return DockerClientError.ApiError;

// Docker returns 200 OK even for pull failures; errors are embedded as
// {"error":"..."} JSON objects in the newline-delimited progress stream.
// Read the full stream and scan each line for an error field.
const pull_body = try readResponseBody(&reader, meta, self.allocator);
defer self.allocator.free(pull_body);

⚠️ Potential issue | 🟠 Major

imagePull still buffers the entire progress stream despite comment saying otherwise.

This path allocates the full pull stream (readResponseBody) before scanning for errors, which can become very large and cause avoidable memory pressure on big image pulls.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/docker_client.zig` around lines 386 - 401, The image pull code currently
calls readResponseBody and stores pull_body, buffering the entire progress
stream; instead, modify imagePull to consume the HttpReader stream incrementally
after parseResponseHead (i.e., after sendHttpRequest, parseResponseHead, and
meta handling) by reading from the reader/stream in chunks or lines (using
HttpReader/stream APIs) and scanning each newline-delimited JSON object for an
"error" field, freeing per-line allocations and returning immediately on the
first found error; remove the readResponseBody allocation/pull_body usage and
ensure stream.close/defer semantics remain intact so large pulls don't allocate
the whole body.
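The incremental scan suggested above can be sketched in Python (names, the error type, and the tolerance for non-JSON progress lines are illustrative assumptions):

```python
import io
import json

class ImagePullFailed(Exception):
    pass

def check_pull_stream(stream) -> None:
    # Scan the newline-delimited JSON progress stream line by line,
    # raising on the first {"error": ...} object instead of buffering
    # the entire (potentially huge) body first.
    for line in stream:
        line = line.strip()
        if not line:
            continue
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue  # tolerate partial or non-JSON progress lines
        if isinstance(obj, dict) and "error" in obj:
            raise ImagePullFailed(obj["error"])

ok = io.StringIO('{"status":"Pulling fs layer"}\n{"status":"Download complete"}\n')
check_pull_stream(ok)  # no error objects, returns normally
```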

dragosv and others added 3 commits March 5, 2026 19:15
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@dragosv dragosv merged commit ef08c78 into main Mar 7, 2026
2 checks passed