http(proxy): preserve TLS record ordering in proxy tunnel writes #22417
Conversation
…void BAD_RECORD_MAC

When CONNECT-tunneled HTTPS uploads are large, interleaving direct socket writes with buffered bytes could emit TLS records out-of-order, triggering SSLV3_ALERT_BAD_RECORD_MAC from upstream.

Ensure any pending encrypted bytes are flushed first by enqueueing new bytes when write_buffer is non-empty; let onWritable flush FIFO. This restores correctness for large proxied POST bodies.

Co-authored-by: AI Pair (Cursor)
Walkthrough

Added early buffering in `ProxyTunnel.write` to append newly encoded TLS data to an existing proxy write buffer and defer socket writes to preserve TLS record FIFO order; added an integration test sending large HTTPS bodies (16 MB, 32 MB) through an HTTP proxy to validate ordering and successful transfer.
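For orientation, here is a minimal TypeScript sketch of the ordering rule the walkthrough describes. It is illustrative only: the real change is Zig code in `src/http/ProxyTunnel.zig`, and the `pending` queue and structural socket type below are invented for the example.

```ts
// Illustrative sketch only — the actual fix is Zig code in src/http/ProxyTunnel.zig.
// Invariant: once any encrypted bytes are queued, every later chunk must be
// queued behind them so TLS records reach the socket in FIFO order.
const pending: Uint8Array[] = []; // hypothetical stand-in for write_buffer

function writeEncoded(socket: { write(data: Uint8Array): number }, encoded: Uint8Array): void {
  if (pending.length > 0) {
    // Bytes are already waiting: enqueue instead of writing directly,
    // otherwise this record would jump ahead of the buffered ones.
    pending.push(encoded);
    return;
  }
  const written = socket.write(encoded); // may be a partial write
  if (written < encoded.length) {
    // Keep the unwritten tail; a writable/drain handler flushes it later in order.
    pending.push(encoded.subarray(written));
  }
}
```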
Actionable comments posted: 0
🧹 Nitpick comments (1)
src/http/ProxyTunnel.zig (1)
202-211: Optional: consider "always-buffer then flush" for a single writer

To further simplify ordering guarantees, you could always enqueue and let `onWritable` be the only writer. It trades a copy for simpler invariants and removes the partial-write branch. Example diff for the local block:

```diff
-        const written = switch (proxy.socket) {
-            .ssl => |socket| socket.write(encoded_data),
-            .tcp => |socket| socket.write(encoded_data),
-            .none => 0,
-        };
-        const pending = encoded_data[@intCast(written)..];
-        if (pending.len > 0) {
-            // lets flush when we are truly writable
-            bun.handleOom(proxy.write_buffer.write(pending));
-        }
+        // Single-writer model: always enqueue; `onWritable` flushes in FIFO.
+        bun.handleOom(proxy.write_buffer.write(encoded_data));
```
🔇 Additional comments (2)
src/http/ProxyTunnel.zig (2)
196-201: TLS record ordering fix is correct — LGTM

Early-buffering when `write_buffer` is non-empty prevents interleaving and ensures FIFO emission via `onWritable`. Matches the PR intent and should eliminate BAD_RECORD_MAC/ECONNRESET under load.
196-201: I didn't see any explicit write-interest re-arming in ProxyTunnel.zig; it relies on the HTTPContext/poll logic. The event loop keeps sockets registered for writable until onWritable clears the buffer and calls `socket.readable()`/`writable()` registration internally. Confirmed: HTTPContext's `onWritable` always updates Poll registration based on buffer state (via `Response.markNeedsMore()`/`auto_flusher.registered`), so ProxyTunnel inherits that contract. No action required here.
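As a rough illustration of that flush contract (not Bun's actual implementation; Node's `net.Socket` and its `drain` event are used here purely as an analogy for uSockets' `onWritable`):

```ts
import type { Socket } from "node:net";

// Drain the FIFO queue whenever the socket reports it is writable again.
// Node's write() buffers the whole chunk and returns false under backpressure,
// so "re-arming" here just means waiting for the next 'drain' event.
function flushPending(socket: Socket, pending: Uint8Array[]): void {
  while (pending.length > 0) {
    const chunk = pending.shift()!;
    const keepWriting = socket.write(chunk);
    if (!keepWriting) {
      // Backpressure: resume in FIFO order once the kernel buffer drains.
      socket.once("drain", () => flushPending(socket, pending));
      return;
    }
  }
}
```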
Co-authored-by: Ciro Spaciari <[email protected]>
Actionable comments posted: 0
♻️ Duplicate comments (1)
src/http/ProxyTunnel.zig (1)
198-198: Style: run zig-format (space after `if`).

Prefer `if (proxy.write_buffer.isNotEmpty())` for consistency; `zig fmt` will fix this automatically.

```diff
-        if(proxy.write_buffer.isNotEmpty()) {
+        if (proxy.write_buffer.isNotEmpty()) {
```
🧹 Nitpick comments (1)
src/http/ProxyTunnel.zig (1)
195-213: Add ref/deref like other callbacks to guard lifetime while writing.

Other callbacks (`onOpen`, `onData`, `onHandshake`) `ref()`/`deref()` the tunnel while operating. Mirroring that here avoids any edge case where the tunnel could be detached mid-callback.

```diff
 pub fn write(this: *HTTPClient, encoded_data: []const u8) void {
     if (this.proxy_tunnel) |proxy| {
+        proxy.ref();
+        defer proxy.deref();
         // Preserve TLS record ordering: if any encrypted bytes are buffered,
         // enqueue new bytes and flush them in FIFO via onWritable.
-        if(proxy.write_buffer.isNotEmpty()) {
+        if (proxy.write_buffer.isNotEmpty()) {
             bun.handleOom(proxy.write_buffer.write(encoded_data));
             return;
         }
         const written = switch (proxy.socket) {
             .ssl => |socket| socket.write(encoded_data),
             .tcp => |socket| socket.write(encoded_data),
             .none => 0,
         };
         const pending = encoded_data[@intCast(written)..];
         if (pending.len > 0) {
             // lets flush when we are truly writable
             bun.handleOom(proxy.write_buffer.write(pending));
         }
     }
 }
```
🔇 Additional comments (1)
src/http/ProxyTunnel.zig (1)
196-201: Correct fix: enforce FIFO for TLS records when buffer isn't empty.

Early-enqueueing when `write_buffer` already has data prevents interleaving and out-of-order TLS record emission. Change is tight and scoped to the write path.
LGTM a test would be good
missing a test for this scenario
… with large bodies
test added
Actionable comments posted: 2
🧹 Nitpick comments (1)
test/js/bun/http/proxy.test.ts (1)
278-282: Make the test deterministic instead of commenting about flakiness.

Rather than noting flakiness, ensure consistent backpressure with moderate sizes (as above). If needed, switch the request body to a ReadableStream/AsyncIterable that yields 32–64 KiB chunks to reliably engage the write buffer without huge allocations.
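A possible shape for that suggestion, assuming a hypothetical `chunkedBody` helper (the chunk size and fill byte are arbitrary choices, not values from the existing test):

```ts
// Yields `total` bytes as fixed-size chunks so the client write path sees
// sustained backpressure without allocating one huge buffer up front.
function chunkedBody(total: number, chunkSize = 64 * 1024): ReadableStream<Uint8Array> {
  let sent = 0;
  return new ReadableStream<Uint8Array>({
    pull(controller) {
      if (sent >= total) {
        controller.close();
        return;
      }
      const size = Math.min(chunkSize, total - sent);
      controller.enqueue(new Uint8Array(size).fill(0x61)); // 'a'
      sent += size;
    },
  });
}

// Usage sketch: stream the body through the proxy instead of one big Buffer.
// await fetch(customServer.url, { method: "POST", proxy: httpProxyServer.url, body: chunkedBody(16 * 1024 * 1024) });
```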
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
…izes and correct content type

- Changed test cases to use 16MB and 32MB body sizes instead of 50MB and 100MB.
- Updated content type header from "text/plain" to "application/octet-stream".
- Modified body allocation to use Buffer.alloc for better clarity.
Actionable comments posted: 0
♻️ Duplicate comments (1)
test/js/bun/http/proxy.test.ts (1)
279-286: Adopted earlier guidance: smaller payloads + Buffer.alloc.

16/32 MiB and `Buffer.alloc(..., 0x61)` align with test guidelines and reduce CI flakiness. Nice.
🧹 Nitpick comments (3)
test/js/bun/http/proxy.test.ts (3)
287-294: Avoid `body.buffer` cast; pass the Buffer directly (safer) — optionally rely on CA instead of disabling verification.

`Buffer` is a `Uint8Array`, so `fetch` accepts it directly. Using `.buffer` can send unintended bytes if a slice is used in the future; the cast to `BodyInit` is unnecessary. Also, since `ca` is provided, you can usually drop `rejectUnauthorized: false` to actually validate the cert.

```diff
-    const response = await fetch(customServer.url, {
+    const response = await fetch(customServer.url, {
       method: "POST",
       proxy: httpProxyServer.url,
       headers: { "Content-Type": "application/octet-stream" },
-      body: body.buffer as BodyInit,
+      body: body,
       keepalive: false,
-      tls: { ca: tlsCert.cert, rejectUnauthorized: false },
+      tls: { ca: tlsCert.cert },
     });
```
277-283: Tighten the comment and unit notation.

"flaky" no longer applies here; use MiB for powers-of-two.

```diff
-  // Test with multiple body sizes to ensure TLS record ordering is preserved
-  // also testing several times because it's flaky otherwise
+  // Exercise backpressure/TLS record segmentation with multiple body sizes
@@
-    16 * 1024 * 1024, // 16MB
-    32 * 1024 * 1024, // 32MB
+    16 * 1024 * 1024, // 16 MiB
+    32 * 1024 * 1024, // 32 MiB
```
265-303: Optional: assert that a CONNECT tunnel was actually used.

Clearing the proxy log and asserting at least one CONNECT to the target strengthens the test's intent without coupling to exact counts.

```diff
 test("HTTPS over HTTP proxy preserves TLS record order with large bodies", async () => {
   // Create a custom HTTPS server that returns body size for this test
   using customServer = Bun.serve({
@@
   });
-  // Test with multiple body sizes to ensure TLS record ordering is preserved
+  // Test with multiple body sizes to ensure TLS record ordering is preserved
+  httpProxyServer.log.length = 0;
@@
   for (const size of testCases) {
@@
   }
+
+  // Confirm CONNECT tunnel usage through the HTTP proxy
+  expect(
+    httpProxyServer.log.some(e => e.startsWith(`CONNECT localhost:${customServer.port}`)),
+  ).toBe(true);
 });
```
🔇 Additional comments (1)
test/js/bun/http/proxy.test.ts (1)
265-275: Good: scoped HTTPS echo server with proper cleanup.

Using a dedicated TLS server and `using` for cleanup is correct; returning `byteLength` is a solid oracle for integrity.
f1a9d0a to f177417
… Uint8Array

- Changed body allocation from Buffer.alloc to Uint8Array.fill for improved clarity and performance.
- Ensured compatibility with the fetch API by directly passing the Uint8Array as the body.
Actionable comments posted: 0
🧹 Nitpick comments (2)
test/js/bun/http/proxy.test.ts (2)
270-274: Stream-count the request body to reduce peak memory.

Avoid allocating a 16–32 MiB buffer in one go on the server side; stream and count instead. Same semantics, less memory.

```diff
-    async fetch(req) {
-      // return the body size
-      const buf = await req.arrayBuffer();
-      return new Response(String(buf.byteLength), { status: 200 });
-    },
+    async fetch(req) {
+      // return the body size (streamed)
+      let total = 0;
+      for await (const chunk of req.body!) {
+        total += chunk.byteLength;
+      }
+      return new Response(String(total), { status: 200 });
+    },
```
280-282: Nit: use MiB in comments for 1024-based sizes.

The values are 16/32 MiB (1024²). Minor clarity tweak.

```diff
-    16 * 1024 * 1024, // 16MB
-    32 * 1024 * 1024, // 32MB
+    16 * 1024 * 1024, // 16 MiB
+    32 * 1024 * 1024, // 32 MiB
```
🔇 Additional comments (1)
test/js/bun/http/proxy.test.ts (1)
265-303: Good targeted E2E for TLS-over-CONNECT ordering; sizes and cleanup look solid.

Covers the regression well with 16/32 MiB payloads, uses `port: 0`, and `using` for server cleanup. Nice.
sorry for ping, this bug is a blocker for our project, any status on when it can be merged?
What does this PR do?
Fixes a TLS corruption bug in CONNECT proxy tunneling for HTTPS uploads. When a large request body is sent over a tunneled TLS connection, the client could interleave direct socket writes with previously buffered encrypted bytes, causing TLS records to be emitted out-of-order. Some proxies/upstreams detect this as a MAC mismatch and terminate with SSLV3_ALERT_BAD_RECORD_MAC, which surfaced to users as ECONNRESET ("The socket connection was closed unexpectedly").
This change makes `ProxyTunnel.write` preserve strict FIFO ordering of encrypted bytes: if any bytes are already buffered, we enqueue new bytes instead of calling `socket.write` directly. Flushing continues exclusively via `onWritable`, which writes the buffered stream in order. This eliminates interleaving and restores correctness for large proxied HTTPS POST requests.

How did you verify your code works?
- Local reproduction using a minimal script that POSTs ~20MB over HTTPS via an HTTP proxy (CONNECT): the upload no longer fails with `SSLV3_ALERT_BAD_RECORD_MAC` (a hedged sketch of such a script is shown after this list).
- Verified small bodies and non-proxied HTTPS continue to work.
- Verified no linter issues and no unrelated code changes. The edit is isolated to `src/http/ProxyTunnel.zig` and only affects the write path to maintain TLS record ordering.

Rationale: TLS record boundaries must be preserved; mixing buffered data with immediate writes risks fragmenting or reordering records under backpressure. Enqueuing while buffered guarantees FIFO semantics and avoids record corruption.
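A minimal sketch of a reproduction along these lines is below; the proxy address, target URL, and exact body size are placeholders, not the script actually used:

```ts
// Hypothetical reproduction: POST a ~20 MB body over HTTPS through an HTTP
// CONNECT proxy. Before the fix this kind of upload could fail with
// SSLV3_ALERT_BAD_RECORD_MAC, surfacing as ECONNRESET.
const body = new Uint8Array(20 * 1024 * 1024).fill(0x61);

const response = await fetch("https://example.com/upload", {
  method: "POST",
  proxy: "http://127.0.0.1:8888", // any HTTP proxy that supports CONNECT
  headers: { "Content-Type": "application/octet-stream" },
  body,
});

console.log(response.status, await response.text());
```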
fixes:
#17434
#18490 (false fix in corresponding PR)