Problem
When an agent session injects credentials as environment variables (e.g., `GH_TOKEN`, `AWS_ACCESS_KEY_ID`) into a child process, the agent has unrestricted access to the raw token values. Tool-level policy enforcement (e.g., blocking specific tool calls) can be trivially bypassed because:
- The agent can read env vars directly (`echo $GH_TOKEN`, `printenv`, `env`)
- CLIs auto-consume credential env vars (`gh`, `git`, `aws` all read tokens from env automatically)
- The agent can make arbitrary API calls via Bash (`curl -H "Authorization: Bearer $TOKEN" ...`)
- Obfuscation is trivial (base64, Python one-liners, variable indirection, writing a script file then executing it)
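The bypass routes above all reduce to one fact: a child process inherits its parent's environment verbatim. A minimal demonstration with a dummy value (no real secret involved):

```python
import os
import subprocess
import sys

# Stand-in for a credential injected by the agent session.
os.environ["GH_TOKEN"] = "dummy-token"

# Any child process -- a CLI, a Python one-liner, a script file -- sees
# the raw value with no tool-level policy in the way.
child = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['GH_TOKEN'])"],
    capture_output=True, text=True,
)
leaked = child.stdout.strip()
print(leaked)  # the child reads the raw token directly
```

This is why blocking specific tool calls is advisory: the same read is reachable through any process-spawning tool.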
The fundamental tension: env var injection is the simplest, most universal credential delivery mechanism — but it gives the agent process the same credential access as the user, making tool-level controls advisory rather than enforceable.
Impact
- Cedar/OPA-style policy on tool calls can catch accidental or unsophisticated misuse, but a capable agent (or a prompt-injected agent) can bypass them trivially
- Short-lived tokens and narrow scoping reduce blast radius but don't prevent misuse during the session
- Audit trails detect misuse after the fact but don't prevent it
Possible Solutions to Evaluate
1. Transparent credential proxy (strongest isolation)
Instead of injecting raw tokens, inject a proxy address:

```
HTTP_PROXY=http://localhost:<session-port>
HTTPS_PROXY=http://localhost:<session-port>
```
The sidecar intercepts outbound HTTP requests, injects auth headers for configured resources, and enforces policy at the network level. The agent never sees the raw token — it just makes unauthenticated requests that the proxy transparently upgrades.
Pros: Agent cannot exfiltrate tokens; policy enforcement at network level is robust
Cons: Complex to implement; breaks non-HTTP protocols; TLS interception requires CA trust; some CLIs don't respect proxy env vars
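The proxy's core behavior, transparently upgrading unauthenticated requests, can be sketched as a header-injection step. The hostname and token values below are placeholders; a real proxy would also handle CONNECT tunneling and TLS:

```python
# Credential table held only by the proxy process -- the agent never sees it.
CREDENTIALS = {"api.github.com": "ghp_example"}  # hypothetical mapping

def upgrade_request(host: str, headers: dict) -> dict:
    """Return request headers with an auth header injected for configured hosts.

    The agent's request arrives unauthenticated; the proxy attaches the
    credential in transit, so the raw token never enters the agent's env.
    """
    out = dict(headers)
    token = CREDENTIALS.get(host)
    if token is not None:
        out["Authorization"] = f"Bearer {token}"
    return out
```

Requests to unconfigured hosts pass through unchanged, which doubles as a cheap allowlist signal for policy enforcement.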
2. Credential-on-demand via sidecar (medium isolation)
Don't inject credentials as env vars at startup. Instead, the agent must request credentials through a local sidecar API (Unix socket), which:
- Returns short-lived, single-use tokens
- Logs every credential request
- Can deny requests based on policy
- Issues tokens with per-request TTLs (seconds, not session-length)
Pros: No long-lived tokens in env; every access is logged and policy-gated
Cons: Agent still receives the raw token in the response; requires CLI/tool integration
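A minimal in-process sketch of the sidecar's grant logic. The policy table, scope names, and token format are assumptions; a real sidecar would serve this over a Unix domain socket:

```python
import time
import uuid

POLICY = {"github:read": True, "aws:write": False}  # assumed policy table
AUDIT_LOG = []

def request_credential(scope: str, ttl_seconds: int = 30) -> dict:
    """Handle one credential request: log it, apply policy, mint a token."""
    AUDIT_LOG.append({"scope": scope, "at": time.time()})  # every access logged
    if not POLICY.get(scope, False):
        return {"granted": False, "reason": "denied by policy"}
    return {
        "granted": True,
        "token": f"tmp-{uuid.uuid4().hex}",      # single-use placeholder token
        "expires_at": time.time() + ttl_seconds,  # seconds, not session-length
    }
```

Note the residual weakness the Cons line names: on a grant, the raw (if short-lived) token still lands in the agent's hands.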
3. Process-level sandboxing (strongest but most restrictive)
Run the agent in a sandboxed environment (container, macOS sandbox profile, seccomp, network namespace) with:
- Outbound network restricted to an allowlist or routed through the credential proxy
- Filesystem access restricted to the project directory
- No access to credential env vars; credentials only flow through the proxy
Pros: Strongest isolation — even a fully compromised agent can't exfiltrate credentials
Cons: Severely limits agent capability; complex setup; platform-specific
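One piece of this that sketches portably is stripping credential env vars before spawning the agent, so credentials can only flow through the proxy. The prefix denylist here is an assumption:

```python
import os

# Assumed prefixes of sensitive env vars; a real deployment would derive
# this from the credential inventory rather than hardcode it.
DENYLIST_PREFIXES = ("GH_", "AWS_", "GITHUB_")

def scrubbed_env() -> dict:
    """Copy of the current environment with credential vars removed,
    suitable for passing as env= when spawning the sandboxed agent."""
    return {
        k: v for k, v in os.environ.items()
        if not k.startswith(DENYLIST_PREFIXES)
    }

# Usage sketch: subprocess.run(["agent", ...], env=scrubbed_env())
```

Scrubbing alone is not sandboxing; it only closes the env-var channel and still needs the network and filesystem restrictions listed above.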
4. Hybrid: policy enforcement + credential proxy for sensitive resources
- Low-sensitivity credentials (e.g., read-only GitHub token): inject via env vars, rely on Cedar policy + audit
- High-sensitivity credentials (e.g., write access, production DB): route through transparent proxy only
Pros: Pragmatic balance of security and usability
Cons: More complex architecture; requires classifying credential sensitivity
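The classification step might be sketched as a lookup that defaults unknown credentials to the strictest path. Credential names and the two-level scheme are illustrative:

```python
# Assumed sensitivity labels; real classification would come from config.
SENSITIVITY = {"github-readonly": "low", "prod-db": "high"}

def delivery_mode(credential_name: str) -> str:
    """Decide how a credential reaches the session:
    'env'   -> injected as env var, guarded by Cedar policy + audit
    'proxy' -> never injected; only usable via the transparent proxy
    Unknown credentials fail closed to the proxy path.
    """
    level = SENSITIVITY.get(credential_name, "high")
    return "env" if level == "low" else "proxy"
```

Failing closed on unclassified credentials keeps the classification burden from becoming a silent escape hatch.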
Questions to Answer
References
- AWS Vault proxy mode: credential isolation via local HTTP proxy
- Docker credential helpers: credential-on-demand pattern
- macOS `sandbox-exec`: process-level network/filesystem restrictions
- Linux seccomp-bpf: syscall filtering for sandboxed processes