Conversation
Enables the agent to spin up multi-GPU pods when experiments need more VRAM. The agent reads program.md, which explains how to use it. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
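As a rough illustration, the scale-up call might look like the sketch below. This is an assumption, not the PR's actual workflow: the pod name, GPU type, count, and image are placeholders, and program.md is the authoritative reference; check `runpodctl create pod --help` for the real flags.

```bash
# Hypothetical scale-up command; the pod name, GPU type, count, and image are
# all placeholders. The agent's real instructions live in program.md.
runpodctl create pod \
  --name exp-scaleup \
  --gpuType "NVIDIA A100 80GB PCIe" \
  --gpuCount 4 \
  --imageName "myorg/autoresearch:latest"
```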
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 2b493b2608
```dockerfile
FROM ${BASE_IMAGE}

# Install runpodctl for pod management (scaling up GPUs)
RUN wget -qO- https://github.com/runpod/runpodctl/releases/latest/download/runpodctl-linux-amd64 > /usr/local/bin/runpodctl && \
```
Pin runpodctl version in image build
Using the releases/latest download URL makes this image non-deterministic: the same Dockerfile can produce different runpodctl binaries on different days, and a new upstream release can silently break scaling workflows when CI or users rebuild the template. Pinning to a specific release asset (and ideally checksum-verifying it) avoids unexpected behavior regressions in production pods.
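A minimal sketch of the pinned, checksum-verified approach the review suggests, assuming the raw-binary asset layout shown in this diff. The version tag and SHA-256 below are placeholders, not real release data; take the real values from the runpodctl releases page.

```dockerfile
# Pin a specific runpodctl release and verify its checksum so rebuilds are
# reproducible. Both ARG values are placeholders: substitute the real tag and
# SHA-256 from https://github.com/runpod/runpodctl/releases.
ARG RUNPODCTL_VERSION=v1.0.0
ARG RUNPODCTL_SHA256=replace_with_real_sha256
RUN wget -qO /usr/local/bin/runpodctl \
      "https://github.com/runpod/runpodctl/releases/download/${RUNPODCTL_VERSION}/runpodctl-linux-amd64" && \
    echo "${RUNPODCTL_SHA256}  /usr/local/bin/runpodctl" | sha256sum -c - && \
    chmod +x /usr/local/bin/runpodctl
```

With the version pinned, a new upstream release can only enter the image through an explicit, reviewable bump of the ARG, and the checksum catches a tampered or corrupted download at build time.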
The release asset changed from a raw binary to a .tar.gz archive. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
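A sketch of what the updated install step could look like once the asset is an archive. The archive's file name and the binary's path inside the tarball are assumptions; check the actual release layout.

```dockerfile
# Download the archived release asset and extract the binary into place.
# The asset name and the binary's location inside the tarball are assumptions;
# verify them against the actual release before relying on this.
RUN wget -qO /tmp/runpodctl.tar.gz \
      https://github.com/runpod/runpodctl/releases/latest/download/runpodctl-linux-amd64.tar.gz && \
    tar -xzf /tmp/runpodctl.tar.gz -C /usr/local/bin runpodctl && \
    chmod +x /usr/local/bin/runpodctl && \
    rm /tmp/runpodctl.tar.gz
```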
Summary
- Installs runpodctl in the autoresearch Docker image so the agent can spin up multi-GPU pods when experiments need more VRAM
- Adds program.md, which explains how the agent should use it

Related: DR-1298
Test plan
- `runpodctl --version` works inside the container (see the smoke-test sketch below)
- `runpodctl` is available at `/usr/local/bin/runpodctl`

🤖 Generated with Claude Code
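A smoke-test sketch for the two checks above; the image tag `myorg/autoresearch:latest` is a placeholder for whatever tag the template builds.

```bash
# Smoke-test the built image; the tag is a placeholder for the real image name.
docker run --rm myorg/autoresearch:latest runpodctl --version
docker run --rm myorg/autoresearch:latest sh -c 'command -v runpodctl'  # expect /usr/local/bin/runpodctl
```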