# Squad Code v1.1.0
Alright, here's the next update. As always, these early releases will mostly be ripping through bugs and adding planned features (or things we decide along the way). This is an exciting one, in my humble opinion, and I have been heavily using Squad Code myself to help IMPROVE Squad Code along the way! The throughline for this release: one canonical event stream now drives four adapter kinds, and the engine loop in `src/engine/loop.ts` received zero behavior changes across all four additions. That was the architectural test the canonical layer was designed to pass.
## Highlights
- **Four adapter kinds, one loop.** `llm-chat` covers DeepSeek, the gpt-4o family, Together, Groq, Fireworks, OpenRouter, and any OpenAI-compatible chat-completions backend. `llm-message` is Anthropic's Messages API with `cache_control` plumbing and thinking blocks. `llm-response` is OpenAI's Responses API for gpt-5.x and o-series with reasoning deltas. `llm-local` covers Ollama and other keyless local servers. Adding a new backend is a JSON catalog row, not a code change.
- **YOLO mode.** The `--yolo` flag and `/yolo` slash command run the agent autonomously with three rails: a cwd sandbox on `Shell`, archive-on-delete (rewrites `rm`/`Remove-Item`/`del`/`unlink` into `mv` into `.archive/<iso-ts>/`), and a mandatory checklist file (`checklist.txt`/`CHECKLIST.md` in cwd; it refuses to start otherwise). Distinct from `--dangerously-skip-permissions`, which only bypasses prompts.
- **Harness fold-in.** Hooks (`PreToolUse`, `PostToolUse`, `PostToolUseFailure`, `SessionStart`, `SessionEnd`, `UserPromptSubmit`), the deferred-schema tool catalog (`ToolSearch`), the `apply-patch` tool, auto-compact context-pressure summarization, oversized-output artifact storage, a per-turn token + cost ledger, OSC-2 tab-title status, and pattern-based permission rules with sensitive defaults, all wired into the engine and surfaced in the REPL.
- **Loop hardening.** A 5-stage JSON repair ladder for malformed tool-call arguments, a consecutive-failure guard that warns at 3 failures and halts at 8 with `REPEATED_TOOL_FAILURES`, and an omission-placeholder detector that refuses `Edit`/`Write` payloads that would write `// rest of methods ...`-style shorthand into a file as literal text.
- **Project-manifest support.** `.crabmeat/index.json` is auto-detected; when present, the new `IndexList` and `IndexFetch` tools give deterministic file discovery instead of glob/grep/read scaffolding. Falls back to the normal tool set when no manifest exists.
- **Configurable user skill directories.** `SQUAD_USER_SKILL_DIRS` is a comma-separated list (with `~` expansion) of directories scanned for user-level skills at startup. Defaults to `~/.squad/skills` only.
- **Permission scope on `[A]`/`[P]` broadened.** `Shell` grants now apply to the arity-prefixed verb (`git *`, `npm install *`, `docker compose up *`). Path-tool grants apply to the file's parent-directory glob, so a single approval covers sibling files. Repo-root files keep literal scope.
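For a sense of what "a JSON catalog row, not a code change" means in practice, a row could look something like this. The field names are assumptions for illustration, not the actual catalog schema; see README.md for the real override format.

```json
{
  "id": "my-backend",
  "kind": "llm-chat",
  "baseUrl": "https://api.example.com/v1",
  "apiKeyEnv": "MY_BACKEND_API_KEY",
  "models": ["my-model-large"]
}
```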
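The archive-on-delete rail in YOLO mode can be sketched roughly like this. A minimal illustration, not the actual `Shell`-tool implementation: the function name and the flag handling (dropping `-rf` and friends wholesale) are assumptions.

```typescript
import * as path from "path";

// Commands treated as deletes, from the release notes.
const DELETE_VERBS = new Set(["rm", "Remove-Item", "del", "unlink"]);

// Rewrite a delete command into a move into .archive/<iso-ts>/.
export function archiveOnDelete(command: string, now: Date = new Date()): string {
  const [verb, ...rest] = command.trim().split(/\s+/);
  if (!DELETE_VERBS.has(verb)) return command; // not a delete: pass through untouched
  const targets = rest.filter((arg) => !arg.startsWith("-")); // keep paths, drop flags
  if (targets.length === 0) return command;
  const stamp = now.toISOString().replace(/[:.]/g, "-"); // filesystem-safe timestamp
  const dir = path.posix.join(".archive", stamp);
  return `mkdir -p ${dir} && mv ${targets.join(" ")} ${dir}/`;
}
```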
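A hypothetical shape for the hook surface named in the harness fold-in, assuming a simple register/fire registry. The real interfaces and payloads in Squad Code are not shown in these notes; everything below is illustrative.

```typescript
// The six hook events listed in the release notes.
type HookEvent =
  | "PreToolUse" | "PostToolUse" | "PostToolUseFailure"
  | "SessionStart" | "SessionEnd" | "UserPromptSubmit";

type HookHandler = (payload: { event: HookEvent; data?: unknown }) => void;

const handlers = new Map<HookEvent, HookHandler[]>();

// Register a handler for one event.
export function onHook(event: HookEvent, fn: HookHandler): void {
  const list = handlers.get(event) ?? [];
  list.push(fn);
  handlers.set(event, list);
}

// Invoke every handler registered for the event.
export function fireHook(event: HookEvent, data?: unknown): void {
  for (const fn of handlers.get(event) ?? []) fn({ event, data });
}
```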
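A toy version of a staged JSON repair ladder for malformed tool-call arguments. The release ships a 5-stage ladder; the specific stages below are illustrative guesses, not the shipped ones.

```typescript
// Each stage transforms the string a bit more aggressively; we retry
// JSON.parse after every stage and return the first success.
const STAGES: Array<(s: string) => string> = [
  (s) => s,                                // stage 1: parse as-is
  (s) => s.replace(/,\s*([}\]])/g, "$1"),  // stage 2: strip trailing commas
  (s) => s.replace(/'/g, '"'),             // stage 3: single quotes -> double quotes
  (s) => {                                 // stage 4: trim prose around the outermost object
    const i = s.indexOf("{");
    const j = s.lastIndexOf("}");
    return i >= 0 && j > i ? s.slice(i, j + 1) : s;
  },
  (s) => {                                 // stage 5: close unbalanced braces
    const open = (s.match(/{/g) ?? []).length;
    const close = (s.match(/}/g) ?? []).length;
    return s + "}".repeat(Math.max(0, open - close));
  },
];

export function repairToolArgs(raw: string): unknown {
  let s = raw;
  for (const stage of STAGES) {
    s = stage(s);
    try {
      return JSON.parse(s);
    } catch {
      // fall through to the next, more aggressive stage
    }
  }
  throw new Error("unrepairable tool-call arguments");
}
```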
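Parsing `SQUAD_USER_SKILL_DIRS` as described (comma-separated, `~` expansion, defaulting to `~/.squad/skills`) might look like this; the helper name is hypothetical.

```typescript
import * as os from "os";
import * as path from "path";

// Resolve the user skill directories scanned at startup.
export function userSkillDirs(env: Record<string, string | undefined> = process.env): string[] {
  const raw = env["SQUAD_USER_SKILL_DIRS"] ?? "~/.squad/skills"; // documented default
  return raw
    .split(",")
    .map((d) => d.trim())
    .filter((d) => d.length > 0)
    .map((d) => (d.startsWith("~") ? path.join(os.homedir(), d.slice(1)) : d));
}
```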
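The broadened `Shell` grant scope can be sketched as mapping a concrete command to its arity-prefixed pattern. The per-verb arity table below is an assumption inferred from the three examples in the bullet; the real table is internal to Squad Code.

```typescript
// How many leading words of the command form the grant prefix.
// Only git, npm, and docker compose are given in the notes; others default to 1.
const VERB_ARITY: Record<string, number> = { git: 1, npm: 2, docker: 3 };

// Turn a concrete command into the pattern a single [A] approval covers.
export function shellGrantPattern(command: string): string {
  const words = command.trim().split(/\s+/);
  const arity = VERB_ARITY[words[0]] ?? 1;
  return `${words.slice(0, arity).join(" ")} *`;
}
```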
## Quickstart
```bash
git clone <this repo>
cd proj_ai_squad_code
npm install
cp .env.example .env
# edit .env to set at least one of DEEPSEEK_API_KEY / ANTHROPIC_API_KEY / OPENAI_API_KEY
npm run build
node dist/bin/squad.js --provider anthropic --model claude-sonnet-4-6
node dist/bin/squad.js --provider openai --model gpt-5.1
node dist/bin/squad.js --provider deepseek -p "summarize src/"
node dist/bin/squad.js --provider ollama --model llama3.2
```

See README.md for the full quickstart, the catalog override format, and the YOLO mode documentation.
## Verified at release time
- DeepSeek chat-completions through `llm-chat` (no regression after the dispatch refactor).
- Anthropic Claude Sonnet 4.6 through `llm-message` against a real `ANTHROPIC_API_KEY`.
- YOLO autonomous run against DeepSeek v4-pro; one full checklist-driven run completed end-to-end.
- Test suite: 395 passing, 2 skipped across 26 test files.
- `tsc --noEmit` clean.
## Pending (post-release smokes, not blocking)
- Real-API smoke against `OPENAI_API_KEY` for the Responses API path with `gpt-5.5`.
- Cross-provider session resume across DeepSeek → Anthropic.
- `/cost` cross-provider math, including Anthropic `cache_read` savings.
## Compatibility
- Node 22+
- Single-user, single-machine. No remote sessions, no telemetry, no MCP, no IDE bridge.
## What's next
v1.2 is the subagent layer: per-agent model selection across all four adapter kinds, depth-1 spawning with four concurrent slots, anguish-meter observability, a Ctrl+K kill picker, and external-CLI subagent backends with per-agent worktree isolation. The point of v1.1 was to make those compositions meaningful: once multiple providers run on the same canonical loop, dispatching the same task across four backends concurrently becomes the real vetting unlock.
- Cid