
Releases: mr-gl00m/squadcode

Squad Code v1.1.0 - Multi-Provider + YOLO Mode

09 May 09:34
03ec4eb


Squad Code v1.1.0

Alright, here's the next update. As always, these early releases will mostly be ripping through bugs and adding intended features (or things we decide along the way). This is an exciting one, in my humble opinion, and I have been heavily using Squad Code myself to help improve Squad Code along the way! The throughline for this release: one canonical event stream now drives four adapter kinds, and the engine loop in src/engine/loop.ts received zero behavior changes across all four additions. That was the architectural test the canonical layer was designed to pass.

Highlights

  • Four adapter kinds, one loop. llm-chat covers DeepSeek, the gpt-4o family, Together, Groq, Fireworks, OpenRouter, and any OpenAI-compatible chat-completions backend. llm-message is Anthropic's Messages API with cache_control plumbing and thinking blocks. llm-response is OpenAI's Responses API for gpt-5.x and o-series with reasoning deltas. llm-local covers Ollama and other keyless local servers. Adding a new backend is a JSON catalog row, not a code change.
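As a sketch of what "a JSON catalog row, not a code change" could look like, here is a hypothetical row shape. The field names and the Groq entry are illustrative guesses, not the actual catalog schema:

```typescript
// Hypothetical shape of a backend catalog row; field names are
// illustrative, not the real Squad Code schema.
interface CatalogRow {
  id: string;
  kind: "llm-chat" | "llm-message" | "llm-response" | "llm-local";
  baseUrl: string;
  apiKeyEnv?: string; // left unset for keyless local servers like Ollama
}

// Adding an OpenAI-compatible backend would then be one new row:
const groqRow: CatalogRow = {
  id: "groq",
  kind: "llm-chat",
  baseUrl: "https://api.groq.com/openai/v1",
  apiKeyEnv: "GROQ_API_KEY",
};
```

The point of the shape: the loop only ever dispatches on `kind`, so new rows never touch engine code.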

  • YOLO mode. --yolo flag and /yolo slash command run the agent autonomously with three rails: cwd sandbox on Shell, archive-on-delete (rewrites rm / Remove-Item / del / unlink into mv into .archive/<iso-ts>/), and a mandatory checklist file (checklist.txt / CHECKLIST.md in cwd, refuses to start otherwise). Distinct from --dangerously-skip-permissions, which only bypasses prompts.
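The archive-on-delete rail can be pictured with a minimal rewrite function. This is a sketch that assumes a flat "verb, flags, paths" command shape; the real rewriter handles more shell grammar than this:

```typescript
// Minimal sketch of archive-on-delete: destructive verbs become a move
// into .archive/<iso-ts>/. Flag handling here is a simplification.
const DELETE_VERBS = new Set(["rm", "Remove-Item", "del", "unlink"]);

function rewriteDelete(command: string, now: Date = new Date()): string {
  const [verb, ...rest] = command.trim().split(/\s+/);
  if (!DELETE_VERBS.has(verb)) return command; // non-destructive: untouched
  const paths = rest.filter((t) => !t.startsWith("-")); // drop -rf etc.
  const stamp = now.toISOString().replace(/[:.]/g, "-"); // fs-safe timestamp
  return `mkdir -p .archive/${stamp} && mv ${paths.join(" ")} .archive/${stamp}/`;
}
```

So `rm -rf build` never deletes anything; it lands in a timestamped `.archive/` directory you can dig back out of.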

  • Harness fold-in. Hooks (PreToolUse, PostToolUse, PostToolUseFailure, SessionStart, SessionEnd, UserPromptSubmit), the deferred-schema tool catalog (ToolSearch), the apply-patch tool, auto-compact context-pressure summarization, oversized-output artifact storage, per-turn token + cost ledger, OSC-2 tab-title status, and pattern-based permission rules with sensitive defaults. All of it is wired into the engine and surfaced in the REPL.

  • Loop hardening. A 5-stage JSON repair ladder for malformed tool-call arguments, a consecutive-failure guard that warns at 3 failures and halts at 8 with REPEATED_TOOL_FAILURES, and an omission-placeholder detector that refuses Edit / Write payloads which would write // rest of methods ...-style shorthand into a file as literal text.
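For flavor, here is what a staged repair ladder for malformed tool-call arguments might look like. The five stages below are plausible guesses (fence stripping, trailing commas, quote style, brace extraction), not the exact stages Squad Code implements:

```typescript
// Illustrative 5-stage JSON repair ladder: apply fixes cumulatively and
// return the first candidate that parses. Stages are guesses, not the
// actual Squad Code ladder.
function repairJson(raw: string): unknown {
  const stages: Array<(s: string) => string> = [
    (s) => s,                                         // 1. try as-is
    (s) => s.replace(/^```(?:json)?\s*|\s*```$/g, ""),// 2. strip code fences
    (s) => s.replace(/,\s*([}\]])/g, "$1"),           // 3. drop trailing commas
    (s) => s.replace(/'/g, '"'),                      // 4. single -> double quotes
    (s) => s.slice(s.indexOf("{"), s.lastIndexOf("}") + 1), // 5. outermost braces
  ];
  let candidate = raw;
  for (const stage of stages) {
    candidate = stage(candidate);
    try {
      return JSON.parse(candidate);
    } catch {
      // fall through to the next rung of the ladder
    }
  }
  return undefined; // all five stages failed; caller reports a tool failure
}
```

A ladder like this turns "model emitted `{"a": 1,}`" from a hard failure into a recovered tool call, while still failing cleanly on genuinely unparseable output.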

  • Project-manifest support. .crabmeat/index.json is auto-detected; when present, the new IndexList and IndexFetch tools give deterministic file discovery instead of glob/grep/read scaffolding. Falls back to the normal tool set when no manifest exists.

  • Configurable user skill directories. SQUAD_USER_SKILL_DIRS is a comma-separated list (with ~ expansion) of directories scanned for user-level skills at startup. Defaults to ~/.squad/skills only.

  • Permission scope on [A] / [P] broadened. Shell grants now apply to the arity-prefixed verb (git *, npm install *, docker compose up *). Path-tool grants apply to the file's parent-directory glob so a single approval covers sibling files. Repo-root files keep literal scope.
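Reading the three examples above literally, the broadened grants might be computed like this. The per-verb arity table is an illustrative interpretation of "arity-prefixed verb", and `shellGrant` / `pathGrant` are hypothetical names:

```typescript
import path from "node:path";

// Illustrative arity table: how many subcommand words a verb keeps
// before the wildcard (git -> 0 extra, npm -> 1, docker -> 2).
const SUBCOMMAND_ARITY = new Map([["npm", 1], ["docker", 2]]);

function shellGrant(command: string): string {
  const tokens = command.trim().split(/\s+/);
  const keep = 1 + (SUBCOMMAND_ARITY.get(tokens[0]) ?? 0);
  return `${tokens.slice(0, keep).join(" ")} *`;
}

function pathGrant(file: string, repoRoot: string): string {
  const parent = path.dirname(path.resolve(repoRoot, file));
  // Repo-root files keep literal scope; nested files widen to siblings.
  return parent === path.resolve(repoRoot) ? file : `${parent}/*`;
}
```

Under this reading, approving `git status` grants `git *`, approving `npm install lodash` grants `npm install *`, and approving an edit to `src/util/a.ts` covers every sibling in `src/util/`.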

Quickstart

git clone <this repo>
cd proj_ai_squad_code
npm install
cp .env.example .env
# edit .env to set at least one of DEEPSEEK_API_KEY / ANTHROPIC_API_KEY / OPENAI_API_KEY
npm run build

node dist/bin/squad.js --provider anthropic --model claude-sonnet-4-6
node dist/bin/squad.js --provider openai --model gpt-5.1
node dist/bin/squad.js --provider deepseek -p "summarize src/"
node dist/bin/squad.js --provider ollama --model llama3.2

See README.md for the full quickstart, the catalog override format, and the YOLO mode documentation.

Verified at release time

  • DeepSeek chat-completions through llm-chat (no regression after the dispatch refactor).
  • Anthropic Claude Sonnet 4.6 through llm-message against a real ANTHROPIC_API_KEY.
  • YOLO autonomous run against DeepSeek v4-pro, completed one full checklist-driven run end-to-end.
  • Test suite: 395 passing, 2 skipped across 26 test files. tsc --noEmit clean.

Pending (post-release smokes, not blocking)

  • Real-API smoke against OPENAI_API_KEY for the Responses API path with gpt-5.5.
  • Cross-provider session resume across DeepSeek → Anthropic.
  • /cost cross-provider math including Anthropic cache_read savings.

Compatibility

  • Node 22+
  • Single-user, single-machine. No remote sessions, no telemetry, no MCP, no IDE bridge.

What's next

v1.2 is the subagent layer: per-agent model selection across all four adapter kinds, depth-1 spawning with four concurrent slots, anguish-meter observability, Ctrl+K kill picker, and external-CLI subagent backends with per-agent worktree isolation. The point of v1.1 was to make those compositions actually meaningful. Once multiple providers exist on the same canonical loop, dispatching the same task across four backends concurrently becomes the real unlock for vetting results against each other.

  • Cid

Squad Code v1.0.0 - First Release

03 May 07:01
c60be21


Squad Code 1.0.0

First release. One CLI agent loop, every model. DeepSeek shipped, Ollama works locally, the canonical event stream is the contract everything else plugs into.

Highlights

  • Provider-neutral agent loop. Each provider adapter normalizes its native stream into one CanonicalEvent union; the loop never sees provider-specific wire formats.
  • Five MVP commands verified end-to-end on real DeepSeek: one-shot -p, model override, --resume, the lot.
  • Local-first persistence. JSONL transcripts plus a prev_hash-linked SQLite audit chain, both under ~/.squad/.

What's changed

Added

  • DeepSeek provider via OpenAI-compatible endpoint, Ollama provider via /api/chat. Adding another provider means writing one adapter, not touching the loop.
  • Tool registry: Read, Write, Edit, Shell, Grep, Glob, TodoWrite. Path-traversal validation on every filesystem-touching call; symlinks resolved with realpath and re-checked against the cwd-anchored allowed root.
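The symlink-aware check described in that bullet can be sketched as follows. This is a minimal version assuming the allowed root is the cwd; the real validation covers more cases (and `assertInsideRoot` is a hypothetical name):

```typescript
import fs from "node:fs";
import path from "node:path";

// Minimal cwd-anchored path check: resolve symlinks on the deepest
// existing ancestor, then verify the result is still under the root.
function assertInsideRoot(target: string, root = process.cwd()): string {
  let probe = path.resolve(root, target);
  while (!fs.existsSync(probe)) probe = path.dirname(probe); // deepest existing ancestor
  const real = fs.realpathSync(probe);       // follow symlinks
  const realRoot = fs.realpathSync(root);
  if (real !== realRoot && !real.startsWith(realRoot + path.sep)) {
    throw new Error(`path escapes allowed root: ${target}`);
  }
  return path.resolve(root, target);
}
```

The realpath step is what defeats the classic trick of a symlink inside the repo pointing at `/etc` or another tree outside the allowed root.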
  • Permission policy with read-only auto-allow and mutating-prompt defaults. --allowed-tools, --disallowed-tools, and --dangerously-skip-permissions flags scope per-invocation.
  • Per-project persistent permission grants written to .squad/settings.json. SQUAD_PROJECT_PERMS=0 opts out.
  • Ink REPL with status line, Ctrl-C interrupt, and slash commands (/provider, /model, /clear, /compact, /cost, /tools, /sessions, /skills, /help, /exit). --simple readline fallback for plain terminals.
  • JSONL session transcripts at ~/.squad/sessions/<id>.jsonl with fsync per turn. SQLite session index for squad sessions list and squad sessions show <id>.
  • --resume [id] and --continue flags. Resume picks the most recent session for the current cwd if no id is given.
  • Audit chain at ~/.squad/audit.db (WAL, parameterized statements only). Every prompt, tool call, tool result, and permission decision lands as a row with a prev_hash link to the prior row.
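A toy version of the prev_hash link shows why the chain is tamper-evident: each row's hash covers the previous row's hash, so editing any historical row invalidates every row after it. Field and function names here are illustrative, not the actual audit.db schema:

```typescript
import { createHash } from "node:crypto";

// Toy prev_hash-linked audit chain; real rows live in SQLite, not memory.
interface AuditRow { payload: string; prev_hash: string; hash: string; }

function appendRow(chain: AuditRow[], payload: string): AuditRow[] {
  const prev_hash = chain.length ? chain[chain.length - 1].hash : "genesis";
  const hash = createHash("sha256").update(prev_hash + payload).digest("hex");
  return [...chain, { payload, prev_hash, hash }];
}

function verifyChain(chain: AuditRow[]): boolean {
  return chain.every((row, i) => {
    const prev = i === 0 ? "genesis" : chain[i - 1].hash;
    const expect = createHash("sha256").update(prev + row.payload).digest("hex");
    return row.prev_hash === prev && row.hash === expect;
  });
}
```

Verification walks the chain front to back and recomputes every hash; any retroactive edit makes `verifyChain` fail from that row onward.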
  • Pino structured logger writing JSON lines to ~/.squad/logs/squad.log with rotation.
  • Skill loader that picks up .md skill definitions from ~/.codex/skills/, ~/.claude/skills/, and .squad/skills/, surfaced as /<skill-name> inside the REPL.

Security

  • Structural trust markers wrap every untrusted input before it lands in model context. Persona stability preamble treats role-reassignment language inside untrusted regions as data, never as commands.
  • Provider URL validation: https:// only for cloud providers, http://localhost: only for Ollama unless OLLAMA_ALLOW_REMOTE=1 is explicitly set.
  • API keys redacted from log output; never echoed to stdout, never written into transcripts.

Install

git clone <repo>
cd proj_ai_squad_code
npm install
cp .env.example .env  # fill in DEEPSEEK_API_KEY
npm run build
node dist/bin/squad.js --help

Requires Node 22+.

Full changelog: First release, no prior version to compare against.