The Problem: VERIFY relies on the LLM judging its own work
PAI's Algorithm is one of the most well-structured agentic execution frameworks I've seen — ISC criteria, anti-criteria, the Builder-Validator Pair Pattern, and the 7-phase loop all converge on the right architectural shape.
But there's a structural weakness in VERIFY: the system that generated the output is also the system (or a sibling LLM) that evaluates whether the output meets ISC criteria. A probabilistic check on a probabilistic output. The Builder-Validator Pair improves this, but the Validator is still an LLM — it can hallucinate a passing verification just as easily as the Builder can hallucinate a correct solution.
This is the same gap that exists in every agentic system today: there's no formal mechanism that makes ISC violations structurally impossible to commit.
The Proposal: Constraint-Projected Verification
We've been building a system called CPSC-RE (Constraint-Projected State Computing — Reasoning Engine) at BitConcepts that directly addresses this (https://github.com/cpsc-computing/). The core idea:
ISC criteria and anti-criteria are expressed as machine-executable constraints. A deterministic projection engine evaluates them — not an LLM.
The execution model:
OBSERVE → External variables populated from observed state
THINK → ISC formulated (human-readable, as today)
PLAN → ISC automatically translated to formal constraints (CAS-YAML)
Constraint graph validated — conflicts detected before BUILD
BUILD → Agent proposes solution (untrusted)
EXECUTE → Proposal captured as candidate state
VERIFY → Deterministic constraint PROJECTION (not LLM self-assessment)
→ Accept: all constraints satisfied, commit + transcript
→ Reject: structured feedback — exactly which criteria failed and why
LEARN → Machine-verifiable transcripts, not LLM-reported confidence
The 7-phase loop stays intact. No phase is replaced. VERIFY is hardened from probabilistic assessment to deterministic enforcement.
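To make the VERIFY hardening concrete, here is a minimal sketch of deterministic projection. The names (`Constraint`, `project`) are illustrative, not the CPSC-RE API: the point is that acceptance is a pure function of declared constraints plus candidate state, with structured feedback on rejection.

```python
# Hypothetical sketch of VERIFY as deterministic constraint projection.
# "Constraint" and "project" are illustrative names, not the CPSC-RE API.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Constraint:
    id: str
    kind: str                      # "hard" or "soft"
    check: Callable[[dict], bool]  # pure predicate over the candidate state

def project(constraints: list[Constraint], state: dict) -> tuple[bool, list[str]]:
    """Evaluate every hard constraint against the candidate state.

    Deterministic: the same constraints plus the same state always yield
    the same accept/reject decision and the same failure list."""
    failed = [c.id for c in constraints if c.kind == "hard" and not c.check(state)]
    return (len(failed) == 0, failed)

constraints = [
    Constraint("auth_tests_pass", "hard",
               lambda s: s.get("auth_test_results") == "all_pass"),
    Constraint("no_cred_exposure", "hard",
               lambda s: s.get("git_secrets_scan_result") == "clean"),
]

# Reject path: structured feedback names exactly which criteria failed.
ok, failures = project(constraints, {"auth_test_results": "all_pass",
                                     "git_secrets_scan_result": "dirty"})
```

No LLM judgment appears anywhere in the accept/reject path; the Builder's proposal is just data passed through pure predicates.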
Why This Fits PAI's Existing Architecture
The patterns already in TheAlgorithm map directly:
ISC criteria → hard/soft constraints in a declarative spec (CAS-YAML)
Anti-criteria → hard constraints (negated) — e.g., "No credentials in git history" becomes a machine-enforceable invariant
ISC Quality Gate → compile-time constraint validation (detect bad criteria before execution)
Builder-Validator Pair → Builder (LLM) + Validator (projection engine) — the validator is now non-probabilistic
Confidence Tags [E]/[I]/[R] → formal variable role classification (External/Free/Derived)
PRD persistent memory → deterministic transcripts with replayable audit trails
The Hook System (17 hooks, 7 lifecycle events) also maps naturally. Hooks are currently procedural checks that run independently. Constraint projection evaluates all hooks jointly — detecting conflicts between hooks and guaranteeing that if projection succeeds, all security/access/deletion invariants hold simultaneously.
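A toy illustration of why joint evaluation matters, under the assumption (mine, not PAI's) that each hook can be expressed as a predicate over a shared state: two hooks that each pass in isolation can be jointly unsatisfiable, which a brute-force check over the shared variables detects before anything runs.

```python
from itertools import product

# Illustrative only: two "hooks" written as predicates over a tiny boolean
# state space. The hook names are hypothetical, not PAI's actual hooks.
def hook_block_deletion(state):   # a deletion-guard hook
    return not state["delete_requested"]

def hook_require_cleanup(state):  # a cleanup hook that forces deletion
    return state["delete_requested"]

def jointly_satisfiable(hooks, variables):
    """Brute-force check over boolean variables: does any state exist
    on which every hook passes simultaneously?"""
    for values in product([False, True], repeat=len(variables)):
        state = dict(zip(variables, values))
        if all(h(state) for h in hooks):
            return True
    return False

# Independent procedural checks would each pass on their own turn;
# joint evaluation reveals that no state satisfies both at once.
conflict = not jointly_satisfiable([hook_block_deletion, hook_require_cleanup],
                                   ["delete_requested"])
```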
What ISC → Constraint Translation Looks Like
ISC today (natural language):
"All authentication tests pass after fix applied"
"No credentials exposed in git commit history"
Formal constraint equivalent (CAS-YAML):
```yaml
constraints:
  - id: auth_tests_pass
    type: hard
    expression: "auth_test_results == 'all_pass'"
    description: "ISC: All authentication tests pass after fix applied"
  - id: no_cred_exposure
    type: hard
    expression: "git_secrets_scan_result == 'clean'"
    description: "Anti-criteria: No credentials exposed in git commit history"
```
This translation can happen automatically in PLAN phase — the LLM generates ISC (it's good at that), then a translation step formalizes them into machine-enforceable constraints. Human-readable criteria stay as-is for the user. Machine-executable constraints run in the projection engine.
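A rough sketch of the translation's back half, with the parsed spec shown as a Python dict as if already loaded from YAML. The expression grammar here is a toy `<variable> == '<literal>'` subset of my own invention, not the real CAS-YAML expression language: it only shows that the output of translation is executable checks, not prose.

```python
# Hypothetical: compile a parsed CAS-YAML-style spec into executable checks.
# The dict stands in for the loaded YAML; the expression grammar is a toy
# "<variable> == '<literal>'" subset, not the real CAS-YAML language.
spec = {
    "constraints": [
        {"id": "auth_tests_pass", "type": "hard",
         "expression": "auth_test_results == 'all_pass'"},
        {"id": "no_cred_exposure", "type": "hard",
         "expression": "git_secrets_scan_result == 'clean'"},
    ]
}

def compile_expression(expr: str):
    """Turn "var == 'value'" into a pure predicate over candidate state."""
    var, literal = expr.split(" == ")
    expected = literal.strip("'")
    return lambda state: state.get(var) == expected

checks = {c["id"]: compile_expression(c["expression"])
          for c in spec["constraints"]}

# The EXECUTE phase would capture something like this as candidate state:
candidate = {"auth_test_results": "all_pass",
             "git_secrets_scan_result": "clean"}
result = {cid: check(candidate) for cid, check in checks.items()}
```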
What This Gives PAI
VERIFY becomes deterministic. Same ISC + same proposed state = same accept/reject. No LLM judgment variance.
Anti-criteria become structurally enforceable. "No credentials in git" isn't a hope — it's a projection that blocks commit if violated.
Hooks evaluate jointly. Security + access + deletion constraints checked simultaneously. No bypass from missed lifecycle events.
Transcripts replace sentiment. LEARN gets machine-verifiable evidence of what was checked and what passed/failed — not LLM self-reported confidence.
Euphoric Surprise is preserved. Constraint projection enforces a correctness floor, not a quality ceiling. The Algorithm still pushes for delight above the floor.
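The transcript claim can be sketched too. Assuming (my assumption, with illustrative field names) that each verification records the constraint checked, the state it saw, and the verdict, content-hashing the record makes a later replay able to confirm nothing was altered and that the same inputs reproduce the same result.

```python
import hashlib
import json

# Sketch of a machine-verifiable transcript. Field names ("constraint",
# "observed", "passed", "digest") are illustrative, not a CPSC-RE format.
def make_transcript(verdicts: dict, state: dict) -> dict:
    """Record each check with its inputs and verdict, then content-hash
    the whole record so a replay can detect any tampering."""
    entries = [{"constraint": cid,
                "observed": {k: state[k] for k in sorted(state)},
                "passed": passed}
               for cid, passed in sorted(verdicts.items())]
    body = json.dumps(entries, sort_keys=True)
    return {"entries": entries,
            "digest": hashlib.sha256(body.encode()).hexdigest()}

# Deterministic: identical inputs reproduce the identical digest on replay,
# which is what lets LEARN trust the record instead of self-reported confidence.
t1 = make_transcript({"auth_tests_pass": True},
                     {"auth_test_results": "all_pass"})
t2 = make_transcript({"auth_tests_pass": True},
                     {"auth_test_results": "all_pass"})
```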
About the Technology
CPSC-RE is part of Constraint-Projected State Computing (CPSC), a computing paradigm where correctness is enforced by projecting proposed states into constraint-defined spaces. It's covered by a U.S. provisional patent (BitConcepts, filed Feb 2026). The specific embodiments relevant here are CPSC-Governed Agentic Development (CGAD) — which treats agents as untrusted proposal generators whose outputs must satisfy declared constraints before acceptance.
The specification format (CAS-YAML) is declarative, composable, and version-controlled. The same constraints run on a software engine (Rust) or hardware fabric (VHDL/FPGA) without modification.
We think PAI has independently arrived at the same architectural insight that CGAD formalizes: state-based criteria, binary testability, builder-validator separation, anti-criteria as invariants. The gap is the enforcement mechanism — and that's what CPSC-RE provides.
We'd love to explore integration or collaboration. The fit between PAI's Algorithm and CPSC-RE's constraint projection is natural — PAI already has the right shape, it just needs a non-probabilistic backbone for VERIFY.
— Tristen @ BitConcepts