
# Council

A Claude skill that assembles a panel of real, named experts to pressure-test your decisions.

You describe a high-stakes decision. Council interviews you, selects 3-4 experts whose worldviews clash on your specific problem, simulates each perspective with a killer question, and gives you a directional lean — not a wishy-washy "it depends."

Expert perspectives are AI-simulated based on public statements and known frameworks. They are not endorsements or actual advice from the named individuals.

Companion to CPO. CPO structures the decision (what to build, kill criteria, journal). Council pressure-tests it (who disagrees, why, and what you're not seeing). Use them together or separately.


## Install

```shell
git clone https://github.com/0x2kNJ/council.git ~/.claude/skills/council
```

Or download council.skill from releases and install in Claude Code or Cowork.


## What it does

```
help me think through whether we should take this acquisition offer —
$800k agency, 2x revenue, my team doesn't know yet
```

**Turn 1** — Council gives you a provisional one-line read, then asks two diagnostic questions. It asks before it advises, because the right question usually matters more than the right answer.

**Turn 2** — The panel. Three experts with clashing perspectives, each grounded in your specific situation. A directional lean. A confidence delta that tells you whether the analysis moved your prior or confirmed it.

```
Reversibility: One-way door — founder identity and team autonomy
               can't be reacquired after integration.
Watch for:     You've ruled out negotiating. Acquirers expect pushback.

Elad Gil (financial strategy):     2x revenue is mid-range for agencies.
  The structure matters more than the headline...
  → "What's the deal structure — lump sum, earnout, equity?"

Paul Graham (advocate — contrarian): You're using fatigue as the frame,
  but those aren't strategic reasons to sell...
  → "What's the thing you're not saying out loud?"

Patty McCord (premise challenger — people/transparency): Your team doesn't
  know yet. That's the hinge...
  → "What would you do if you weren't afraid of the conversation?"

Lean: Negotiate harder ($2.2M+) and talk to your team before accepting.
Confidence delta: You came in leaning toward acceptance. The panel says
  negotiate first, inform second, then decide.

Want me to go deeper? I can run cross-examination between Paul and Elad,
do a pre-mortem, or give you the full deep analysis.
```

## Two modes

Standard mode (default) is fast — 3-4 expert takes, a lean, a confidence delta, and a menu to go deeper. Designed to be useful in under 60 seconds of reading.

Deep mode is the full structured analysis. Three ways to get it:

  1. Say "go deep" or "full version" in your ask
  2. Council detects a one-way door and offers to escalate
  3. After standard mode, ask for any deeper element

Deep mode adds: historical analogies per expert, what would change each expert's mind, cross-examination between the two most opposed voices, a silence audit (what the entire panel took for granted), and a full synthesis block with next actions.

## Pre-mortem mode

Say "what kills this" or "pre-mortem" — each expert assumes the decision was made and failed 18 months later. They trace back from failure with specific warning signs you can check before committing.

## Red team mode

Say "red team this" or "what would a competitor do" — each expert switches sides and advises a well-resourced adversary on how to defeat your decision within 12 months. Realistic attack vectors, real-world precedents, and the defensive moves you can make before committing.

## Bias detection

Council names cognitive biases when it sees them — anchoring, sunk cost, survivorship, status quo, confirmation, loss aversion. Inline, one sentence, no lectures. In standard mode it appears in the "Watch for" line; in deep mode it gets a dedicated section in the framing audit.


## The expert roster

Council picks dynamically from ~37 named experts across twelve domains — it selects the 3-4 whose worldviews create the most productive tension for your specific decision.

| Domain | Experts |
| --- | --- |
| Startup / product strategy | Garry Tan, Paul Graham, Julie Zhuo, Andrew Chen |
| Engineering / architecture | Boris Cherny, Thariq Shihipar, Kelsey Hightower, Charity Majors, Dan Abramov |
| Product / design | Mike Krieger, Rasmus Andersson, Shreyas Doshi, Rahul Vohra |
| Systems / operations | Rahul Patil, Will Larson |
| Strategy / investing | Elad Gil, Patrick Collison, Tyler Cowen |
| GTM / sales / revenue | Jason Lemkin, David Sacks, Elena Verna, Laela Sturdy |
| Marketing / positioning | April Dunford, Emily Kramer, Andy Raskin |
| Finance / CFO lens | Tomasz Tunguz, Bill Gurley |
| Legal / regulatory | Heather Meeker, Alex Macgillivray |
| AI / ML strategy | Andrej Karpathy, Sarah Guo, Simon Willison |
| Security | Alex Stamos |
| People / org / culture | Patty McCord, Liz Wiseman, Patrick Lencioni, Claire Hughes Johnson |

Every panel includes at least one premise challenger (questions whether you're solving the right problem) and one advocate (steel-mans the option you're most reluctant to take). Both are labeled in the output so you know which perspective is a deliberate stress-test.

## Gap detection + on-the-fly experts

Council actively checks whether the roster is sufficient for your decision. If it detects a domain gap — or the decision needs deeper specialization than the roster provides — it'll stop and tell you before proceeding, then walk you through adding a custom expert:

```
This decision hinges on clinical workflow adoption, and I don't have
an expert who thinks natively in that space. Who would you trust most
on this? I need their worldview, what they optimize for, and their
killer question.
```

You can also add experts yourself anytime:

```
[add: Sarah Chen — CFO. Thinks in unit economics and runway. Asks: "does this show up in the P&L in a good way?"]
```

Custom experts get the same treatment as roster experts: full perspectives, role labels, cross-examination eligibility.


## Why not just ask Claude directly?

We benchmarked Council against raw Claude on three decision scenarios with 33 structural-quality assertions. Council satisfied 90% of them; raw Claude satisfied 13%.

| Capability | Council | Raw Claude |
| --- | --- | --- |
| Interview before advising | Yes | No |
| Named experts with clashing worldviews | Yes | No |
| Reversibility analysis | Yes | No |
| Premise challenger + advocate roles | Yes | No |
| Confidence delta | Yes | No |
| Silence audit | Yes | No |
| Directional lean | Yes | Usually |
| Situation-specific detail | Yes | Yes |

The skill's value isn't smarter advice — Claude is already smart. It's the structure of the thinking: interview before advising, clashing worldviews instead of a single perspective, reversibility gates that slow you down on irreversible choices, and a confidence delta that forces honesty about whether the analysis actually moved you.


## Using with CPO

Council works standalone. If you also use CPO, they're complementary:

- **CPO** = "what to build and whether to build it" — structured product decisions with kill criteria and a persistent journal
- **Council** = "stress-test this with people who'd disagree" — expert panel simulation with clashing worldviews

After CPO gives you a verdict, use Council to pressure-test it. After Council gives you a lean, use CPO to structure the execution with kill criteria.


## Design decisions

**Why named experts?** Frameworks are the engine (reversibility gates, framing audits, silence audits). The named experts make the output memorable and debatable — "I think Kelsey's wrong here because..." is a more productive reaction than "I disagree with point 3."

**Why interview first?** Most people who say "help me decide" need someone to ask the right question. The interview surfaces hidden preferences, unstated constraints, and whether the problem is even framed correctly.

**Why the confidence delta?** Advisory conversations that don't name whether they moved your prior are theater.


## Roadmap

- **Expert persistence** — Custom experts added via `[add:]` disappear between sessions. Future versions should let users save domain experts so repeat users (e.g., a healthcare founder) don't have to re-add their domain expert every time.
- **Decision journal** — Track decisions across sessions. "You decided X six weeks ago. Here's what the panel said. How did it play out?"

## Contributing

Open issues for decisions where Council performed poorly. The most useful feedback: "the panel missed X" or "expert Y's take felt generic." Include the prompt and the output.


## Eval

```shell
# Inspect the eval suite
cat evals/evals.json
```

Three canonical scenarios (architecture, acquisition, fundraising) with 33 total assertions. See tests/expected-outputs.md for benchmark results and analysis notes.
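The exact schema of `evals/evals.json` isn't documented in this README. As an illustration only, a minimal checker could tally assertions per scenario; the `scenarios` and `assertions` field names below are assumptions about the format, not the real schema — adjust them to match the actual file:

```python
import json

# Hypothetical schema: {"scenarios": [{"name": ..., "assertions": [...]}]}
# The real evals/evals.json may use different field names.
sample = json.loads("""
{
  "scenarios": [
    {"name": "architecture", "assertions": ["names experts", "gives a lean"]},
    {"name": "acquisition", "assertions": ["interviews before advising"]}
  ]
}
""")

def count_assertions(suite: dict) -> int:
    """Total assertions across all scenarios in the suite."""
    return sum(len(s["assertions"]) for s in suite["scenarios"])

for s in sample["scenarios"]:
    print(f"{s['name']}: {len(s['assertions'])} assertions")
print("total:", count_assertions(sample))
```

With the real file, you would replace the inline `sample` with `json.load(open("evals/evals.json"))` and expect the total to come out at 33.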


Council is not a prompt. It's a structured decision stress-test: interview before advising, clashing worldviews, mechanical checks (reversibility, framing gaps, silence audit), and a confidence delta that forces honesty about whether the analysis actually moved you.
