Paste a prompt containing JSON; the tool auto-detects embedded JSON blocks, converts them to TOON format, and shows token count savings with cost comparisons across LLM models.
- Auto-detects JSON in your prompts: both fenced `` ```json `` blocks and bare JSON objects/arrays (see the sketch after this list)
- Converts to TOON, a compact, token-efficient encoding of the JSON data model
- Counts tokens accurately using tiktoken (GPT-4o tokenizer)
- Compares costs across 11 models from OpenAI, Anthropic, and Google
- Live updates as you type with debounced processing
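A rough sketch of how the detection and debounced processing can work (hypothetical helper names such as `detectJsonBlocks`, not this repo's actual code):

```ts
interface JsonBlock {
  start: number;  // offset into the prompt
  raw: string;    // original JSON text
  value: unknown; // parsed value
}

const FENCED_JSON = /```json\s*([\s\S]*?)```/g;

// Walk from an opening "{" or "[" to its matching close, tracking string
// literals and escapes; returns the exclusive end index, or -1 if unbalanced.
function findSpanEnd(text: string, start: number): number {
  let depth = 0;
  let inString = false;
  for (let i = start; i < text.length; i++) {
    const ch = text[i];
    if (inString) {
      if (ch === "\\") i++; // skip the escaped character
      else if (ch === '"') inString = false;
    } else if (ch === '"') inString = true;
    else if (ch === "{" || ch === "[") depth++;
    else if (ch === "}" || ch === "]") {
      depth--;
      if (depth === 0) return i + 1;
    }
  }
  return -1;
}

function detectJsonBlocks(prompt: string): JsonBlock[] {
  const blocks: JsonBlock[] = [];

  // Fenced ```json blocks: trust the fence, validate with JSON.parse.
  for (const m of prompt.matchAll(FENCED_JSON)) {
    try {
      blocks.push({ start: m.index!, raw: m[1], value: JSON.parse(m[1]) });
    } catch { /* fenced but not valid JSON; ignore */ }
  }

  // Bare objects/arrays: find a balanced span, then let JSON.parse decide.
  // (A fuller version would skip regions already claimed by fenced blocks.)
  for (let i = 0; i < prompt.length; i++) {
    if (prompt[i] !== "{" && prompt[i] !== "[") continue;
    const end = findSpanEnd(prompt, i);
    if (end === -1) continue;
    const raw = prompt.slice(i, end);
    try {
      blocks.push({ start: i, raw, value: JSON.parse(raw) });
      i = end - 1; // skip past the block we just found
    } catch { /* balanced brackets but not JSON; keep scanning */ }
  }
  return blocks;
}

// Debounce so conversion only runs after the user pauses typing.
function debounce<T extends unknown[]>(fn: (...args: T) => void, ms = 300) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}

const onInput = debounce((prompt: string) => console.log(detectJsonBlocks(prompt)));
```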
TOON (Token-Oriented Object Notation) encodes uniform arrays of objects as compact tables — field names declared once in a header instead of repeated per row. This is ideal for structured data in LLM prompts.
```json
[
  {"id": 1, "name": "Alice", "role": "admin"},
  {"id": 2, "name": "Bob", "role": "user"}
]
```

Becomes:
```
[2]{id,name,role}:
  1,Alice,admin
  2,Bob,user
```
Typical savings: 40–60% fewer tokens on prompts with tabular data.
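You can reproduce the measurement with the same libraries this tool uses. A sketch, assuming `@toon-format/toon` exports `encode` and `js-tiktoken` ships the `o200k_base` (GPT-4o) encoding; the price constant is an illustrative placeholder, not the app's model table:

```ts
import { encode } from "@toon-format/toon";
import { getEncoding } from "js-tiktoken";

const enc = getEncoding("o200k_base"); // GPT-4o's tokenizer

const data = [
  { id: 1, name: "Alice", role: "admin" },
  { id: 2, name: "Bob", role: "user" },
];

const json = JSON.stringify(data, null, 2);
const toon = encode(data); // the TOON table shown above

const jsonTokens = enc.encode(json).length;
const toonTokens = enc.encode(toon).length;
console.log(`saved ${(100 * (1 - toonTokens / jsonTokens)).toFixed(1)}% of tokens`);

// Cost comparison: token delta times price per million input tokens.
const USD_PER_M_INPUT = 2.5; // placeholder price, USD per 1M input tokens
const saved = ((jsonTokens - toonTokens) / 1e6) * USD_PER_M_INPUT;
console.log(`saves $${saved.toFixed(6)} per prompt`);
```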
```bash
npm install
npm run dev
```

Open http://localhost:5173 and paste a prompt containing JSON.
- Vite + TypeScript
- @toon-format/toon for JSON-to-TOON conversion
- js-tiktoken for token counting
- Tailwind CSS for styling
MIT
