SeanLikesData/toon-prompt-optimizer
TOON Prompt Optimizer

Paste a prompt containing JSON; the tool auto-detects embedded JSON blocks, converts them to TOON format, and shows token-count savings with cost comparisons across LLM models.

TOON Prompt Optimizer screenshot

What it does

  • Auto-detects JSON in your prompts — both fenced ```json blocks and bare JSON objects/arrays
  • Converts to TOON — a compact, token-efficient encoding of the JSON data model
  • Counts tokens accurately using tiktoken (GPT-4o tokenizer)
  • Compares costs across 11 models from OpenAI, Anthropic, and Google
  • Live updates as you type with debounced processing
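As an illustration of the detection step, here is a minimal sketch of how embedded JSON can be found — this is hypothetical code, not the app's actual implementation. It pulls out fenced ```json blocks first, then scans the remaining text for balanced `{...}` / `[...]` spans and keeps the ones that parse:

```javascript
// Hypothetical sketch of JSON auto-detection (the app's real logic may differ).
// Returns every parseable JSON value found in the prompt.
function detectJsonBlocks(prompt) {
  const found = [];

  // 1. Fenced blocks: ```json ... ```
  const fenced = /```json\s*([\s\S]*?)```/g;
  let m;
  while ((m = fenced.exec(prompt)) !== null) {
    try { found.push(JSON.parse(m[1])); } catch (e) { /* not valid JSON, skip */ }
  }

  // 2. Bare JSON: strip the fenced blocks, then walk the text tracking
  //    bracket depth from each opening { or [. (A simplification: braces
  //    inside string values would confuse this counter.)
  const stripped = prompt.replace(fenced, "");
  for (let i = 0; i < stripped.length; i++) {
    if (stripped[i] !== "{" && stripped[i] !== "[") continue;
    let depth = 0;
    for (let j = i; j < stripped.length; j++) {
      if (stripped[j] === "{" || stripped[j] === "[") depth++;
      else if (stripped[j] === "}" || stripped[j] === "]") depth--;
      if (depth === 0) {
        try {
          found.push(JSON.parse(stripped.slice(i, j + 1)));
          i = j; // skip past this block
        } catch (e) { /* unbalanced or invalid, skip */ }
        break;
      }
    }
  }
  return found;
}
```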

Why TOON?

TOON (Token-Oriented Object Notation) encodes uniform arrays of objects as compact tables — field names declared once in a header instead of repeated per row. This is ideal for structured data in LLM prompts.

```json
[
  {"id": 1, "name": "Alice", "role": "admin"},
  {"id": 2, "name": "Bob", "role": "user"}
]
```

Becomes:

```
[2]{id,name,role}:
  1,Alice,admin
  2,Bob,user
```

Typical savings: 40–60% fewer tokens on prompts with tabular data.
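The tabular encoding above can be sketched in a few lines — this is an illustrative simplification, not the official TOON encoder (real TOON also handles nesting, quoting, and non-uniform arrays):

```javascript
// Illustrative sketch: encode a uniform array of flat objects as a
// TOON-style table. Field names are declared once in the header
// instead of being repeated per row.
function toToonTable(rows) {
  const fields = Object.keys(rows[0]);                      // header fields
  const header = `[${rows.length}]{${fields.join(",")}}:`;  // e.g. [2]{id,name,role}:
  const body = rows.map((row) => "  " + fields.map((f) => row[f]).join(","));
  return [header, ...body].join("\n");
}
```

Called on the two-row example above, this produces the TOON table shown.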

Getting started

```
npm install
npm run dev
```

Open http://localhost:5173 and paste a prompt containing JSON.

License

MIT
