# LLM Prompt Optimizer

Transform draft prompts into production-grade instructions tailored to Gemini, Claude, ChatGPT, or Llama, directly from a focused React + Vite workspace.
- Overview
- Key Capabilities
- Architecture
- Project Structure
- Getting Started
- Configuration Reference
- Usage Workflow
- Testing & Quality
- Tech Stack
- Roadmap
- License
- Maintainer
## Overview

LLM Prompt Optimizer is a single-page React application that rewrites user prompts using provider-specific optimization frameworks. It persists preferences in local storage, guides users through template-based drafting, and calls either Google Gemini or a user-specified OpenAI-compatible endpoint to deliver deterministic, structured prompts for downstream work.
## Key Capabilities

- Template-first ideation: Built-in prompt templates (content, code, SQL, marketing) seed variables with double-curly placeholders for fast iteration.
- LLM-aware rewriting: Switch between Gemini, Anthropic Claude, OpenAI ChatGPT, or Meta Llama instruction sets; each path loads a bespoke system prompt crafted for that model family.
- Contextual variables: The UI extracts `{{placeholders}}` and renders inline inputs so users can merge contextual data without editing the template manually (see the sketch after this list).
- History & favorites: Every optimization run is timestamped, saved locally, filterable via a search bar, and can be pinned for quick reuse.
- Settings modal: Configure provider, temperature, API keys, and OpenAI-compatible endpoints without touching code; values persist across sessions via `localStorage`.
- Copy + reuse flows: Copy either the source or optimized prompt to the clipboard, or reuse a past optimization as the next input in a single click.
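A minimal sketch of how placeholder extraction and hydration could work; the function names are illustrative, not the actual `App.tsx` implementation:

```ts
// Hypothetical helpers; the real parsing lives in App.tsx.

// Collect unique variable names from {{placeholder}} tokens.
export function extractPlaceholders(template: string): string[] {
  const matches = template.matchAll(/\{\{\s*([\w-]+)\s*\}\}/g);
  return [...new Set([...matches].map((m) => m[1]))];
}

// Substitute collected values back into the template before the API call.
export function hydrate(template: string, values: Record<string, string>): string {
  return template.replace(/\{\{\s*([\w-]+)\s*\}\}/g, (_, name) => values[name] ?? "");
}
```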
## Architecture

| Layer | Responsibilities | Key Files |
|---|---|---|
| Presentation | React components with Tailwind-inspired utility classes render the editor, templates, and history tabs. | `App.tsx`, `SettingsModal.tsx`, `index.tsx`, `index.html` |
| Domain logic | LLM selection, progress indicators, variable parsing, clipboard helpers, and persistent history management. | `App.tsx`, `useSettings.ts` |
| Services | Provider-specific prompt optimization via the Google Gemini SDK or generic OpenAI-compatible HTTP calls. | `services/geminiService.ts` |
| Configuration & typing | Enumerations, prompt templates, reusable constants, and shared TypeScript types. | `constants.tsx`, `types.ts`, `vite.config.ts`, `tsconfig.json` |
High-level flow:
- User selects a template or writes a raw prompt.
- Variable placeholders materialize as inputs; values hydrate the final string before the API call.
- Settings modal supplies API credentials and sampling temperature.
- `optimizePrompt` builds a provider-specific system instruction and calls Gemini or an OpenAI-compatible endpoint (Claude, ChatGPT, Llama wrappers); a sketch follows this list.
- The rewritten prompt is rendered, saved to history, and optionally marked as favorite or copied.
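A minimal sketch of what that dispatch could look like, assuming the `@google/genai` SDK for Gemini and plain `fetch` for OpenAI-compatible endpoints; the model ID, settings shape, and field names are assumptions, not the repo's exact code:

```ts
import { GoogleGenAI } from "@google/genai";

type Provider = "gemini" | "openai";

// Assumed settings shape; the real one lives in types.ts / useSettings.ts.
interface OptimizerSettings {
  provider: Provider;
  apiKey: string;
  baseUrl: string;   // e.g. https://api.openai.com/v1
  model: string;     // e.g. gpt-4o
  temperature: number;
}

export async function optimizePrompt(
  prompt: string,
  systemInstruction: string,
  settings: OptimizerSettings,
): Promise<string> {
  if (settings.provider === "gemini") {
    const ai = new GoogleGenAI({ apiKey: settings.apiKey });
    const response = await ai.models.generateContent({
      model: "gemini-2.5-flash", // assumed model ID
      contents: prompt,
      config: { systemInstruction, temperature: settings.temperature },
    });
    return response.text ?? "";
  }

  // Any OpenAI-compatible chat-completions endpoint (ChatGPT, Claude/Llama wrappers).
  const response = await fetch(`${settings.baseUrl}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${settings.apiKey}`,
    },
    body: JSON.stringify({
      model: settings.model,
      temperature: settings.temperature,
      messages: [
        { role: "system", content: systemInstruction },
        { role: "user", content: prompt },
      ],
    }),
  });
  if (!response.ok) throw new Error(`Optimization failed: ${response.status}`);
  const data = await response.json();
  return data.choices[0].message.content;
}
```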
## Project Structure

```
.
├── App.tsx               # Main UI and workflow logic
├── SettingsModal.tsx     # Provider + temperature configuration modal
├── constants.tsx         # LLM options and prompt templates
├── services/
│   └── geminiService.ts  # Provider-specific optimization service
├── types.ts              # Shared enums and domain types
├── useSettings.ts        # LocalStorage-backed settings hook
├── index.tsx / index.html  # Vite entry point
├── docs/                 # Documentation & showcase assets (webpage lives here)
├── package.json          # Scripts and dependency manifest
└── vite.config.ts        # Build tooling
```
## Getting Started

Prerequisites:

- Node.js 18+ and npm 9+
- A Google Gemini API key (required by default) and/or an OpenAI-compatible API key if you plan to switch providers.
```bash
git clone https://github.com/nsalvacao/llm-prompt-optimizer.git
cd llm-prompt-optimizer
npm install
```

```bash
npm run dev      # Starts Vite with hot module reload on http://localhost:5173
npm run build    # Produces an optimized bundle in dist/
npm run preview  # Serves the production build locally
```

## Configuration Reference

| Setting | Where to configure | Notes |
|---|---|---|
| Gemini API Key | Settings modal (provider: `gemini`) or `.env` variable `API_KEY` read during build | Required for default Gemini mode; stored locally only if the user provides it in the modal. |
| OpenAI API Key | Settings modal when provider: `openai` | Mandatory when switching to ChatGPT-compatible mode. |
| OpenAI Base URL | Settings modal (`https://api.openai.com/v1` by default) | Point this to any compatible endpoint (e.g., Azure OpenAI, LocalAI). |
| OpenAI Model | Settings modal (`gpt-4o` default) | Set to the deployed model ID. |
| Temperature | Slider inside the modal | Shared across providers; persisted in `localStorage`. |
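As a sketch of how a build-time `API_KEY` could be injected, one common Vite pattern is a `define` entry; this is an assumption about the setup, so check the repo's actual `vite.config.ts`:

```ts
// Hypothetical vite.config.ts excerpt; the repo's real config may differ.
import { defineConfig, loadEnv } from "vite";

export default defineConfig(({ mode }) => {
  const env = loadEnv(mode, process.cwd(), ""); // load all env vars, no prefix filter
  return {
    define: {
      // Statically replaces process.env.API_KEY in the client bundle at build time.
      "process.env.API_KEY": JSON.stringify(env.API_KEY),
    },
  };
});
```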
To preload secrets during development, you can export them before running Vite:

```bash
export API_KEY="<your_gemini_key>"
npm run dev
```

## Usage Workflow

- Launch the app (`npm run dev`) and open the Settings modal to confirm provider, API keys, and temperature.
- Pick a template or paste your own prompt. Any `{{variable}}` tokens automatically surface as inline inputs; fill them to hydrate the prompt.
- Choose the target LLM; the UI highlights the active provider and loads its optimization framework (see the sketch after this list).
- Click Optimize. A progress bar animates while `services/geminiService.ts` issues the API call and enforces anti-hallucination guardrails.
- Review the optimized prompt, copy it, or store it as a favorite. History entries can be searched, filtered, reused, or toggled between "All" and "Favorites".
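A minimal sketch of how the per-provider optimization frameworks could be organized, assuming a lookup map in `constants.tsx`; the enum values and instruction text are placeholders, not the repo's actual content:

```ts
// Hypothetical shape; the real instruction sets live in constants.tsx.
export enum TargetLLM {
  Gemini = "gemini",
  Claude = "claude",
  ChatGPT = "chatgpt",
  Llama = "llama",
}

// One bespoke system prompt per model family, looked up when the user picks a target.
export const OPTIMIZATION_FRAMEWORKS: Record<TargetLLM, string> = {
  [TargetLLM.Gemini]: "Rewrite the prompt using Gemini-oriented structure...",
  [TargetLLM.Claude]: "Rewrite the prompt using Claude-oriented XML sections...",
  [TargetLLM.ChatGPT]: "Rewrite the prompt using ChatGPT-oriented system/user framing...",
  [TargetLLM.Llama]: "Rewrite the prompt using Llama-oriented instruction tags...",
};
```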
## Testing & Quality

- `npm run test` runs the Vitest suite (see `App.test.tsx` for starter coverage).
- Add component-level tests alongside their modules (e.g., `App.test.tsx`).
- Manual QA checklist:
  - Verify provider switching (Gemini ↔ OpenAI) and API validation messages.
  - Confirm placeholders expand correctly after editing template text.
  - Ensure history persists between reloads and favorites stay pinned.
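A minimal example of what a starter test could look like with this stack; the queried role and assertion are illustrative, not copied from `App.test.tsx`:

```tsx
// Hypothetical component test; adjust queries to match the actual App markup.
import { render, screen } from "@testing-library/react";
import { describe, expect, it } from "vitest";
import App from "./App";

describe("App", () => {
  it("renders a prompt editor textbox", () => {
    render(<App />);
    // getByRole throws if no textbox exists, so this doubles as the assertion.
    expect(screen.getByRole("textbox")).toBeTruthy();
  });
});
```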
## Tech Stack

- Framework: React 19 + TypeScript, bundled by Vite 6.
- UI Patterns: Utility-first CSS classes (Tailwind-style) embedded directly in JSX.
- AI SDKs: `@google/genai` for Gemini; native `fetch` for OpenAI-compatible chat completions.
- State & Storage: React hooks + `localStorage` for history and settings persistence (see the hook sketch below).
- Testing: Vitest + Testing Library + jsdom.
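A minimal sketch of a `localStorage`-backed hook in the spirit of `useSettings.ts`; the generic helper name is an assumption:

```ts
import { useEffect, useState } from "react";

// Hypothetical generic hook; useSettings.ts presumably specializes this idea.
export function useLocalStorage<T>(key: string, initialValue: T) {
  const [value, setValue] = useState<T>(() => {
    const raw = localStorage.getItem(key);
    return raw !== null ? (JSON.parse(raw) as T) : initialValue;
  });

  // Write back on every change so settings and history survive reloads.
  useEffect(() => {
    localStorage.setItem(key, JSON.stringify(value));
  }, [key, value]);

  return [value, setValue] as const;
}
```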
## Roadmap

- Add granular provider adapters (e.g., Anthropic SDK, Groq) instead of routing everything through Gemini/OpenAI code paths.
- Persist history/favorites to IndexedDB or a lightweight backend for multi-device continuity.
- Ship downloadable prompt packs and allow importing/exporting history as JSON.
- Integrate linting/formatting (ESLint + Prettier) and add CI via GitHub Actions.
## License

Distributed under the MIT License.
## Maintainer

Created and maintained by Nuno Salvação ([email protected]). Contributions and feedback are welcome via GitHub issues or pull requests.