A powerful AI-driven tool for analyzing and optimizing prompts across multiple language models. Built with Next.js and FastAPI, featuring specialized AI agents for comprehensive prompt analysis.
- Instruction Extraction: Identifies mandatory instructions within prompts
- General Issue Detection: Finds clarity, structure, and effectiveness problems
- Contradiction Detection: Spots conflicting or contradictory requirements
- Format Checking: Validates prompt structure and formatting
- Automated Rewriting: AI-powered prompt optimization based on detected issues
- Improvement Suggestions: Detailed recommendations for better prompt performance
- Multi-Model Support: Works with GPT-4, Claude, Gemini, and Moonshot models
- Interactive Results: Tabbed interface showing analysis, issues, and optimized versions
- Real-time Feedback: Live analysis with progress indicators
- Export Options: Multiple export formats (Plain Text, Markdown, Python code)
- Diff Visualization: Side-by-side comparison of original vs optimized prompts
- Responsive Design: Works seamlessly across desktop and mobile devices
- Dark/Light Mode: Theme switching with system preference detection
- Smooth Animations: GSAP-powered animations and transitions
- Accessibility: Built on Radix UI primitives for accessible, keyboard-navigable components
frontend/
├── app/                  # Next.js App Router pages
├── components/           # React components
│   ├── ui/               # shadcn/ui components
│   ├── input-view.tsx    # Prompt input interface
│   ├── results-view.tsx  # Analysis results display
│   └── export-dialog.tsx # Export functionality
├── lib/                  # Utilities and types
│   ├── api-client.ts     # Backend API integration
│   ├── types.ts          # TypeScript definitions
│   └── export-utils.ts   # Export functionality
└── hooks/                # Custom React hooks
api/
├── src/
│   ├── app.py             # FastAPI application
│   ├── custom_agents.py   # AI agent definitions
│   ├── models/            # Pydantic data models
│   │   ├── instruction.py # Instruction extraction models
│   │   ├── issue.py       # Issue detection models
│   │   ├── improvement.py # Improvement models
│   │   └── prompt.py      # Prompt rewriting models
│   └── utils.py           # Utility functions
└── requirements.txt       # Python dependencies
- Framework: Next.js 15 with App Router
- Runtime: React 19
- Language: TypeScript
- Styling: Tailwind CSS
- Components: Radix UI + shadcn/ui
- Animations: GSAP
- Icons: Lucide React
- Framework: FastAPI
- Language: Python 3.8+
- AI Framework: OpenAI Agents
- Model Support: litellm (OpenAI, Anthropic, Google, Moonshot)
- Validation: Pydantic
- CORS: FastAPI middleware
- Node.js 18+ and pnpm
- Python 3.8+
- API keys for supported model providers
cd frontend
pnpm install
pnpm dev
The frontend will be available at http://localhost:3000
cd api
pip install -r requirements.txt
# Set up environment variables
cp .env.example .env
# Add your API keys to .env
python src/app.py
The API will be available at http://localhost:4000
Create a .env file in the api directory:
# OpenAI
OPENAI_API_KEY=your_openai_api_key
# Anthropic (Optional)
ANTHROPIC_API_KEY=your_anthropic_api_key
# Google (Optional)
GOOGLE_API_KEY=your_google_api_key
# Moonshot (Optional)
MOONSHOT_API_KEY=your_moonshot_api_key
# Frontend URL (for CORS)
FRONTEND_URL=http://localhost:3000
- Input Prompt: Enter your prompt in the text area
- Select Model: Choose from available AI models (GPT-4, Claude, Gemini, etc.)
- Analyze: Click "Analyze Prompt" to start the multi-agent analysis
- Review Results: Examine extracted instructions, identified issues, and suggestions
- Export: Download optimized prompts in your preferred format
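The same analysis can be scripted against the backend. A minimal Python client sketch, assuming the API runs at its default local address shown above (the function name and default model are illustrative, not part of the project):

```python
import json
import urllib.request

API_URL = "http://localhost:4000/api/analyze"  # assumed local backend address

def analyze_prompt(prompt: str, model: str = "gpt-4o") -> dict:
    """POST a prompt to the analysis endpoint and return the parsed JSON response."""
    body = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    request = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    result = analyze_prompt("Summarize this article in three bullet points.")
    print(result["optimized_prompt"])
```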
curl -X POST "http://localhost:4000/api/analyze" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o",
"prompt": "Your prompt text here"
}'
{
"changes": true,
"optimized_prompt": "Optimized version of your prompt...",
"extracted_instructions": {
"instructions": [
{
"instruction_title": "Main Task",
"extracted_instruction": "Specific instruction text..."
}
]
},
"general_issues": {
"has_issues": true,
"issues": [
{
"issue": "Issue description",
"priority": "high",
"snippet": "Problematic text",
"explanation": "Why this is an issue",
"suggestion": "How to fix it"
}
]
},
"improvements": {
"improvements": [
{
"description": "Improvement description"
}
]
}
}
cd frontend
pnpm dev # Start development server
pnpm build # Build for production
pnpm lint # Run ESLint
pnpm start # Serve production build
cd api
python src/app.py # Start development server with reload
- Frontend: Modern React app with TypeScript, using Next.js App Router
- Backend: Async FastAPI server with rate limiting and multi-model support
- AI Agents: Specialized agents for different analysis tasks
- Data Flow: Parallel agent execution → structured analysis → optimization
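The parallel-agents step can be sketched with `asyncio.gather`. The agent functions below are hypothetical stand-ins; the real project dispatches them through the OpenAI Agents framework:

```python
import asyncio

# Hypothetical stand-ins for the specialized analysis agents.
async def extract_instructions(prompt: str) -> dict:
    return {"instructions": [prompt.split(".")[0]]}

async def detect_issues(prompt: str) -> dict:
    return {"has_issues": "?" not in prompt}

async def suggest_improvements(prompt: str) -> dict:
    return {"improvements": []}

async def analyze(prompt: str) -> dict:
    # All agents run concurrently; their structured outputs are merged
    # into the single response returned to the frontend.
    instructions, issues, improvements = await asyncio.gather(
        extract_instructions(prompt),
        detect_issues(prompt),
        suggest_improvements(prompt),
    )
    return {
        "extracted_instructions": instructions,
        "general_issues": issues,
        "improvements": improvements,
    }
```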
The frontend is configured for static export and can be deployed to any static hosting service.
For Render:
services:
- type: web
name: nextjs-static
runtime: static
buildCommand: pnpm install --no-frozen-lockfile; pnpm build
staticPublishPath: out
Deploy the FastAPI backend to services like Render, Railway, or AWS:
# Production server
uvicorn src.app:app --host 0.0.0.0 --port $PORT
- GPT-4.1 - Latest OpenAI model
- GPT-4o - Optimized GPT-4 variant
- o3 Mini - OpenAI's newest mini model
- Gemini 2.5 Pro/Flash - Google's latest models
- Claude 4 Sonnet - Anthropic's advanced model
- Kimi K2 - Moonshot's preview model
- Claude 4 Opus - Anthropic's most powerful model
- GPT-5 Mini/Nano - Next-generation OpenAI models
The API implements rate limiting to prevent abuse:
- Limit: 10 requests per 10 minutes per model
- Scope: Per model basis
- Response: HTTP 429 when limit exceeded
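Clients should expect the 429 response and back off rather than hammering the endpoint. A sketch of one way to handle it, assuming the client uses `urllib` (the `Retry-After` header handling is an assumption; the API may not send that header):

```python
import time
import urllib.error

def post_with_backoff(send, max_retries=3):
    """Call send() (any zero-argument HTTP request), retrying on HTTP 429.

    `send` is expected to raise urllib.error.HTTPError on failure statuses,
    as urllib.request.urlopen does.
    """
    for attempt in range(max_retries):
        try:
            return send()
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise
            # Honor Retry-After when the server provides it; otherwise
            # back off exponentially (1s, 2s, 4s, ...).
            wait = float(err.headers.get("Retry-After") or 2 ** attempt)
            time.sleep(wait)
    raise RuntimeError("still rate limited after retries")
```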
- API Key Security: All API keys stored securely in environment variables
- Rate Limiting: Prevents API abuse and excessive usage
- CORS Configuration: Restricts frontend access to authorized domains
- No Data Storage: Prompts are processed in real-time, not stored
MIT License - see LICENSE file for details
Built with ❤️ using Next.js, FastAPI, and AI agents
