
Hemingway

A powerful AI-driven tool for analyzing, optimizing, and improving prompts across multiple language models. Built with Next.js and FastAPI, featuring specialized AI agents for comprehensive prompt analysis.

Prompt Optimizer

🚀 Features

🔍 Multi-Agent Analysis

  • Instruction Extraction: Identifies mandatory instructions within prompts
  • General Issue Detection: Finds clarity, structure, and effectiveness problems
  • Contradiction Detection: Spots conflicting or contradictory requirements
  • Format Checking: Validates prompt structure and formatting

🎯 Intelligent Optimization

  • Automated Rewriting: AI-powered prompt optimization based on detected issues
  • Improvement Suggestions: Detailed recommendations for better prompt performance
  • Multi-Model Support: Works with GPT-4, Claude, Gemini, and Moonshot models

📊 Rich Analysis Interface

  • Interactive Results: Tabbed interface showing analysis, issues, and optimized versions
  • Real-time Feedback: Live analysis with progress indicators
  • Export Options: Multiple export formats (Plain Text, Markdown, Python code)
  • Diff Visualization: Side-by-side comparison of original vs optimized prompts

🎨 Modern UI/UX

  • Responsive Design: Works seamlessly across desktop and mobile devices
  • Dark/Light Mode: Theme switching with system preference detection
  • Smooth Animations: GSAP-powered animations and transitions
  • Accessibility: Built on Radix UI primitives for keyboard and screen-reader support

πŸ—οΈ Architecture

Frontend (Next.js)

frontend/
├── app/                    # Next.js App Router pages
├── components/             # React components
│   ├── ui/                # shadcn/ui components
│   ├── input-view.tsx     # Prompt input interface
│   ├── results-view.tsx   # Analysis results display
│   └── export-dialog.tsx  # Export functionality
├── lib/                   # Utilities and types
│   ├── api-client.ts      # Backend API integration
│   ├── types.ts           # TypeScript definitions
│   └── export-utils.ts    # Export functionality
└── hooks/                 # Custom React hooks

Backend (FastAPI)

api/
├── src/
│   ├── app.py             # FastAPI application
│   ├── custom_agents.py   # AI agent definitions
│   ├── models/            # Pydantic data models
│   │   ├── instruction.py # Instruction extraction models
│   │   ├── issue.py       # Issue detection models
│   │   ├── improvement.py # Improvement models
│   │   └── prompt.py      # Prompt rewriting models
│   └── utils.py           # Utility functions
└── requirements.txt       # Python dependencies

🛠️ Tech Stack

Frontend

  • Framework: Next.js 15 with App Router
  • Runtime: React 19
  • Language: TypeScript
  • Styling: Tailwind CSS
  • Components: Radix UI + shadcn/ui
  • Animations: GSAP
  • Icons: Lucide React

Backend

  • Framework: FastAPI
  • Language: Python 3.8+
  • AI Framework: OpenAI Agents
  • Model Support: litellm (OpenAI, Anthropic, Google, Moonshot)
  • Validation: Pydantic
  • CORS: FastAPI middleware

📦 Installation

Prerequisites

  • Node.js 18+ and pnpm
  • Python 3.8+
  • API keys for supported model providers

Frontend Setup

cd frontend
pnpm install
pnpm dev

The frontend will be available at http://localhost:3000

Backend Setup

cd api
pip install -r requirements.txt

# Set up environment variables
cp .env.example .env
# Add your API keys to .env

python src/app.py

The API will be available at http://localhost:4000

Environment Variables

Create a .env file in the api directory:

# OpenAI
OPENAI_API_KEY=your_openai_api_key

# Anthropic (Optional)
ANTHROPIC_API_KEY=your_anthropic_api_key

# Google (Optional)
GOOGLE_API_KEY=your_google_api_key

# Moonshot (Optional)
MOONSHOT_API_KEY=your_moonshot_api_key

# Frontend URL (for CORS)
FRONTEND_URL=http://localhost:3000
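Before starting the server, it can help to verify which provider keys are actually configured. The key names below match the `.env` template above; the `check_env` helper itself is an illustrative sketch, not part of the repository:

```python
import os

REQUIRED = ["OPENAI_API_KEY"]
OPTIONAL = ["ANTHROPIC_API_KEY", "GOOGLE_API_KEY", "MOONSHOT_API_KEY"]

def check_env(environ=os.environ):
    """Return (missing required keys, configured optional keys)."""
    missing = [k for k in REQUIRED if not environ.get(k)]
    configured = [k for k in OPTIONAL if environ.get(k)]
    return missing, configured
```

Models whose provider key is absent can then be disabled in the model selector before any request is made.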

🚀 Usage

Basic Workflow

  1. Input Prompt: Enter your prompt in the text area
  2. Select Model: Choose from available AI models (GPT-4, Claude, Gemini, etc.)
  3. Analyze: Click "Analyze Prompt" to start the multi-agent analysis
  4. Review Results: Examine extracted instructions, identified issues, and suggestions
  5. Export: Download optimized prompts in your preferred format

API Usage

Analyze Prompt

curl -X POST "http://localhost:4000/api/analyze" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "prompt": "Your prompt text here"
  }'

Response Format

{
  "changes": true,
  "optimized_prompt": "Optimized version of your prompt...",
  "extracted_instructions": {
    "instructions": [
      {
        "instruction_title": "Main Task",
        "extracted_instruction": "Specific instruction text..."
      }
    ]
  },
  "general_issues": {
    "has_issues": true,
    "issues": [
      {
        "issue": "Issue description",
        "priority": "high",
        "snippet": "Problematic text",
        "explanation": "Why this is an issue",
        "suggestion": "How to fix it"
      }
    ]
  },
  "improvements": {
    "improvements": [
      {
        "description": "Improvement description"
      }
    ]
  }
}
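A response of this shape can be unpacked in Python. The field names below come from the sample above; `summarize_analysis` is an illustrative helper, not part of the repository, and the HTTP call itself (any client works) is omitted:

```python
def summarize_analysis(payload: dict) -> dict:
    """Condense an /api/analyze response into counts and the optimized prompt."""
    issues = payload.get("general_issues", {}).get("issues", [])
    return {
        "changed": payload.get("changes", False),
        "optimized_prompt": payload.get("optimized_prompt", ""),
        "instruction_count": len(
            payload.get("extracted_instructions", {}).get("instructions", [])
        ),
        "high_priority_issues": [
            i["issue"] for i in issues if i.get("priority") == "high"
        ],
        "improvement_count": len(
            payload.get("improvements", {}).get("improvements", [])
        ),
    }
```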

🔧 Development

Frontend Development

cd frontend
pnpm dev          # Start development server
pnpm build        # Build for production
pnpm lint         # Run ESLint
pnpm start        # Serve production build

Backend Development

cd api
python src/app.py     # Start development server with reload

Project Structure

  • Frontend: Modern React app with TypeScript, using Next.js App Router
  • Backend: Async FastAPI server with rate limiting and multi-model support
  • AI Agents: Specialized agents for different analysis tasks
  • Data Flow: Parallel agent execution → structured analysis → optimization
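The parallel step in that data flow can be sketched with asyncio.gather. The three agent coroutines here are hypothetical stand-ins for the real definitions in custom_agents.py:

```python
import asyncio

# Hypothetical stand-ins for the real analysis agents.
async def extract_instructions(prompt: str) -> dict:
    return {"instructions": []}

async def detect_issues(prompt: str) -> dict:
    return {"has_issues": False, "issues": []}

async def check_contradictions(prompt: str) -> dict:
    return {"contradictions": []}

async def analyze(prompt: str) -> dict:
    # Run the independent agents concurrently, then combine their outputs
    # into one structured result for the optimization step.
    instructions, issues, contradictions = await asyncio.gather(
        extract_instructions(prompt),
        detect_issues(prompt),
        check_contradictions(prompt),
    )
    return {
        "extracted_instructions": instructions,
        "general_issues": issues,
        "contradictions": contradictions,
    }
```

Because the agents are independent, total latency is roughly that of the slowest agent rather than the sum of all three.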

🌐 Deployment

Frontend (Render/Vercel)

The frontend is configured for static export and can be deployed to any static hosting service.

For Render:

services:
- type: web
  name: nextjs-static
  runtime: static
  buildCommand: pnpm install --no-frozen-lockfile; pnpm build
  staticPublishPath: out

Backend (Python Hosting)

Deploy the FastAPI backend to services like Render, Railway, or AWS:

# Production server
uvicorn src.app:app --host 0.0.0.0 --port $PORT

🤝 Supported Models

Available Models

  • GPT-4.1 - Latest OpenAI model
  • GPT-4o - Optimized GPT-4 variant
  • o3 Mini - OpenAI's newest mini model
  • Gemini 2.5 Pro/Flash - Google's latest models
  • Claude 4 Sonnet - Anthropic's advanced model
  • Kimi K2 - Moonshot's preview model

Coming Soon

  • Claude 4 Opus - Anthropic's most powerful model
  • GPT-5 Mini/Nano - Next-generation OpenAI models

📊 Rate Limiting

The API implements rate limiting to prevent abuse:

  • Limit: 10 requests per 10-minute window
  • Scope: tracked separately for each model
  • Response: HTTP 429 when the limit is exceeded
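Those numbers correspond to a per-model sliding-window counter. The class below is a sketch of one possible implementation, not the actual middleware; the injectable clock exists only to make it testable:

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 600   # 10 minutes
MAX_REQUESTS = 10      # per model within the window

class PerModelRateLimiter:
    def __init__(self, now=time.monotonic):
        self._now = now
        self._hits = defaultdict(deque)  # model -> timestamps of recent requests

    def allow(self, model: str) -> bool:
        """Record a request for `model`; False means respond with HTTP 429."""
        now = self._now()
        hits = self._hits[model]
        # Drop timestamps that have aged out of the window.
        while hits and now - hits[0] > WINDOW_SECONDS:
            hits.popleft()
        if len(hits) >= MAX_REQUESTS:
            return False
        hits.append(now)
        return True
```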

🛡️ Security & Privacy

  • API Key Security: Keys live in server-side environment variables and are never sent to the client
  • Rate Limiting: Prevents API abuse and excessive usage
  • CORS Configuration: Restricts frontend access to authorized domains
  • No Data Storage: Prompts are processed in real-time, not stored

📄 License

MIT License - see LICENSE file for details


Built with ❤️ using Next.js, FastAPI, and AI agents
