# CodePlanner CLI

AI-powered code planning and error analysis for TypeScript/JavaScript projects.

CodePlanner CLI is an intelligent development tool that uses AI to help you plan implementations and debug errors in your codebase. It combines AST parsing, semantic search, and LLM-powered analysis to provide actionable insights and step-by-step guidance.
## Features

- 🧠 **AI-Powered Planning**: Generate detailed implementation plans based on your codebase context
- 🐛 **Intelligent Error Analysis**: Get step-by-step debugging guidance for compiler, runtime, and linter errors
- 📚 **Semantic Code Indexing**: Index your codebase for intelligent code search and context understanding
- ⚡ **Real-time Streaming**: Get responses streamed in real-time for a better user experience
- 🔍 **Context-Aware**: Uses your actual codebase to provide relevant suggestions and examples
## Architecture

CodePlanner CLI is built with a modular architecture:

```
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   CLI Client    │    │    WebSocket    │    │   CodePlanner   │
│                 │◄──►│     Gateway     │◄──►│     Engine      │
│  - plan         │    │                 │    │                 │
│  - analyze-error│    │  - Message      │    │  - AST Parser   │
│  - index        │    │    Routing      │    │  - Embeddings   │
└─────────────────┘    └─────────────────┘    │ - Vector Store  │
                                              │ - Plan Gen      │
                                              │ - Error Analysis│
                                              └─────────────────┘
                                                       │
                                                       ▼
                                              ┌─────────────────┐
                                              │      Redis      │
                                              │                 │
                                              │  - Job Queue    │
                                              │  - Vector Store │
                                              │  - Pub/Sub      │
                                              └─────────────────┘
```
- **CLI Client**: Command-line interface for user interactions
- **WebSocket Gateway**: Real-time communication hub
- **CodePlanner Engine**: Core processing engine with AST parsing, embeddings, and LLM integration
- **Redis**: Message broker and vector storage backend
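The routing hop between these components can be pictured with a small TypeScript sketch. The message shapes and queue names below are illustrative assumptions for this README, not the project's actual wire protocol.

```typescript
// Illustrative only: hypothetical message shapes for the CLI -> gateway hop.
type ClientMessage =
  | { kind: "plan"; projectId: string; goal: string }
  | { kind: "analyze-error"; projectId: string; errorType: "compiler" | "runtime" | "linter"; errorText: string }
  | { kind: "index"; projectId: string };

// The gateway's routing step: decide which Redis-backed job queue
// a message belongs on before the engine worker picks it up.
function queueFor(msg: ClientMessage): string {
  switch (msg.kind) {
    case "plan":
      return "jobs:plan";
    case "analyze-error":
      return "jobs:error-analysis";
    case "index":
      return "jobs:index";
  }
}
```

The gateway stays thin this way: it validates and routes, while all heavy work (parsing, embedding, LLM calls) happens in the engine behind the queue.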
## Quick Start (Docker)

Prerequisites:
- Docker and Docker Compose
- OpenAI API Key
Setup:

```bash
# Clone the repository
git clone <repository-url>
cd codeplanner-cli

# Setup environment
cp docker/docker.env .env
# Edit .env and add your OpenAI API key

# Start all services
./docker/start-codeplanner.sh

# Use the CLI
docker-compose -f docker/docker-compose.yml exec cli bash
```

Quick Commands:

```bash
# Index codebase
docker-compose -f docker/docker-compose.yml exec cli \
  bun packages/cli/src/index.ts index -p examples/sample-project

# Generate plan
docker-compose -f docker/docker-compose.yml exec cli \
  bun packages/cli/src/index.ts plan "Add user authentication" -p examples/sample-project
```

📖 See DOCKER.md for the complete Docker setup guide.
## Manual Setup

Prerequisites:
- Bun >= 1.0
- Docker
- OpenAI API Key
Setup:

```bash
# Clone and install dependencies
git clone <repository-url>
cd codeplanner-cli
bun install

# Configure environment
cp .env.example .env
# Edit .env and add your OpenAI API key

# Start services
bun run docker:up
bun run dev:gateway # Terminal 1
bun run dev:worker  # Terminal 2

# Use the CLI
bun run cli index -p ./your-project
bun run cli plan "Add user authentication" -p ./your-project
```

## Usage

### Indexing Your Codebase

Before using planning or error analysis, index your codebase:

```bash
bun run cli index -p ./your-project
```

This will:
- Parse your TypeScript/JavaScript files using AST analysis
- Extract functions, classes, and interfaces
- Generate embeddings for semantic search
- Store everything in Redis for fast retrieval
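To give a rough picture of the chunking step: the engine chunks along AST boundaries, but a simplified line-window version conveys the idea. The function name, window size, and overlap below are assumptions for illustration, not the engine's actual values.

```typescript
// Simplified stand-in for the engine's AST-aware chunker: split source
// text into fixed-size line windows with a small overlap, so that
// declarations near a window boundary still appear whole in one chunk.
function chunkSource(text: string, maxLines = 40, overlap = 5): string[] {
  const lines = text.split("\n");
  const chunks: string[] = [];
  const step = maxLines - overlap;
  for (let start = 0; start < lines.length; start += step) {
    chunks.push(lines.slice(start, start + maxLines).join("\n"));
    if (start + maxLines >= lines.length) break; // last window reached the end
  }
  return chunks;
}
```

Each chunk is then embedded independently, which is why smaller, overlapping chunks tend to produce better retrieval quality than whole-file embeddings.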
### Generating Plans

Create detailed implementation plans for new features:

```bash
bun run cli plan "Add JWT authentication middleware" -p ./your-project
```

The plan will include:
- High-level architecture overview
- Step-by-step implementation instructions
- Code examples and file changes
- Testing strategies
- Potential challenges and solutions
### Analyzing Errors

Get intelligent debugging help for various error types:

```bash
# Compiler errors
bun run cli analyze-error -t compiler -p ./your-project
# Then paste your TypeScript error

# Runtime errors
bun run cli analyze-error -t runtime -p ./your-project
# Then paste your stack trace

# Linter errors
bun run cli analyze-error -t linter -p ./your-project
# Then paste your linter output
```

## Configuration

### Environment Variables

| Variable | Description | Default |
|---|---|---|
| `OPENAI_API_KEY` | Your OpenAI API key | Optional (or use `OPENROUTER_API_KEY`) |
| `OPENROUTER_API_KEY` | Your OpenRouter API key | Optional (or use `OPENAI_API_KEY`) |
| `REDIS_URL` | Redis connection URL | `redis://localhost:6379` |
| `WS_PORT` | WebSocket gateway port | `3000` |
| `LLM_BASE_URL` | Override base URL for the LLM provider (e.g., `https://openrouter.ai/api/v1`) | unset |
| `EMBEDDING_MODEL` | Embedding model ID | `text-embedding-3-small` |
| `PLANNING_MODEL` | Planning model ID | `gpt-4-turbo-preview` |
| `DEBUG_MODEL` | Debugger model ID | `PLANNING_MODEL` |
| `BATCH_SIZE` | Embedding batch size | `20` |
| `MAX_CONTEXT_CHUNKS` | Max relevant context chunks | `15` |
| `TEMPERATURE` | Sampling temperature | `0.3` |
### Project Configuration

Create a `.codeplannerrc` file in your project root:
```json
{
  "openaiApiKey": "sk-your-api-key-here",
  "endpoint": "ws://localhost:3000",
  "embeddingModel": "text-embedding-3-small",
  "planningModel": "gpt-4-turbo-preview",
  "userId": "user1",
  "projectId": "my-project",
  "maxContextChunks": 15,
  "batchSize": 20,
  "temperature": 0.3
}
```

## Testing

Run the complete test suite:

```bash
# Set your OpenAI API key
export OPENAI_API_KEY="your-api-key-here"

# Run the test flow
bun run test:flow
```

This will test:
- Codebase indexing
- Plan generation
- Error analysis
- End-to-end workflow
## Project Structure

```
codeplanner-cli/
├── packages/
│   ├── cli/                    # CLI frontend
│   │   ├── src/
│   │   │   ├── commands/       # CLI commands
│   │   │   ├── client/         # WebSocket client
│   │   │   └── utils/          # Utilities
│   │   └── package.json
│   │
│   ├── gateway/                # WebSocket gateway
│   │   ├── src/
│   │   │   ├── server.ts       # WebSocket server
│   │   │   ├── redis.ts        # Redis client
│   │   │   └── types.ts        # Gateway types
│   │   └── package.json
│   │
│   ├── engine/                 # CodePlanner engine
│   │   ├── src/
│   │   │   ├── parser/         # AST parsing
│   │   │   ├── embeddings/     # Embedding generation
│   │   │   ├── vector-store/   # Vector storage
│   │   │   ├── planner/        # Plan generation
│   │   │   ├── error-analysis/ # Error analysis
│   │   │   └── worker.ts       # Main worker
│   │   └── package.json
│   │
│   └── shared/                 # Shared types
│       ├── src/types.ts
│       └── package.json
│
├── examples/
│   └── sample-project/         # Test project
│
├── docker/
│   └── docker-compose.yml      # Redis service
│
├── test-flow.sh                # Test script
├── .codeplannerrc.example      # Config template
├── env.example                 # Environment template
├── QUICKSTART.md               # Quick start guide
└── README.md                   # This file
```
## How It Works

### Indexing

- Uses `ts-morph` to parse TypeScript/JavaScript files
- Extracts functions, classes, interfaces, and types
- Chunks large files for better embedding quality

### Semantic Search

- Generates embeddings using OpenAI's `text-embedding-3-small`
- Stores vectors in Redis for fast similarity search
- Finds relevant code based on semantic meaning, not just keywords

### Plan Generation

- Uses GPT-4 to generate implementation plans
- Considers your actual codebase context
- Provides specific, actionable steps with code examples

### Error Analysis

- Parses various error types (compiler, runtime, linter)
- Uses semantic search to find related code
- Generates step-by-step debugging plans with fixes
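The retrieval step boils down to cosine similarity between the query embedding and the stored chunk embeddings. A minimal version looks like this (the real engine delegates this to Redis; the function names here are illustrative):

```typescript
// Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank stored chunks by similarity to the query and keep the top k,
// mirroring the MAX_CONTEXT_CHUNKS cutoff described above.
function topChunks(query: number[], chunks: { id: string; vector: number[] }[], k: number): string[] {
  return chunks
    .map((c) => ({ id: c.id, score: cosineSimilarity(query, c.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((c) => c.id);
}
```

The top-k chunk texts are then injected into the LLM prompt, which is how plans and error analyses end up referencing your actual code.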
## Current Limitations

- Single user mode (no authentication)
- Basic vector search (cosine similarity)
- Full reindexing only (no incremental updates)
- Limited error type support
- No caching (every request hits LLM)
- CLI-only interface
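The "no caching" limitation could be narrowed with a small memoization layer in front of the LLM call. The class below is a hypothetical sketch, not part of the current codebase:

```typescript
// Hypothetical memoization layer: identical (model, prompt) pairs reuse
// the stored completion instead of hitting the LLM again.
class CompletionCache {
  private store = new Map<string, string>();

  private key(model: string, prompt: string): string {
    // NUL separator avoids collisions between model and prompt text.
    return `${model}\u0000${prompt}`;
  }

  async getOrCompute(model: string, prompt: string, call: () => Promise<string>): Promise<string> {
    const k = this.key(model, prompt);
    const hit = this.store.get(k);
    if (hit !== undefined) return hit;
    const result = await call();
    this.store.set(k, result);
    return result;
  }
}
```

Since indexing makes repeated plan requests over the same context likely, even an in-memory cache like this would cut duplicate LLM spend noticeably.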
## Roadmap

- ✅ Core MVP with basic functionality
- ✅ AST parsing and embeddings
- ✅ Plan generation and error analysis
- ✅ CLI interface
- 🔄 Redis Vector Sets for better search
- 🔄 Incremental indexing with file watchers
- 🔄 Multi-language support (Python, Go, etc.)
- 🔄 VS Code extension
- 🔄 Team collaboration features
- 🔄 Cost tracking dashboard
- 🔄 Custom model support
- 🔄 Integration with CI/CD pipelines
- 🔄 Advanced analytics and insights
## Contributing

We welcome contributions! Please see our contributing guidelines for details.
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
Development setup:

```bash
# Install dependencies
bun install

# Start all services in development mode
bun run dev:gateway # Terminal 1
bun run dev:worker  # Terminal 2
bun run dev:cli     # Terminal 3
```

## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- OpenAI for providing the AI models
- ts-morph for TypeScript AST manipulation
- Bun for the fast JavaScript runtime
- Redis for vector storage and message queuing
Happy coding with CodePlanner! 🎉