Generate production-ready infrastructure from plain English descriptions
Text2IaC transforms natural language descriptions into production-ready infrastructure code. Simply describe what you need in plain English, and get:
- Terraform modules for cloud infrastructure
- Docker Compose files for local development
- Kubernetes manifests for container orchestration
- Monitoring setup with Prometheus/Grafana
- CI/CD pipelines with GitHub Actions
- 📧 Email Integration - Send requests via email
- 🌐 Web Interface - User-friendly dashboard
- 🤖 AI-Powered - Uses Mistral 7B locally via Ollama
- 🔒 Fully Local - No external dependencies or data sharing
- ⚡ Fast Setup - Running in 5 minutes
- 🎨 Template-Based - Reusable, tested patterns
- Docker & Docker Compose
- 8GB RAM (for LLM)
- 20GB disk space
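A quick way to confirm these prerequisites from a shell (on macOS, check the RAM and disk limits in Docker Desktop's settings instead of free/df):

```bash
# Verify Docker, Compose, available memory, and free disk space
docker --version
docker-compose --version
free -h    # at least 8GB RAM recommended for the LLM
df -h .    # at least 20GB free disk space
```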
git clone https://github.com/devopsterminal/text2iac-platform.git
cd text2iac-platform
# Copy environment template
cp .env.example .env
# Start all services
make start
# Monitor Ollama logs
docker-compose logs -f ollama
# Should see: "Mistral 7B model loaded successfully"
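If make is unavailable, the same result can be reached manually. This is only a sketch of what the start target is assumed to do (bring up the Compose stack and pull the model), not the actual Makefile contents:

```bash
# Assumed equivalent of `make start` -- check the Makefile for the real targets
docker-compose up -d                                  # start all services in the background
docker-compose exec ollama ollama pull mistral:7b     # pull the model if the image does not ship it
docker-compose ps                                     # confirm every container is running
```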
# Send test email (if SMTP configured)
echo "Create a Node.js API with PostgreSQL database" | \
mail -s "[TEXT2IAC] Test API" infrastructure@localhost
# Open web interface
open http://localhost:8080
# Or manually navigate to http://localhost:8080
Subject: [TEXT2IAC] User Management API
Create a Node.js REST API with:
- User authentication (JWT)
- PostgreSQL database
- Redis caching
- Auto-scaling setup
- Basic monitoring
Expected traffic: 1000 requests/hour
Environment: Production
Generated Output:
- ✅ Terraform AWS infrastructure
- ✅ Docker Compose for local dev
- ✅ Kubernetes manifests
- ✅ Monitoring dashboard
- ✅ CI/CD pipeline
Build an e-commerce platform with:
- Product catalog (Elasticsearch)
- Shopping cart (Redis)
- Payment processing (Stripe integration)
- Order management (PostgreSQL)
- Admin dashboard
- Real-time analytics
Generated Output:
- ✅ Microservices architecture
- ✅ API Gateway setup
- ✅ Database migrations
- ✅ Load balancer configuration
- ✅ Security best practices
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│      Email      │◄───►│    Text2IaC     │◄───►│    Generated    │
│   Integration   │     │   API Server    │     │ Infrastructure  │
└─────────────────┘     └─────────────────┘     └─────────────────┘
         │                       │                       │
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  Web Interface  │◄───►│   Mistral 7B    │◄───►│    Templates    │
│    Dashboard    │     │  (via Ollama)   │     │   & Examples    │
└─────────────────┘     └─────────────────┘     └─────────────────┘
```
Send infrastructure requests to [email protected]. Example request:
To: [email protected]
Subject: [TEXT2IAC] Project Name
Describe your infrastructure needs in plain English...
Access the web dashboard at http://localhost:8080. The interface provides:
- Text input form
- Template gallery
- Status tracking
- Request history
Direct API access for programmatic use:
# Generate infrastructure
curl -X POST http://localhost:3001/api/generate \
-H "Content-Type: application/json" \
-d '{"description": "Create a blog with comments", "environment": "production"}'
# Check status
curl http://localhost:3001/api/status/{request_id}
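For scripted use, the two endpoints above combine into a simple submit-and-poll loop. The request_id and status field names are assumptions about the response shape; adjust them to match the actual API schema:

```bash
#!/usr/bin/env bash
# Submit a generation request and poll until it leaves the "pending" state.
# Field names "request_id" and "status" are assumed -- verify against the real API response.
set -euo pipefail

REQUEST_ID=$(curl -s -X POST http://localhost:3001/api/generate \
  -H "Content-Type: application/json" \
  -d '{"description": "Create a blog with comments", "environment": "production"}' \
  | jq -r '.request_id')
echo "Submitted request ${REQUEST_ID}"

while true; do
  STATUS=$(curl -s "http://localhost:3001/api/status/${REQUEST_ID}" | jq -r '.status')
  echo "Status: ${STATUS}"
  if [ "${STATUS}" != "pending" ]; then
    break
  fi
  sleep 5
done
```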
# Core Settings
OLLAMA_MODEL=mistral:7b
API_PORT=3001
WEB_PORT=8080
# Email Configuration (optional)
SMTP_HOST=mail.company.com
SMTP_USER=[email protected]
SMTP_PASS=your-password
IMAP_HOST=mail.company.com
# Database
DB_HOST=postgres
DB_NAME=text2iac
DB_USER=text2iac
DB_PASS=secure-password
# Security
JWT_SECRET=your-jwt-secret
API_KEY=your-api-key
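JWT_SECRET, API_KEY, and DB_PASS should be long random values rather than the placeholders above; one straightforward way to generate them is with the openssl CLI:

```bash
# Generate random secrets to paste into .env
openssl rand -hex 32      # JWT_SECRET
openssl rand -hex 32      # API_KEY
openssl rand -base64 24   # DB_PASS
```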
If you want email integration, configure SMTP/IMAP:
- Gmail/Google Workspace:
  SMTP_HOST=smtp.gmail.com
  IMAP_HOST=imap.gmail.com
  SMTP_USER=[email protected]
  SMTP_PASS=app-specific-password
- Microsoft Exchange:
  SMTP_HOST=smtp.office365.com
  IMAP_HOST=outlook.office365.com
- Internal Mail Server:
  SMTP_HOST=mail.company.internal
  IMAP_HOST=mail.company.internal
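Before starting the email bridge, the SMTP credentials can be sanity-checked with a STARTTLS handshake (shown against the Gmail host above; substitute your own server and port):

```bash
# Open a STARTTLS SMTP session to verify connectivity and the TLS handshake
openssl s_client -starttls smtp -connect smtp.gmail.com:587 -crlf -quiet
```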
# Check all services
make health-check
# Individual service health
curl http://localhost:3001/health # API Server
curl http://localhost:8080/health # Web Interface
curl http://localhost:11434/api/ps # Ollama LLM
# View all logs
docker-compose logs -f
# Specific service logs
docker-compose logs -f api
docker-compose logs -f email-bridge
docker-compose logs -f ollama
- Request metrics: http://localhost:3001/metrics
- System metrics: http://localhost:9090 (if Prometheus enabled; see the query example after this list)
- Dashboards: http://localhost:3000 (if Grafana enabled)
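When the optional Prometheus service is running, its standard HTTP API can be queried directly; for example, the built-in up metric lists which scrape targets are currently reachable:

```bash
# Query Prometheus for targets that are up (requires the optional monitoring stack)
curl -s 'http://localhost:9090/api/v1/query?query=up' | jq '.data.result[].metric'
```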
# Start in development mode
make dev
# Run tests
make test
# Code formatting
make format
# Type checking
make lint
- Create the template in the templates/ directory
- Add it to the template registry in api/src/services/template.service.ts
- Test with an example request (see the sketch below)
- Update the documentation
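A rough command-line outline of that workflow; the template.yaml filename and directory layout are assumptions, so mirror whatever structure the existing templates under templates/ use:

```bash
# 1. Scaffold the new template next to the existing ones (layout is an assumption)
mkdir -p templates/my-static-site
$EDITOR templates/my-static-site/template.yaml

# 2. Register it in api/src/services/template.service.ts, then rebuild the API service
docker-compose up -d --build api

# 3. Test it with an example request
curl -X POST http://localhost:3001/api/generate \
  -H "Content-Type: application/json" \
  -d '{"description": "Create a static site using the my-static-site template", "environment": "development"}'
```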
Modify LLM prompts in config/prompts/:
- system-prompt.txt - Base instructions for the LLM
- terraform-prompt.txt - Terraform-specific guidance
- kubernetes-prompt.txt - Kubernetes-specific guidance
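Prompt files are plain text, so customizing them is an edit followed by a restart of the service that loads them (assumed here to be the api container; check which service actually mounts config/prompts/):

```bash
# Adjust the base prompt, then restart the service that reads it
$EDITOR config/prompts/system-prompt.txt
docker-compose restart api
```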
# Production deployment
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d
# With monitoring stack
docker-compose -f docker-compose.yml -f docker-compose.monitoring.yml up -d
# Apply manifests
kubectl apply -f k8s/
# Check status
kubectl get pods -n text2iac
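If the manifests do not expose the web interface through an Ingress, port-forwarding is a quick way to reach it; the service name text2iac-web below is a hypothetical example, so check kubectl get svc for the real one:

```bash
# List services, then forward the (hypothetical) web service to localhost:8080
kubectl get svc -n text2iac
kubectl port-forward -n text2iac svc/text2iac-web 8080:8080
```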
- AWS: Use ECS or EKS with provided configurations
- Azure: Use Container Instances or AKS
- GCP: Use Cloud Run or GKE
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
See CONTRIBUTING.md for detailed guidelines.
- Getting Started Guide
- Architecture Overview
- API Reference
- Deployment Guide
- Examples & Use Cases
LLM not responding:
# Check Ollama status
curl http://localhost:11434/api/ps
# Restart Ollama
docker-compose restart ollama
Email not working:
# Check email bridge logs
docker-compose logs email-bridge
# Test SMTP connection
telnet $SMTP_HOST 587
Web interface not loading:
# Check frontend logs
docker-compose logs frontend
# Verify API connection
curl http://localhost:3001/health
For better LLM performance:
- Increase Docker memory limit to 12GB+
- Use a GPU if available (NVIDIA Docker runtime; see the check below)
- Consider faster models like CodeLlama 7B
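Before relying on GPU acceleration, confirm that containers can actually see the GPU; a common smoke test (assuming the NVIDIA Container Toolkit is installed) is:

```bash
# Verify that Docker containers can access the GPU via the NVIDIA runtime
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```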
For high request volume:
- Scale the API service (docker-compose up --scale api=3)
- Add Redis caching
- Use a load balancer (Nginx/HAProxy)
- ✅ Email integration
- ✅ Web interface
- ✅ Basic templates (Terraform, Docker Compose)
- ✅ Local LLM (Mistral 7B)
- 🔲 Slack/Teams integration
- 🔲 Template gallery
- 🔲 Request history
- 🔲 User authentication
- 🔲 Backstage plugin
- 🔲 ArgoCD integration
- 🔲 Multi-cloud support
- 🔲 Advanced monitoring
This project is licensed under the Apache License - see the LICENSE file for details.
- Ollama - Local LLM runtime
- Mistral AI - Language model
- Terraform - Infrastructure as Code
- Docker - Containerization
- Email: [email protected]
- Slack: #text2iac-support
- Issues: GitHub Issues
- Documentation: Wiki
Made with ❤️ by the Platform Engineering Team