A modern, full-stack social media application for creating and sharing posts with real-time voting and interactions. Built with Flask (backend) and Next.js (frontend), deployed on Kubernetes.
- Staging: https://power-ranger-staging.kubernetes.devops.cs.ut.ee/
- Production: https://power-ranger-prod.kubernetes.devops.cs.ut.ee/
- Infrastructure: ETAIS VM (4 vCPU, 8GB RAM, 100GB Storage)
- Overview
- Features
- Tech Stack
- Architecture
- Quick Start (Local Development)
- API Documentation
- Deployment
- CI/CD Pipeline
- Project Structure
- Configuration
- Troubleshooting
This project is a Reddit-style post sharing platform with:
- User authentication and profiles
- Post creation with image uploads (up to 10MB)
- Real-time voting system via WebSocket
- Privacy controls (public/private/anonymous posts)
- Responsive design with Tailwind CSS
- Production-ready Kubernetes deployment
This project was developed with assistance from AI tools for:
- Generating boilerplate code and component structures
- Creating UI components with Tailwind CSS
- Debugging and troubleshooting
- CI/CD pipeline configurations
- ✅ User registration and JWT-based authentication
- ✅ Profile management with password change
- ✅ Create posts with text and optional images
- ✅ Upvote/downvote posts with real-time updates
- ✅ Privacy controls (public/private/anonymous)
- ✅ Real-time WebSocket notifications
- ✅ Responsive grid layout
- ✅ XSS protection with input sanitization
- ✅ CORS support for cross-origin requests
- ✅ HttpOnly cookie authentication
- ✅ Image upload with size validation
- ✅ Automated CI/CD with GitHub Actions
- ✅ Kubernetes deployment with Helm
- ✅ Database high availability with CloudNativePG
- ✅ Prometheus monitoring integration
- Python: 3.11
- Framework: Flask 3.x
- Database: PostgreSQL 17
- ORM: Flask-SQLAlchemy
- Authentication: Flask-JWT-Extended
- Real-time: Flask-SocketIO, Socket.io
- Testing: pytest with coverage
- Framework: Next.js 15.5.9 (App Router + Turbopack)
- React: 19
- Styling: Tailwind CSS
- HTTP Client: Axios
- Real-time: Socket.io-client
- Linting: ESLint, Prettier
- Containers: Docker
- Orchestration: Kubernetes (ETAIS Cloud)
- Package Manager: Helm
- CI/CD: GitHub Actions
- Database HA: CloudNativePG
- Ingress: NGINX Ingress Controller
- Monitoring: Prometheus, ServiceMonitor
- Security Scanning: Trivy
┌─────────────┐    HTTPS     ┌───────────────┐
│   Browser   │ ───────────► │ Next.js:3000  │
└─────────────┘              └───────┬───────┘
                                     │ Proxy /api/*
                                     ▼
                            ┌─────────────────┐
                            │ Flask API:5000  │
                            └────────┬────────┘
                                     │
                    ┌────────────────┼────────────────┐
                    ▼                ▼                ▼
             ┌────────────┐  ┌────────────┐  ┌────────────┐
             │ PostgreSQL │  │  SocketIO  │  │  Uploads   │
             │   :5432    │  │ WebSocket  │  │  /images/  │
             └────────────┘  └────────────┘  └────────────┘
- Frontend: User interacts with the Next.js UI on port 3000
- API Proxy: Next.js proxies `/api/*` requests to the backend
- Backend: Flask API handles requests on port 5000
- Database: PostgreSQL stores users, posts, and votes
- Real-time: SocketIO broadcasts updates to all clients
- Images: Served from the backend `/image/<filename>` endpoint
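The `/api/*` proxy described above is typically wired up through Next.js rewrites. The fragment below is a sketch only; the destination URL and the exact contents of the project's `next.config.mjs` are assumptions, not taken from the repository:

```javascript
// next.config.mjs — hypothetical sketch of the /api/* proxy described above.
// The destination (http://localhost:5000) is assumed for local development.
const nextConfig = {
  async rewrites() {
    return [
      {
        source: "/api/:path*",
        destination: "http://localhost:5000/:path*", // forward to the Flask backend
      },
    ];
  },
};

export default nextConfig;
```

With this in place, the browser only ever talks to port 3000, which is why `NEXT_PUBLIC_API_BASE` can be an empty string in local development.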
- Access Token Expiration: 15 minutes
- Refresh Token Expiration: 30 days
- Max File Size: 10 MB
- Allowed Formats: Images (jpg, png, gif, etc.)
- Storage: Local filesystem (`uploads/` directory)
- Port: 5434 (local PostgreSQL), 5432 (production)
- Password Hashing: PBKDF2-SHA256 via Werkzeug (`generate_password_hash` / `check_password_hash`)
- XSS Protection: all user input is HTML-escaped with Python's `html.escape()`
- CORS: enabled for frontend integration with credentials support
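The sanitization and upload limits above can be sketched in a few lines. This is an illustrative sketch, not the project's actual validation code; the exact allowed-extension set and helper names are assumptions:

```python
import html
from pathlib import Path

MAX_UPLOAD_BYTES = 10 * 1024 * 1024          # 10 MB, matching the limit above
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}  # assumed set

def sanitize_text(value: str) -> str:
    """Escape HTML special characters in user input (XSS protection)."""
    return html.escape(value)

def validate_upload(filename: str, size_bytes: int) -> bool:
    """Accept only allowed image extensions under the size limit."""
    ext = Path(filename).suffix.lower()
    return ext in ALLOWED_EXTENSIONS and 0 < size_bytes <= MAX_UPLOAD_BYTES

print(sanitize_text("<script>alert(1)</script>"))
print(validate_upload("cat.png", 1024))              # True
print(validate_upload("notes.pdf", 1024))            # False
print(validate_upload("big.png", 11 * 1024 * 1024))  # False
```

Escaping at the input boundary means stored data is already safe to render; validation rejects oversized or non-image files before anything is written to `uploads/`.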
- `POST /register` - Register a new user account
- `POST /login` - Authenticate user and receive JWT cookies
- `POST /logout` - Clear authentication cookies (requires auth)
- `GET /me` - Get current authenticated user details (requires auth)
- `POST /refresh` - Refresh access token using the refresh token cookie
- `POST /change-password` - Change user password (requires auth)

- `POST /posts` - Create a new post with optional image (requires auth)
- `GET /posts` - List all accessible posts
- `PATCH /privacy` - Update post privacy settings (requires auth)

- `POST /vote` - Vote on a post (upvote/downvote; requires auth)

- `GET /image/<filename>` - Retrieve uploaded images
- Python 3.11+ (added to PATH)
- Node.js 18+ and npm
- Docker Desktop running
- Git
# 1. Start Database
docker-compose -f docker-compose.local.yml up -d
# 2. Start Backend (Terminal 1)
python -m venv venv
.\venv\Scripts\Activate.ps1
pip install -r requirements.txt -r requirements-dev.txt
$env:DATABASE_URL = "postgresql+psycopg://postgres:postgres@localhost:5434/mydb"
$env:JWT_SECRET_KEY = "dev-secret-key"
$env:FRONTEND_ORIGIN = "http://localhost:3000"
python app/app.py
# 3. Start Frontend (Terminal 2)
cd frontend-main
npm install
$env:NEXT_PUBLIC_API_BASE = ""
npm run dev

Access the app: http://localhost:3000
For detailed setup instructions, see Complete Local Development Setup below.
This project consists of two separate applications that need to run together:
- Backend API (Flask on port 5000)
- Frontend (Next.js on port 3000)
# Navigate to project root
cd "path\to\Reddit clone"
# Start PostgreSQL and pgAdmin using Docker
docker-compose -f docker-compose.local.yml up -d
# Verify containers are running
docker ps
# You should see: posts-db-local (port 5434) and pgadmin-local (port 5050)

Database Details:

- PostgreSQL: `localhost:5434`
- pgAdmin: `http://localhost:5050` (Email: `admin@local.com`, Password: `admin`)
Terminal 1 - Backend:
# Navigate to project root
cd "path\to\Reddit clone"
# Create virtual environment (first time only)
python -m venv venv
# Activate virtual environment
.\venv\Scripts\Activate.ps1
# Note: If you get execution policy error, run:
# Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
# Install dependencies (first time only)
pip install -r requirements.txt
pip install -r requirements-dev.txt
# Set environment variables
$env:DATABASE_URL = "postgresql+psycopg://postgres:postgres@localhost:5434/mydb"
$env:JWT_SECRET_KEY = "dev-secret-key-change-in-production"
$env:FRONTEND_ORIGIN = "http://localhost:3000"
$env:FLASK_ENV = "development"
# Run the backend server
python app/app.py

Expected Output:
* Restarting with stat
* Debugger is active!
* Debugger PIN: xxx-xxx-xxx
(xxxxx) wsgi starting up on http://127.0.0.1:5000
Backend API is now running at: http://localhost:5000
Terminal 2 - Frontend (open a new terminal):
# Navigate to frontend directory
cd "path\to\Reddit clone\frontend-main"
# Install dependencies (first time only)
npm install
# Set environment variable (empty string tells Next.js to use proxy)
$env:NEXT_PUBLIC_API_BASE = ""
# Run the frontend server
npm run dev

Expected Output:

  ▲ Next.js 15.5.9 (Turbopack)
  - Local:   http://localhost:3000
  - Network: http://172.31.x.x:3000
  ✓ Ready in 1.5s
Frontend is now running at: http://localhost:3000
- Open browser: Navigate to http://localhost:3000
- Register a new account: Click "Sign Up"
- Create a post: Go to "Create Post"
- Test voting: Upvote/downvote should work in real-time
- Check backend: API calls should show 200 status codes (not 404)
- Press `Ctrl+C` in the frontend terminal
- Press `Ctrl+C` in the backend terminal

docker-compose -f docker-compose.local.yml down

# Check what's using port 5000
netstat -ano | findstr :5000
# Kill the process if needed
taskkill /PID <process_id> /F

- Make sure Docker Desktop is running
- Check the system tray for the Docker icon
- Wait 1-2 minutes for Docker to fully start

- Verify the backend is running on port 5000
- Verify the frontend has `NEXT_PUBLIC_API_BASE = ""`
- Restart the frontend after setting the environment variable

- Verify `FRONTEND_ORIGIN = "http://localhost:3000"` is set in the backend
- Restart the backend after setting the environment variable

# Check if the database container is running
docker ps | findstr posts-db-local
# View database logs
docker logs posts-db-local

Save this as `start-dev.ps1` in the project root:
# Start database
Write-Host "Starting database..." -ForegroundColor Green
docker-compose -f docker-compose.local.yml up -d
Start-Sleep -Seconds 3
# Start backend in new window
Write-Host "Starting backend..." -ForegroundColor Green
Start-Process powershell -ArgumentList "-NoExit", "-Command", @"
cd '$PWD'
.\venv\Scripts\Activate.ps1
`$env:DATABASE_URL = 'postgresql+psycopg://postgres:postgres@localhost:5434/mydb'
`$env:JWT_SECRET_KEY = 'dev-secret-key'
`$env:FRONTEND_ORIGIN = 'http://localhost:3000'
`$env:FLASK_ENV = 'development'
python app/app.py
"@
Start-Sleep -Seconds 5
# Start frontend in new window
Write-Host "Starting frontend..." -ForegroundColor Green
Start-Process powershell -ArgumentList "-NoExit", "-Command", @"
cd '$PWD\frontend-main'
`$env:NEXT_PUBLIC_API_BASE = ''
npm run dev
"@
Write-Host "`nAll services starting!" -ForegroundColor Cyan
Write-Host "Frontend: http://localhost:3000" -ForegroundColor Yellow
Write-Host "Backend:  http://localhost:5000" -ForegroundColor Yellow

Run with: `.\start-dev.ps1`
This project uses GitHub Actions for continuous integration and deployment.
- Backend CI/CD (`.github/workflows/backend.yml`)
  - Lint (Ruff), Test (pytest + coverage), Build (Docker), Security Scan (Trivy)
  - Auto-deploys to staging on the `main` branch
  - Manual approval required for production
- Frontend CI/CD (`.github/workflows/frontend.yml`)
  - Lint (ESLint + Prettier), Build (Docker), Security Scan (Trivy)
  - Auto-deploys to staging on the `main` branch
  - Manual approval required for production

- Configure Secrets (Settings → Secrets and variables → Actions):
  - `KUBE_SERVER`: Kubernetes API URL (e.g., `https://rancher.devops.cs.ut.ee/k8s/clusters/c-m-xxxxx`)
  - `KUBE_TOKEN`: Service account token for the staging namespace
  - `KUBE_TOKEN_PROD`: Service account token for the production namespace
- Push to trigger workflows:

  git add .
  git commit -m "Update application"
  git push origin main

- Monitor builds: go to the Actions tab in GitHub
- Deploy to production: go to Actions → select the successful workflow → click "Review deployments" → Approve
For detailed CI/CD setup, see CI_CD_SETUP.md.
Check if your pods are still running from previous deployments:
# View staging pods
kubectl get pods -n power-ranger-staging
# View production pods
kubectl get pods -n power-ranger-prod
# Check deployment status
kubectl get deployments -n power-ranger-staging

For detailed verification steps, see KUBERNETES_CHECK.md.
This project uses Kubernetes with Helm charts for production deployment.
- Access to the Kubernetes cluster (ETAIS)
- `kubectl` configured with cluster access
- `helm` installed
- Service account tokens (`KUBE_TOKEN`)

git clone https://github.com/yourusername/reddit-clone.git
cd reddit-clone

We chose Helm over Kustomize for templating because:

- Native support for atomic rollbacks via the `--atomic` flag
- Single-command deploys with environment-specific values files
- Clean separation of configuration (values.yaml) from templates
- Built-in release management and versioning
helm/posts/
├── Chart.yaml              # Chart metadata (name, version)
├── values.yaml             # Default values
├── values-staging.yaml     # Staging-specific overrides
├── values-prod.yaml        # Production-specific overrides
└── templates/
    ├── deployment.yaml     # App deployment with probes
    ├── service.yaml        # ClusterIP service
    ├── ingress.yaml        # NGINX ingress with TLS
    ├── configmap.yaml      # Non-secret configuration
    ├── secret.yaml         # Database & JWT secrets
    ├── hpa.yaml            # Horizontal Pod Autoscaler
    ├── servicemonitor.yaml # Prometheus scraping config
    ├── networkpolicy.yaml  # Network security rules
    └── cnpg-cluster.yaml   # CloudNativePG database
# Staging deployment
helm upgrade --install posts ./helm/posts \
-f ./helm/posts/values-staging.yaml \
--namespace power-ranger-staging
# Production deployment
helm upgrade --install posts ./helm/posts \
-f ./helm/posts/values-prod.yaml \
  --namespace power-ranger-prod

Strategy: Secrets are never committed to Git. They are stored securely in GitHub Secrets and injected at deploy time.
| Secret | Storage | Injection Method |
|---|---|---|
| `DB_USER` | Helm values file | `--set-string secrets.dbUser` |
| `DB_PASS` | Helm values file | `--set-string secrets.dbPassword` |
| `DB_NAME` | Helm values file | `--set-string secrets.dbName` |
| `JWT_SECRET_KEY` | Helm values file | `--set-string secrets.jwtSecretKey` |
| `KUBE_TOKEN` | GitHub Secret (protected) | Kubeconfig generation |
| `KUBE_TOKEN_PROD` | GitHub Secret (protected) | Production kubeconfig |
| `KUBE_SERVER` | GitHub Secret | Kubernetes API URL |
Why this approach?
- Secrets never appear in Git history
- Different secrets per environment (staging vs prod)
- GitHub environment protection rules for production
- Minimal exposure window (only during pipeline execution)
Every deployment uses the `--atomic` flag, which ensures:

- If a deployment fails health checks → automatic rollback to the previous version
- No manual intervention required
- Zero downtime during failed deployments
helm upgrade --install posts ./helm/posts \
  --atomic \      # ← Rollback on failure
  --timeout 5m    # ← Wait up to 5 minutes for health

Evidence in pipeline: Failed deployments show "ROLLED BACK" in the GitHub Actions logs.
| Aspect | Staging | Production |
|---|---|---|
| Namespace | `power-ranger-staging` | `power-ranger-prod` |
| URL | `power-ranger-staging.kubernetes.devops.cs.ut.ee` | `power-ranger-prod.kubernetes.devops.cs.ut.ee` |
| Trigger | Automatic on `main` branch | Manual approval required |
| Replicas | 2 | 2-5 (HPA controlled) |
| Database | CNPG 2 instances | CNPG 2 instances (HA) |
| Token | `KUBE_TOKEN` | `KUBE_TOKEN_PROD` |
┌─────────┐   ┌─────────┐   ┌─────────┐   ┌─────────────────┐
│  Test   │──►│  Build  │──►│  Scan   │──►│ Deploy Staging  │
│ (lint)  │   │(buildah)│   │ (trivy) │   │   (automatic)   │
└─────────┘   └─────────┘   └─────────┘   └────────┬────────┘
                                                   │
                                                   ▼
                                          ┌─────────────────┐
                                          │   Deploy Prod   │
                                          │ (manual trigger)│
                                          └─────────────────┘
What: Automatically scales pods based on CPU utilization.
Configuration:
autoscaling:
enabled: true
minReplicas: 2 # Never go below 2 pods
maxReplicas: 5 # Scale up to 5 pods max
  targetCPUUtilizationPercentage: 50  # Scale when CPU > 50%

How it works:
- Metrics Server collects CPU usage from pods
- HPA compares current usage vs target (50%)
- If usage > 50% → add pods (up to maxReplicas)
- If usage < 50% → remove pods (down to minReplicas)
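The scaling rule above follows the standard Kubernetes HPA formula, desired = ceil(currentReplicas × currentUsage / targetUsage), clamped to the min/max bounds. A small sketch of that arithmetic (the function name is ours, not part of Kubernetes):

```python
import math

def desired_replicas(current_replicas: int, current_cpu: float,
                     target_cpu: float = 50.0,
                     min_replicas: int = 2, max_replicas: int = 5) -> int:
    """HPA scaling rule: ceil(current * usage / target), clamped to [min, max]."""
    desired = math.ceil(current_replicas * current_cpu / target_cpu)
    return max(min_replicas, min(max_replicas, desired))

print(desired_replicas(2, 80.0))   # 4 -> scale up (2 * 80/50 = 3.2, rounded up)
print(desired_replicas(4, 20.0))   # 2 -> scale down to the floor
print(desired_replicas(2, 50.0))   # 2 -> steady state at the target
```

This is why the pod count in the dashboards never drops below 2 or rises above 5, regardless of load.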
Viewing HPA status:
kubectl get hpa -n power-ranger-prod
# Shows: TARGETS (current/target), REPLICAS, AGE

What: Ensures pods are distributed across different Kubernetes nodes.
Why: If one node fails, the application stays online because pods exist on other nodes.
Configuration:
topologySpreadConstraints:
- maxSkew: 1 # Max difference between nodes
topologyKey: kubernetes.io/hostname # Spread across nodes
whenUnsatisfiable: ScheduleAnyway # Best effort if can't spread
labelSelector:
matchLabels:
      app: posts-api

Verification:
kubectl get pods -n power-ranger-prod -o wide
# Shows pods running on DIFFERENT nodes

What: Scans container images for known vulnerabilities (CVEs).
When: After every image build, before deployment.
Configuration in the CI pipeline:

trivy_image_scan:
  stage: scan
  image: aquasec/trivy:0.53.0
  script:
    - trivy image --severity HIGH,CRITICAL --exit-code 1 "$IMAGE"

Behavior:

- Scans for HIGH and CRITICAL vulnerabilities
- `--exit-code 1` → the pipeline fails if vulnerabilities are found
- Prevents deploying insecure images to production
We have two production dashboards providing comprehensive visibility into application health and resources.
| Panel | Purpose | Metric |
|---|---|---|
| Pods Ready / Not Ready | Chaos engineering visibility | Shows pod state changes over time |
| Service Up | Health at a glance | Indicates if service is responding |
| Active Pods | Current replica count | Number of running pods |
| Pod Restarts | Container crash tracking | changes(kube_pod_status_phase{phase="Running"}[1h]) |
| Request Latency (p95 & p50) | Golden Signal: Latency | Response time percentiles |
| Response Status Codes | Traffic breakdown | 200, 201, 401, 404 responses |
| Request Rate by Endpoint | Golden Signal: Traffic | Requests/sec per route |
| Error Count by Status Code | Golden Signal: Errors | 4xx and 5xx errors |
| Total API Requests | Traffic volume | Cumulative request count |
| HPA max replicas | Autoscaling visibility | Shows max=5 |
| HPA min replicas | Autoscaling visibility | Shows min=2 |
| Backend Pod Memory/CPU | Golden Signal: Saturation | Resource utilization |
| Panel | Purpose | Metric |
|---|---|---|
| All pod status | Complete pod overview | All pods with state history |
| Running DB | Database health | CNPG instance count (2) |
| DB Volume Usage | Storage saturation | Percentage of disk used |
| Frontend Pod Memory | Frontend resources | Memory per frontend pod |
| Frontend Pod CPU | Frontend resources | CPU cores per frontend pod |
| Backend Pod Memory | Backend resources | Memory per backend pod |
| Backend Pod CPU | Backend resources | CPU cores per backend pod |
| Posts total | Application data | Count of posts in database |
| Signal | Panel | Location |
|---|---|---|
| Traffic | Request Rate, Request Rate by Endpoint | Posts API Dashboard |
| Latency | Request Latency (p95 & p50) | Posts API Dashboard |
| Errors | Error Count by Status Code, Response Status Codes | Posts API Dashboard |
| Saturation | Pod CPU, Pod Memory, DB Volume Usage | Both Dashboards |
When a pod is killed (chaos testing), the dashboard shows:
- Pods Ready/Not Ready → dip in ready pods, then recovery
- Active Pods → temporary decrease, then back to normal
- Pod state changes → visual evidence of pod termination and recreation
| Panel | Shows |
|---|---|
| HPA min replicas | Minimum pods (2) - never scales below |
| HPA max replicas | Maximum pods (5) - ceiling for scaling |
| Backend Pod CPU | CPU usage that triggers scaling decisions |
Monitoring Flow:
Request → Flask App → Prometheus Metrics → ServiceMonitor → Prometheus → Grafana
| Component | Strategy | Location |
|---|---|---|
| User/Post Data | PostgreSQL (CNPG) | Kubernetes cluster |
| Sessions | JWT Tokens (stateless) | Client-side cookies |
| File Uploads | Local filesystem* | Pod volume |
*For full statelessness, file uploads should be moved to MinIO/S3 or a ReadWriteMany volume.
- Pods can be killed/restarted without data loss
- Horizontal scaling works seamlessly
- Load balancing distributes traffic evenly
- Enables chaos engineering (random pod termination)
- Staging: `https://power-ranger-staging.kubernetes.devops.cs.ut.ee/`
- Production: `https://power-ranger-prod.kubernetes.devops.cs.ut.ee/`

- Readiness: `/readyz` - used by Kubernetes to check whether the pod can receive traffic
- Liveness: `/healthz` - used by Kubernetes to check whether the pod should be restarted
- Metrics: `/metrics` - Prometheus metrics endpoint
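The readiness and liveness endpoints above would typically be wired into the deployment template as HTTP probes. The fragment below is a sketch; the port and timing values are assumptions, not copied from the project's `templates/deployment.yaml`:

```yaml
# Sketch of probe wiring for the endpoints above (port and timings assumed)
readinessProbe:
  httpGet:
    path: /readyz
    port: 5000
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz
    port: 5000
  initialDelaySeconds: 15
  periodSeconds: 20
```

A failing readiness probe removes the pod from the Service's endpoints; a failing liveness probe restarts the container. Keeping the two endpoints separate is what makes the `--atomic` rollback behavior meaningful.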
# Check pod distribution across nodes
kubectl get pods -n power-ranger-prod -o wide
# Check HPA status
kubectl get hpa -n power-ranger-prod
# View pod logs
kubectl logs -n power-ranger-prod -l app=posts-api --tail=100
# Check CNPG database status
kubectl get cluster -n power-ranger-prod

Run the following commands to maintain code quality:

ruff format   # Format code
ruff check    # Lint code

Run tests with pytest:

pytest -q

reddit-clone/
├── .github/
│   └── workflows/              # GitHub Actions CI/CD
│       ├── backend.yml         # Backend pipeline
│       └── frontend.yml        # Frontend pipeline
├── app/
│   ├── __init__.py
│   ├── app.py                  # Main Flask application
│   ├── wsgi.py                 # WSGI entry point
│   └── tests/                  # Backend unit & API tests
│       └── test_app.py
├── frontend-main/
│   ├── src/
│   │   └── app/                # Next.js app router
│   │       ├── page.js         # Home page
│   │       ├── layout.js       # App layout
│   │       ├── login/          # Login page
│   │       ├── register/       # Register page
│   │       ├── create/         # Create post page
│   │       ├── profile/        # Profile page
│   │       ├── components/     # React components
│   │       └── lib/            # API client & utilities
│   ├── helm/frontend/          # Frontend Helm chart
│   ├── Dockerfile              # Frontend container build
│   └── package.json
├── ansible/
│   ├── ansible.cfg
│   ├── inventory
│   ├── playbooks/
│   └── templates/
├── helm/posts/                 # Backend Helm chart
│   ├── Chart.yaml
│   ├── values.yaml
│   ├── values-staging.yaml
│   ├── values-prod.yaml
│   └── templates/
│       ├── deployment.yaml
│       ├── service.yaml
│       ├── ingress.yaml
│       ├── configmap.yaml
│       ├── secret.yaml
│       ├── hpa.yaml
│       ├── servicemonitor.yaml
│       ├── networkpolicy.yaml
│       └── cnpg-cluster.yaml
├── k8s-staging/                # Legacy manual K8s files
│   ├── frontend-deployment.yaml
│   ├── frontend-service.yaml
│   └── frontend-ingress.yaml
├── k8s-prod/                   # Production K8s RBAC
│   └── cicd-rbac.yaml
├── docker-compose.yml          # Production compose
├── docker-compose.local.yml    # Local development stack
├── Dockerfile                  # Backend container build
├── migrate_db.py               # Database migration script
├── requirements.txt            # Python dependencies
├── requirements-dev.txt        # Development dependencies
├── pyproject.toml              # Ruff & tooling config
├── CI_CD_SETUP.md              # GitHub Actions setup guide
├── KUBERNETES_CHECK.md         # K8s verification guide
└── README.md                   # This file
- `id`: Primary key
- `username`: Unique username
- `email`: Unique email address
- `password_hash`: Hashed password
- `created_at`: Registration timestamp

- `id`: Primary key
- `title`: Post title
- `description`: Post content
- `image_url`: Optional image path
- `user_id`: Foreign key to users
- `is_private`: Privacy flag
- `created_at`: Creation timestamp

- `id`: Primary key
- `post_id`: Foreign key to posts
- `user_id`: Foreign key to users
- `vote_type`: "up" or "down"
- `created_at`: Vote timestamp
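The three tables above can be sketched with an in-memory SQLite stand-in for the PostgreSQL schema. Column names follow this README; the SQL types and the `UNIQUE (post_id, user_id)` placement are illustrative, not the project's actual migrations:

```python
import sqlite3

# In-memory SQLite stand-in for the schema described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
  id INTEGER PRIMARY KEY,
  username TEXT UNIQUE NOT NULL,
  email TEXT UNIQUE NOT NULL,
  password_hash TEXT NOT NULL,
  created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE posts (
  id INTEGER PRIMARY KEY,
  title TEXT NOT NULL,
  description TEXT,
  image_url TEXT,
  user_id INTEGER NOT NULL REFERENCES users(id),
  is_private INTEGER DEFAULT 0,
  created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE votes (
  id INTEGER PRIMARY KEY,
  post_id INTEGER NOT NULL REFERENCES posts(id),
  user_id INTEGER NOT NULL REFERENCES users(id),
  vote_type TEXT CHECK (vote_type IN ('up', 'down')),
  created_at TEXT DEFAULT CURRENT_TIMESTAMP,
  UNIQUE (post_id, user_id)   -- one vote per user per post
);
""")
conn.execute("INSERT INTO users (username, email, password_hash) VALUES ('alice', 'a@x.com', 'h')")
conn.execute("INSERT INTO posts (title, user_id) VALUES ('hello', 1)")
conn.execute("INSERT INTO votes (post_id, user_id, vote_type) VALUES (1, 1, 'up')")
upvotes = conn.execute(
    "SELECT COUNT(*) FROM votes WHERE post_id = 1 AND vote_type = 'up'").fetchone()[0]
print(upvotes)  # 1
```

The unique constraint on `(post_id, user_id)` is what backs the "one vote per user, updated in place" behavior in the voting flow.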
- User submits username, email, and password via `POST /register`
- Backend validates input (checks for duplicates, sanitizes with `html.escape()`)
- Password hashed using Werkzeug's `generate_password_hash()` (PBKDF2-SHA256)
- User record created in the database
- Success response returned
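The hashing step above relies on the PBKDF2-SHA256 primitive. The sketch below shows that primitive with Python's standard library, not Werkzeug's actual helpers; the iteration count and function names are illustrative:

```python
import hashlib
import hmac
import os

# Illustration of the PBKDF2-SHA256 primitive underlying Werkzeug's
# generate_password_hash / check_password_hash. Iteration count assumed.
def hash_password(password: str, iterations: int = 100_000) -> tuple[bytes, bytes, int]:
    salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest, iterations

def verify_password(password: str, salt: bytes, digest: bytes, iterations: int) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest, n = hash_password("s3cret")
print(verify_password("s3cret", salt, digest, n))  # True
print(verify_password("wrong", salt, digest, n))   # False
```

Storing only the salt, iteration count, and digest means a database leak never exposes plaintext passwords.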
- User submits username and password via `POST /login`
- Backend queries the database for the user by username
- Password verified using `check_password_hash()`
- If valid, JWT tokens are generated (access + refresh)
- Tokens stored in HttpOnly cookies
- User data returned in the response
- Client connects to the WebSocket server (Flask-SocketIO)
- User votes on a post via `POST /vote`
- Backend updates the vote count in the database
- Backend emits a `vote_update` event to all connected clients
- Clients receive updated vote counts in real time
- UI updates without a page refresh
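The broadcast step above is an observer pattern: every connected client registers a handler, and one emit fans the event out to all of them. A toy stand-in for the SocketIO `vote_update` broadcast (class and method names are ours):

```python
class VoteBroadcaster:
    """Toy stand-in for the SocketIO 'vote_update' broadcast described above."""
    def __init__(self):
        self.clients = []

    def connect(self, handler):
        self.clients.append(handler)

    def emit_vote_update(self, post_id, up, down):
        # Every connected client receives the same event payload.
        for handler in self.clients:
            handler({"post_id": post_id, "up": up, "down": down})

received = []
bus = VoteBroadcaster()
bus.connect(received.append)  # client 1
bus.connect(received.append)  # client 2
bus.emit_vote_update(post_id=1, up=5, down=2)
print(len(received))  # 2 — both clients saw the update
```

In the real app, Flask-SocketIO handles the connection bookkeeping and the browser-side `socket.io-client` registers the handler.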
- User clicks the privacy toggle on their post
- Frontend sends `PATCH /privacy` with `post_id`, `is_private`, `is_anonymous`
- Backend verifies the JWT token (ensures the user owns the post)
- Input sanitized and validated
- Post record updated in the database
- Success response with the new privacy settings
- Post visibility changes:
  - Private: only the owner can see it
  - Anonymous: username shows as "Anonymous"
  - Public: everyone can see it, with the username
- User clicks the upvote/downvote button
- Frontend sends `POST /vote` with `post_id` and `vote_type`
- Backend checks whether the user already voted (unique constraint)
- If a vote exists, update its type; if new, create a vote record
- Vote counts recalculated from the database
- WebSocket event emitted with the new counts
- All clients receive and display the updated counts
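Steps 3-5 above are an upsert followed by a recount. A minimal in-memory sketch (a dict keyed by `(post_id, user_id)` stands in for the votes table; the function name is ours):

```python
def cast_vote(votes: dict, post_id: int, user_id: int, vote_type: str) -> dict:
    """Upsert one vote per (post, user), then recalculate counts from scratch."""
    votes[(post_id, user_id)] = vote_type   # unique key: overwrites any prior vote
    up = sum(1 for (p, _), v in votes.items() if p == post_id and v == "up")
    down = sum(1 for (p, _), v in votes.items() if p == post_id and v == "down")
    return {"post_id": post_id, "up": up, "down": down}

votes = {}
print(cast_vote(votes, 1, 42, "up"))    # {'post_id': 1, 'up': 1, 'down': 0}
print(cast_vote(votes, 1, 42, "down"))  # switching vote: {'post_id': 1, 'up': 0, 'down': 1}
```

Recounting from the stored votes (rather than incrementing a cached counter) keeps the totals correct even when a user flips their vote.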
The frontend is a modern Next.js 15 application with the App Router and React 19. It provides a responsive UI for interacting with the backend API.
frontend-main/
├── src/
│   └── app/
│       ├── page.js                  # Home page (post feed)
│       ├── layout.js                # Root layout with Header
│       ├── middleware.js            # Auth middleware
│       ├── login/page.js            # Login page
│       ├── register/page.js         # Registration page
│       ├── create/page.js           # Create post page
│       ├── profile/page.js          # User profile
│       ├── change-password/page.js  # Password change
│       ├── components/
│       │   ├── Header.js            # Navigation header
│       │   ├── PostList.js          # Post grid display
│       │   └── BodyWithImage.js     # Login/Register layouts
│       └── lib/
│           └── api.js               # Axios API client
├── helm/frontend/                   # Frontend Helm chart
├── deploy/                          # Ansible deployment
├── k8s-staging/                     # K8s manifests
├── Dockerfile                       # Production container build
├── next.config.mjs                  # Next.js configuration
├── tailwind.config.mjs              # Tailwind CSS config
├── package.json                     # Dependencies
└── README.md                        # Frontend-specific docs
- Authentication Pages: Login, Register, Profile
- Post Management: Create, View, Vote
- Real-time Updates: Socket.IO integration for live voting
- Responsive Design: Mobile-friendly with Tailwind CSS
- Image Upload: Preview and upload images with posts
- Privacy Controls: Toggle public/private/anonymous posts
- Next.js 15.5.9 with App Router and Turbopack
- React 19 for UI components
- Tailwind CSS for styling
- Axios for HTTP requests
- Socket.io-client for real-time updates
- js-cookie for cookie management
See frontend-main/README.md for detailed frontend documentation including:
- Component structure
- API integration
- State management
- Styling guidelines
- Deployment to Kubernetes
This project was primarily developed manually with AI tools used for guidance and reference:
Phase 1-2 (Development):
- Guidance on code structure and patterns
- Debugging assistance when troubleshooting issues
- Reference for CI/CD pipeline syntax
Phase 3-4 (Operations & Production Readiness):
AI usage in the operations phase was limited because of environment-specific configurations and credentials. AI was used for:

- Generating YAML templates, which were then adapted to the project manually
- Generating CI/CD pipeline steps from written instructions, and troubleshooting pipeline configuration
- Debugging and troubleshooting error messages
- Command syntax reference (kubectl and helm commands)
- Generating Grafana dashboard queries
- Documentation formatting suggestions
Note: All configurations, deployments, and infrastructure decisions were implemented and tested manually. AI provided guidance on syntax and approaches, but the actual implementation, testing, and verification were done by the team.