Reddit Clone - Full-Stack Social Platform

A modern, full-stack social media application for creating and sharing posts with real-time voting and interactions. Built with Flask (backend) and Next.js (frontend), deployed on Kubernetes.


🎯 Overview

This project is a Reddit-style post sharing platform with:

  • User authentication and profiles
  • Post creation with image uploads (up to 10MB)
  • Real-time voting system via WebSocket
  • Privacy controls (public/private/anonymous posts)
  • Responsive design with Tailwind CSS
  • Production-ready Kubernetes deployment

🤖 AI Assistance Disclosure

This project was developed with assistance from AI tools for:

  • Generating boilerplate code and component structures
  • Creating UI components with Tailwind CSS
  • Debugging and troubleshooting
  • CI/CD pipeline configurations

✨ Features

User Features

  • ✅ User registration and JWT-based authentication
  • ✅ Profile management with password change
  • ✅ Create posts with text and optional images
  • ✅ Upvote/downvote posts with real-time updates
  • ✅ Privacy controls (public/private/anonymous)
  • ✅ Real-time WebSocket notifications
  • ✅ Responsive grid layout

Technical Features

  • ✅ XSS protection with input sanitization
  • ✅ CORS support for cross-origin requests
  • ✅ HttpOnly cookie authentication
  • ✅ Image upload with size validation
  • ✅ Automated CI/CD with GitHub Actions
  • ✅ Kubernetes deployment with Helm
  • ✅ Database high availability with CloudNativePG
  • ✅ Prometheus monitoring integration

🛠️ Tech Stack

Backend

  • Python: 3.11
  • Framework: Flask 3.x
  • Database: PostgreSQL 17
  • ORM: Flask-SQLAlchemy
  • Authentication: Flask-JWT-Extended
  • Real-time: Flask-SocketIO, Socket.io
  • Testing: pytest with coverage

Frontend

  • Framework: Next.js 15.5.9 (App Router + Turbopack)
  • React: 19
  • Styling: Tailwind CSS
  • HTTP Client: Axios
  • Real-time: Socket.io-client
  • Linting: ESLint, Prettier

DevOps & Infrastructure

  • Containers: Docker
  • Orchestration: Kubernetes (ETAIS Cloud)
  • Package Manager: Helm
  • CI/CD: GitHub Actions
  • Database HA: CloudNativePG
  • Ingress: NGINX Ingress Controller
  • Monitoring: Prometheus, ServiceMonitor
  • Security Scanning: Trivy

πŸ— Architecture

┌─────────────┐      HTTPS      ┌──────────────┐
│   Browser   │ ◄─────────────► │ Next.js:3000 │
└─────────────┘                 └──────┬───────┘
                                       │ Proxy /api/*
                                       ▼
                             ┌─────────────────┐
                             │  Flask API:5000 │
                             └────────┬────────┘
                                      │
                         ┌────────────┼────────────┐
                         ▼            ▼            ▼
                  ┌──────────┐ ┌──────────┐ ┌──────────┐
                  │PostgreSQL│ │SocketIO  │ │  Uploads │
                  │   :5432  │ │WebSocket │ │ /images/ │
                  └──────────┘ └──────────┘ └──────────┘

Request Flow

  1. Frontend: User interacts with Next.js UI on port 3000
  2. API Proxy: Next.js proxies /api/* requests to backend
  3. Backend: Flask API handles requests on port 5000
  4. Database: PostgreSQL stores users, posts, votes
  5. Real-time: SocketIO broadcasts updates to all clients
  6. Images: Served from backend /image/<filename> endpoint

βš™οΈ Configuration & Constraints

JWT Authentication

  • Access Token Expiration: 15 minutes
  • Refresh Token Expiration: 30 days
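These lifetimes correspond to Flask-JWT-Extended's standard expiry settings; a configuration sketch (the exact setup in app/app.py is an assumption):

```python
from datetime import timedelta

# Hypothetical excerpt of the Flask app config; the key names are
# Flask-JWT-Extended's documented settings, the values mirror the
# lifetimes listed above.
app.config["JWT_ACCESS_TOKEN_EXPIRES"] = timedelta(minutes=15)
app.config["JWT_REFRESH_TOKEN_EXPIRES"] = timedelta(days=30)
app.config["JWT_TOKEN_LOCATION"] = ["cookies"]  # HttpOnly cookie auth
```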

File Upload

  • Max File Size: 10 MB
  • Allowed Formats: Images (jpg, png, gif, etc.)
  • Storage: Local filesystem (uploads/ directory)
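These constraints can be sketched as a small validation helper (a stdlib-only illustration; the names are hypothetical, and the real check lives in the Flask upload handler):

```python
import pathlib

MAX_UPLOAD_BYTES = 10 * 1024 * 1024          # 10 MB limit
ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".webp"}

def is_valid_upload(filename: str, size_bytes: int) -> bool:
    """Accept only known image extensions under the size limit."""
    ext = pathlib.Path(filename).suffix.lower()
    return ext in ALLOWED_EXTENSIONS and 0 < size_bytes <= MAX_UPLOAD_BYTES
```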

Database

  • Port: 5434 (local development via Docker Compose), 5432 (production)

Security

  • Password Hashing: PBKDF2-SHA256 via Werkzeug (generate_password_hash / check_password_hash)
  • XSS Protection: HTML escape on all user inputs using Python's html.escape()
  • CORS: Enabled for frontend integration with credentials support

📚 API Documentation

Authentication Endpoints

  • POST /register - Register a new user account
  • POST /login - Authenticate user and receive JWT cookies
  • POST /logout - Clear authentication cookies (requires auth)
  • GET /me - Get current authenticated user details (requires auth)
  • POST /refresh - Refresh access token using refresh token cookie
  • POST /change-password - Change user password (requires auth)

Post Management

  • POST /posts - Create a new post with optional image (requires auth)
  • GET /posts - List all accessible posts
  • PATCH /privacy - Update post privacy settings (requires auth)

Voting

  • POST /vote - Vote on a post (upvote/downvote, requires auth)

Media

  • GET /image/<filename> - Retrieve uploaded images

🚀 Quick Start (Local Development)

Prerequisites

  • Python 3.11+ (added to PATH)
  • Node.js 18+ and npm
  • Docker Desktop running
  • Git

Quick Commands

# 1. Start Database
docker-compose -f docker-compose.local.yml up -d

# 2. Start Backend (Terminal 1)
python -m venv venv
.\venv\Scripts\Activate.ps1
pip install -r requirements.txt -r requirements-dev.txt
$env:DATABASE_URL = "postgresql+psycopg://postgres:postgres@localhost:5434/mydb"
$env:JWT_SECRET_KEY = "dev-secret-key"
$env:FRONTEND_ORIGIN = "http://localhost:3000"
python app/app.py

# 3. Start Frontend (Terminal 2)
cd frontend-main
npm install
$env:NEXT_PUBLIC_API_BASE = ""
npm run dev

Access the app: http://localhost:3000

For detailed setup instructions, see Complete Local Development Setup below.


βš™οΈ Complete Local Development Setup

This project consists of two separate applications that need to run together:

  1. Backend API (Flask on port 5000)
  2. Frontend (Next.js on port 3000)

Step 1: Start the Database

# Navigate to project root
cd "path\to\Reddit clone"

# Start PostgreSQL and pgAdmin using Docker
docker-compose -f docker-compose.local.yml up -d

# Verify containers are running
docker ps
# You should see: posts-db-local (port 5434) and pgadmin-local (port 5050)

Database Details:

  • PostgreSQL: localhost:5434
  • pgAdmin: http://localhost:5050
    • Email: admin@local.com
    • Password: admin

Step 2: Setup and Run Backend API

Terminal 1 - Backend:

# Navigate to project root
cd "path\to\Reddit clone"

# Create virtual environment (first time only)
python -m venv venv

# Activate virtual environment
.\venv\Scripts\Activate.ps1
# Note: If you get execution policy error, run:
# Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

# Install dependencies (first time only)
pip install -r requirements.txt
pip install -r requirements-dev.txt

# Set environment variables
$env:DATABASE_URL = "postgresql+psycopg://postgres:postgres@localhost:5434/mydb"
$env:JWT_SECRET_KEY = "dev-secret-key-change-in-production"
$env:FRONTEND_ORIGIN = "http://localhost:3000"
$env:FLASK_ENV = "development"

# Run the backend server
python app/app.py

Expected Output:

* Restarting with stat
* Debugger is active!
* Debugger PIN: xxx-xxx-xxx
(xxxxx) wsgi starting up on http://127.0.0.1:5000

Backend API is now running at: http://localhost:5000


Step 3: Setup and Run Frontend

Terminal 2 - Frontend (open a new terminal):

# Navigate to frontend directory
cd "path\to\Reddit clone\frontend-main"

# Install dependencies (first time only)
npm install

# Set environment variable (empty string tells Next.js to use proxy)
$env:NEXT_PUBLIC_API_BASE = ""

# Run the frontend server
npm run dev

Expected Output:

▲ Next.js 15.5.9 (Turbopack)
- Local:        http://localhost:3000
- Network:      http://172.31.x.x:3000
✓ Ready in 1.5s

Frontend is now running at: http://localhost:3000


Step 4: Verify Everything Works

  1. Open browser: Navigate to http://localhost:3000
  2. Register a new account: Click "Sign Up"
  3. Create a post: Go to "Create Post"
  4. Test voting: Upvote/downvote should work in real-time
  5. Check backend: API calls should show 200 status codes (not 404)

🛑 Stopping the Services

Stop Frontend:

  • Press Ctrl+C in the frontend terminal

Stop Backend:

  • Press Ctrl+C in the backend terminal

Stop Database:

docker-compose -f docker-compose.local.yml down

🔧 Troubleshooting

Issue: Port already in use

# Check what's using port 5000
netstat -ano | findstr :5000
# Kill the process if needed
taskkill /PID <process_id> /F

Issue: Docker not starting

  • Make sure Docker Desktop is running
  • Check system tray for Docker icon
  • Wait 1-2 minutes for Docker to fully start

Issue: 404 errors on /api/* endpoints

  • Verify backend is running on port 5000
  • Verify frontend has NEXT_PUBLIC_API_BASE = ""
  • Restart frontend after setting environment variable

Issue: CORS errors

  • Verify FRONTEND_ORIGIN = "http://localhost:3000" is set in backend
  • Restart backend after setting environment variable

Issue: Database connection failure

# Check if database container is running
docker ps | findstr posts-db-local

# View database logs
docker logs posts-db-local

πŸ“ Quick Start Script (Optional)

Save this as start-dev.ps1 in project root:

# Start database
Write-Host "Starting database..." -ForegroundColor Green
docker-compose -f docker-compose.local.yml up -d
Start-Sleep -Seconds 3

# Start backend in new window
Write-Host "Starting backend..." -ForegroundColor Green
Start-Process powershell -ArgumentList "-NoExit", "-Command", @"
cd '$PWD'
.\venv\Scripts\Activate.ps1
`$env:DATABASE_URL = 'postgresql+psycopg://postgres:postgres@localhost:5434/mydb'
`$env:JWT_SECRET_KEY = 'dev-secret-key'
`$env:FRONTEND_ORIGIN = 'http://localhost:3000'
`$env:FLASK_ENV = 'development'
python app/app.py
"@

Start-Sleep -Seconds 5

# Start frontend in new window
Write-Host "Starting frontend..." -ForegroundColor Green
Start-Process powershell -ArgumentList "-NoExit", "-Command", @"
cd '$PWD\frontend-main'
`$env:NEXT_PUBLIC_API_BASE = ''
npm run dev
"@

Write-Host "`nAll services starting!" -ForegroundColor Cyan
Write-Host "Frontend: http://localhost:3000" -ForegroundColor Yellow
Write-Host "Backend:  http://localhost:5000" -ForegroundColor Yellow

Run with: .\start-dev.ps1


🔄 CI/CD Pipeline

This project uses GitHub Actions for continuous integration and deployment.

Workflows

  • Backend CI/CD (.github/workflows/backend.yml)

    • Lint (Ruff), Test (pytest + coverage), Build (Docker), Security Scan (Trivy)
    • Auto-deploy to staging on main branch
    • Manual approval required for production
  • Frontend CI/CD (.github/workflows/frontend.yml)

    • Lint (ESLint + Prettier), Build (Docker), Security Scan (Trivy)
    • Auto-deploy to staging on main branch
    • Manual approval required for production

Setup GitHub Actions

  1. Configure Secrets (Settings → Secrets and variables → Actions):

    • KUBE_SERVER: Kubernetes API URL (e.g., https://rancher.devops.cs.ut.ee/k8s/clusters/c-m-xxxxx)
    • KUBE_TOKEN: Service account token for staging namespace
    • KUBE_TOKEN_PROD: Service account token for production namespace
  2. Push to trigger workflows:

    git add .
    git commit -m "Update application"
    git push origin main
  3. Monitor builds: Go to Actions tab in GitHub

  4. Deploy to production:

    • Go to Actions → Select successful workflow → Click "Review deployments" → Approve

For detailed CI/CD setup, see CI_CD_SETUP.md.


🚀 Deployment

Verifying Kubernetes Connection

Check if your pods are still running from previous deployments:

# View staging pods
kubectl get pods -n power-ranger-staging

# View production pods
kubectl get pods -n power-ranger-prod

# Check deployment status
kubectl get deployments -n power-ranger-staging

For detailed verification steps, see KUBERNETES_CHECK.md.


βš™οΈ Production Deployment

This project uses Kubernetes with Helm charts for production deployment.

Prerequisites

  • Access to Kubernetes cluster (ETAIS)
  • kubectl configured with cluster access
  • helm installed
  • Service account tokens (KUBE_TOKEN)

Clone the Repository

git clone https://github.com/yourusername/reddit-clone.git
cd reddit-clone

🚀 Phase 4: Production Readiness

📦 Helm Templating & Safe Deployments

Why Helm?

We chose Helm over Kustomize for templating because:

  • Native support for atomic rollbacks via --atomic flag
  • Single command deploys with environment-specific values files
  • Clean separation of configuration (values.yaml) from templates
  • Built-in release management and versioning

Helm Chart Structure

helm/posts/
├── Chart.yaml              # Chart metadata (name, version)
├── values.yaml             # Default values
├── values-staging.yaml     # Staging-specific overrides
├── values-prod.yaml        # Production-specific overrides
└── templates/
    ├── deployment.yaml     # App deployment with probes
    ├── service.yaml        # ClusterIP service
    ├── ingress.yaml        # NGINX ingress with TLS
    ├── configmap.yaml      # Non-secret configuration
    ├── secret.yaml         # Database & JWT secrets
    ├── hpa.yaml            # Horizontal Pod Autoscaler
    ├── servicemonitor.yaml # Prometheus scraping config
    ├── networkpolicy.yaml  # Network security rules
    └── cnpg-cluster.yaml   # CloudNativePG database

Deploying Different Configurations

# Staging deployment
helm upgrade --install posts ./helm/posts \
  -f ./helm/posts/values-staging.yaml \
  --namespace power-ranger-staging

# Production deployment
helm upgrade --install posts ./helm/posts \
  -f ./helm/posts/values-prod.yaml \
  --namespace power-ranger-prod

πŸ” Secrets Management

Strategy: Secrets are never committed to Git. They are stored securely in GitHub Secrets and injected at deploy-time.

| Secret | Storage | Injection Method |
|--------|---------|------------------|
| DB_USER | Helm values file | --set-string secrets.dbUser |
| DB_PASS | Helm values file | --set-string secrets.dbPassword |
| DB_NAME | Helm values file | --set-string secrets.dbName |
| JWT_SECRET_KEY | Helm values file | --set-string secrets.jwtSecretKey |
| KUBE_TOKEN | GitHub Secret (protected) | Kubeconfig generation |
| KUBE_TOKEN_PROD | GitHub Secret (protected) | Production kubeconfig |
| KUBE_SERVER | GitHub Secret | Kubernetes API URL |

Why this approach?

  • Secrets never appear in Git history
  • Different secrets per environment (staging vs prod)
  • GitHub environment protection rules for production
  • Minimal exposure window (only during pipeline execution)

⚡ Atomic Rollback

Every deployment uses the --atomic flag, which ensures:

  • If a deployment fails its health checks → automatic rollback to the previous version
  • No manual intervention required
  • Zero downtime during failed deployments

# --atomic rolls back on failure; --timeout waits up to 5 minutes for health
helm upgrade --install posts ./helm/posts \
  --atomic \
  --timeout 5m

Evidence in pipeline: Failed deployments show "ROLLED BACK" in GitHub Actions logs.


🏭 Production Deployment & Autoscaling

Production vs Staging Deployment

| Aspect | Staging | Production |
|--------|---------|------------|
| Namespace | power-ranger-staging | power-ranger-prod |
| URL | power-ranger-staging.kubernetes.devops.cs.ut.ee | power-ranger-prod.kubernetes.devops.cs.ut.ee |
| Trigger | Automatic on main branch | Manual approval required |
| Replicas | 2 | 2-5 (HPA controlled) |
| Database | CNPG 2 instances | CNPG 2 instances (HA) |
| Token | KUBE_TOKEN | KUBE_TOKEN_PROD |

CI/CD Pipeline Flow

┌─────────┐    ┌─────────┐    ┌─────────┐    ┌─────────────────┐
│  Test   │───▶│  Build  │───▶│  Scan   │───▶│ Deploy Staging  │
│ (lint)  │    │(buildah)│    │ (trivy) │    │   (automatic)   │
└─────────┘    └─────────┘    └─────────┘    └────────┬────────┘
                                                      │
                                                      ▼
                                             ┌─────────────────┐
                                             │ Deploy Prod     │
                                             │ (manual trigger)│
                                             └─────────────────┘

📈 Horizontal Pod Autoscaler (HPA)

What: Automatically scales pods based on CPU utilization.

Configuration:

autoscaling:
  enabled: true
  minReplicas: 2        # Never go below 2 pods
  maxReplicas: 5        # Scale up to 5 pods max
  targetCPUUtilizationPercentage: 50  # Scale when CPU > 50%

How it works:

  1. Metrics Server collects CPU usage from pods
  2. HPA compares current usage vs target (50%)
  3. If usage > 50% → add pods (up to maxReplicas)
  4. If usage < 50% → remove pods (down to minReplicas)
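Steps 2-4 follow the standard Kubernetes HPA formula, desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization), clamped to the configured bounds. A sketch:

```python
import math

def desired_replicas(current: int, cpu_pct: float, target_pct: float = 50,
                     min_replicas: int = 2, max_replicas: int = 5) -> int:
    """Core Kubernetes HPA scaling formula, clamped to the configured bounds."""
    desired = math.ceil(current * cpu_pct / target_pct)
    return max(min_replicas, min(max_replicas, desired))
```

For example, 2 pods at 80% CPU against a 50% target yields ceil(2 × 80 / 50) = 4 pods.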

Viewing HPA status:

kubectl get hpa -n power-ranger-prod
# Shows: TARGETS (current/target), REPLICAS, AGE

🔄 Topology Spread Constraints

What: Ensures pods are distributed across different Kubernetes nodes.

Why: If one node fails, the application stays online because pods exist on other nodes.

Configuration:

topologySpreadConstraints:
  - maxSkew: 1                          # Max difference between nodes
    topologyKey: kubernetes.io/hostname  # Spread across nodes
    whenUnsatisfiable: ScheduleAnyway   # Best effort if can't spread
    labelSelector:
      matchLabels:
        app: posts-api

Verification:

kubectl get pods -n power-ranger-prod -o wide
# Shows pods running on DIFFERENT nodes

πŸ›‘οΈ Security Scanning (Trivy)

What: Scans container images for known vulnerabilities (CVEs).

When: After every image build, before deployment.

Example scan step (shown in GitLab CI syntax; the GitHub Actions workflows run the equivalent trivy command):

trivy_image_scan:
  stage: scan
  image: aquasec/trivy:0.53.0
  script:
    - trivy image --severity HIGH,CRITICAL --exit-code 1 "$IMAGE"

Behavior:

  • Scans for HIGH and CRITICAL vulnerabilities
  • --exit-code 1 → Pipeline fails if vulnerabilities are found
  • Prevents deploying insecure images to production

📊 Production Grafana Dashboards

We have two production dashboards providing comprehensive visibility into application health and resources.

Dashboard 1: Posts API - Production

| Panel | Purpose | Metric |
|-------|---------|--------|
| Pods Ready / Not Ready | Chaos engineering visibility | Pod state changes over time |
| Service Up | Health at a glance | Indicates if service is responding |
| Active Pods | Current replica count | Number of running pods |
| Pod Restarts | Container crash tracking | changes(kube_pod_status_phase{phase="Running"}[1h]) |
| Request Latency (p95 & p50) | Golden Signal: Latency | Response time percentiles |
| Response Status Codes | Traffic breakdown | 200, 201, 401, 404 responses |
| Request Rate by Endpoint | Golden Signal: Traffic | Requests/sec per route |
| Error Count by Status Code | Golden Signal: Errors | 4xx and 5xx errors |
| Total API Requests | Traffic volume | Cumulative request count |
| HPA max replicas | Autoscaling visibility | Shows max=5 |
| HPA min replicas | Autoscaling visibility | Shows min=2 |
| Backend Pod Memory/CPU | Golden Signal: Saturation | Resource utilization |

Dashboard 2: Frontend & Backend Resources Production

| Panel | Purpose | Metric |
|-------|---------|--------|
| All pod status | Complete pod overview | All pods with state history |
| Running DB | Database health | CNPG instance count (2) |
| DB Volume Usage | Storage saturation | Percentage of disk used |
| Frontend Pod Memory | Frontend resources | Memory per frontend pod |
| Frontend Pod CPU | Frontend resources | CPU cores per frontend pod |
| Backend Pod Memory | Backend resources | Memory per backend pod |
| Backend Pod CPU | Backend resources | CPU cores per backend pod |
| Posts total | Application data | Count of posts in database |

Four Golden Signals Coverage

| Signal | Panel(s) | Location |
|--------|----------|----------|
| Traffic | Request Rate, Request Rate by Endpoint | Posts API Dashboard |
| Latency | Request Latency (p95 & p50) | Posts API Dashboard |
| Errors | Error Count by Status Code, Response Status Codes | Posts API Dashboard |
| Saturation | Pod CPU, Pod Memory, DB Volume Usage | Both Dashboards |

Chaos Engineering Visibility

When a pod is killed (chaos testing), the dashboard shows:

  • Pods Ready/Not Ready → Dip in ready pods, then recovery
  • Active Pods → Temporary decrease, then back to normal
  • Pod state changes → Visual evidence of pod termination and recreation

Autoscaling Visibility

| Panel | Shows |
|-------|-------|
| HPA min replicas | Minimum pods (2) - never scales below |
| HPA max replicas | Maximum pods (5) - ceiling for scaling |
| Backend Pod CPU | CPU usage that triggers scaling decisions |

Monitoring Flow:

Request → Flask App → Prometheus Metrics → ServiceMonitor → Prometheus → Grafana

🧩 Statelessness

How Statelessness is Achieved

| Component | Strategy | Location |
|-----------|----------|----------|
| User/Post Data | PostgreSQL (CNPG) | Kubernetes cluster |
| Sessions | JWT Tokens (stateless) | Client-side cookies |
| File Uploads | Local filesystem* | Pod volume |

*For full statelessness, file uploads should be moved to MinIO/S3 or a ReadWriteMany volume.

Why Statelessness Matters

  • Pods can be killed/restarted without data loss
  • Horizontal scaling works seamlessly
  • Load balancing distributes traffic evenly
  • Enables chaos engineering (random pod termination)

πŸ” Quick Reference

URLs

  • Staging: https://power-ranger-staging.kubernetes.devops.cs.ut.ee/
  • Production: https://power-ranger-prod.kubernetes.devops.cs.ut.ee/

Health Endpoints

  • Readiness: /readyz - Used by Kubernetes to check if pod can receive traffic
  • Liveness: /healthz - Used by Kubernetes to check if pod should be restarted
  • Metrics: /metrics - Prometheus metrics endpoint

Useful Commands

# Check pod distribution across nodes
kubectl get pods -n power-ranger-prod -o wide

# Check HPA status
kubectl get hpa -n power-ranger-prod

# View pod logs
kubectl logs -n power-ranger-prod -l app=posts-api --tail=100

# Check CNPG database status
kubectl get cluster -n power-ranger-prod

🧪 Development

Code Quality

Run the following commands to maintain code quality:

ruff format  # Format code
ruff check   # Lint code

Testing

Run tests using pytest:

pytest -q

📁 Project Structure

reddit-clone/
├── .github/
│   └── workflows/              # GitHub Actions CI/CD
│       ├── backend.yml         # Backend pipeline
│       └── frontend.yml        # Frontend pipeline
├── app/
│   ├── __init__.py
│   ├── app.py                  # Main Flask application
│   ├── wsgi.py                 # WSGI entry point
│   └── tests/                  # Backend unit & API tests
│       └── test_app.py
├── frontend-main/
│   ├── src/
│   │   └── app/                # Next.js app router
│   │       ├── page.js         # Home page
│   │       ├── layout.js       # App layout
│   │       ├── login/          # Login page
│   │       ├── register/       # Register page
│   │       ├── create/         # Create post page
│   │       ├── profile/        # Profile page
│   │       ├── components/     # React components
│   │       └── lib/            # API client & utilities
│   ├── helm/frontend/          # Frontend Helm chart
│   ├── Dockerfile              # Frontend container build
│   └── package.json
├── ansible/
│   ├── ansible.cfg
│   ├── inventory
│   ├── playbooks/
│   └── templates/
├── helm/posts/                 # Backend Helm chart
│   ├── Chart.yaml
│   ├── values.yaml
│   ├── values-staging.yaml
│   ├── values-prod.yaml
│   └── templates/
│       ├── deployment.yaml
│       ├── service.yaml
│       ├── ingress.yaml
│       ├── configmap.yaml
│       ├── secret.yaml
│       ├── hpa.yaml
│       ├── servicemonitor.yaml
│       ├── networkpolicy.yaml
│       └── cnpg-cluster.yaml
├── k8s-staging/                # Legacy manual K8s files
│   ├── frontend-deployment.yaml
│   ├── frontend-service.yaml
│   └── frontend-ingress.yaml
├── k8s-prod/                   # Production K8s RBAC
│   └── cicd-rbac.yaml
├── docker-compose.yml          # Production compose
├── docker-compose.local.yml    # Local development stack
├── Dockerfile                  # Backend container build
├── migrate_db.py               # Database migration script
├── requirements.txt            # Python dependencies
├── requirements-dev.txt        # Development dependencies
├── pyproject.toml              # Ruff & tooling config
├── CI_CD_SETUP.md              # GitHub Actions setup guide
├── KUBERNETES_CHECK.md         # K8s verification guide
└── README.md                   # This file

💾 Database Schema

Users

  • id: Primary key
  • username: Unique username
  • email: Unique email address
  • password_hash: Hashed password
  • created_at: Registration timestamp

Posts

  • id: Primary key
  • title: Post title
  • description: Post content
  • image_url: Optional image path
  • user_id: Foreign key to users
  • is_private: Privacy flag
  • created_at: Creation timestamp

Votes

  • id: Primary key
  • post_id: Foreign key to posts
  • user_id: Foreign key to users
  • vote_type: "up" or "down"
  • created_at: Vote timestamp
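The three tables above map to roughly the following DDL, sketched here against SQLite so it is self-contained (the real schema is created by Flask-SQLAlchemy against PostgreSQL, so exact types and constraints may differ):

```python
import sqlite3

DDL = """
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    username TEXT UNIQUE NOT NULL,
    email TEXT UNIQUE NOT NULL,
    password_hash TEXT NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE posts (
    id INTEGER PRIMARY KEY,
    title TEXT NOT NULL,
    description TEXT,
    image_url TEXT,
    user_id INTEGER NOT NULL REFERENCES users(id),
    is_private BOOLEAN DEFAULT 0,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE votes (
    id INTEGER PRIMARY KEY,
    post_id INTEGER NOT NULL REFERENCES posts(id),
    user_id INTEGER NOT NULL REFERENCES users(id),
    vote_type TEXT CHECK (vote_type IN ('up', 'down')),
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    UNIQUE (post_id, user_id)  -- one vote per user per post
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
```

The UNIQUE (post_id, user_id) constraint is what makes the voting flow an upsert rather than an insert.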

🔄 Feature Logic Flows

User Registration Flow

  1. User submits username, email, and password via POST /register
  2. Backend validates input (checks for duplicates, sanitizes with html.escape())
  3. Password hashed using Werkzeug's generate_password_hash() (PBKDF2-SHA256)
  4. User record created in database
  5. Success response returned

Login Flow

  1. User submits username and password via POST /login
  2. Backend queries database for user by username
  3. Password verified using check_password_hash()
  4. If valid, JWT tokens generated (access + refresh)
  5. Tokens stored in HTTPOnly cookies
  6. User data returned in response

WebSocket Real-time Updates

  1. Client connects to WebSocket server (Flask-SocketIO)
  2. User votes on a post via POST /vote
  3. Backend updates vote count in database
  4. Backend emits vote_update event to all connected clients
  5. Clients receive updated vote counts in real-time
  6. UI updates without page refresh
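The fan-out in steps 4-5 can be modeled as a tiny in-process pub/sub. In the real app this role is played by Flask-SocketIO's emit; all names here are illustrative:

```python
class Broadcaster:
    """Minimal stand-in for Flask-SocketIO's broadcast: every connected
    client receives each emitted event (no rooms or namespaces here)."""

    def __init__(self):
        self.clients = []                 # each client is a callback

    def connect(self, handler):
        self.clients.append(handler)

    def emit(self, event: str, payload: dict):
        for handler in self.clients:      # fan out to every client
            handler(event, payload)

broadcaster = Broadcaster()
received = []
broadcaster.connect(lambda ev, data: received.append((ev, data)))
broadcaster.connect(lambda ev, data: received.append((ev, data)))
broadcaster.emit("vote_update", {"post_id": 1, "up": 5, "down": 2})
```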

Toggle Privacy Flow

  1. User clicks privacy toggle on their post
  2. Frontend sends PATCH /privacy with post_id, is_private, is_anonymous
  3. Backend verifies JWT token (ensures user owns the post)
  4. Input sanitized and validated
  5. Post record updated in database
  6. Success response with new privacy settings
  7. Post visibility changes:
    • Private: Only owner can see
    • Anonymous: Username shows as "Anonymous"
    • Public: Everyone can see with username

Voting Flow

  1. User clicks upvote/downvote button
  2. Frontend sends POST /vote with post_id and vote_type
  3. Backend checks if user already voted (unique constraint)
  4. If exists, update vote type; if new, create vote record
  5. Vote counts recalculated from database
  6. WebSocket event emitted with new counts
  7. All clients receive and display updated counts
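Steps 3-5 amount to an upsert keyed by (user_id, post_id) followed by a recount; an in-memory sketch (the real app enforces uniqueness with a database constraint):

```python
votes = {}   # (user_id, post_id) -> "up" | "down"

def cast_vote(user_id: int, post_id: int, vote_type: str) -> dict:
    """Upsert the user's vote, then recount from the store."""
    assert vote_type in ("up", "down")
    votes[(user_id, post_id)] = vote_type        # new vote or changed vote
    counts = {"up": 0, "down": 0}
    for (_, pid), vt in votes.items():
        if pid == post_id:
            counts[vt] += 1
    return counts
```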

🎨 Frontend

The frontend is a modern Next.js 15 application with the App Router and React 19. It provides a responsive UI for interacting with the backend API.

Frontend Project Structure

frontend-main/
├── src/
│   └── app/
│       ├── page.js             # Home page (post feed)
│       ├── layout.js           # Root layout with Header
│       ├── middleware.js       # Auth middleware
│       ├── login/page.js       # Login page
│       ├── register/page.js    # Registration page
│       ├── create/page.js      # Create post page
│       ├── profile/page.js     # User profile
│       ├── change-password/page.js  # Password change
│       ├── components/
│       │   ├── Header.js       # Navigation header
│       │   ├── PostList.js     # Post grid display
│       │   └── BodyWithImage.js # Login/Register layouts
│       └── lib/
│           └── api.js          # Axios API client
├── helm/frontend/              # Frontend Helm chart
├── deploy/                     # Ansible deployment
├── k8s-staging/                # K8s manifests
├── Dockerfile                  # Production container build
├── next.config.mjs             # Next.js configuration
├── tailwind.config.mjs         # Tailwind CSS config
├── package.json                # Dependencies
└── README.md                   # Frontend-specific docs

Frontend Features

  • Authentication Pages: Login, Register, Profile
  • Post Management: Create, View, Vote
  • Real-time Updates: Socket.IO integration for live voting
  • Responsive Design: Mobile-friendly with Tailwind CSS
  • Image Upload: Preview and upload images with posts
  • Privacy Controls: Toggle public/private/anonymous posts

Frontend Tech Stack

  • Next.js 15.5.9 with App Router and Turbopack
  • React 19 for UI components
  • Tailwind CSS for styling
  • Axios for HTTP requests
  • Socket.io-client for real-time updates
  • js-cookie for cookie management

Frontend Development

See frontend-main/README.md for detailed frontend documentation including:

  • Component structure
  • API integration
  • State management
  • Styling guidelines
  • Deployment to Kubernetes

🤖 AI Assistance Disclosure

This project was primarily developed manually with AI tools used for guidance and reference:

Phase 1-2 (Development):

  • Guidance on code structure and patterns
  • Debugging assistance when troubleshooting issues
  • Reference for CI/CD pipeline syntax

Phase 3-4 (Operations & Production Readiness):

AI usage in the operations phase was limited because of environment-specific configurations and credentials. AI was used for:

  • Generating YAML templates (manually adapted to the project)
  • Generating CI/CD steps from written instructions
  • Troubleshooting CI/CD pipeline configuration and error messages
  • Command syntax reference (kubectl, helm)
  • Generating Grafana dashboard queries
  • Documentation formatting suggestions

Note: All configurations, deployments, and infrastructure decisions were implemented and tested manually. AI provided guidance on syntax and approaches, but the actual implementation, testing, and verification was done by the team.
