All‑in‑one AI Studio that creates apps, games, WhatsApp bots, viral avatars and video content in seconds with full Hebrew & English support.
- Full‑stack monorepo (React frontend + Node/Express backend)
- AI generation pipelines for apps/games/media
- Realtime via Socket.io
- Viral avatars: TTS + (optional) local Wav2Lip lip‑sync service
- Production configs included (Railway/Render/Netlify/Vercel)
This is a monorepo:
- Frontend (root): React 18 + CRACO + TypeScript UI
- Backend (`peren55-backend/`): Node.js + Express API (Prisma, Socket.io, AI orchestration)
- Local AI services (`peren55-backend/ai/`): Python FastAPI services (Talking Head / Wav2Lip) + vendored models
- AI app/game generation
- Multi-agent generation pipeline (`peren55-backend/src/agents`, `peren55-backend/src/ai`)
- Templates + enrichment + validations
- Viral Avatars (Talking Head / Lip Sync)
- TTS (Edge‑TTS / fallback)
- Local Wav2Lip FastAPI service for lip-sync video generation
- Media tools
- Image/video tooling and export workflows
- Auth & payments
- OAuth (Google/GitHub)
- JWT sessions
- PayPal / Stripe integration
- Realtime
- Socket.io events + collaboration flows
- Intent + language routing: quick HE/EN detection with fallback to LLM intent detection (`peren55-backend/src/services/builderOrchestrator.ts`).
- Spec generation: convert prompt -> structured app spec (`spec_generation`).
- IR + scaffolding: generate an intermediate representation and scaffold a runnable project (`ir_generation`, `scaffold`).
- Agent specialization: category agents (App/Game/Media/Tool/etc.) live in `peren55-backend/src/agents/`.
- Realtime progress: every step emits progress events (job state + % + step) over Socket.io to keep the UI responsive.
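The steps above can be wired to a single cumulative progress model. A minimal TypeScript sketch — the step names, weights, and the `build:progress` event name are illustrative assumptions, not the actual contract in `builderOrchestrator.ts`:

```typescript
// Hypothetical pipeline steps, run in this order. Weights are assumed
// shares of total work (they sum to 1.0).
type Step = "intent_routing" | "spec_generation" | "ir_generation" | "scaffold";

const STEP_ORDER: Step[] = ["intent_routing", "spec_generation", "ir_generation", "scaffold"];
const STEP_WEIGHTS: Record<Step, number> = {
  intent_routing: 0.1,
  spec_generation: 0.3,
  ir_generation: 0.3,
  scaffold: 0.3,
};

// Cumulative progress = sum of completed step weights + the weighted
// fraction (0..1) of the step currently running.
function totalProgress(current: Step, stepProgress: number): number {
  let done = 0;
  for (const s of STEP_ORDER) {
    if (s === current) break;
    done += STEP_WEIGHTS[s];
  }
  return Math.min(1, done + STEP_WEIGHTS[current] * stepProgress);
}

// The orchestrator can then emit one event shape per step over Socket.io:
// io.to(`job:${jobId}`).emit("build:progress", {
//   jobId, step, progress: totalProgress(step, fraction), state: "running",
// });
```

Emitting to a per-job room (`job:${jobId}`) keeps updates scoped to the clients watching that build.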
- React 18, TypeScript
- CRACO (Create React App customization)
- TailwindCSS
- MUI, Mantine
- Three.js (`@react-three/fiber`, `@react-three/drei`) for 3D
- i18next for Hebrew/English
- Node.js 18+, Express
- Prisma (Postgres)
- Socket.io
- Cloudinary (media hosting)
- FFmpeg (media processing)
```mermaid
graph TD
    A[Frontend React TypeScript] -->|HTTP API| B[Backend Node Express]
    A -->|Realtime| S[Socket.io]
    B -->|Orchestration| O[Builder Orchestrator Agents]
    O --> L[LLM Providers]
    B --> DB[Postgres Prisma]
    B --> M[Media Cloudinary FFmpeg]
    B --> W[Optional FastAPI Wav2Lip]
    S --> B
```
- Python 3.10+
- FastAPI + Uvicorn
- Wav2Lip (vendorized under `peren55-backend/ai/talking_head/vendor/Wav2Lip`)
- edge-tts
- Node.js 18+
- Python 3.10+ (only if you want local Wav2Lip / Talking Head)
- FFmpeg (recommended)
```bash
npm install
npm --prefix ./peren55-backend install
```

Create these files:
- `./.env` from `./.env.example`
- `./peren55-backend/.env` from `./peren55-backend/.env.example`
```bash
npm run dev:windows
```

This starts:
- Frontend on http://localhost:3001
- Backend on http://localhost:3002
- Wav2Lip FastAPI on http://127.0.0.1:8001
- Prisma Studio on http://localhost:5555
- `npm run dev:windows` – start frontend + backend + wav2lip + prisma studio
- `npm run dev:frontend:windows` – start frontend only
- `npm run server:windows` – start backend only
- `npm --prefix ./peren55-backend run db:studio` – Prisma Studio
- Frontend: http://localhost:3001
- Backend health: http://localhost:3002/api/health
- Backend docs: http://localhost:3002/api-docs
- Wav2Lip health: http://127.0.0.1:8001/api/health
- Prisma Studio: http://localhost:5555
```
.
├─ src/                       # Frontend React app
├─ public/                    # Static assets
├─ peren55-backend/
│  ├─ src/                    # Express API + services
│  ├─ prisma/                 # Prisma schema/migrations
│  ├─ ai/
│  │  └─ talking_head/        # FastAPI (Wav2Lip) service
│  └─ temp/                   # Runtime temp files (ignored)
└─ render.yaml / railway.toml # Deployment configs
```
- Backend entrypoint: `peren55-backend/src/server.js`
- Self-hosted viral video pipeline: `peren55-backend/src/routes/selfHostedVideo.js`
- Local Wav2Lip service (FastAPI): `peren55-backend/ai/talking_head/main.py`
- Wav2Lip processing implementation: `peren55-backend/ai/talking_head/service_wav2lip.py`
- Frontend hub: `src/components/MediaToolsHub.tsx`
- `GET /api/health` – backend health check
- `POST /api/self-hosted/create-audio` – TTS generation
- `POST /api/self-hosted/create-video` – avatar video creation via Wav2Lip
- `POST /api/self-hosted/generate-viral-video` – full pipeline (Text -> TTS -> LipSync)
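A hedged client-side sketch of calling the full-pipeline endpoint. The request field names (`text`, `voice`, `avatarUrl`) and the response shape are assumptions — check the route handler in `peren55-backend/src/routes/selfHostedVideo.js` for the actual schema:

```typescript
// Assumed request payload for POST /api/self-hosted/generate-viral-video.
interface ViralVideoRequest {
  text: string;       // script to speak
  voice?: string;     // TTS voice id
  avatarUrl?: string; // face image/video for lip-sync
}

// Kept as a pure function so the payload shape is easy to test.
function buildViralVideoRequest(
  req: ViralVideoRequest
): { method: string; headers: Record<string, string>; body: string } {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  };
}

// Usage against the local dev backend:
// const res = await fetch(
//   "http://localhost:3002/api/self-hosted/generate-viral-video",
//   buildViralVideoRequest({ text: "Hello world", voice: "en-US" })
// );
```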
This repo contains configs for multiple providers:
- Railway:
railway.toml+nixpacks.toml - Render:
render.yaml+render-build.sh - Netlify:
netlify.toml(frontend) - Vercel:
vercel.json(frontend)
Recommended approach:
- Frontend: Netlify/Vercel (static)
- Backend: Railway/Render (Node)
- Viral avatars (local Wav2Lip): run on the same machine/instance as backend when using the local FastAPI service
- Realtime build progress: long-running AI jobs need continuous UI feedback.
- Solution: job orchestration emits structured progress events (step + cumulative progress) via Socket.io so the frontend can show status in real time.
- Mixed runtimes (Node + Python): keeping media pipelines stable across environments.
- Solution: isolate optional Wav2Lip/Talking-Head behind a FastAPI service boundary and treat it as an external dependency with healthchecks.
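The service-boundary idea can be sketched as a guarded healthcheck from the Node side. A minimal sketch, assuming Node 18+ (global `fetch`) — the base URL and timeout are assumptions, while `/api/health` matches the endpoint listed earlier:

```typescript
// Probe the optional Wav2Lip FastAPI service before routing avatar jobs
// to it; on any failure the caller can disable the feature or fall back.
async function isWav2LipAvailable(
  baseUrl = "http://127.0.0.1:8001",
  timeoutMs = 2000
): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(`${baseUrl}/api/health`, { signal: controller.signal });
    return res.ok;
  } catch {
    return false; // unreachable, timed out, or connection refused
  } finally {
    clearTimeout(timer);
  }
}
```

Treating the Python service this way means the backend degrades gracefully when Wav2Lip is not running instead of failing mid-pipeline.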
- Never commit `.env` files.
- Use `.env.example` files and configure real secrets in your hosting provider's environment variables.
GitHub Actions workflows live in .github/workflows/.
MIT (see LICENSE).
Contact