AI-powered personal finance assistant with a modern React frontend and Node/Express backend. Get personalized advice on budgeting, saving, debt payoff, and financial goal planning.
- Clean Modern UI: Notion-like design with sidebar navigation and responsive layout
- Real-time Chat: Message streaming with typing indicators
- Local-first: Uses Ollama for local LLM inference (no API keys needed)
- Flexible LLM: Easily swap to OpenAI, Anthropic, or other providers
- Production Ready: Deployable to Vercel (frontend) and Render (backend)
- Full Error Handling: Clear error messages and comprehensive logging
Frontend:
- React 18 + Vite
- Modern CSS with animations
- Fetch API + Vite proxy for HTTP requests
- Auto-responsive design (desktop, tablet, mobile)
Backend:
- Node.js + Express
- Axios for LLM API calls
- Comprehensive error middleware
- Environment-based configuration
LLM:
- Default: Ollama (local, no quotas)
- Alternatives: OpenAI, Anthropic, or any OpenAI-compatible API
Prerequisites:

- Node.js 18+
  node --version   # should be v18.x or higher
- Ollama (for local LLM)
  - Download from ollama.ai
  - Install and start: ollama serve
  - Pull a model: ollama pull gemma2:2b (2 GB) or ollama pull llama3.1:8b (4 GB)
  - Verify: curl http://localhost:11434/api/tags
- Git (to clone the repo)
git clone <repo-url>
cd finance-chatbot
# Backend dependencies
cd backend
npm install
# Frontend dependencies
cd ../frontend
npm install
cd ..   # Back to project root

Start Ollama:

ollama serve

You should see:

Listening on 127.0.0.1:11434

(If you instead see "listen tcp 127.0.0.1:11434: bind: address already in use", Ollama is already running and you can skip this step.)
cd backend
npm start

You should see:
==================================================
Finance Chatbot Backend Started
==================================================
Port: 8000
Environment: development
LLM Provider: ollama
CORS Origins: http://localhost:5173
Available Endpoints:
GET / - API info & health
GET /health - Health check
POST /api/chat - Chat with LLM
GET /api/test-llm - Test LLM connection
GET /api/finance - Finance data
POST /api/transactions - Log an expense
GET /api/transactions - List expenses
GET /api/summary - Monthly summary
POST /api/goal - Save savings goal
GET /api/goal - Fetch savings goal
==================================================
cd frontend
npm run dev

You should see:
VITE v4.4.9 ready in 123 ms
➜ Local: http://localhost:5173/
Visit http://localhost:5173 and start asking questions!
The first message may take 10-30 seconds (Ollama initializes). Subsequent messages are much faster.
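If you want to avoid that first-request delay, one option is to warm the model up from a terminal before opening the UI (optional; assumes the default gemma2:2b model):

```bash
# Load the model into memory with a one-off prompt so the first chat reply is fast
ollama run gemma2:2b "hi"
```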
Send a message and get a response.
Request:
{
"messages": [
{ "role": "user", "content": "How should I budget my income?" }
]
}

Response:
{
"success": true,
"reply": "A good starting point is the 50/30/20 rule..."
}

Error Response:
{
"error": "INVALID_REQUEST",
"message": "messages array is required and must not be empty",
"timestamp": "2025-12-07T12:34:56.789Z"
}
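As a quick sanity check, you can call the endpoint directly with curl (assuming the backend is running locally on port 8000; the reply text will vary by model):

```bash
curl -X POST http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"How should I budget my income?"}]}'
```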
Check backend health.

Response:
{
"status": "ok",
"uptime": "0h 2m",
"environment": "development",
"llmProvider": "ollama"
}
Test LLM connectivity.

Response:
{
"success": true,
"message": "LLM is working!",
"response": "Hello! I am working correctly."
}
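Both checks can be run from a terminal (assuming the default local setup):

```bash
# Backend health
curl http://localhost:8000/health

# End-to-end LLM connectivity
curl http://localhost:8000/api/test-llm
```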
Log an expense like "Starbucks 8.50". If heuristics cannot categorize, the LLM picks one of food|transportation|shopping|bills|other.

Request:
{
"userId": "demo-user",
"inputText": "Uber to airport 32"
}

Response:
{
"success": true,
"transaction": {
"id": "...",
"userId": "demo-user",
"description": "Uber to airport",
"amount": 32,
"category": "transportation",
"timestamp": "2025-12-08T00:00:00.000Z"
}
}
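A curl version of the same request (assuming the local backend; the generated id and timestamp will differ):

```bash
curl -X POST http://localhost:8000/api/transactions \
  -H "Content-Type: application/json" \
  -d '{"userId":"demo-user","inputText":"Uber to airport 32"}'
```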
Return monthly totals per user. The month query parameter accepts either 0- or 1-based values.

Example: /api/summary?userId=demo-user&month=12&year=2025
Response:
{
"success": true,
"summary": {
"total": 123.45,
"byCategory": {
"food": 45,
"transportation": 60,
"shopping": 0,
"bills": 18.45,
"other": 0
}
}
}
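For example, fetching the December 2025 summary for the demo user (the quotes keep the shell from splitting the query string):

```bash
curl "http://localhost:8000/api/summary?userId=demo-user&month=12&year=2025"
```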
Store a savings goal.

{
"userId": "demo-user",
"targetMonthlyAmount": 500,
"deadline": "2025-12-31"
}
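To store the goal above with curl (assuming the local backend):

```bash
curl -X POST http://localhost:8000/api/goal \
  -H "Content-Type: application/json" \
  -d '{"userId":"demo-user","targetMonthlyAmount":500,"deadline":"2025-12-31"}'
```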
Ask the LLM to suggest 1–3 practical adjustments based on the current month summary and the saved goal.

{
"userId": "demo-user"
}

- Expenses and goals are stored in backend/src/data/finance.json (per user) and are loaded on server start.
- Data is written asynchronously with a temp-file rename for basic durability.
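As a rough illustration of that write pattern (the backend does this in Node; the shell sketch below and the $NEW_FINANCE_JSON variable are purely illustrative):

```bash
# Write the new JSON to a temp file first, then rename it over the real file.
# A rename on the same filesystem is atomic, so readers never see a half-written file.
printf '%s' "$NEW_FINANCE_JSON" > backend/src/data/finance.json.tmp
mv backend/src/data/finance.json.tmp backend/src/data/finance.json
```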
Create or edit backend/.env:
# Server
PORT=8000
NODE_ENV=development
FRONTEND_ORIGIN=http://localhost:5173
# LLM Provider (REQUIRED - choose one)
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434/v1
OLLAMA_MODEL=gemma2:2b

Available LLM Providers:

# Ollama (local, default)
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434/v1
OLLAMA_MODEL=gemma2:2b        # or llama3.1:8b, mistral, etc.

# OpenAI
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-3.5-turbo

# Anthropic
LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_MODEL=claude-3-sonnet-20240229

Create frontend/.env.development (optional):
# Development: Leave empty to use Vite proxy
VITE_API_URL=
# Production (in .env.production):
VITE_API_URL=https://your-api-domain.com

The frontend will:
- In development: Use the Vite proxy (/api → http://localhost:8000); a quick proxy check is sketched below
- In production: Use the configured VITE_API_URL
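To confirm the development proxy is wired up, the same request should succeed through the Vite dev server and against the backend directly (assuming the default ports):

```bash
# Through the Vite dev-server proxy (port 5173, forwarded to the backend)
curl http://localhost:5173/api/test-llm

# Directly against the backend (port 8000) - should return the same JSON
curl http://localhost:8000/api/test-llm
```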
Frontend (localhost:5173)
   ↓
POST /api/chat (Vite proxy)
   ↓
Backend (localhost:8000)
   ↓
Ollama (localhost:11434)
   ↓
LLM Model (gemma2:2b, llama3.1:8b, etc.)
   ↓
Response → Backend → Frontend → UI
Frontend (useChatApi hook):
- ✅ Uses relative URLs (/api/chat) in development
- ✅ Vite proxy forwards to backend (no CORS issues)
- ✅ Uses VITE_API_URL for production API
- ✅ NO direct calls to Ollama
- ✅ Comprehensive error messages
- ✅ 120s timeout for slow LLM responses

Backend:
- ✅ Express middleware for CORS, logging, error handling
- ✅ Validates incoming messages
- ✅ Calls Ollama via OpenAI-compatible API
- ✅ Returns JSON responses (never HTML)
- ✅ Detailed console logging for debugging
- ✅ Development mode includes error details
- ✅ Production mode sanitizes errors
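One way to see the validation and JSON-only error behaviour in action is to send an intentionally invalid request (assuming the local backend):

```bash
# An empty messages array should return the INVALID_REQUEST JSON error shown above, never an HTML error page
curl -X POST http://localhost:8000/api/chat \
  -H "Content-Type: application/json" \
  -d '{"messages":[]}'
```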
Check:

- Is backend running on port 8000?
  curl http://localhost:8000/health
  Should return: {"status":"ok",...}
- Is Vite proxy configured?
  - Check frontend/vite.config.js - it should have an /api proxy
  - Make sure VITE_API_URL is empty in .env.development
- Check browser console (F12 → Console tab)
Fix:
# Restart backend
cd backend
npm start
# Restart frontend dev server
cd frontend
npm run dev

Check:
- Is Ollama running?
  curl http://localhost:11434/api/tags
- Is the model pulled?
  ollama list
  Should show gemma2:2b or your configured model
Fix:
# Start Ollama
ollama serve
# Pull the model (if needed)
ollama pull gemma2:2b

This happens when:
- Ollama is initializing the model (first request)
- Your computer is low on memory
- The model is too large for your system
Solutions:
- Try a smaller model: ollama pull gemma2:2b (faster)
- Wait 30 seconds and try again
- Increase the timeout in frontend/src/hooks/useChatApi.js (currently 120s)
Check:
- Does backend/.env exist?
- Is LLM_PROVIDER=ollama (or another valid provider) set?
- Are required env vars set for your provider?
Fix:
cd backend
cat .env # Check contents
# If missing, create it:
echo "LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434/v1
OLLAMA_MODEL=gemma2:2b" > .env
npm start

Frontend (Vercel):

- Push code to GitHub
- Create a new project on Vercel
- Select the finance-chatbot repo
- Configure build:
  - Root Directory: frontend
  - Build Command: npm run build
  - Output Directory: dist
- Add environment variable: VITE_API_URL=https://your-backend-api.com
- Deploy!
Backend (Render):

- Create a new Web Service on Render
- Connect the GitHub repo
- Configure:
  - Root Directory: backend
  - Build Command: npm install
  - Start Command: npm start
  - Plan: Free (or paid)
- Add environment variables:
  PORT=8000
  NODE_ENV=production
  FRONTEND_ORIGIN=https://your-vercel-domain.com
  LLM_PROVIDER=openai
  OPENAI_API_KEY=sk-...
  OPENAI_MODEL=gpt-3.5-turbo
- Deploy!
Note: For production, use OpenAI/Anthropic instead of Ollama. Ollama is local-only.
finance-chatbot/
├── frontend/
│   ├── src/
│   │   ├── components/
│   │   │   ├── ChatInterface.jsx    # Chat UI component
│   │   │   ├── HomePage.jsx         # Home page
│   │   │   └── FinanceTools.jsx     # Tools page
│   │   ├── hooks/
│   │   │   └── useChatApi.js        # API integration
│   │   ├── styles/
│   │   │   ├── chat-interface.css   # Chat UI styles
│   │   │   ├── app-layout.css       # Layout styles
│   │   │   └── pages.css            # Page styles
│   │   ├── App.jsx                  # Main app component
│   │   └── main.jsx                 # Entry point
│   ├── vite.config.js               # Vite config with proxy
│   ├── .env.production              # Production config
│   └── package.json
│
├── backend/
│   ├── src/
│   │   ├── server.js                # Express setup + middleware
│   │   ├── api/
│   │   │   └── routes.js            # API routes
│   │   ├── services/
│   │   │   └── llamaService.js      # LLM API integration
│   │   ├── config/
│   │   │   └── llmConfig.js         # Provider config
│   │   └── utils/
│   │       └── validators.js        # Input validation
│   ├── .env                         # Environment config
│   ├── .env.example                 # Config template
│   └── package.json
│
└── README.md
- ✅ CORS configured (only accepts requests from the frontend domain)
- ✅ Input validation on all routes
- ✅ No sensitive data in error messages (production)
- ✅ Environment variables for API keys
- ✅ Request timeout to prevent DoS
- ✅ Request body size limits (Express defaults)
Try these questions:
- "How should I budget as a college student?"
- "I have $3,000 credit card debt. What's my best payoff strategy?"
- "How much should I invest each month for retirement?"
- "What's the difference between a 401k and an IRA?"
- "How can I build an emergency fund?"
Found a bug? Have a feature idea? Open an issue or PR!
MIT - Feel free to use this project however you like.
Status: In Development
- Chat interface
- Ollama integration
- Error handling
- Responsive design
- Message history
- User authentication
- Finance data integration
- Debt calculator
Questions? Check the Troubleshooting section or open an issue!