
Finance Chatbot

AI-powered personal finance assistant with a modern React frontend and Node/Express backend. Get personalized advice on budgeting, saving, debt payoff, and financial goal planning.

✨ Features

  • Clean Modern UI: Notion-like design with sidebar navigation and responsive layout
  • Real-time Chat: Message streaming with typing indicators
  • Local-first: Uses Ollama for local LLM inference (no API keys needed)
  • Flexible LLM: Easily swap to OpenAI, Anthropic, or other providers
  • Production Ready: Deployable to Vercel (frontend) and Render (backend)
  • Full Error Handling: Clear error messages and comprehensive logging

🛠 Tech Stack

Frontend:

  • React 18 + Vite
  • Modern CSS with animations
  • Fetch API + Vite proxy for HTTP requests
  • Auto-responsive design (desktop, tablet, mobile)

Backend:

  • Node.js + Express
  • Axios for LLM API calls
  • Comprehensive error middleware
  • Environment-based configuration

LLM:

  • Default: Ollama (local, no quotas)
  • Alternatives: OpenAI, Anthropic, or any OpenAI-compatible API

📋 Prerequisites

For Local Development

  1. Node.js 18+

    node --version  # should be v18.x or higher
  2. Ollama (for local LLM)

    • Download from ollama.ai
    • Install and start: ollama serve
    • Pull a model: ollama pull gemma2:2b (2GB) or ollama pull llama3.1:8b (4GB)
    • Verify: curl http://localhost:11434/api/tags
  3. Git (to clone the repo)

🚀 Quick Start

Step 1: Clone and Install

git clone <repo-url>
cd finance-chatbot

# Backend dependencies
cd backend
npm install

# Frontend dependencies
cd ../frontend
npm install
cd ..  # Back to project root

Step 2: Start Ollama (Terminal 1)

ollama serve

You should see Ollama start listening on 127.0.0.1:11434.

If you instead see listen tcp 127.0.0.1:11434: bind: address already in use, Ollama is already running and you can skip this step.

Step 3: Start Backend (Terminal 2)

cd backend
npm start

You should see:

==================================================
🚀 Finance Chatbot Backend Started
==================================================
📍 Port: 8000
📍 Environment: development
🤖 LLM Provider: ollama
🌐 CORS Origins: http://localhost:5173

📋 Available Endpoints:
   GET  /              - API info & health
   GET  /health        - Health check
   POST /api/chat      - Chat with LLM
   GET  /api/test-llm  - Test LLM connection
   GET  /api/finance   - Finance data
   POST /api/transactions - Log an expense
   GET  /api/transactions  - List expenses
   GET  /api/summary       - Monthly summary
   POST /api/goal          - Save savings goal
   GET  /api/goal          - Fetch savings goal
==================================================

Step 4: Start Frontend (Terminal 3)

cd frontend
npm run dev

You should see:

VITE v4.4.9 ready in 123 ms

➜  Local:   http://localhost:5173/

Step 5: Open in Browser

Visit http://localhost:5173 and start asking questions!

The first message may take 10–30 seconds while Ollama loads the model into memory. Subsequent messages are much faster.

📑 API Endpoints

POST /api/chat

Send a message and get a response.

Request:

{
  "messages": [
    { "role": "user", "content": "How should I budget my income?" }
  ]
}

Response:

{
  "success": true,
  "reply": "A good starting point is the 50/30/20 rule..."
}

Error Response:

{
  "error": "INVALID_REQUEST",
  "message": "messages array is required and must not be empty",
  "timestamp": "2025-12-07T12:34:56.789Z"
}
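
For reference, calling the endpoint from a script might look like this. A minimal sketch assuming the local dev setup from the Quick Start (Node 18+ provides fetch globally):

// Minimal sketch: send one user message to POST /api/chat and print the reply.
// Assumes the backend from this README is running on http://localhost:8000.
async function askChatbot(question) {
  const res = await fetch('http://localhost:8000/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages: [{ role: 'user', content: question }] }),
  });

  const data = await res.json();
  if (!res.ok || !data.success) {
    // Error responses use the { error, message, timestamp } shape shown above.
    throw new Error(`${data.error || res.status}: ${data.message || 'request failed'}`);
  }
  return data.reply;
}

askChatbot('How should I budget my income?')
  .then((reply) => console.log(reply))
  .catch((err) => console.error(err.message));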

GET /health

Check backend health.

Response:

{
  "status": "ok",
  "uptime": "0h 2m",
  "environment": "development",
  "llmProvider": "ollama"
}

GET /api/test-llm

Test LLM connectivity.

Response:

{
  "success": true,
  "message": "LLM is working!",
  "response": "Hello! I am working correctly."
}

POST /api/transactions

Log an expense from free text such as "Starbucks 8.50". If the parsing heuristics cannot determine a category, the LLM picks one of food|transportation|shopping|bills|other (see the sketch after the examples below).

Request:

{
   "userId": "demo-user",
   "inputText": "Uber to airport 32"
}

Response:

{
   "success": true,
   "transaction": {
      "id": "...",
      "userId": "demo-user",
      "description": "Uber to airport",
      "amount": 32,
      "category": "transportation",
      "timestamp": "2025-12-08T00:00:00.000Z"
   }
}
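
A rough sketch of the heuristics-then-LLM categorization described above (the parsing regex, keyword lists, and helper names are illustrative, not the actual backend code):

// Sketch only: parse "Uber to airport 32" and categorize it, falling back to
// the LLM when keyword heuristics give up.
const CATEGORIES = ['food', 'transportation', 'shopping', 'bills', 'other'];

const KEYWORDS = {
  food: ['starbucks', 'coffee', 'restaurant', 'grocery'],
  transportation: ['uber', 'lyft', 'gas', 'bus', 'train'],
  bills: ['rent', 'electric', 'internet', 'phone'],
};

// "Uber to airport 32" -> { description: 'Uber to airport', amount: 32 }
function parseExpense(inputText) {
  const match = inputText.trim().match(/^(.*\S)\s+(\d+(?:\.\d+)?)$/);
  return match ? { description: match[1], amount: Number(match[2]) } : null;
}

function categorizeByKeywords(description) {
  const text = description.toLowerCase();
  for (const [category, words] of Object.entries(KEYWORDS)) {
    if (words.some((word) => text.includes(word))) return category;
  }
  return null; // heuristics gave up; the caller falls back to the LLM
}

async function categorize(description, askLlmForCategory) {
  const byKeyword = categorizeByKeywords(description);
  if (byKeyword) return byKeyword;
  const fromLlm = await askLlmForCategory(description); // expected to return one of CATEGORIES
  return CATEGORIES.includes(fromLlm) ? fromLlm : 'other';
}

// Example (LLM stubbed out):
// const { description } = parseExpense('Uber to airport 32');
// await categorize(description, async () => 'other'); // -> "transportation" via keywords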

GET /api/summary

Return monthly totals per user. The month query parameter accepts either 0-based (0–11) or 1-based (1–12) values.

Example: /api/summary?userId=demo-user&month=12&year=2025

Response:

{
   "success": true,
   "summary": {
      "total": 123.45,
      "byCategory": {
         "food": 45,
         "transportation": 60,
         "shopping": 0,
         "bills": 18.45,
         "other": 0
      }
   }
}
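
As an illustration, the aggregation behind this endpoint could look like the sketch below (transactions shaped like the POST /api/transactions response; the handling of 0- vs 1-based months is an assumption, not the backend's exact rule):

// Sketch: sum a user's transactions for one month into { total, byCategory }.
// Assumption: a month value of 1-12 is treated as 1-based (January = 1),
// anything else as a 0-based Date month index. The real code may differ.
function summarize(transactions, userId, month, year) {
  const monthIndex = month >= 1 && month <= 12 ? month - 1 : month;

  const summary = {
    total: 0,
    byCategory: { food: 0, transportation: 0, shopping: 0, bills: 0, other: 0 },
  };

  for (const tx of transactions) {
    const date = new Date(tx.timestamp);
    if (tx.userId !== userId) continue;
    if (date.getFullYear() !== year || date.getMonth() !== monthIndex) continue;
    summary.total += tx.amount;
    summary.byCategory[tx.category] = (summary.byCategory[tx.category] || 0) + tx.amount;
  }
  return summary;
}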

POST /api/goal

Store a savings goal.

{
   "userId": "demo-user",
   "targetMonthlyAmount": 500,
   "deadline": "2025-12-31"
}

POST /api/goal/suggestions

Ask the LLM to suggest 1–3 practical adjustments based on the current month's summary and the saved goal (a prompt-assembly sketch follows the example below).

{
   "userId": "demo-user"
}
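
The README does not spell out the prompt, but one way to assemble it from the saved goal and the current month's summary might be (purely illustrative, hypothetical helper name):

// Illustrative only: build the LLM prompt from the goal and monthly summary.
function buildSuggestionsPrompt(goal, summary) {
  const spentByCategory = Object.entries(summary.byCategory)
    .map(([category, amount]) => `${category}: $${amount.toFixed(2)}`)
    .join(', ');

  return [
    `Monthly savings target: $${goal.targetMonthlyAmount} by ${goal.deadline}.`,
    `Spending so far this month: $${summary.total.toFixed(2)} (${spentByCategory}).`,
    'Suggest 1-3 practical, specific adjustments that help hit the savings target.',
  ].join('\n');
}

// buildSuggestionsPrompt(
//   { targetMonthlyAmount: 500, deadline: '2025-12-31' },
//   { total: 123.45, byCategory: { food: 45, transportation: 60, shopping: 0, bills: 18.45, other: 0 } }
// );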

💾 Data persistence

  • Expenses and goals are stored in backend/src/data/finance.json (per user) and are loaded on server start.
  • Data is written asynchronously with a temp-file rename for basic durability (sketched below).
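
The write pattern is the usual "write a temp file, then rename it over the real one". A minimal sketch (the file path follows the README; the function name is illustrative):

// Sketch: save data so a crash mid-write never leaves finance.json half-written.
import { promises as fs } from 'node:fs';
import path from 'node:path';

const DATA_FILE = path.join('backend', 'src', 'data', 'finance.json');

async function saveData(data) {
  const tempFile = `${DATA_FILE}.tmp`;
  await fs.mkdir(path.dirname(DATA_FILE), { recursive: true });
  await fs.writeFile(tempFile, JSON.stringify(data, null, 2), 'utf8');
  await fs.rename(tempFile, DATA_FILE); // rename is atomic on the same filesystem
}

// Example: await saveData({ 'demo-user': { transactions: [], goal: null } });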

⚙️ Configuration

Backend Environment Variables

Create or edit backend/.env:

# Server
PORT=8000
NODE_ENV=development
FRONTEND_ORIGIN=http://localhost:5173

# LLM Provider (REQUIRED - choose one)
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434/v1
OLLAMA_MODEL=gemma2:2b

Available LLM Providers:

Option 1: Ollama (Local, Default)

LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434/v1
OLLAMA_MODEL=gemma2:2b        # or llama3.1:8b, mistral, etc

Option 2: OpenAI

LLM_PROVIDER=openai
OPENAI_API_KEY=sk-...
OPENAI_MODEL=gpt-3.5-turbo

Option 3: Anthropic

LLM_PROVIDER=anthropic
ANTHROPIC_API_KEY=sk-ant-...
ANTHROPIC_MODEL=claude-3-sonnet-20240229
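
All three options reduce to a provider, base URL, model, and (optionally) an API key. A sketch of how that selection might look (the real backend/src/config/llmConfig.js may be organized differently):

// Sketch: pick base URL / model / API key from LLM_PROVIDER.
function loadLlmConfig(env = process.env) {
  switch (env.LLM_PROVIDER) {
    case 'ollama':
      return {
        provider: 'ollama',
        baseUrl: env.OLLAMA_BASE_URL || 'http://localhost:11434/v1',
        model: env.OLLAMA_MODEL || 'gemma2:2b',
        apiKey: null, // Ollama needs no key
      };
    case 'openai':
      return {
        provider: 'openai',
        baseUrl: 'https://api.openai.com/v1',
        model: env.OPENAI_MODEL || 'gpt-3.5-turbo',
        apiKey: env.OPENAI_API_KEY,
      };
    case 'anthropic':
      return {
        provider: 'anthropic',
        baseUrl: 'https://api.anthropic.com',
        model: env.ANTHROPIC_MODEL || 'claude-3-sonnet-20240229',
        apiKey: env.ANTHROPIC_API_KEY,
      };
    default:
      throw new Error(`Unknown LLM_PROVIDER: ${env.LLM_PROVIDER}`);
  }
}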

Frontend Environment Variables

Create frontend/.env.development (optional):

# Development: Leave empty to use Vite proxy
VITE_API_URL=

# Production (in .env.production):
VITE_API_URL=https://your-api-domain.com

The frontend will:

  • In development: Use the Vite proxy (/api → http://localhost:8000); see the config sketch below
  • In production: Use the configured VITE_API_URL
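
The development proxy in frontend/vite.config.js looks roughly like this (a sketch; the real file may set additional options):

// vite.config.js (sketch): forward /api/* to the backend during development.
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  server: {
    proxy: {
      // Requests to /api/... are forwarded to the Express backend,
      // so the browser never makes a cross-origin call in development.
      '/api': {
        target: 'http://localhost:8000',
        changeOrigin: true,
      },
    },
  },
});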

🔧 How It Works

Request Flow

Frontend (localhost:5173)
    ↓
POST /api/chat (Vite proxy)
    ↓
Backend (localhost:8000)
    ↓
Ollama (localhost:11434)
    ↓
LLM Model (gemma2:2b, llama3.1:8b, etc)
    ↓
Response → Backend → Frontend → UI

Key Features

Frontend (useChatApi hook):

  • ✅ Uses relative URLs (/api/chat) in development
  • ✅ Vite proxy forwards to backend (no CORS issues)
  • ✅ Uses VITE_API_URL for production API
  • ✅ NO direct calls to Ollama
  • ✅ Comprehensive error messages
  • ✅ 120s timeout for slow LLM responses (see the sketch after this list)
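
A stripped-down sketch of that request logic (the real implementation lives in frontend/src/hooks/useChatApi.js and wraps this in a React hook; names here are illustrative):

// Sketch of the request logic: relative /api/chat in dev (Vite proxy),
// VITE_API_URL in production, and a 120s AbortController timeout.
const API_BASE = import.meta.env.VITE_API_URL || '';

async function sendChat(messages, timeoutMs = 120_000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);

  try {
    const res = await fetch(`${API_BASE}/api/chat`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ messages }),
      signal: controller.signal,
    });
    const data = await res.json();
    if (!res.ok || !data.success) {
      throw new Error(data.message || `Request failed with status ${res.status}`);
    }
    return data.reply;
  } finally {
    clearTimeout(timer);
  }
}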

Backend:

  • ✅ Express middleware for CORS, logging, error handling
  • ✅ Validates incoming messages
  • ✅ Calls Ollama via its OpenAI-compatible API (sketched after this list)
  • ✅ Returns JSON responses (never HTML)
  • ✅ Detailed console logging for debugging
  • ✅ Development mode includes error details
  • ✅ Production mode sanitizes errors
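
The Ollama call itself goes through the OpenAI-compatible chat completions route. A sketch of the idea (the actual code lives in backend/src/services/llamaService.js and may differ):

// Sketch: call Ollama's OpenAI-compatible /chat/completions endpoint with axios.
import axios from 'axios';

async function chatWithOllama(messages) {
  const baseUrl = process.env.OLLAMA_BASE_URL || 'http://localhost:11434/v1';
  const model = process.env.OLLAMA_MODEL || 'gemma2:2b';

  const { data } = await axios.post(
    `${baseUrl}/chat/completions`,
    { model, messages },
    { timeout: 120_000 } // generous timeout for slow local inference
  );

  // OpenAI-compatible responses put the reply at choices[0].message.content.
  return data.choices[0].message.content;
}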

πŸ› Troubleshooting

Issue: "Unable to reach the assistant" / 404 Error

Check:

  1. Is backend running on port 8000?

    curl http://localhost:8000/health

    Should return: {"status":"ok",...}

  2. Is Vite proxy configured?

    • Check frontend/vite.config.js - should have /api proxy
    • Make sure VITE_API_URL is empty in .env.development
  3. Check browser console (F12 → Console tab)

Fix:

# Restart backend
cd backend
npm start

# Restart frontend dev server
cd frontend
npm run dev

Issue: "Ollama service is not available" / 503 Error

Check:

  1. Is Ollama running?

    curl http://localhost:11434/api/tags
  2. Is the model pulled?

    ollama list

    Should show gemma2:2b or your configured model

Fix:

# Start Ollama
ollama serve

# Pull the model (if needed)
ollama pull gemma2:2b

Issue: "Request took too long" / Timeout

This happens when:

  • Ollama is initializing the model (first request)
  • Your computer is low on memory
  • The model is too large for your system

Solutions:

  1. Try a smaller model: ollama pull gemma2:2b (faster)
  2. Wait 30 seconds and try again
  3. Increase timeout in frontend/src/hooks/useChatApi.js (currently 120s)

Issue: Backend crashes on startup with "Unknown LLM_PROVIDER"

Check:

  1. Does backend/.env exist?
  2. Is LLM_PROVIDER=ollama (or valid provider) set?
  3. Are required env vars set for your provider?

Fix:

cd backend
cat .env  # Check contents

# If missing, create it:
echo "LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://localhost:11434/v1
OLLAMA_MODEL=gemma2:2b" > .env

npm start

📦 Deployment

Frontend → Vercel

  1. Push code to GitHub
  2. Create new project on Vercel
  3. Select finance-chatbot repo
  4. Configure build:
    • Root Directory: frontend
    • Build Command: npm run build
    • Output Directory: dist
  5. Add environment variable:
    VITE_API_URL=https://your-backend-api.com
    
  6. Deploy!

Backend → Render

  1. Create new Web Service on Render
  2. Connect GitHub repo
  3. Configure:
    • Root Directory: backend
    • Build Command: npm install
    • Start Command: npm start
    • Plan: Free (or paid)
  4. Add environment variables:
    PORT=8000
    NODE_ENV=production
    FRONTEND_ORIGIN=https://your-vercel-domain.com
    LLM_PROVIDER=openai
    OPENAI_API_KEY=sk-...
    OPENAI_MODEL=gpt-3.5-turbo
    
  5. Deploy!

Note: For production, use OpenAI/Anthropic instead of Ollama. Ollama is local-only.

πŸ“ Project Structure

finance-chatbot/
├── frontend/
│   ├── src/
│   │   ├── components/
│   │   │   ├── ChatInterface.jsx    # Chat UI component
│   │   │   ├── HomePage.jsx         # Home page
│   │   │   └── FinanceTools.jsx     # Tools page
│   │   ├── hooks/
│   │   │   └── useChatApi.js        # API integration
│   │   ├── styles/
│   │   │   ├── chat-interface.css   # Chat UI styles
│   │   │   ├── app-layout.css       # Layout styles
│   │   │   └── pages.css            # Page styles
│   │   ├── App.jsx                  # Main app component
│   │   └── main.jsx                 # Entry point
│   ├── vite.config.js               # Vite config with proxy
│   ├── .env.production              # Production config
│   └── package.json
│
├── backend/
│   ├── src/
│   │   ├── server.js                # Express setup + middleware
│   │   ├── api/
│   │   │   └── routes.js            # API routes
│   │   ├── services/
│   │   │   └── llamaService.js      # LLM API integration
│   │   ├── config/
│   │   │   └── llmConfig.js         # Provider config
│   │   └── utils/
│   │       └── validators.js        # Input validation
│   ├── .env                         # Environment config
│   ├── .env.example                 # Config template
│   └── package.json
│
└── README.md

🔐 Security

  • ✅ CORS configured (only accepts requests from the frontend domain)
  • ✅ Input validation on all routes
  • ✅ No sensitive data in error messages (production)
  • ✅ Environment variables for API keys
  • ✅ Request timeout to prevent DoS
  • ✅ Request body size limits (Express defaults); see the sketch after this list
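
Roughly how those pieces fit together in Express (a sketch with standard middleware; the real backend/src/server.js may differ):

// Sketch: CORS restricted to the frontend origin, a JSON body size cap,
// and an error handler that hides details outside development.
import express from 'express';
import cors from 'cors';

const app = express();

app.use(cors({ origin: process.env.FRONTEND_ORIGIN || 'http://localhost:5173' }));
app.use(express.json({ limit: '100kb' })); // matches Express's default body limit

// ...routes are registered here...

// Error handler goes last so it catches errors from the routes above.
app.use((err, req, res, next) => {
  const isDev = process.env.NODE_ENV === 'development';
  res.status(err.status || 500).json({
    error: 'INTERNAL_ERROR',
    message: isDev ? err.message : 'Something went wrong',
  });
});

app.listen(process.env.PORT || 8000);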

📊 Example Prompts

Try these questions:

  • "How should I budget as a college student?"
  • "I have $3,000 credit card debt. What's my best payoff strategy?"
  • "How much should I invest each month for retirement?"
  • "What's the difference between a 401k and an IRA?"
  • "How can I build an emergency fund?"

🤝 Contributing

Found a bug? Have a feature idea? Open an issue or PR!

πŸ“ License

MIT - Feel free to use this project however you like.

👀 Status

✅ In Development

  • Chat interface
  • Ollama integration
  • Error handling
  • Responsive design
  • Message history
  • User authentication
  • Finance data integration
  • Debt calculator

Questions? Check the Troubleshooting section or open an issue!
