βš™οΈ Isabella – a full-stack πŸš€ conversational system built on FastAPI ✨ featuring 🎭 ML-powered emotion detection microservices (RoBERTa/Deep Learning) embedded into LLM, 🎀 advanced TTS generation pipeline, 🧠 memory-aware context handling,πŸ“Š rich logs + analytics , and πŸ” a dynamic persona delivering consistent, signature responses.

Isabella

GitHub License GitHub Stars GitHub Forks GitHub Issues GitHub Pull Requests Contributions Welcome

Last Commit Commit Activity Repo Size Code Size

Top Language Languages Count

Documentation Open Source Love

πŸ“ About

Isabella is an AI-powered chatbot with a terminal-style UI, featuring emotion detection capabilities. Built with React + TypeScript frontend and FastAPI backend, powered by LongCat API and advanced ML models for emotion analysis.

πŸ”— Quick Links

πŸ“‘ Table of Contents

✨ Features

  • πŸ€– AI-Powered Chat: Leverages LongCat API for intelligent conversations
  • 🎨 Terminal Aesthetic: Black background, green text, monospace font for authentic CLI feel
  • 🧠 Thinking Mode Toggle:
    • ON: Uses LongCat-Thinker model (deeper reasoning)
    • OFF: Uses LongCat-Flash-Chat model (faster responses)
  • 😊 Emotion Detection: Advanced ML-based emotion analysis using PyTorch and Transformers
  • πŸ’Ύ MongoDB Integration: Persistent chat history storage
  • πŸ“œ Chat History: Loads last 50 messages on startup
  • πŸ”„ Context Window: Sends last 10 messages to AI for conversation continuity
  • πŸ“Š Comprehensive Logging: Detailed server-side logs for all operations
  • πŸ“œ Auto-scroll: Chat window automatically scrolls to show new messages
  • πŸ”Š Text-to-Speech: AI responses spoken using Piper TTS (local, offline)
  • ⚑ Single-page Application: No routing, streamlined UX
  • πŸ”’ Type-safe Implementation: Full TypeScript for frontend reliability

πŸ›  Tech Stack

Languages

TypeScript Python JavaScript

Frameworks & Libraries

React FastAPI Vite PyTorch

Databases

MongoDB

DevOps / CI / Tools

ESLint npm Git

πŸ“¦ Dependencies & Packages

Frontend Dependencies

Runtime Dependencies

axios react react-dom react-markdown

  • axios ^1.13.2 - Promise-based HTTP client for API requests
  • react ^19.2.0 - Core React library
  • react-dom ^19.2.0 - React DOM rendering
  • react-markdown ^10.1.0 - Markdown rendering in React
Dev/Build/Test Dependencies

@eslint/js @types/node @types/react @types/react-dom @vitejs/plugin-react eslint eslint-plugin-react-hooks eslint-plugin-react-refresh globals typescript typescript-eslint vite

  • @eslint/js ^9.39.1 - ESLint JavaScript configuration
  • @types/node ^24.10.0 - TypeScript definitions for Node.js
  • @types/react ^19.2.2 - TypeScript definitions for React
  • @types/react-dom ^19.2.2 - TypeScript definitions for React DOM
  • @vitejs/plugin-react ^5.1.0 - Vite plugin for React
  • eslint ^9.39.1 - JavaScript/TypeScript linter
  • eslint-plugin-react-hooks ^5.2.0 - ESLint rules for React Hooks
  • eslint-plugin-react-refresh ^0.4.24 - ESLint plugin for React Fast Refresh
  • globals ^16.5.0 - Global identifiers from different JavaScript environments
  • typescript ~5.9.3 - TypeScript compiler
  • typescript-eslint ^8.46.3 - TypeScript ESLint parser and plugin
  • vite ^7.2.2 - Next-generation frontend build tool

Backend Dependencies

Runtime Dependencies

fastapi uvicorn httpx python-dotenv motor pymongo torch transformers

  • fastapi 0.115.0 - Modern, fast web framework for building APIs
  • uvicorn 0.32.0 - ASGI server implementation
  • httpx 0.27.2 - Async HTTP client
  • python-dotenv 1.0.1 - Environment variable management
  • motor 3.3.2 - Async MongoDB driver
  • pymongo 4.6.1 - MongoDB driver for Python
  • torch >=2.0.0 - PyTorch machine learning framework
  • transformers >=4.30.0 - Hugging Face transformers for NLP/ML

πŸš€ Installation

Prerequisites

  • Node.js 18+ and npm
  • Python 3.8+
  • MongoDB 7.0+ (locally or via Docker)
  • Git

Backend Setup

  1. Install and start MongoDB:

    # Using Docker (recommended)
    docker run -d -p 27017:27017 --name mongodb mongo:7.0
    
    # Or install MongoDB locally and start it
    # mongod --dbpath /path/to/data
  2. Navigate to the backend directory:

    cd backend
  3. Create and activate a virtual environment:

    python3 -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  4. Install dependencies:

    pip install -r requirements.txt
  5. Create a .env file with your LongCat API key:

    echo "LONGCAT_API_KEY=your_actual_api_key_here" > .env
  6. Set up Piper TTS (optional, for text-to-speech):

    a. Download the Piper TTS binary for your platform.

    b. Download the en_US-amy-medium voice model.

    See backend/piper_tts/README.md for detailed instructions.

    Note: TTS is optional; the chatbot works without it.

  7. Start the FastAPI server:

    uvicorn main:app --reload --port 5000

    The backend will run at: http://localhost:5000

Frontend Setup

  1. Install dependencies:

    npm install
  2. Start the development server:

    npm run dev

    The frontend will run at: http://localhost:5173

⚑ Usage

  1. Open the frontend in your browser (http://localhost:5173)
  2. You'll see a terminal-style interface with:
    • A "Thinking Mode" checkbox at the top
    • A chat window showing conversation history
    • An input box at the bottom for typing messages
  3. Toggle "Thinking Mode" to switch between AI models:
    • βœ… ON: Uses LongCat-Thinker (thoughtful, detailed responses)
    • ⬜ OFF: Uses LongCat-Flash-Chat (faster, concise responses)
  4. Type your message and press Enter or click SEND
  5. The AI response will appear in the terminal window
  6. The chat window will automatically scroll to show new messages
  7. If TTS is configured, AI responses will be spoken automatically
  8. Audio controls appear below each AI message for manual playback

πŸ“‘ API Endpoints

POST /chat

Send a message to the AI chatbot.

Request Body:

{
  "message": "Your question here",
  "thinking": true
}

Response:

{
  "reply": "AI response here",
  "audio_file": "speech_uuid.wav"
}
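
As a quick smoke test, the endpoint can be exercised from Python with httpx (already a backend dependency). This is a hedged sketch; adjust host and port if you changed them, and note that audio_file only appears when TTS is configured.

import httpx

resp = httpx.post(
    "http://localhost:5000/chat",
    json={"message": "Hello, Isabella!", "thinking": False},
    timeout=60.0,  # Thinker-mode replies can take a while
)
resp.raise_for_status()
data = resp.json()
print(data["reply"])
print(data.get("audio_file"))  # None when TTS is not configured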

GET /messages

Fetch the last 50 messages from chat history.

Response:

{
  "messages": [
    {
      "_id": "...",
      "role": "user",
      "content": "Hello!",
      "timestamp": "2025-11-10T14:02:31.537000",
      "thinking": false,
      "model": "LongCat-Flash-Chat"
    }
  ]
}

POST /tts

Generate speech from text using Piper TTS.

Request Body:

{
  "text": "Text to convert to speech"
}

Response:

{
  "audio_file": "speech_uuid.wav"
}

GET /tts/audio/{filename}

Retrieve a generated audio file.

Response:

  • Audio file in WAV format

πŸ“‚ Folder Structure

Isabella/
β”œβ”€β”€ src/                    # Frontend React application
β”‚   β”œβ”€β”€ components/
β”‚   β”‚   β”œβ”€β”€ ChatWindow.tsx
β”‚   β”‚   β”œβ”€β”€ ThinkingToggle.tsx
β”‚   β”‚   └── IsolateToggle.tsx
β”‚   β”œβ”€β”€ assets/
β”‚   β”œβ”€β”€ App.tsx
β”‚   β”œβ”€β”€ App.css
β”‚   β”œβ”€β”€ main.tsx
β”‚   └── index.css
β”œβ”€β”€ backend/
β”‚   β”œβ”€β”€ config/            # Configuration modules
β”‚   β”‚   └── database.py    # MongoDB connection
β”‚   β”œβ”€β”€ models/            # Data models
β”‚   β”‚   └── chat.py
β”‚   β”œβ”€β”€ routes/            # API routes
β”‚   β”‚   β”œβ”€β”€ chat.py
β”‚   β”‚   └── tts.py
β”‚   β”œβ”€β”€ services/          # Business logic
β”‚   β”‚   β”œβ”€β”€ chat_service.py
β”‚   β”‚   └── tts_service.py
β”‚   β”œβ”€β”€ ml_models/         # Machine learning models
β”‚   β”‚   └── emotion_detector_model/
β”‚   β”œβ”€β”€ datasets/          # Training datasets
β”‚   β”‚   └── emotion_detection_dataset/
β”‚   β”œβ”€β”€ tests/             # Backend tests
β”‚   β”‚   β”œβ”€β”€ test_emotion_integration.py
β”‚   β”‚   └── test_timestamp_context.py
β”‚   β”œβ”€β”€ utils/             # Utilities
β”‚   β”‚   └── logger.py
β”‚   β”œβ”€β”€ main.py            # FastAPI entry point
β”‚   β”œβ”€β”€ requirements.txt
β”‚   β”œβ”€β”€ ARCHITECTURE.md
β”‚   β”œβ”€β”€ EMOTION_DETECTION.md
β”‚   β”œβ”€β”€ QUICKSTART_EMOTION.md
β”‚   └── README.md
β”œβ”€β”€ public/
β”‚   └── vite.svg
β”œβ”€β”€ .github/               # GitHub configuration
β”‚   β”œβ”€β”€ ISSUE_TEMPLATE/    # Issue templates
β”‚   └── pull_request_template.md
β”œβ”€β”€ package.json
β”œβ”€β”€ package-lock.json
β”œβ”€β”€ tsconfig.json
β”œβ”€β”€ tsconfig.app.json
β”œβ”€β”€ tsconfig.node.json
β”œβ”€β”€ vite.config.ts
β”œβ”€β”€ eslint.config.js
β”œβ”€β”€ index.html
β”œβ”€β”€ README.md
β”œβ”€β”€ LICENSE
β”œβ”€β”€ CONTRIBUTING.md
β”œβ”€β”€ SECURITY.md
└── CODE_OF_CONDUCT.md

πŸ’» Development

Build Frontend

npm run build

Lint Frontend

npm run lint

Preview Production Build

npm run preview

Run Backend Tests

cd backend
python -m pytest tests/
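
New tests can follow the pattern of the existing ones in backend/tests/. The snippet below is only an illustrative sketch using FastAPI's TestClient; it assumes main.app is importable and that MongoDB and the LongCat key are configured (the real tests may stub these out differently).

# backend/tests/test_smoke.py (illustrative sketch, not part of the repo)
from fastapi.testclient import TestClient

from main import app

client = TestClient(app)

def test_messages_endpoint_returns_list():
    response = client.get("/messages")
    assert response.status_code == 200
    assert isinstance(response.json().get("messages"), list)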

πŸ” Environment Variables

Backend .env

  • LONGCAT_API_KEY: Your LongCat API key (required)

πŸ—„ MongoDB Configuration

The application uses MongoDB to store chat history:

  • Connection URL: mongodb://127.0.0.1:27017/isabella
  • Database: isabella
  • Collection: chats

Database Schema

{
  "_id": ObjectId,
  "role": String,          // "user" or "assistant"
  "content": String,       // Message content
  "timestamp": ISODate,    // Message timestamp
  "thinking": Boolean,     // Thinking mode enabled
  "model": String          // AI model used
}
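
A hedged sketch of reading and writing documents in this shape with motor, the async MongoDB driver from the backend requirements. Database and collection names follow the configuration above; the project's own data-access code may be organized differently.

from datetime import datetime, timezone

from motor.motor_asyncio import AsyncIOMotorClient

client = AsyncIOMotorClient("mongodb://127.0.0.1:27017")
chats = client["isabella"]["chats"]

async def save_message(role: str, content: str, thinking: bool, model: str) -> None:
    await chats.insert_one({
        "role": role,
        "content": content,
        "timestamp": datetime.now(timezone.utc),
        "thinking": thinking,
        "model": model,
    })

async def last_messages(limit: int = 10) -> list:
    # Newest first from MongoDB, then reversed into chronological order
    # (mirrors the 10-message context window described elsewhere in this README).
    docs = await chats.find().sort("timestamp", -1).to_list(length=limit)
    return list(reversed(docs))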

πŸ“Š Logging

The backend provides comprehensive logging for debugging and monitoring:

  • MongoDB connection status
  • All user messages and AI responses
  • Context window contents (last 10 messages sent to AI)
  • API calls and errors
  • Database operations
  • Emotion detection results

Check the server console for detailed logs of all operations.
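
The project keeps its logging helpers in backend/utils/logger.py. The snippet below is only an illustrative sketch of this kind of setup using Python's standard logging module, not the repository's actual implementation.

import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s | %(levelname)-8s | %(name)s | %(message)s",
)
logger = logging.getLogger("isabella")

logger.info("MongoDB connected at mongodb://127.0.0.1:27017/isabella")
logger.info("Context window: sending last 10 messages to the model")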

🀝 Contributing

We welcome contributions! Please see our Contributing Guidelines for details on:

  • How to fork and contribute
  • Code style and linting rules
  • Bug reporting and feature requests
  • Testing and documentation

πŸ“œ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ›‘ Security

Security is important to us. Please see our Security Policy for information on:

  • Reporting vulnerabilities
  • Security contact information
  • Vulnerability handling process

πŸ“ Code of Conduct

This project adheres to the Contributor Covenant Code of Conduct. By participating, you are expected to uphold this code.

πŸ“ Notes

  • The backend must be running on port 5000 for the frontend to connect properly
  • MongoDB must be running on port 27017 (default)
  • Update the API URL in App.tsx if deploying to production
  • For production use, configure CORS properly in main.py with specific allowed origins
  • The terminal styling uses monospace fonts and green (#0f0) text on black (#111) background
  • Chat history is automatically loaded when the page loads
  • The AI receives the last 10 messages as context for better conversation continuity

Made with ❀ by H0NEYP0T-466
