🤖 GenAI Project (Local LLM with Ollama)

📌 About

This repository contains a Generative AI (GenAI) application built using Ollama and modern LLM frameworks. The goal of this project is to run powerful AI models locally for tasks like chat, reasoning, and automation.

Unlike cloud-based AI tools, this project ensures:

  • 🔐 Privacy (runs completely on your machine)
  • ⚡ Low-latency responses
  • 💻 Offline capability (after the model is downloaded)

🚀 What is GenAI?

Generative AI refers to models that can generate content such as:

  • Text 📝
  • Code 💻
  • Conversations 💬
  • Ideas 💡

This project uses Large Language Models (LLMs) to generate intelligent responses.


✨ Features

  • 🧠 Local LLM execution using Ollama
  • 💬 Chat-based interaction system
  • ⚡ Fast and efficient responses
  • 🔄 Easy model switching (LLaMA, Mistral, etc.)
  • 🛠️ Beginner-friendly setup

🛠️ Tech Stack

  • Python 🐍
  • Ollama 🧠
  • LangChain (optional)
  • Environment variables (.env)
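The tech stack lists configuration via a .env file, but the README does not show how it is read. In practice the python-dotenv package is the usual choice; the stdlib-only sketch below shows the underlying idea, and the variable name in the usage note is purely illustrative.

```python
import os
from pathlib import Path


def load_env(path: str = ".env") -> dict:
    """Minimal .env loader: reads KEY=VALUE lines, skips blanks and '#' comments,
    and merges the result into os.environ."""
    env = {}
    p = Path(path)
    if p.exists():
        for line in p.read_text().splitlines():
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Strip optional surrounding quotes from the value.
            env[key.strip()] = value.strip().strip('"').strip("'")
    os.environ.update(env)
    return env
```

With a .env containing e.g. `HF_TOKEN=...` (a hypothetical key), `load_env()` makes it available as `os.environ["HF_TOKEN"]`.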

⚙️ Installation Guide

1️⃣ Clone the Repository

git clone https://github.com/your-username/genai-project.git
cd genai-project

2๏ธโƒฃ Setup Virtual Environment

python -m venv venv

Activate:

  • Windows:
venv\Scripts\activate
  • Linux/Mac:
source venv/bin/activate

3๏ธโƒฃ Install Dependencies

pip install -r requirements.txt

🧠 Set Up Ollama

Install Ollama

Download from: https://ollama.com


Pull a Model

ollama pull llama3

You can also use:

ollama pull mistral

Run the Model

ollama run llama3

▶️ Run the Application

python app.py
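The contents of app.py are not shown in this README. The sketch below is one minimal way such a script could talk to a locally running Ollama server through its default REST endpoint (`http://localhost:11434/api/generate`); the model name and prompt are illustrative, not the project's actual values.

```python
import json
import urllib.request

# Ollama's default local REST endpoint for single-turn generation.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.
    stream=False asks for one complete JSON response instead of chunks."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the generated text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Requires `ollama run llama3` (or the server) to be running locally.
    print(ask("llama3", "Explain what a local LLM is in one sentence."))
```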

🔄 Git Workflow (Clone → Commit → Push)

Clone

git clone https://github.com/your-username/genai-project.git

Add Changes

git add .

Commit

git commit -m "Added GenAI feature"

Push

git push origin main

Hugging Face models that work reliably for text generation:

  1. meta-llama/Llama-3.1-8B-Instruct
  2. deepseek-ai/DeepSeek-R1
  3. qwen2.5-coder

To run Ollama for reliable offline access (once a model is pulled, it runs without an internet connection):

  • ollama pull qwen2.5-coder
  • ollama run qwen2.5-coder
  • ollama pull llama3.2
  • ollama run llama3.2

🤖 FastAPI AI Chat API

An AI-powered REST API built with FastAPI and a Hugging Face LLM.

🚀 Features

  • FastAPI backend
  • AI chat endpoint backed by Mistral-7B
  • Clean, modular structure
  • APIs tested with Postman

🛠️ Tech Stack

  • FastAPI
  • LangChain
  • Hugging Face
  • Python

If the setup does not work, install the required packages:

  • pip install langchain langchain-huggingface
  • pip install huggingface_hub transformers sentence-transformers

How structured output works:

  • TypedDict schema
  • Pydantic
  • JSON
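Of the three options above, a TypedDict schema plus plain JSON can be sketched with the standard library alone; the `ChatTurn` fields below are illustrative, and Pydantic's `BaseModel` provides the same idea with full runtime validation built in.

```python
import json
from typing import TypedDict


class ChatTurn(TypedDict):
    """Schema for one chat exchange; field names are illustrative."""
    role: str
    content: str


def parse_turn(raw: str) -> ChatTurn:
    """Parse a JSON string and check it carries every key the schema declares.
    (TypedDict alone does no runtime checking, so we verify the keys by hand.)"""
    data = json.loads(raw)
    missing = set(ChatTurn.__annotations__) - set(data)
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return data
```

With Pydantic, the same check is `ChatTurn(BaseModel)` plus `ChatTurn.model_validate_json(raw)`, which also validates field types.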

📂 Project Structure

genai-project/
├── app.py
├── requirements.txt
├── .env
└── README.md

💡 Use Cases

  • 🤖 Chatbot development
  • 🧑‍💻 Code generation
  • 📚 Learning AI concepts
  • 🧠 Local AI assistants

🔮 Future Scope

  • 🌐 Web-based UI
  • 🧾 Memory-based conversations
  • 🔊 Voice interaction
  • 📊 Integration with databases

🤝 Contributing

Contributions are welcome! Feel free to fork and improve this project.


📜 License

This project is licensed under the MIT License.


🙌 Credits

  • Ollama
  • The open-source LLM community

⭐ Star this repo if you found it helpful!
