Humanize AI is a text rewriting and paraphrasing service that leverages advanced T5-based models from Hugging Face. The goal is to generate text that preserves meaning while making it sound more natural and less detectable by AI detectors.
Note: This project is still under active development and not all features are finalized.
Note: Visit the frontend at https://humanize-ai-frontend.vercel.app
```
.
├── api
│   ├── index.py            # Main Flask server
│   ├── download_model.py   # Script to download and save a T5 model locally
│   ├── models.txt          # List of model names or references
│   ├── requirements.txt    # Python dependencies
│   ├── vercel.json         # Vercel deployment configuration
│   ├── makefile            # (Optional) Make commands for building/deployment
│   └── ...
├── .env                    # Environment variables (HF_TOKEN, etc.)
└── .gitignore
```
- `index.py`: Contains the Flask application with its endpoints (`/`, `/hello`, `/paraphrase`).
- `download_model.py`: Downloads the `t5-small` model and tokenizer to a local folder.
- `models.txt`: Lists models or references to be used or downloaded.
- `requirements.txt`: Lists Python dependencies (Flask, requests, python-dotenv, etc.).
- `vercel.json`: Configuration file if you choose to deploy on Vercel.
- **Paraphrasing**: Leverages a T5-based paraphrasing model (e.g., `humarin/chatgpt_paraphraser_on_T5_base` or `Vamsi/T5_Paraphrase_Paws`) via the Hugging Face Inference API.
- **Custom Models**: You can switch the model URL in `index.py` (the `HF_API_URL`) to any other Hugging Face model endpoint.
- **Environment Variables**: Uses a `.env` file to store sensitive tokens (e.g., `HF_TOKEN`).
- **Logging**: Uses Python's `logging` module to debug and monitor API behavior.
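To make the Inference API flow concrete, here is a minimal sketch of posting text to a hosted paraphrasing model. This is illustrative, not the project's actual `index.py` code: the model URL, the `HF_TOKEN` lookup, and the response shape are assumptions that vary by model.

```python
import os

import requests

# Hypothetical endpoint; swap in any hosted Hugging Face model you prefer.
HF_API_URL = "https://api-inference.huggingface.co/models/humarin/chatgpt_paraphraser_on_T5_base"


def paraphrase(text: str) -> str:
    """POST text to the Inference API and return the first generation."""
    headers = {"Authorization": f"Bearer {os.environ.get('HF_TOKEN', '')}"}
    resp = requests.post(HF_API_URL, headers=headers, json={"inputs": text}, timeout=30)
    resp.raise_for_status()
    # Text2text models on the Inference API typically return
    # [{"generated_text": "..."}]; the exact shape can vary by model.
    return resp.json()[0]["generated_text"]
```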
- Python 3.11+
- Hugging Face account & token
  - Generate a token from your Hugging Face profile settings.
  - Create a `.env` file and add `HF_TOKEN=your_huggingface_token_here`.
- **Clone the repository**

  ```bash
  git clone https://github.com/ADEMOLA200/Humanize-AI-Server.git
  cd Humanize-AI-Server/api
  ```

- **Create a virtual environment (recommended)**

  ```bash
  python -m venv .venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  ```

- **Install Python dependencies**

  ```bash
  pip install -r requirements.txt
  ```

- **Set up environment variables**

  - Create a `.env` file in the project root or `api` folder with:

    ```
    HF_TOKEN=your_huggingface_token_here
    ```

  - Make sure to keep this file secret or add it to your `.gitignore`.
- **(Optional) Download a local T5 model**

  - If you want to download and store a local T5 model (e.g., `t5-small`), run:

    ```bash
    python download_model.py
    ```

  - This will create a folder `./models/t5-small` with the tokenizer and model weights.
  - Note: The current Flask code in `index.py` uses the Hugging Face Inference API, so downloading the model locally is optional.
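A script like `download_model.py` typically boils down to fetching the checkpoint and tokenizer with the `transformers` library and saving them locally. The sketch below shows that pattern under assumptions (function name, folder layout); it is not the repository's actual script:

```python
def download_t5(model_name: str = "t5-small", out_dir: str = "./models/t5-small") -> None:
    """Fetch a T5 checkpoint and tokenizer and save them under out_dir."""
    # Imported lazily so the module loads even without transformers installed.
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained(model_name)
    model = T5ForConditionalGeneration.from_pretrained(model_name)
    tokenizer.save_pretrained(out_dir)
    model.save_pretrained(out_dir)


if __name__ == "__main__":
    download_t5()
```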
From inside the `api` directory, run:

```bash
python index.py
```

The Flask server will start on http://127.0.0.1:5001 by default (or on another port if you set the `PORT` environment variable).
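For orientation, a stripped-down sketch of the endpoints the server exposes is shown below. The route paths and response strings come from this README; the handler bodies are illustrative stubs (the real `/paraphrase` handler calls the Hugging Face Inference API instead of echoing the input):

```python
import os

from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/")
def health():
    return "Server operational!"


@app.route("/hello")
def hello():
    return "Hello, Vercel deployment works!"


@app.route("/paraphrase", methods=["POST"])
def paraphrase():
    data = request.get_json(silent=True) or {}
    text = data.get("text", "")
    if not text:
        return jsonify({"success": False, "error": "Missing 'text' field"}), 400
    # Stub: the real handler would send `text` to the Hugging Face
    # Inference API and return the generated paraphrase.
    return jsonify({"paraphrased": text, "success": True})


if __name__ == "__main__":
    app.run(port=int(os.environ.get("PORT", 5001)))
```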
- **Health Check**
  - `GET /`
  - Example: `curl http://127.0.0.1:5001/`
  - Response: `Server operational!`

- **Hello Test**
  - `GET /hello`
  - Example: `curl http://127.0.0.1:5001/hello`
  - Response: `Hello, Vercel deployment works!`

- **Paraphrase**
  - `POST /paraphrase`
  - Request body (JSON):

    ```json
    { "text": "Your text to paraphrase goes here." }
    ```

  - Response (JSON):

    ```json
    { "paraphrased": "The generated paraphrased text.", "success": true }
    ```
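Besides `curl`, the endpoint can be exercised from Python. A small sketch using the `requests` library, assuming the server is running locally on port 5001 (the helper name is hypothetical):

```python
import requests


def call_paraphrase(text: str, base_url: str = "http://127.0.0.1:5001") -> str:
    """POST to /paraphrase and return the paraphrased text."""
    resp = requests.post(f"{base_url}/paraphrase", json={"text": text}, timeout=30)
    resp.raise_for_status()
    return resp.json()["paraphrased"]


# Example (with the server running):
#   call_paraphrase("Your text to paraphrase goes here.")
```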
This project includes a `vercel.json` configuration for deploying a Python server.

- Install the Vercel CLI.
- Run `vercel` in the project root and follow the prompts.

Note: Python/Flask on Vercel typically uses the Serverless Functions approach. Make sure your `vercel.json` and project structure match Vercel's Python requirements.
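One common shape for such a `vercel.json` is shown below as a sketch; the repository's actual file may differ, so check it before deploying:

```json
{
  "builds": [{ "src": "index.py", "use": "@vercel/python" }],
  "routes": [{ "src": "/(.*)", "dest": "index.py" }]
}
```

Here `@vercel/python` builds the Flask app as a serverless function and the catch-all route forwards every request to it.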
- Fork the repository.
- Create a new branch for your feature or bug fix.
- Commit your changes with clear messages.
- Submit a Pull Request to the main branch.
This project is licensed under the MIT License.
For any questions or suggestions, please open an issue in the repository or contact [email protected].
This project is still under active development. The paraphrasing and “humanization” features may not be perfect, and the underlying model can be changed or improved at any time. Feel free to experiment with different Hugging Face models for better performance.