I developed a Telegram API-based app that receives messages from an input channel, translates them, and forwards them to an output channel. The app uses OpenAI GPT to translate the messages and the Telegram API for communication.
- How it works
- Project structure
- Requirements
- AWS Setup Overview
- EC2 Setup Overview
- Local Installation
- CI/CD Flow
- License
## How it works

- The app listens for incoming messages in a Telegram input channel.
- It translates the text using OpenAI GPT and sends it, along with any associated media, to an output channel.
- The app can also process album messages that contain multiple parts of a single message.
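Album handling hinges on the fact that Telegram delivers album parts as separate messages sharing a group ID (`grouped_id` in Telethon). A minimal sketch of collecting parts into groups before forwarding — the `Message` class and field names here are simplified stand-ins, not the project's actual types:

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    # Simplified stand-in for a Telegram message: real Telethon
    # messages carry a grouped_id when they belong to an album.
    id: int
    grouped_id: Optional[int]
    text: str

def collect_albums(messages: list[Message]) -> list[list[Message]]:
    """Group album parts by grouped_id; standalone messages become
    single-element groups so both cases can be handled uniformly."""
    albums: dict[int, list[Message]] = defaultdict(list)
    singles: list[list[Message]] = []
    for msg in messages:
        if msg.grouped_id is None:
            singles.append([msg])
        else:
            albums[msg.grouped_id].append(msg)
    return list(albums.values()) + singles

batch = [
    Message(1, 100, "part one"),
    Message(2, 100, "part two"),
    Message(3, None, "standalone"),
]
groups = collect_albums(batch)
print(len(groups))  # → 2
```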
## Project structure

```
.
├── main.py
├── src/
│   ├── config.py
│   ├── clients/
│   │   ├── openai.py
│   │   └── telegram.py
│   ├── services/
│   │   ├── messages.py
│   │   └── translation.py
│   ├── templates/
│   │   └── index.html
│   └── utils.py
```
## Requirements

- Python 3.13
- An OpenAI API key
- A Telegram API ID
- A Telegram API Hash
## AWS Setup Overview

This setup uses a single Availability Zone, one public subnet, and one EC2 instance.
It's definitely not production-ready: there's no high availability, no horizontal scaling, and everything sits in a public subnet.
However, the goal here was to keep things simple and easy to follow for training and learning purposes, especially with Terraform.
## EC2 Setup Overview

The Flask server is secured as well as possible for now. It runs on an open HTTP port 80 (not secure!), but access is restricted to the committing admin's IP address, which is provided as a variable during `terraform apply`.

Once the user successfully logs in via the Telegram API, the Flask server shuts down. The Telethon event loop then takes over, continuously handling incoming events and managing the interaction with the Telegram channels.
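The IP restriction described above would typically be expressed as a security-group ingress rule. A hypothetical Terraform fragment — the resource names and the `admin_ip` variable are illustrative, not the project's actual configuration:

```hcl
variable "admin_ip" {
  description = "The committing admin's IP, passed in during terraform apply"
  type        = string
}

resource "aws_security_group_rule" "flask_http" {
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["${var.admin_ip}/32"]    # only the admin's /32 may reach port 80
  security_group_id = aws_security_group.app.id # assumed security group resource
}
```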
## CI/CD Flow

- When the `main` branch of the GitHub repository is updated, a GitHub Action is triggered.
- The workflow zips the updated source code and uploads it to an S3 bucket.
- Upon detecting a new object in the S3 bucket, an AWS Lambda function is invoked.
- The Lambda function terminates the existing EC2 instance.
- The Auto Scaling Group launches a new EC2 instance automatically.
- During instance launch, the `user-data` script:
  - Downloads the source code from the S3 bucket
  - Builds a new Docker image
  - Starts the container
- Inside the container:
  - Dependencies are installed
  - `main.py` is executed
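The first two steps of this flow could be wired up with a workflow along these lines — the bucket name, region, and secret names are placeholders, not taken from the project:

```yaml
name: deploy
on:
  push:
    branches: [main]

jobs:
  upload:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: eu-central-1  # placeholder region
      - run: zip -r app.zip . -x ".git/*"
      - run: aws s3 cp app.zip s3://my-deploy-bucket/app.zip  # placeholder bucket
```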
The deployment process is fully automated up to the point where the application requires Telegram authentication.
Since Telegram sends a login code to an existing device, this step must be done manually.
To simplify that, a temporary Flask web server runs on the EC2 instance, allowing the user to enter the code. Once submitted, the server shuts down automatically.
Not secure for production – no authentication on the Flask form.
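The project uses Flask for this form, but the one-shot pattern itself — serve a form, accept a single code submission, then stop — can be illustrated with only the standard library. Handler and field names here are illustrative, not the project's actual code:

```python
import threading
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

submitted_code = None  # filled in by the single POST, after which the server stops

class CodeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Minimal login-code form, analogous in spirit to templates/index.html.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b'<form method="post"><input name="code"></form>')

    def do_POST(self):
        global submitted_code
        length = int(self.headers["Content-Length"])
        fields = urllib.parse.parse_qs(self.rfile.read(length).decode())
        submitted_code = fields["code"][0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
        # shutdown() blocks until the serve_forever loop exits, so it must be
        # called from another thread rather than inside this handler.
        threading.Thread(target=self.server.shutdown, daemon=True).start()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), CodeHandler)  # port 0: pick any free port
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

# Simulate the admin submitting the Telegram login code.
url = f"http://127.0.0.1:{server.server_port}"
data = urllib.parse.urlencode({"code": "12345"}).encode()
urllib.request.urlopen(url, data=data).read()
thread.join(timeout=5)   # server loop exits after the one submission
server.server_close()
print(submitted_code)  # → 12345
```

In the real deployment, control would pass to the Telethon event loop at this point instead of exiting.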
## Local Installation

Create a virtual environment to isolate dependencies:

```
python -m venv venv
```

Activate the virtual environment:

- Linux/macOS:

  ```
  source venv/bin/activate
  ```

- Windows:

  ```
  venv\Scripts\activate
  ```

Install all required dependencies defined in `requirements.txt`:

```
pip install -r requirements.txt
```

Create a `.env` file in the root directory and add your API keys and tokens:

```
OPENAI_API_KEY=<your-openai-api-key>
TELEGRAM_API_ID=<your-telegram-api-id>
TELEGRAM_API_HASH=<your-telegram-api-hash>
INPUT_CHANNEL=<your-input-channel-id>
OUTPUT_CHANNEL=<your-output-channel-id>
```

Start the app with the following command:

```
python main.py
```
After login, the app will begin listening for messages in the `INPUT_CHANNEL` and send them to the `OUTPUT_CHANNEL` after translation.
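For reference, the values in `.env` end up in the process environment. A minimal hand-rolled loader in the spirit of what `src/config.py` might do — the actual project may well use a library such as python-dotenv instead, so treat this as an assumption:

```python
import os

def load_env(path: str = ".env") -> None:
    """Read KEY=VALUE lines into os.environ, skipping blanks and comments.
    Variables already present in the environment are left untouched."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Example: write a throwaway env file and load it.
with open("demo.env", "w") as fh:
    fh.write("TELEGRAM_API_ID=12345\n# a comment\nINPUT_CHANNEL=@source\n")
load_env("demo.env")
print(os.environ["TELEGRAM_API_ID"])  # → 12345
```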
## License

This project is licensed under the MIT License.