An automated Twitter (X) assistant that fetches niche news, distills each article into tweet-sized summaries, and posts the tweets on your behalf. You can see it in action on my X account, seabasszealot, which is run by this bot.
- Automated posting: A GitHub Actions workflow schedules posts roughly hourly (personal choice). You can tune this by changing `TWEET_COUNT` in `config.py` and updating the cron expression in `tweet.yml`.
- News Fetching: Queries NewsAPI with configurable keywords and normalizes article metadata.
- AI Summarisation: Uses Hugging Face's BART pipeline to produce concise tweet candidates and score them by keyword relevance.
- Hands-Off Posting: Logs on your behalf into your X (Twitter) account and posts the selected tweets in sequence.
- Local Persistence: Stores the daily batch as well as the long-term tweet history on disk for auditing.
- Duplicate filtering: Every posted tweet is unique. X (Twitter) penalises repeated content, so filtering duplicates keeps the account from being flagged as a bot and losing the ability to tweet.
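The hourly schedule mentioned above lives in the workflow file. As a rough sketch (the job and step names below are assumptions; only the `schedule` block matters), `tweet.yml` might contain something like:

```yaml
# Hypothetical excerpt of .github/workflows/tweet.yml — adjust the cron
# expression (UTC) to change how often the bot posts.
on:
  schedule:
    - cron: "0 * * * *"   # top of every hour
  workflow_dispatch: {}    # also allow manual runs from the Actions tab
```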
```
[Scheduler / manual run]
          |
          v
main.py ─▶ fetcher.fetch_news() ─▶ summarizer.summarize_article()
  |                                        |
  |                                        └─▶ score_summary()
  ├─▶ storage.filter_duplicates()
  ├─▶ storage.save_daily_tweets()
  ├─▶ storage.save_tweets_to_history()
  └─▶ tweeter.tweet_daily()
```
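The flow in the diagram can be sketched end to end in a few lines. This is a hedged, self-contained stand-in: the function names mirror the diagram, but every body below is a simplified stub, not the project's actual code.

```python
# Minimal sketch of the main.py orchestration; all bodies are stand-ins.
TWEET_COUNT = 2  # assumed config value

def fetch_news():
    # stand-in for fetcher.fetch_news(): title/description/URL triples
    return [("Title A", "Desc A", "https://example.com/a"),
            ("Title B", "Desc B", "https://example.com/b"),
            ("Title C", "Desc C", "https://example.com/c")]

def summarize_article(article):
    # stand-in for summarizer.summarize_article(); the real one calls BART
    title, description, url = article
    return f"{title}: {description} {url}"

def score_summary(summary):
    # stand-in for summarizer.score_summary(): keyword-hit count
    return summary.lower().count("desc")

def run_once(history):
    candidates = [summarize_article(a) for a in fetch_news()]
    candidates.sort(key=score_summary, reverse=True)
    fresh = [c for c in candidates if c not in history]  # filter_duplicates
    batch = fresh[:TWEET_COUNT]
    history.extend(batch)  # save_tweets_to_history
    return batch           # in the real bot, the batch is saved and tweeted
```

Running `run_once` repeatedly against the same history list shows the dedup behaviour: each tweet is posted at most once.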
- `config/config.py` – Lists the niche keywords, article cap, and daily tweet count.
- `fetcher.py` – Builds the NewsAPI query and extracts title/description/URL triples.
- `summarizer.py` – Loads the BART summarisation pipeline and scores summaries by keyword hits.
- `storage.py` – Persists tweet history (`data/tweets.txt`) and the current batch (`data/daily_tweets.txt`).
- `tweeter.py` – Wraps `tweety.TweetClient` to log in and post each tweet with basic error handling.
- `helper.py` – Optional console helper for printing fetched articles during debugging.
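The "scores summaries by keyword hits" step might look roughly like the sketch below (a minimal assumption-based version; the real `score_summary` signature and matching rules may differ):

```python
def score_summary(summary, keywords):
    # Hypothetical sketch: rank a tweet candidate by how many of the
    # configured niche keywords appear in it (case-insensitive, whole words).
    words = summary.lower().split()
    return sum(words.count(k.lower()) for k in keywords)
```

A higher score means more keyword overlap, so sorting candidates by this value surfaces the most on-niche summaries first.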
```
ai-twitter-bot/
├── architecture.txt     # Textual architecture overview
├── config/
│   └── config.py        # Keyword and limit configuration
├── data/
│   ├── daily_tweets.txt # Current cycle's tweets
│   └── tweets.txt       # Long-term tweet history
├── fetcher.py           # NewsAPI client
├── helper.py            # Debug print helpers
├── main.py              # Orchestrates the end-to-end workflow
├── requirements.txt     # Python dependencies
├── storage.py           # Disk persistence and deduplication helpers
├── summarizer.py        # Hugging Face summarisation pipeline
├── tweeter.py           # X/Twitter posting utilities
└── README.md
```
- Clone the repo:
  ```
  git clone https://github.com/ThePhoenix77/ai-twitter-bot.git
  cd ai-twitter-bot
  ```
- Install dependencies:
  ```
  python -m venv .venv
  source .venv/bin/activate
  pip install -r requirements.txt
  ```
- Create a `.env` file in the project root:
  ```
  NEWS_API_KEY=your_newsapi_key
  X_API_KEY=your_x_api_key
  X_API_KEY_SECRET=your_x_api_key_secret
  ACCESS_TOKEN=your_x_access_token
  ACCESS_TOKEN_SECRET=your_x_access_token_secret
  ```
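Once those values are in the environment (e.g. loaded from `.env` via python-dotenv), a small helper can fail fast if anything is missing. This is a hypothetical sketch, not part of the project's code:

```python
import os

# Keys match the .env entries above; the helper itself is an assumption.
REQUIRED_KEYS = ("NEWS_API_KEY", "X_API_KEY", "X_API_KEY_SECRET",
                 "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

def load_credentials():
    """Return all required credentials, raising if any are unset."""
    missing = [k for k in REQUIRED_KEYS if not os.getenv(k)]
    if missing:
        raise RuntimeError(f"Missing credentials: {', '.join(missing)}")
    return {k: os.environ[k] for k in REQUIRED_KEYS}
```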
- Note: `summarizer.py` will download the BART weights the first time it runs; keep the environment active until it completes.
- Review the configuration in `config/config.py` to adjust keywords, the fetch limit, or the number of tweets to publish per run.
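Based on the description above, `config/config.py` presumably has roughly this shape (`TWEET_COUNT` is named earlier in this README; the other names and values here are assumptions):

```python
# Hypothetical sketch of config/config.py — adjust to your niche.
KEYWORDS = ["artificial intelligence", "machine learning"]  # niche keywords
MAX_ARTICLES = 20  # article cap per NewsAPI fetch (name assumed)
TWEET_COUNT = 3    # tweets published per run
```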
- Dry run (no posting): Comment out the `tweet_daily` call in `main.py` to inspect the summaries first.
- Full run:
  ```
  python3.11 main.py
  ```
  The script fetches articles, prints the top-scoring summaries, saves them under `data/`, and posts any tweets that are not yet in the history file.
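The "not yet in the history file" check could be sketched as follows (a minimal sketch; the real `storage.filter_duplicates` signature may differ):

```python
def filter_duplicates(candidates, history_path="data/tweets.txt"):
    # Drop any candidate that already appears, line for line, in the
    # long-term history file; a missing file means everything is fresh.
    try:
        with open(history_path, encoding="utf-8") as f:
            seen = {line.strip() for line in f if line.strip()}
    except FileNotFoundError:
        seen = set()
    return [t for t in candidates if t.strip() not in seen]
```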
- Adding semantic similarity checks (e.g., embeddings) instead of keyword scoring alone.
- Expanding to multiple niches by parameterising the configuration or loading from external files.
- Introducing richer logging or notifications for failures.
Pull requests are welcome. Please run your changes locally and make sure `python main.py` completes without errors.
MIT License.