A comprehensive Flask-based intelligence briefing system that collects and analyzes news from multiple sources including RSS feeds, Reddit, and Google Trends, with AI-powered analysis capabilities.
## Features

### Multi-Source Data Collection
- RSS feed aggregation from AI, science, and international news sources
- Reddit post collection from relevant subreddits
- Google Trends monitoring for trending topics
- Automated collection scheduling with APScheduler (see the sketch below)
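
As a rough illustration of the scheduling piece, APScheduler's `BackgroundScheduler` can run collectors on a fixed interval. The function and wiring below are an illustrative sketch, not the repository's actual code:

```python
# Illustrative sketch only; the app wires its real collectors in the factory.
from apscheduler.schedulers.background import BackgroundScheduler

def collect_rss():
    # Placeholder for the real RSS collector
    print("collecting RSS feeds...")

scheduler = BackgroundScheduler()
# Matches the RSS_COLLECTION_INTERVAL=30 setting shown under Configuration
scheduler.add_job(collect_rss, "interval", minutes=30, id="rss_collection")
scheduler.start()
```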
### AI-Powered Analysis
- Content quality scoring with DeepSeek and Claude APIs
- Article summarization and key insights extraction
- Trend synthesis and pattern recognition
- Alert prioritization and threat assessment
### Web Dashboard
- Bootstrap 5 responsive interface
- Real-time statistics and visualizations
- Article categorization and search
- Source health monitoring
### Production-Ready
- Comprehensive logging and monitoring
- Security hardening and rate limiting
- Docker containerization
- Database migrations and backups
- Performance optimization with caching
## Table of Contents

- Quick Start
- Installation
- Configuration
- Development
- Production Deployment
- API Documentation
- Contributing
- License
## Quick Start

### Prerequisites

- Python 3.11+
- Docker and Docker Compose (for production deployment)
- PostgreSQL (for production) or SQLite (for development)
1. **Clone the repository**

   ```bash
   git clone <repository-url>
   cd intel-brief
   ```
2. **Create a virtual environment**

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```
3. **Install dependencies**

   ```bash
   pip install -r requirements.txt
   ```
4. **Set up environment variables**

   ```bash
   cp .env.example .env
   # Edit .env with your configuration
   ```
5. **Initialize the database**

   ```bash
   flask db init
   flask db migrate -m "Initial migration"
   flask db upgrade
   python app.py seed-db
   ```
6. **Run the application**

   ```bash
   python app.py
   # Or: flask run --port 5000
   ```
7. **Access the application**

   - Main dashboard: http://localhost:5000
   - Health check: http://localhost:5000/health/check
## Installation

### System Requirements

- Hardware: 2+ CPU cores, 4GB+ RAM, 10GB+ storage
- Software: Python 3.11+, PostgreSQL 13+, Redis 6+ (optional)
- Network: Internet access for data collection
### System Dependencies (Ubuntu/Debian)

```bash
sudo apt update
sudo apt install python3.11 python3.11-venv python3.11-dev
sudo apt install postgresql postgresql-contrib redis-server
sudo apt install build-essential libpq-dev
```
### Database Setup

```bash
sudo -u postgres createuser --interactive intel_brief
sudo -u postgres createdb intel_brief_db -O intel_brief
```
### Python Environment

```bash
python3.11 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install -r requirements.txt
```
## Configuration

### Environment Variables

The system uses environment variables for configuration. Copy `.env.example` to `.env` and modify:

```bash
# Flask Configuration
FLASK_ENV=development
SECRET_KEY=your-secret-key-change-this-in-production
# Database Configuration
DATABASE_URL=sqlite:///intelligence_brief.db
# For PostgreSQL: postgresql://user:password@localhost/dbname
# AI API Keys (Optional - system works without them)
DEEPSEEK_API_KEY=your-deepseek-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
# Reddit API Configuration (Optional)
REDDIT_CLIENT_ID=your-reddit-client-id
REDDIT_CLIENT_SECRET=your-reddit-client-secret
REDDIT_USER_AGENT=IntelligenceBriefing/1.0
# Data Collection Intervals (minutes)
RSS_COLLECTION_INTERVAL=30
REDDIT_COLLECTION_INTERVAL=60
TRENDS_COLLECTION_INTERVAL=120
# AI Processing Configuration
ENABLE_AI_AGENTS=true
MAX_ARTICLES_PER_BATCH=50
AI_RETRY_COUNT=3
AI_TIMEOUT=30
# Security Configuration
RATE_LIMIT_ENABLED=true
RATE_LIMIT_PER_MINUTE=100
# Logging Configuration
LOG_LEVEL=INFO
LOG_FILE=logs/app.log
LOG_TO_FILE=true
# Monitoring Configuration
HEALTH_CHECK_ENABLED=true
METRICS_ENABLED=true
```
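
For illustration, `config.py` might load these values along the following lines (a sketch assuming python-dotenv; the project's actual config file may differ):

```python
# Hypothetical excerpt: how config.py might read the .env values above
import os
from dotenv import load_dotenv

load_dotenv()  # read .env into the process environment

SECRET_KEY = os.environ.get("SECRET_KEY", "dev-only-change-me")
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///intelligence_brief.db")
RSS_COLLECTION_INTERVAL = int(os.environ.get("RSS_COLLECTION_INTERVAL", "30"))
ENABLE_AI_AGENTS = os.environ.get("ENABLE_AI_AGENTS", "true").lower() == "true"
```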
### Data Sources

The system comes pre-configured with high-quality sources; an illustrative collection sketch follows each list.

#### RSS Feeds

AI & Technology:
- Anthropic Blog
- Simon Willison's Blog
- OpenAI Blog
- MIT Technology Review AI
Science:
- Nature News
- Science Daily
International Relations:
- Foreign Affairs
- Council on Foreign Relations
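
Collection from feeds like these is typically a few lines with feedparser; the sketch below is illustrative, and the feed URL is a placeholder (the real logic lives in `app/services/rss_collector.py` and may differ):

```python
# Illustrative RSS collection using feedparser (pip install feedparser)
import feedparser

def fetch_entries(feed_url, limit=10):
    """Parse a feed and return (title, link) pairs for the newest entries."""
    parsed = feedparser.parse(feed_url)
    return [(e.title, e.link) for e in parsed.entries[:limit]]

# Example usage with a hypothetical feed URL
for title, link in fetch_entries("https://example.com/feed.xml"):
    print(title, link)
```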
#### Reddit Subreddits

Monitored subreddits include:
- AI/ML: MachineLearning, artificial, singularity, LocalLLaMA
- Science: science, technology, Futurology, datascience
- International: worldnews, geopolitics, europe, UkrainianConflict
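
The Reddit credentials from the Configuration section map onto a PRAW client; a minimal read-only sketch (an assumption about the library choice; see `app/services/reddit_collector.py` for the actual collector):

```python
# Illustrative Reddit collection using PRAW (pip install praw)
import os
import praw

reddit = praw.Reddit(
    client_id=os.environ["REDDIT_CLIENT_ID"],
    client_secret=os.environ["REDDIT_CLIENT_SECRET"],
    user_agent=os.environ.get("REDDIT_USER_AGENT", "IntelligenceBriefing/1.0"),
)

# Read-only: grab the current hot posts from one monitored subreddit
for post in reddit.subreddit("MachineLearning").hot(limit=5):
    print(post.score, post.title)
```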
#### Google Trends Keywords

- AI/Technology: artificial intelligence, machine learning, ChatGPT, GPT-4, neural networks
- Science: climate change, quantum computing, biotechnology, space exploration, renewable energy
- International: NATO, European Union, China trade, cybersecurity, sanctions
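
Google Trends has no official API, so collectors commonly use the unofficial pytrends library; the sketch below is an assumption about the approach (see `app/services/trends_collector.py` for what the project actually does):

```python
# Illustrative Google Trends lookup using pytrends (pip install pytrends)
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=0)
# Up to five keywords per payload
pytrends.build_payload(kw_list=["artificial intelligence", "quantum computing"])
interest = pytrends.interest_over_time()  # pandas DataFrame indexed by date
print(interest.tail())
```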
## Development

### Development Commands

```bash
# Database operations
flask db init # Initialize migration repository
flask db migrate -m "message" # Create new migration
flask db upgrade # Apply migrations
flask db downgrade # Rollback migrations
# Application commands
python app.py seed-db # Seed database with initial data
python app.py collect-rss # Manual RSS collection
python app.py collect-reddit # Manual Reddit collection
python app.py collect-trends # Manual Google Trends collection
python app.py process-ai # Manual AI processing
# Testing
pytest # Run all tests
pytest tests/test_app.py # Run specific test file
pytest --cov=app                 # Run with coverage
```

### Project Structure

```
intel-brief/
├── app/
│   ├── __init__.py              # Application factory
│   ├── models.py                # Database models
│   ├── routes.py                # Web routes and API endpoints
│   ├── services/                # Business logic
│   │   ├── rss_collector.py     # RSS feed collection
│   │   ├── reddit_collector.py  # Reddit API integration
│   │   ├── trends_collector.py  # Google Trends collection
│   │   ├── ai_agents.py         # AI processing agents
│   │   └── ai_pipeline.py       # AI processing pipeline
│   ├── templates/               # Jinja2 templates
│   │   ├── base.html            # Base template
│   │   ├── dashboard.html       # Main dashboard
│   │   ├── category.html        # Category pages
│   │   └── article_detail.html  # Article detail
│   └── utils/                   # Utilities
│       ├── logging_config.py    # Logging configuration
│       ├── monitoring.py        # Health checks and metrics
│       ├── security.py          # Security middleware
│       └── cache.py             # Caching utilities
├── config/
│   └── production.py            # Production configuration
├── migrations/                  # Database migrations
├── tests/                       # Test suite
├── logs/                        # Application logs
├── app.py                       # Application entry point
├── config.py                    # Configuration
├── requirements.txt             # Python dependencies
├── Dockerfile                   # Docker configuration
├── docker-compose.yml           # Docker Compose setup
├── deploy.sh                    # Deployment script
└── README.md                    # This file
```
### Adding New Features

#### New Data Source

1. Create a collector in `app/services/` (see the sketch after this list)
2. Add a database model in `app/models.py`
3. Create a migration: `flask db migrate`
4. Add a route in `app/routes.py`
5. Update the dashboard template
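
A hypothetical skeleton for step 1, with placeholder names; the persistence details depend on the model you add in step 2:

```python
# Hypothetical skeleton: app/services/example_collector.py
from datetime import datetime, timezone

class ExampleCollector:
    """Fetches items from a new source; persistence is left to the caller."""

    def __init__(self, source_url):
        self.source_url = source_url

    def fetch(self):
        """Return a list of {'title': ..., 'url': ...} dicts from the source."""
        raise NotImplementedError

    def collect(self):
        items = self.fetch()
        # In the real collector, create model instances here and commit
        # them through the app's SQLAlchemy session.
        return {
            "collected": len(items),
            "at": datetime.now(timezone.utc).isoformat(),
        }
```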
#### New AI Agent

1. Add an agent class in `app/services/ai_agents.py` (see the sketch after this list)
2. Update the pipeline in `app/services/ai_pipeline.py`
3. Add performance tracking
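
A sketch of what a new agent might look like, using the Anthropic SDK as an example backend; the class shape and model name are assumptions, so match the interface of the existing agents in `app/services/ai_agents.py`:

```python
# Hypothetical agent sketch (pip install anthropic)
import os
import anthropic

class SummaryAgent:
    """Summarizes an article body into a short briefing paragraph."""

    def __init__(self, model="claude-3-5-haiku-latest"):  # model name is illustrative
        self.client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
        self.model = model

    def run(self, article_text):
        message = self.client.messages.create(
            model=self.model,
            max_tokens=300,
            messages=[{
                "role": "user",
                "content": f"Summarize for an intelligence brief:\n\n{article_text}",
            }],
        )
        return message.content[0].text
```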
#### New API Endpoint

1. Add a route in `app/routes.py` (see the sketch after this list)
2. Add error handling and logging
3. Apply security decorators
4. Write tests
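
A minimal sketch of steps 1-3, assuming a blueprint-style route; the names are illustrative:

```python
# Hypothetical endpoint added to app/routes.py
from flask import Blueprint, current_app, jsonify

bp = Blueprint("example", __name__)

@bp.route("/api/example", methods=["GET"])
def example_endpoint():
    try:
        # Real handlers would also apply the project's rate-limit /
        # security decorators from app/utils/security.py here.
        return jsonify({"status": "ok"}), 200
    except Exception:
        current_app.logger.exception("example_endpoint failed")
        return jsonify({"error": "internal server error"}), 500
```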
## Production Deployment

### Deploy on Render

1. **Prepare the repository**

   ```bash
   # Push your code to GitHub
   git add .
   git commit -m "Prepare for Render deployment"
   git push origin main
   ```
2. **Deploy on Render**

   - Go to the Render Dashboard
   - Click "New" → "Blueprint"
   - Connect your GitHub repository
   - Render will automatically detect `render.yaml` and deploy
3. **Add environment variables (optional)**

   ```bash
   # In the Render dashboard, add these if needed:
   DEEPSEEK_API_KEY=your-deepseek-api-key
   ANTHROPIC_API_KEY=your-anthropic-api-key
   REDDIT_CLIENT_ID=your-reddit-client-id
   REDDIT_CLIENT_SECRET=your-reddit-client-secret
   ```
4. **Access your app**

   - Render provides a URL like `https://intel-brief-app.onrender.com`
   - Health check: `https://your-app.onrender.com/health/check`
### Docker Deployment

1. **Prepare the environment**

   ```bash
   # Clone the repository
   git clone <repository-url>
   cd intel-brief

   # Copy and configure the environment
   cp .env.example .env
   # Edit .env with production values
   ```
2. **Deploy with the script**

   ```bash
   chmod +x deploy.sh
   ./deploy.sh production
   ```
3. **Manual Docker deployment**

   ```bash
   # Build and start services
   docker-compose up -d

   # Run migrations
   docker-compose exec app flask db upgrade

   # Seed the database
   docker-compose exec app python app.py seed-db

   # Check health
   curl http://localhost:5000/health/check
   ```
### VPS Deployment

1. **Server setup**

   ```bash
   # Install dependencies
   sudo apt update
   sudo apt install python3.11 python3.11-venv nginx postgresql redis-server

   # Create an application user
   sudo useradd -m -s /bin/bash intel-brief
   sudo su - intel-brief
   ```
2. **Application setup**

   ```bash
   # Clone and set up
   git clone <repository-url> app
   cd app
   python3.11 -m venv venv
   source venv/bin/activate
   pip install -r requirements.txt

   # Configure the environment
   cp .env.example .env
   # Edit .env with production settings

   # Set up the database
   flask db upgrade
   python app.py seed-db
   ```
3. **Service configuration**

   ```bash
   # Create a systemd service
   sudo nano /etc/systemd/system/intel-brief.service
   ```

   ```ini
   [Unit]
   Description=Intelligence Briefing System
   After=network.target

   [Service]
   Type=exec
   User=intel-brief
   WorkingDirectory=/home/intel-brief/app
   Environment=PATH=/home/intel-brief/app/venv/bin
   ExecStart=/home/intel-brief/app/venv/bin/gunicorn --bind unix:/home/intel-brief/app/intel-brief.sock --workers 4 app:app
   Restart=always

   [Install]
   WantedBy=multi-user.target
   ```
4. **Nginx configuration**

   ```nginx
   server {
       listen 80;
       server_name your-domain.com;

       location / {
           proxy_pass http://unix:/home/intel-brief/app/intel-brief.sock;
           proxy_set_header Host $host;
           proxy_set_header X-Real-IP $remote_addr;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header X-Forwarded-Proto $scheme;
       }
   }
   ```
### Monitoring

1. **Health monitoring**

   ```bash
   # Check application health
   curl http://localhost:5000/health/detailed

   # View logs
   docker-compose logs -f app
   # Or: sudo journalctl -u intel-brief -f
   ```
2. **Database backup**

   ```bash
   # Docker deployment
   docker-compose exec db pg_dump -U intel_brief intel_brief_db > backup_$(date +%Y%m%d).sql

   # VPS deployment
   pg_dump -U intel_brief intel_brief_db > backup_$(date +%Y%m%d).sql
   ```
3. **Performance monitoring**

   - Prometheus metrics: `http://localhost:5000/health/metrics`
   - System metrics via the monitoring endpoint
   - Log analysis in the `logs/` directory
## API Documentation

### Health Endpoints

- `GET /health/check` - Basic health check
- `GET /health/detailed` - Detailed health with metrics
- `GET /health/metrics` - Prometheus-compatible metrics

### Collection Endpoints

- `POST /api/collect-rss` - Trigger RSS collection
- `POST /api/collect-reddit` - Trigger Reddit collection
- `POST /api/collect-trends` - Trigger Google Trends collection
- `POST /api/process-ai-pipeline` - Trigger AI processing

### Data Endpoints

- `GET /api/stats` - Application statistics
- `GET /` - Main dashboard
- `GET /ai` - AI news section
- `GET /science` - Science news section
- `GET /international` - International relations section
- `GET /article/<id>` - Article detail page

### Interaction Endpoints

- `POST /api/feedback` - Submit user feedback
- `POST /api/mark-alert-read/<id>` - Mark an alert as read
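
For example, a collection run can be triggered and verified from the command line:

```bash
# Trigger an RSS collection run, then check aggregate stats
curl -X POST http://localhost:5000/api/collect-rss
curl http://localhost:5000/api/stats
```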
## Testing

### Running Tests

```bash
# Install test dependencies
pip install pytest pytest-flask pytest-cov

# Run all tests
pytest

# Run with coverage
pytest --cov=app --cov-report=html

# Run specific test categories
pytest tests/test_app.py         # Application tests
pytest tests/test_models.py      # Model tests
pytest tests/test_collectors.py  # Collector tests
```

### Test Categories

- Unit Tests - Individual component testing
- Integration Tests - Component interaction testing
- API Tests - Endpoint functionality testing
- Model Tests - Database model testing
### Writing Tests

```python
def test_new_feature(client, app):
    """Test new feature functionality"""
    with app.app_context():
        # Test implementation
        response = client.get('/new-endpoint')
        assert response.status_code == 200
```

## Security

### Security Features

- Authentication & Authorization - Session-based security
- Input Validation - SQL injection and XSS prevention
- Rate Limiting - API abuse protection
- Security Headers - CSRF, XSS, clickjacking protection (see the sketch after this list)
- HTTPS Enforcement - TLS/SSL in production
- Content Security Policy - Script injection prevention
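
As an illustration of the security-headers item, middleware along these lines can attach protective headers to every response (the project's actual implementation lives in `app/utils/security.py` and may differ):

```python
# Illustrative sketch of response-header hardening in Flask
def apply_security_headers(app):
    @app.after_request
    def set_headers(response):
        response.headers["X-Content-Type-Options"] = "nosniff"
        response.headers["X-Frame-Options"] = "DENY"  # clickjacking protection
        response.headers["Content-Security-Policy"] = "default-src 'self'"
        return response
    return app
```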
### Security Configuration

```python
# Security settings in production.py
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_HTTPONLY = True
SESSION_COOKIE_SAMESITE = 'Lax'
FORCE_HTTPS = True
RATE_LIMIT_ENABLED = True
```

### Security Best Practices

- Environment Variables - Never commit secrets to code
- Regular Updates - Keep dependencies updated
- Access Control - Limit API access with keys
- Monitoring - Log security events
- Backups - Regular encrypted backups
## Troubleshooting

### Common Issues

1. **Database connection errors**

   ```bash
   # Check database status
   docker-compose ps db
   # Or: sudo systemctl status postgresql

   # Check the connection
   psql -h localhost -U intel_brief -d intel_brief_db
   ```
2. **API key issues**

   ```bash
   # Check environment variables
   echo $ANTHROPIC_API_KEY

   # Test the API key (lists available models)
   curl https://api.anthropic.com/v1/models \
        -H "x-api-key: $ANTHROPIC_API_KEY" \
        -H "anthropic-version: 2023-06-01"
   ```
3. **Collection not working**

   ```bash
   # Check logs
   docker-compose logs app | grep collector

   # Manual test
   docker-compose exec app python app.py collect-rss
   ```
4. **High memory usage**

   ```bash
   # Check system resources
   docker stats

   # Clear the cache
   docker-compose exec app python -c "from app.utils.cache import cache; cache.clear()"
   ```
### Log Files

```bash
# Application logs
tail -f logs/app.log

# Error logs
tail -f logs/error.log

# Collection logs
tail -f logs/collection.log

# AI processing logs
tail -f logs/ai_processing.log
```

## Performance Optimization

### Database Optimization
- Regular VACUUM and ANALYZE
- Proper indexing
- Connection pooling (see the sketch below)
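
With Flask-SQLAlchemy, pooling and indexing can be expressed roughly as follows (values and column names are illustrative):

```python
# Illustrative pooling / indexing settings (Flask-SQLAlchemy)
SQLALCHEMY_ENGINE_OPTIONS = {
    "pool_size": 10,        # persistent connections kept open
    "max_overflow": 5,      # extra connections under burst load
    "pool_pre_ping": True,  # drop dead connections before use
}

# In a model definition, index the columns the dashboard filters on, e.g.:
# published_at = db.Column(db.DateTime, index=True)
```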
### Caching Strategy
- Dashboard statistics caching
- Article list caching
- Source health caching
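
The cache utility lives in `app/utils/cache.py` (the Troubleshooting section clears it with `cache.clear()`); below is a minimal TTL-cache sketch of the idea, not the actual implementation:

```python
# Minimal TTL cache sketch; the real utility is app/utils/cache.py
import time

class TTLCache:
    def __init__(self):
        self._store = {}

    def get(self, key):
        value, expires = self._store.get(key, (None, 0))
        return value if time.time() < expires else None

    def set(self, key, value, ttl=300):
        self._store[key] = (value, time.time() + ttl)

    def clear(self):
        self._store.clear()

cache = TTLCache()
```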
### Resource Monitoring
- CPU and memory usage
- Database performance
- API response times
## Maintenance

### Regular Updates

1. **Back up data**

   ```bash
   ./deploy.sh backup
   ```
2. **Pull updates**

   ```bash
   git pull origin main
   ```
3. **Update dependencies**

   ```bash
   pip install -r requirements.txt
   ```
4. **Run migrations**

   ```bash
   flask db upgrade
   ```
5. **Restart services**

   ```bash
   docker-compose restart
   ```
### Maintenance Schedule

- Daily: Check logs and health status
- Weekly: Review system metrics and performance
- Monthly: Update dependencies and security patches
- Quarterly: Full system backup and disaster recovery test
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
### Development Guidelines

- Follow PEP 8 style guidelines
- Write comprehensive tests
- Document new features
- Update README for significant changes
## Support

For support and questions:
- Check the documentation
- Search existing issues
- Create a new issue with details
Intelligence Briefing System - Keeping you informed with AI-powered intelligence gathering and analysis.