Recon Command Center is a single-file orchestrator for common reconnaissance pipelines. It runs traditional subdomain enumeration tools, probes discovered hosts, and executes vulnerability scanning workflows while presenting live progress in a rich web UI.
- Full pipeline automation – Amass/Subfinder/Assetfinder/Findomain/Sublist3r feed ffuf, httpx, screenshot capture, nuclei, and nikto in one go.
- User authentication & management – Secure login system with admin and regular user roles. Create, edit, and delete users through the web UI. See USER_MANAGEMENT_AND_HTTPS.md for details.
- HTTPS support – Serve the web UI over HTTPS with automatic self-signed certificate generation or bring your own certificates. See USER_MANAGEMENT_AND_HTTPS.md for details.
- Stateful & resumable – Results live in
recon_data/state.json, so re-running a target picks up exactly where it left off. Jobs can be paused/resumed live. - Persistent job reports – Completed scan reports remain visible in the dashboard with completion timestamps. All job history persists across restarts in
recon_data/completed_jobs.json. - Live dashboard – A modern SPA served from
main.pytracks jobs, queue, worker slots, tool availability, and detailed per-program reports. - System resource monitoring – Real-time monitoring of CPU, memory, disk, and network usage with automatic warnings when thresholds are exceeded. Helps ensure the system isn't overwhelmed.
- System Logs – Dedicated logs view with advanced filtering (by source, level, text search) and sorting. Filter preferences persist between reloads.
- Automatic file cleanup – Automatically removes old temporary files, scan results, and backups to keep disk usage under control. Configurable retention periods for different file types. See CLEANUP_AND_PAGINATION.md for details.
- Actionable reports – Each target gets a dedicated page with sortable/filterable tables, paginated views, per-tool sections, command history, severity badges, and a progress overview.
- Screenshots gallery with pagination – Browse large collections of screenshots with pagination controls (configurable page size, top/bottom navigation). See CLEANUP_AND_PAGINATION.md for details.
- Command history & exports – Every command executed is logged; you can export JSON or CSV snapshots at any time.
- Monitors – Point the UI at a URL serving a newline-delimited list of targets (wildcards such as `*.corp.com` or `corp.*` are supported). The monitor polls the file, launches new jobs when entries appear, and surfaces health/status in its own tab.
- Concurrency controls – Configure max running jobs and per-tool worker caps so scans behave on your box.
- Auto-install helpers – Best-effort installers kick in when a required tool is missing.
- Docker support – Multi-platform Docker container with all tools pre-installed. Works on Linux (amd64, arm64, armv7).
Additional highlights:

- Dynamic queue management that adapts to your machine
- Automatic backups, plus backup and restore
- Enter a domain or wildcard domain
- Get a detailed report
- A screenshot gallery
- Full logging and monitoring
- System monitoring so you can avoid overloading your machine
- Pipeline of your favourite tools
- Flow overview
- Detailed subdomain pages
- Add your own flags to the tools
- Endpoint enumeration

When you run subScraper for the first time, an interactive setup wizard will guide you through configuring the essential settings:
```bash
# Install dependencies
pip3 install -r requirements.txt

# First run - the setup wizard launches automatically
python3 main.py
```

The setup wizard will:
- Configure basic settings (wordlist path, concurrent jobs, nikto preferences)
- Set up API keys for tools like Amass and Subfinder (optional but recommended)
- Create configuration files for all tools
- Display clear next steps to get started
Skip Setup (Not Recommended):
```bash
# Skip the setup wizard (you can configure later via the web UI)
python3 main.py --skip-setup
```

```bash
# After setup, launch the web UI (default: http://0.0.0.0:8342)
python3 main.py

# Launch with HTTPS (auto-generates a self-signed certificate)
python3 main.py --https

# Launch with a custom SSL certificate
python3 main.py --https --cert /path/to/cert.pem --key /path/to/key.pem

# Run a one-off target directly from the CLI
python3 main.py example.com --wordlist ./w.txt --skip-nikto

# Wildcards are supported
python3 main.py 'acme.*'              # expands using Settings ➜ wildcard TLDs
python3 main.py '*.apps.acme.com'
```
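Wildcard targets like `acme.*` are expanded internally against the TLD list configured under Settings. The exact logic lives in `main.py`; the snippet below is only a rough sketch of the idea, with a hypothetical `WILDCARD_TLDS` list standing in for the configured values:

```python
# Illustrative only: how a wildcard such as "acme.*" could expand into concrete
# targets using a configurable TLD list. Names here are hypothetical, not the
# actual implementation in main.py.
WILDCARD_TLDS = ["com", "net", "org", "io"]  # normally taken from Settings

def expand_target(target: str, tlds=WILDCARD_TLDS) -> list[str]:
    """Expand 'acme.*' into one target per TLD; leave other patterns alone."""
    if target.endswith(".*"):
        base = target[:-2]
        return [f"{base}.{tld}" for tld in tlds]
    return [target]

print(expand_target("acme.*"))           # ['acme.com', 'acme.net', 'acme.org', 'acme.io']
print(expand_target("*.apps.acme.com"))  # passed through for the enumeration tools
```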
The easiest way to get started with all tools pre-installed:

**Option 1: Docker Compose (Recommended)**
```bash
# Start the service
docker-compose up -d

# View logs
docker-compose logs -f

# Access the web interface at http://localhost:8342
```

**Option 2: Docker CLI**
```bash
# Build the container (see DOCKER_BUILD.md for Mac-specific instructions)
docker build -t subscraper:latest .

# Run the container
docker run -d \
  --name subscraper \
  -p 8342:8342 \
  -v $(pwd)/recon_data:/app/recon_data \
  subscraper:latest

# Access the web interface at http://localhost:8342
```

Important: Always mount the `recon_data` volume to persist:
- Scan results and completed job reports
- Configuration settings
- Screenshots and backups
For detailed Docker build instructions including multi-platform builds for Mac, see DOCKER_BUILD.md.
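Once the container is up, a quick sanity check against the API confirms the UI is reachable. This convenience sketch uses only the standard library and the same `/api/state` endpoint queried with curl later in this README; it assumes no login session is required yet:

```python
# Quick sanity check: confirm the subScraper API answers on the mapped port.
# Assumes the /api/state endpoint is reachable without authentication.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:8342/api/state", timeout=5) as resp:
    state = json.load(resp)

print("API reachable; response type:", type(state).__name__)
```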
Inside the UI you can:
- Launch new jobs from the Overview module.
- Pause/resume running jobs in the Jobs module.
- Inspect tool/worker utilization in Workers, including per-tool queue status.
- Monitor system resources in the System Resources tab – view real-time CPU, memory, disk, and network usage with automatic warnings.
- View system logs with filtering and sorting in the Logs tab (filter by source, level, or search text).
- Drill into the Reports page to see per-program progress, completed vs. pending steps, collapsible per-tool sections, paginated tables, and the max-severity badge.
- Configure monitoring feeds under the Monitors tab – each monitor shows polling health, last fetch, number of pending entries, and per-entry dispatch status (a sample feed is sketched after this list).
- Configure per-tool concurrency limits in Settings – Each of the 16 tools has independent concurrency and queue settings.
- Export raw data or tweak defaults in Settings (concurrency, wordlists, skip flags, wildcard TLD expansion, etc.).
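A monitor feed is just a newline-delimited list of targets served over HTTP. As a rough illustration (the file name, port, and entries below are placeholders), you could publish one with Python's built-in web server:

```python
# Publish a newline-delimited monitor feed over HTTP so a Monitor can poll it.
# The file name, port, and example entries are placeholders.
from http.server import HTTPServer, SimpleHTTPRequestHandler
from pathlib import Path

feed = Path("targets.txt")
feed.write_text("\n".join([
    "example.com",
    "*.apps.example.com",   # wildcard subdomain entry
    "example.*",            # wildcard TLD entry
]) + "\n")

# Serve the current directory; point a Monitor at http://<this-host>:8000/targets.txt
HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler).serve_forever()
```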
All output (JSONL history, tool artifacts, screenshots, monitor metadata) lives under `recon_data/`, making it easy to version, sync, or analyze with other tooling.
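For instance, you can inspect the persisted state directly. The sketch below only assumes `recon_data/state.json` is valid JSON and makes no assumptions about its internal schema:

```python
# Inspect the persisted state without going through the API.
# Only assumes recon_data/state.json is valid JSON; the schema is not documented here.
import json
from pathlib import Path

state = json.loads(Path("recon_data/state.json").read_text())

# Print a rough overview of what is stored, whatever the exact schema looks like.
if isinstance(state, dict):
    for key, value in state.items():
        detail = f"{len(value)} entries" if isinstance(value, (list, dict)) else repr(value)
        print(f"{key}: {detail}")
else:
    print(type(state).__name__)
```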
The System Resources tab provides comprehensive real-time monitoring to help ensure your scans don't overwhelm the system:
- CPU Usage: Overall CPU utilization, per-core usage, load averages, and frequency
- Memory Usage: RAM consumption, available memory, and swap usage
- Disk Usage: Storage consumption, I/O operations (reads/writes)
- Network I/O: Bytes and packets sent/received, errors and drops
- Application Metrics: Process-specific CPU and memory usage, thread count
- Real-time Updates: Metrics refresh every 5 seconds
- Historical Data: View usage trends over the last 5 minutes with sparkline charts
- Automatic Warnings: Get alerts when resource usage exceeds safe thresholds:
  - CPU > 75% (Warning), > 90% (Critical)
  - Memory > 80% (Warning), > 90% (Critical)
  - Disk > 85% (Warning), > 95% (Critical)
  - Swap > 50% (Warning – indicates memory pressure)
- Visual Indicators: Color-coded cards (green=normal, orange=warning, red=critical)
- Persistent State: Resource history is saved to disk for analysis
Access resource metrics programmatically:
```bash
# Get current system resources
curl http://127.0.0.1:8342/api/system-resources
```

Response format:
```json
{
  "current": {
    "available": true,
    "timestamp": "2025-12-17T17:30:00Z",
    "cpu": {
      "percent": 45.2,
      "per_core": [52.1, 38.3, ...],
      "count_logical": 8,
      "count_physical": 4,
      "frequency_mhz": 2400,
      "load_avg_1m": 2.5,
      "load_avg_5m": 2.2,
      "load_avg_15m": 1.8
    },
    "memory": {
      "total_gb": 16.0,
      "used_gb": 8.5,
      "available_gb": 7.5,
      "percent": 53.1
    },
    "warnings": [...]
  },
  "history": [...]
}
```
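To watch these numbers from a script instead of the UI, a small standard-library poller against the same endpoint could look like the sketch below. The thresholds mirror the warning levels listed above; field names follow the response sketch and may differ in detail:

```python
# Poll /api/system-resources and flag readings that approach the documented
# warning thresholds (CPU > 75%, memory > 80%). Standard library only.
import json
import time
import urllib.request

URL = "http://127.0.0.1:8342/api/system-resources"

while True:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        current = json.load(resp).get("current", {})
    cpu = current.get("cpu", {}).get("percent")
    mem = current.get("memory", {}).get("percent")
    line = f"cpu={cpu}% mem={mem}%"
    if (cpu or 0) > 75 or (mem or 0) > 80:
        line += "  <-- approaching warning thresholds"
    print(line)
    time.sleep(5)  # the UI refreshes on the same interval
```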
The project intentionally stays self-contained:

- Everything (scheduler, API server, UI) lives in `main.py`.
- No third-party web framework; the UI is rendered client-side with vanilla JS/HTML/CSS embedded in the script.
- Concurrency is managed with Python threads and lightweight gates (`ToolGate`) to keep tool usage predictable (a rough sketch of the idea follows this list).
- State files are protected with a simple file lock to avoid concurrent writes.
- System resource monitoring uses `psutil` for cross-platform compatibility.
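The gate concept is essentially a counting semaphore per tool. The snippet below is an illustrative sketch of that idea, not the actual `ToolGate` class from `main.py`, whose interface may differ:

```python
# Illustrative sketch of a per-tool concurrency gate; not the real ToolGate
# implementation from main.py.
import threading
from contextlib import contextmanager

class ToolGate:
    """Cap how many instances of a given tool may run at once."""
    def __init__(self, limits: dict[str, int]):
        self._sems = {tool: threading.Semaphore(n) for tool, n in limits.items()}

    @contextmanager
    def acquire(self, tool: str):
        sem = self._sems[tool]
        sem.acquire()          # block until a slot for this tool frees up
        try:
            yield
        finally:
            sem.release()

gates = ToolGate({"nuclei": 2, "httpx": 4})

def run_nuclei(target: str):
    with gates.acquire("nuclei"):  # at most two nuclei runs at a time
        print(f"scanning {target} with nuclei")

threading.Thread(target=run_nuclei, args=("example.com",)).start()
```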
```bash
# Format / validate
python3 -m py_compile main.py

# Inspect current jobs / queues
curl http://127.0.0.1:8342/api/state | jq

# Export recent command history for a program
curl 'http://127.0.0.1:8342/api/history/commands?domain=example.com'

# Monitor system resources
curl http://127.0.0.1:8342/api/system-resources | jq
```

Feel free to tailor the pipeline order, add custom steps, or integrate additional tooling. Contributions welcome!