This project is a small but professional space situational awareness (SSA) testbed for orbital conjunction analysis.
The focus is on a clean architecture and realistic workflows – not on controlling real satellites. All orbital objects, TLE-like parameters and scenarios are purely synthetic and are used only to demonstrate the architecture and data flow.
The lab is designed as a portfolio project to demonstrate that you can:
- model a simplified SSA / conjunction analysis workflow,
- design and implement a distributed backend with clear responsibilities,
- work with containerised microservices (Docker, Docker Compose),
- integrate a relational database (PostgreSQL, SQLAlchemy ORM),
- expose a clean REST API with FastAPI and OpenAPI documentation,
- conceptually integrate an embedded / QEMU-based sensor node.
You can extend or adapt the project without changing the overall idea: it is meant to look like a realistic “mini ground segment” for space-domain data processing, but without any dependency on real-world or classified data.
The lab consists of three main components:
**API service (FastAPI)**

- Receives analysis jobs via REST (`POST /jobs`).
- Stores jobs and their configuration in a relational database.
- Provides endpoints to inspect jobs and their conjunction results:
  - `GET /jobs` – list jobs,
  - `GET /jobs/{id}` – job metadata and processing status,
  - `GET /jobs/{id}/conjunctions` – all close-approach events for a job.
- Exposes an endpoint for measurement ingestion from external sensor nodes:
  - `POST /measurements` – store arbitrary JSON payloads from remote nodes.
**Worker**

- Runs as a separate container with its own process and lifecycle.
- Polls the database for pending jobs (`status = PENDING`).
- Changes the status to `RUNNING`, executes the orbital conjunction analysis and computes a simple risk level.
- Writes the resulting events into the `conjunctions` table.
- Marks the job as `DONE` or `FAILED` depending on the outcome.
This mirrors patterns that are common in production systems: separation of API and long-running analysis, asynchronous processing and clear state transitions for each job.
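The claim-and-process pattern can be sketched as a small loop. The helpers below (`fetch_pending`, `analyze`, `save_events`) are hypothetical stand-ins for the real database and analysis code, injected as callables so the state transitions stand out:

```python
from typing import Callable, List, Optional

def process_next_job(
    fetch_pending: Callable[[], Optional[dict]],
    analyze: Callable[[dict], List[dict]],
    save_events: Callable[[dict, List[dict]], None],
) -> bool:
    """One worker iteration: claim a PENDING job, run the analysis and
    record the outcome. Returns True if a job was processed."""
    job = fetch_pending()                  # e.g. SELECT ... WHERE status = 'PENDING'
    if job is None:
        return False                       # nothing to do; the caller sleeps and retries
    job["status"] = "RUNNING"              # claim the job before doing any work
    try:
        events = analyze(job["scenario"])  # conjunction analysis
        save_events(job, events)           # rows for the conjunctions table
        job["status"] = "DONE"
    except Exception:
        job["status"] = "FAILED"
    return True
```

In the real worker the status changes would be committed to the database, so the API can report `RUNNING` while the analysis is still in progress.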
**Database (PostgreSQL)**

- Stores jobs, conjunction events and measurement records.
- Uses SQLAlchemy ORM models:
  - `Job` – analysis job including raw scenario JSON and status,
  - `Conjunction` – individual close-approach events with risk score,
  - `Measurement` – simplified sensor measurements from external nodes.
The database is started automatically via Docker Compose and is only exposed inside the Docker network. The API and the worker connect using a DSN such as:

```
postgresql+psycopg2://ssa:ssa@db:5432/ssa_lab
```
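Since the DSN is injected via the `DB_DSN` environment variable (see the security notes below), resolving it can be as simple as the following sketch; the helper name is illustrative:

```python
import os

# Default DSN for the Docker Compose network; overridden by DB_DSN when set.
DEFAULT_DSN = "postgresql+psycopg2://ssa:ssa@db:5432/ssa_lab"

def database_dsn(default: str = DEFAULT_DSN) -> str:
    """Return the database DSN, preferring the DB_DSN environment variable
    so that credentials are never hard-coded in the source."""
    return os.environ.get("DB_DSN", default)
```

Locally you could point `DB_DSN` at SQLite (e.g. `sqlite:///ssa_lab.db`) without touching the code.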
Additionally, there is a conceptual QEMU / ARM Linux sensor node example under `qemu-node/` that shows how a remote embedded device could send simple measurement payloads into the API:
- `sensor-agent.py` – small Python script that periodically sends JSON payloads to `POST /measurements`.
- `sensor-agent.service` – example systemd unit file that runs the agent at boot on an embedded Linux system.
- `README-qemu-node.md` – notes on integrating such an agent into a QEMU-based ARM image.
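A minimal version of such an agent can be written with only the standard library. This is a sketch for illustration, not the actual `sensor-agent.py`; the URL and `source_id` are placeholders:

```python
import json
import time
import urllib.request

def build_measurement(source_id: str, payload: dict) -> dict:
    """Assemble one measurement record in the shape POST /measurements expects."""
    return {
        "source_id": source_id,
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "payload": payload,
    }

def send_measurement(measurement: dict,
                     url: str = "http://localhost:8000/measurements") -> int:
    """POST one JSON measurement and return the HTTP status code."""
    req = urllib.request.Request(
        url,
        data=json.dumps(measurement).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# A periodic agent would simply loop, e.g.:
#   send_measurement(build_measurement("sensor-001", {"az_deg": 180.0}))
#   time.sleep(30.0)
```

Sticking to `urllib.request` keeps the agent dependency-free, which matters on a minimal embedded Linux image.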
No specific organisation or military actor is referenced; the project is meant as a generic portfolio piece for SSA / orbital safety and distributed system design with Python and Docker.
- Docker and Docker Compose (either the standalone `docker-compose` binary or the `docker compose` subcommand)
- Internet access for pulling the PostgreSQL base image
From the project root:
```shell
docker compose up --build
```

This will start three containers:

- `db` – PostgreSQL database,
- `api` – FastAPI service on port `8000`,
- `worker` – background analysis loop.
Once everything is up, open `http://localhost:8000/docs` to explore the API using the automatically generated OpenAPI/Swagger UI.
You can also check basic health from the command line:

```shell
curl http://localhost:8000/health
```

Expected response:

```json
{"status": "ok"}
```

This section shows a typical end-to-end workflow using the `/jobs` and `/conjunctions` endpoints.
Use `POST /jobs` with a JSON body similar to:
```json
{
  "name": "Example-LEO-Scenario",
  "scenario": {
    "duration_hours": 1.0,
    "step_seconds": 60.0,
    "miss_distance_km": 500.0,
    "objects": [
      {
        "name": "SAT-X",
        "a_km": 6870,
        "e": 0.0,
        "inc_deg": 53.0,
        "raan_deg": 10.0,
        "argp_deg": 0.0,
        "ta_deg": 0.0
      },
      {
        "name": "SAT-Y",
        "a_km": 6870,
        "e": 0.0,
        "inc_deg": 53.0,
        "raan_deg": 10.5,
        "argp_deg": 0.0,
        "ta_deg": 5.0
      }
    ]
  }
}
```

The API will respond with a job object, for example:
```json
{
  "id": 5,
  "name": "Example-LEO-Scenario",
  "status": "PENDING",
  "created_at": "2025-12-03T11:40:21.319019",
  "updated_at": "2025-12-03T11:40:21.319022"
}
```

The worker container periodically polls the database. When it finds a job with status `PENDING`, it:
- Sets the status to `RUNNING`,
- Runs the conjunction analysis (see section 5),
- Writes all detected events into the `conjunctions` table,
- Sets the job status to `DONE` or `FAILED`.
You can observe this behaviour with:

```shell
docker compose logs -f worker
```

List all jobs:

```shell
curl http://localhost:8000/jobs
```

Get details for a specific job:

```shell
curl http://localhost:8000/jobs/5
```

Retrieve all conjunctions for that job:

```shell
curl http://localhost:8000/jobs/5/conjunctions
```

A successful job may return data like:
```json
[
  {
    "id": 1,
    "obj1": "SAT-X",
    "obj2": "SAT-Y",
    "tca_seconds": 1800.0,
    "miss_distance_km": 120.3,
    "risk_score": 0.4,
    "risk_level": "LOW"
  }
]
```

If the list is empty (`[]`), the scenario did not produce any close approaches below the configured `miss_distance_km` threshold, which is also a valid and useful outcome.
The physics model in this lab is intentionally simple and is not meant for operational use. Its purpose is to show that you understand the basic ideas behind conjunction analysis and can implement them in code.
Each job contains a JSON-encoded scenario with:
- global parameters:
  - `duration_hours` – total propagation time,
  - `step_seconds` – time step for sampling,
  - `miss_distance_km` – distance threshold for a "close approach";
- a list of orbital objects, each with:
  - `name` – identifier,
  - `a_km` – semi-major axis in kilometres,
  - `e` – eccentricity (here usually near zero),
  - `inc_deg` – inclination in degrees,
  - `raan_deg` – right ascension of the ascending node,
  - `argp_deg` – argument of perigee,
  - `ta_deg` – true anomaly.
The code in `ssa.py` performs the following steps:

- Converts each object's orbital elements into a simple 3D position vector in an Earth-centred inertial-like frame.
- Derives a constant angular rate for each object from its orbital radius (a very rough approximation using `sqrt(mu / r^3)`).
- Propagates the positions over time by rotating the vectors around the Earth's axis.
- For each time step, computes pairwise distances between all objects.
- Tracks the minimum distance and time of closest approach (TCA) for each object pair.
- If the minimum distance is below `miss_distance_km`, emits a conjunction event with:
  - `obj1`, `obj2` – object names,
  - `tca_seconds` – time since the start of the scenario,
  - `miss_distance_km` – closest distance in km,
  - `risk_score` – simple score derived from distance,
  - `risk_level` – `LOW`, `MEDIUM` or `HIGH`.
This structure is very similar to real SSA pipelines, but with a deliberately lightweight propagation model and no external ephemeris data.
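The pipeline described above can be sketched end to end in plain Python. This is a self-contained approximation for illustration, not the actual `ssa.py`; the function names and the risk thresholds are assumptions:

```python
import math
from itertools import combinations

MU_KM3_S2 = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def initial_position(obj: dict) -> tuple:
    """Very rough: place the object on a circle of radius a_km, rotated by
    inclination and RAAN; eccentricity is ignored (assumed near zero)."""
    r = obj["a_km"]
    u = math.radians(obj["argp_deg"] + obj["ta_deg"])  # argument of latitude
    inc = math.radians(obj["inc_deg"])
    raan = math.radians(obj["raan_deg"])
    x, y = r * math.cos(u), r * math.sin(u)            # in the orbital plane
    xi, yi, zi = x, y * math.cos(inc), y * math.sin(inc)
    return (xi * math.cos(raan) - yi * math.sin(raan),
            xi * math.sin(raan) + yi * math.cos(raan),
            zi)

def find_conjunctions(scenario: dict) -> list:
    """Sample pairwise distances on a fixed time grid and report pairs whose
    minimum distance falls below the miss_distance_km threshold."""
    steps = int(scenario["duration_hours"] * 3600 / scenario["step_seconds"])
    objs = scenario["objects"]
    rates = {o["name"]: math.sqrt(MU_KM3_S2 / o["a_km"] ** 3) for o in objs}
    pos0 = {o["name"]: initial_position(o) for o in objs}
    best = {}  # (name1, name2) -> (min distance, tca_seconds)
    for k in range(steps + 1):
        t = k * scenario["step_seconds"]
        pos = {}
        for o in objs:  # rotate each initial vector around the z (Earth) axis
            x, y, z = pos0[o["name"]]
            ang = rates[o["name"]] * t
            pos[o["name"]] = (x * math.cos(ang) - y * math.sin(ang),
                              x * math.sin(ang) + y * math.cos(ang), z)
        for a, b in combinations(objs, 2):
            d = math.dist(pos[a["name"]], pos[b["name"]])
            key = (a["name"], b["name"])
            if key not in best or d < best[key][0]:
                best[key] = (d, t)
    events = []
    for (n1, n2), (d, t) in best.items():
        if d < scenario["miss_distance_km"]:
            score = 1.0 - d / scenario["miss_distance_km"]  # assumed scoring
            level = "HIGH" if score > 0.66 else "MEDIUM" if score > 0.33 else "LOW"
            events.append({"obj1": n1, "obj2": n2, "tca_seconds": t,
                           "miss_distance_km": round(d, 3),
                           "risk_score": round(score, 3), "risk_level": level})
    return events
```

Because both the sampling grid and the propagation are coarse, the reported TCA is only accurate to within one `step_seconds` interval.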
The `/measurements` endpoint allows you to simulate external sensors or
ground stations that send additional context data into the system.
A typical request body looks like:
```json
{
  "source_id": "sensor-001",
  "timestamp": "2025-12-03T11:45:00Z",
  "payload": {
    "az_deg": 180.0,
    "el_deg": 45.0,
    "snr_db": 32.5,
    "note": "synthetic measurement from example agent"
  }
}
```

The API stores the raw JSON payload in the database and returns a record with an internal ID and the stored timestamp. The QEMU / ARM example agent sends these payloads periodically as a small, self-contained service.
If you prefer to run only the API on your local machine (for example using SQLite instead of PostgreSQL), you can adapt `backend/app/db.py` to your local settings and then:

```shell
cd backend
python -m venv .venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate
pip install -r requirements.txt
uvicorn app.main:app --reload
```

The distributed Docker setup is recommended for showcasing the full architecture in a portfolio context, but the code is structured so that you can also run and debug it in a standard Python virtual environment.
Even though this is a small lab project, several good practices are applied:
- Separation of concerns – API, worker and database are separate services.
- Container isolation – each component runs in its own Docker container.
- Database credentials via environment variables – the DSN is injected as `DB_DSN` and not hard-coded.
- No external network exposure for the database – only the Docker network is used by default.
- Explicit job states – `PENDING`, `RUNNING`, `DONE`, `FAILED` – which make the processing status transparent and debuggable.
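The explicit job states lend themselves to a small enum plus a transition table. This is a sketch of the intended lifecycle, not necessarily how the project encodes it:

```python
from enum import Enum

class JobStatus(str, Enum):
    """Job lifecycle states; str values keep the database rows readable
    while the enum guards against typos in the code."""
    PENDING = "PENDING"
    RUNNING = "RUNNING"
    DONE = "DONE"
    FAILED = "FAILED"

# Transitions performed by the worker; DONE and FAILED are terminal.
ALLOWED_TRANSITIONS = {
    JobStatus.PENDING: {JobStatus.RUNNING},
    JobStatus.RUNNING: {JobStatus.DONE, JobStatus.FAILED},
}

def can_transition(src: JobStatus, dst: JobStatus) -> bool:
    """Return True if the worker may move a job from src to dst."""
    return dst in ALLOWED_TRANSITIONS.get(src, set())
```

Validating transitions centrally makes illegal state changes (e.g. `DONE` back to `RUNNING`) fail loudly instead of silently corrupting job history.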
You can extend this further with authentication, HTTPS termination, message queues, structured logging or metrics endpoints if required.
This lab is intentionally small and focused. Some obvious extensions are:
- replace the simplified propagation with a more accurate orbital mechanics library or real TLE parsing,
- add authentication and basic authorisation to the API,
- provide a small web dashboard to visualise jobs and conjunction timelines,
- ingest and correlate more advanced measurement payloads,
- explore more sophisticated risk scoring models.
Even in its current form, the project demonstrates end-to-end handling of space-domain workloads: from scenario submission via REST to processed conjunction events in a relational database, including conceptual integration of remote sensor nodes.
All scenarios, orbital parameters and measurements in this lab are synthetic and simplified. The physics model is intentionally approximate and is not intended for operational use. The goal is to demonstrate:
- understanding of SSA / orbital conjunction concepts,
- Python-based analysis pipelines,
- containerised services with clear responsibilities,
- a clean, well-documented project structure that can be shown in technical interviews and applications.