Distributed testbed for Space Situational Awareness (SSA): Docker-based FastAPI/worker architecture for analysing orbital conjunction events, with a PostgreSQL backend and an optional QEMU sensor node.


FirdevsTorlak/ssa-conjunction-lab


SSA Conjunction Lab – Distributed Testbed

This project is a small but professional space situational awareness (SSA) testbed for orbital conjunction analysis.

The focus is on a clean architecture and realistic workflows – not on controlling real satellites. All orbital objects, TLE-like parameters and scenarios are purely synthetic and are used only to demonstrate the architecture and data flow.


1. Goals and Features

The lab is designed as a portfolio project to demonstrate that you can:

  • model a simplified SSA / conjunction analysis workflow,
  • design and implement a distributed backend with clear responsibilities,
  • work with containerised microservices (Docker, Docker Compose),
  • integrate a relational database (PostgreSQL, SQLAlchemy ORM),
  • expose a clean REST API with FastAPI and OpenAPI documentation,
  • conceptually integrate an embedded / QEMU-based sensor node.

You can extend or adapt the project without changing the overall idea: it is meant to look like a realistic “mini ground segment” for space-domain data processing, but without any dependency on real-world or classified data.


2. Architecture Overview

The lab consists of three main components:

2.1 API service (FastAPI, Docker)

  • Receives analysis jobs via REST (POST /jobs).
  • Stores jobs and their configuration in a relational database.
  • Provides endpoints to inspect jobs and their conjunction results:
    • GET /jobs – list jobs,
    • GET /jobs/{id} – job metadata and processing status,
    • GET /jobs/{id}/conjunctions – all close-approach events for a job.
  • Exposes an endpoint for measurement ingestion from external sensor nodes:
    • POST /measurements – store arbitrary JSON payloads from remote nodes.

2.2 Background worker (Docker)

  • Runs as a separate container with its own process and lifecycle.
  • Polls the database for pending jobs (status = PENDING).
  • Changes the status to RUNNING, executes the orbital conjunction analysis and computes a simple risk level.
  • Writes the resulting events into the conjunctions table.
  • Marks the job as DONE or FAILED depending on the outcome.

This mirrors patterns that are common in production systems: separation of API and long-running analysis, asynchronous processing and clear state transitions for each job.
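The polling pattern can be sketched in a few lines of Python. This is an in-memory stand-in, assuming dict-based jobs instead of the real PostgreSQL rows, to show the claim/analyse/finalise sequence:

```python
import time

def claim_next_pending(jobs):
    """Return the first PENDING job and mark it RUNNING.

    Stand-in for the worker's real database query; jobs are plain dicts here.
    """
    for job in jobs:
        if job["status"] == "PENDING":
            job["status"] = "RUNNING"
            return job
    return None

def worker_pass(jobs, analyse):
    """One polling iteration: claim a job, run the analysis, set the final state."""
    job = claim_next_pending(jobs)
    if job is None:
        return
    try:
        job["conjunctions"] = analyse(job["scenario"])
        job["status"] = "DONE"
    except Exception:
        job["status"] = "FAILED"

def run_forever(jobs, analyse, poll_seconds=2.0):
    """The worker container's lifecycle: poll, process, sleep, repeat."""
    while True:
        worker_pass(jobs, analyse)
        time.sleep(poll_seconds)
```

The important property, mirrored from the real worker, is that a job is moved to RUNNING before the analysis starts, so a second worker pass never picks it up again.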

2.3 Database (PostgreSQL, Docker)

  • Stores jobs, conjunction events and measurement records.
  • Uses SQLAlchemy ORM models:
    • Job – analysis job including raw scenario JSON and status,
    • Conjunction – individual close-approach events with risk score,
    • Measurement – simplified sensor measurements from external nodes.

The database is started automatically via Docker Compose and is only exposed inside the Docker network. The API and the worker connect using a DSN such as:

postgresql+psycopg2://ssa:ssa@db:5432/ssa_lab
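A sketch of what the three ORM models could look like. Column names follow the fields shown in this README; the actual types and extra columns (timestamps etc.) in the repository may differ:

```python
from sqlalchemy import JSON, Column, Float, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Job(Base):
    __tablename__ = "jobs"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)
    scenario = Column(JSON, nullable=False)      # raw scenario JSON as submitted
    status = Column(String, default="PENDING")   # PENDING / RUNNING / DONE / FAILED

class Conjunction(Base):
    __tablename__ = "conjunctions"
    id = Column(Integer, primary_key=True)
    job_id = Column(Integer, ForeignKey("jobs.id"))
    obj1 = Column(String)
    obj2 = Column(String)
    tca_seconds = Column(Float)
    miss_distance_km = Column(Float)
    risk_score = Column(Float)
    risk_level = Column(String)

class Measurement(Base):
    __tablename__ = "measurements"
    id = Column(Integer, primary_key=True)
    source_id = Column(String)
    payload = Column(JSON)                       # arbitrary sensor JSON
```

Because only the DSN changes, the same models work against PostgreSQL in Docker and against SQLite for local experiments.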

2.4 QEMU / ARM Linux sensor node (concept)

Additionally, there is a conceptual QEMU / ARM Linux sensor node example under qemu-node/ that shows how a remote embedded device could send simple measurement payloads into the API:

  • sensor-agent.py – small Python script that periodically sends JSON payloads to POST /measurements.
  • sensor-agent.service – example systemd unit file to run the agent on boot on an embedded Linux system.
  • README-qemu-node.md – notes for integrating such an agent into a QEMU-based ARM image.
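For orientation, a sensor-agent.service unit for such an embedded node might look like the following. The paths, user name and Python interpreter location are assumptions, not taken from the repository:

```ini
[Unit]
Description=SSA sensor agent (synthetic measurements)
After=network-online.target

[Service]
ExecStart=/usr/bin/python3 /opt/ssa/sensor-agent.py
Restart=on-failure
User=ssa

[Install]
WantedBy=multi-user.target
```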

No specific organisation or military actor is referenced; the project is meant as a generic portfolio piece for SSA / orbital safety and distributed system design with Python and Docker.


3. Quickstart with Docker

3.1 Requirements

  • Docker and Docker Compose (either the standalone docker-compose binary or the docker compose plugin)
  • Internet access for pulling the PostgreSQL base image

3.2 Start the stack

From the project root:

docker compose up --build

This will start three containers:

  • db – PostgreSQL database,
  • api – FastAPI service on port 8000,
  • worker – background analysis loop.

Once everything is up, open http://localhost:8000/docs in your browser to explore the API using the automatically generated OpenAPI/Swagger UI.

You can also check basic health from the command line:

curl http://localhost:8000/health

Expected response:

{"status": "ok"}

4. Example Workflow

This section shows a typical end-to-end workflow using the /jobs and /conjunctions endpoints.

4.1 Submit an analysis job

Use POST /jobs with a JSON body similar to:

{
  "name": "Example-LEO-Scenario",
  "scenario": {
    "duration_hours": 1.0,
    "step_seconds": 60.0,
    "miss_distance_km": 500.0,
    "objects": [
      {
        "name": "SAT-X",
        "a_km": 6870,
        "e": 0.0,
        "inc_deg": 53.0,
        "raan_deg": 10.0,
        "argp_deg": 0.0,
        "ta_deg": 0.0
      },
      {
        "name": "SAT-Y",
        "a_km": 6870,
        "e": 0.0,
        "inc_deg": 53.0,
        "raan_deg": 10.5,
        "argp_deg": 0.0,
        "ta_deg": 5.0
      }
    ]
  }
}

The API will respond with a job object, for example:

{
  "id": 5,
  "name": "Example-LEO-Scenario",
  "status": "PENDING",
  "created_at": "2025-12-03T11:40:21.319019",
  "updated_at": "2025-12-03T11:40:21.319022"
}
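Besides curl or the Swagger UI, the job can be submitted with a small stdlib-only Python client. This is a sketch; the API_URL default simply mirrors the port from the Docker setup:

```python
import json
import urllib.request

API_URL = "http://localhost:8000"  # api service port from docker-compose

def build_job_request(job: dict, base_url: str = API_URL) -> urllib.request.Request:
    """Build the POST /jobs request for a scenario payload like the one above."""
    return urllib.request.Request(
        f"{base_url}/jobs",
        data=json.dumps(job).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def submit_job(job: dict, base_url: str = API_URL) -> dict:
    """Send the job and return the decoded job object (id, status, ...)."""
    with urllib.request.urlopen(build_job_request(job, base_url)) as resp:
        return json.load(resp)
```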

4.2 Background processing

The worker container periodically polls the database. When it finds a job with status PENDING, it:

  1. Sets the status to RUNNING,
  2. Runs the conjunction analysis (see section 5),
  3. Writes all detected events into the conjunctions table,
  4. Sets the job status to DONE or FAILED.
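The four statuses form a small state machine. A minimal sketch of the legal transitions (the real worker enforces them implicitly through its database updates):

```python
# Legal transitions of a job's lifecycle as described above.
TRANSITIONS = {
    "PENDING": {"RUNNING"},
    "RUNNING": {"DONE", "FAILED"},
    "DONE": set(),     # terminal
    "FAILED": set(),   # terminal
}

def advance(current: str, target: str) -> str:
    """Return the new status, refusing transitions the lifecycle does not allow."""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {target}")
    return target
```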

You can observe this behaviour with:

docker compose logs -f worker

4.3 Inspect jobs and conjunctions

List all jobs:

curl http://localhost:8000/jobs

Get details for a specific job:

curl http://localhost:8000/jobs/5

Retrieve all conjunctions for that job:

curl http://localhost:8000/jobs/5/conjunctions

A successful job may return data like:

[
  {
    "id": 1,
    "obj1": "SAT-X",
    "obj2": "SAT-Y",
    "tca_seconds": 1800.0,
    "miss_distance_km": 120.3,
    "risk_score": 0.4,
    "risk_level": "LOW"
  }
]

If the list is empty ([]), the scenario did not produce any close approaches below the configured miss_distance_km threshold, which is also a valid and useful outcome.
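The risk fields in such a result come from a distance-based mapping in ssa.py. As a stand-in, one plausible shape is a linear score with coarse buckets; the thresholds below are invented and do not reproduce the exact numbers in the example above:

```python
def risk_from_distance(miss_km: float, threshold_km: float):
    """Map a miss distance to a score in [0, 1] and a coarse level.

    Invented illustration: score falls linearly from 1 (direct hit) to 0
    (at the configured miss_distance_km threshold); bucket edges are guesses.
    """
    score = max(0.0, 1.0 - miss_km / threshold_km)
    if score >= 0.8:
        level = "HIGH"
    elif score >= 0.5:
        level = "MEDIUM"
    else:
        level = "LOW"
    return round(score, 2), level
```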


5. Conjunction Analysis Model (Simplified)

The physics model in this lab is intentionally simple and is not meant for operational use. Its purpose is to show that you understand the basic ideas behind conjunction analysis and can implement them in code.

5.1 Scenario representation

Each job contains a JSON-encoded scenario with:

  • global parameters:
    • duration_hours – total propagation time,
    • step_seconds – time step for sampling,
    • miss_distance_km – distance threshold for “close approach”;
  • a list of orbital objects, each with:
    • name – identifier,
    • a_km – semi-major axis in kilometres,
    • e – eccentricity (here usually near zero),
    • inc_deg – inclination in degrees,
    • raan_deg – right ascension of ascending node,
    • argp_deg – argument of perigee,
    • ta_deg – true anomaly.

5.2 Propagation and distance computation

The code in ssa.py performs the following steps:

  1. Converts each object’s orbital elements into a simple 3D position vector in an Earth-centred inertial-like frame.
  2. Derives a constant angular rate for each object based on its orbital radius (very rough approximation using sqrt(mu / r^3)).
  3. Propagates the positions over time by rotating the vectors around the Earth’s axis.
  4. For each time step, computes pairwise distances between all objects.
  5. Tracks the minimum distance and time of closest approach (TCA) for each object pair.
  6. If the minimum distance is below miss_distance_km, emits a conjunction event with:
    • obj1, obj2 – object names,
    • tca_seconds – time since start of scenario,
    • miss_distance_km – closest distance in km,
    • risk_score – simple score derived from distance,
    • risk_level – LOW, MEDIUM or HIGH.

This structure is very similar to real SSA pipelines, but with a deliberately lightweight propagation model and no external ephemeris data.
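The six steps can be sketched as follows. This is not the exact ssa.py implementation: eccentricity is ignored and the argument of latitude is advanced at the circular-orbit rate sqrt(mu / a^3), but the structure (propagate, sample pairwise distances, track the minimum, apply the threshold) is the same:

```python
import math
from itertools import combinations

MU_EARTH = 398600.4418  # km^3/s^2

def position(obj, t):
    """Very rough circular-orbit position (km) at time t (seconds).

    Eccentricity is ignored (near zero in the examples); the argument of
    latitude u = argp + ta advances at the mean motion n = sqrt(mu / a^3).
    """
    a = obj["a_km"]
    n = math.sqrt(MU_EARTH / a ** 3)  # mean motion [rad/s]
    u = math.radians(obj.get("argp_deg", 0.0) + obj["ta_deg"]) + n * t
    inc = math.radians(obj["inc_deg"])
    raan = math.radians(obj["raan_deg"])
    # In-plane position, then orient the plane by inclination and RAAN.
    x_p, y_p = a * math.cos(u), a * math.sin(u)
    x = x_p * math.cos(raan) - y_p * math.cos(inc) * math.sin(raan)
    y = x_p * math.sin(raan) + y_p * math.cos(inc) * math.cos(raan)
    z = y_p * math.sin(inc)
    return x, y, z

def find_conjunctions(scenario):
    """Sample the time grid and report pairs whose minimum distance is below the threshold."""
    steps = int(scenario["duration_hours"] * 3600 / scenario["step_seconds"])
    events = []
    for o1, o2 in combinations(scenario["objects"], 2):
        best_d, best_t = float("inf"), 0.0
        for k in range(steps + 1):
            t = k * scenario["step_seconds"]
            d = math.dist(position(o1, t), position(o2, t))
            if d < best_d:
                best_d, best_t = d, t
        if best_d < scenario["miss_distance_km"]:
            events.append({
                "obj1": o1["name"], "obj2": o2["name"],
                "tca_seconds": best_t, "miss_distance_km": round(best_d, 1),
            })
    return events
```

Note that the grid search only brackets the true TCA to within one step_seconds interval; a refinement step between samples would be a natural extension.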


6. Measurement Ingestion from Sensor Nodes

The /measurements endpoint allows you to simulate external sensors or ground stations that send additional context data into the system.

A typical request body looks like:

{
  "source_id": "sensor-001",
  "timestamp": "2025-12-03T11:45:00Z",
  "payload": {
    "az_deg": 180.0,
    "el_deg": 45.0,
    "snr_db": 32.5,
    "note": "synthetic measurement from example agent"
  }
}

The API stores the raw JSON payload in the database and returns a record with an internal ID and the stored timestamp. The QEMU / ARM example agent sends these payloads periodically as a small, self-contained service.
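A stand-alone sketch of such an agent using only the standard library. The URL and source id are the example values from above; the real sensor-agent.py may differ:

```python
import json
import time
import urllib.request
from datetime import datetime, timezone

API_URL = "http://localhost:8000"  # api service, default port from docker-compose
SOURCE_ID = "sensor-001"           # example source id from the request body above

def build_measurement(az_deg, el_deg, snr_db):
    """Assemble a payload matching the /measurements body shown above."""
    return {
        "source_id": SOURCE_ID,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "payload": {"az_deg": az_deg, "el_deg": el_deg, "snr_db": snr_db},
    }

def send(measurement, base_url=API_URL):
    """POST one measurement as JSON to the API."""
    req = urllib.request.Request(
        f"{base_url}/measurements",
        data=json.dumps(measurement).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req).close()

def run_agent(period_seconds=10.0):
    """Periodic loop in the spirit of sensor-agent.py: one measurement per period."""
    while True:
        send(build_measurement(180.0, 45.0, 32.5))
        time.sleep(period_seconds)
```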


7. Local Development without Docker

If you prefer to run only the API on your local machine (for example using SQLite instead of PostgreSQL), you can adapt backend/app/db.py to your local settings and then:

cd backend
python -m venv .venv
source .venv/bin/activate  # on Windows: .venv\Scripts\activate
pip install -r requirements.txt

uvicorn app.main:app --reload

The distributed Docker setup is recommended for showcasing the full architecture in a portfolio context, but the code is structured so that you can also run and debug it in a standard Python virtual environment.
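One way backend/app/db.py might pick its database, assuming it reads the DSN from the DB_DSN environment variable used in the Docker setup; the SQLite fallback path is an invented example:

```python
import os

# DB_DSN is injected by docker-compose in the containerised setup; without
# Docker, fall back to a local SQLite file (example path, adjust as needed).
DB_DSN = os.getenv("DB_DSN", "sqlite:///./ssa_lab.db")
```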


8. Security and Robustness Considerations

Even though this is a small lab project, several good practices are applied:

  • Separation of concerns – API, worker and database are separate services.
  • Container isolation – each component runs in its own Docker container.
  • Database credentials via environment variables – the DSN is injected as DB_DSN and not hard-coded in the code.
  • No external network exposure for the database – only the Docker network is used by default.
  • Explicit job states – PENDING, RUNNING, DONE and FAILED – which make the processing status transparent and debuggable.

You can extend this further with authentication, HTTPS termination, message queues, structured logging or metrics endpoints if required.


9. Limitations and Possible Extensions

This lab is intentionally small and focused. Some obvious extensions are:

  • replace the simplified propagation with a more accurate orbital mechanics library or real TLE parsing,
  • add authentication and basic authorisation to the API,
  • provide a small web dashboard to visualise jobs and conjunction timelines,
  • ingest and correlate more advanced measurement payloads,
  • explore more sophisticated risk scoring models.

Even in its current form, the project demonstrates end-to-end handling of space-domain workloads: from scenario submission via REST to processed conjunction events in a relational database, including conceptual integration of remote sensor nodes.


10. Disclaimer

All scenarios, orbital parameters and measurements in this lab are synthetic and simplified. The physics model is intentionally approximate and is not intended for operational use. The goal is to demonstrate:

  • understanding of SSA / orbital conjunction concepts,
  • Python-based analysis pipelines,
  • containerised services with clear responsibilities,
  • a clean, well-documented project structure that can be shown in technical interviews and applications.
