Sensors collect real-time data, which is then analyzed. Detected events (fire outbreaks, in our case) are filtered so that only relevant alerts reach stakeholders, enabling rapid and appropriate responses.
The system consists of several Scala microservices orchestrated with Docker:
- field-sensors: Generates simulated sensor data (temperature, humidity, CO2, O2) and publishes it to Kafka
- analyzer: Reads JSON data stored in MinIO via Spark Streaming, calculates time window averages and stores results in PostgreSQL
- alert-filter: Monitors real-time data to detect anomalies (fire risks) and generates alerts
- alert-handler: Processes and distributes alerts to appropriate stakeholders
- Kafka: Messaging system for inter-service communication
- MinIO: S3-compatible object storage for sensor data
- PostgreSQL: Database for storing analyses and alerts
- Kafka Connect: Connector to automatically transfer data from Kafka to MinIO
- Grafana: Data and alert visualization (optional)
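To make the data flow concrete, here is a minimal sketch of the kind of message field-sensors could publish to Kafka. The field names, units, and hand-rolled JSON are illustrative assumptions, not the project's actual schema; the real service would serialize a payload like this and send it with a Kafka producer (e.g. `KafkaProducer[String, String]` from kafka-clients).

```scala
// Hypothetical shape of one sensor reading; the real schema may differ.
case class SensorReading(
  fieldId: Int,
  temperature: Double, // assumed °C
  humidity: Double,    // assumed %
  co2: Double,         // assumed ppm
  o2: Double,          // assumed %
  timestamp: Long      // epoch millis
)

object SensorJson {
  // Hand-rolled JSON to keep the sketch dependency-free; a real service
  // would likely use a JSON library (circe, play-json, ...).
  def toJson(r: SensorReading): String =
    s"""{"fieldId":${r.fieldId},"temperature":${r.temperature},"humidity":${r.humidity},"co2":${r.co2},"o2":${r.o2},"timestamp":${r.timestamp}}"""
}
```

The analyzer and alert-filter services would then consume messages of this shape from Kafka (directly, or via the MinIO copy written by Kafka Connect).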
- Docker and Docker Compose
- Java 11+ (for local development)
- sbt (for Scala build)
Create the environment file in the `docker/` folder based on the `.env.example` file already present there.
I recommend running the following commands in separate terminals so you can watch each component's logs.
- Build the Docker images:

```bash
cd docker/
docker compose build
```

- Start the infrastructure:

```bash
cd docker/
docker compose up postgres kafka zookeeper minio connect
```

Wait a bit for all the infrastructure to be in place (about 30 s).
- Launch the applications:

```bash
cd docker/
docker compose up field-sensors analyzer alert-filter alert-handler
```

- MinIO Console: http://localhost:9001 (minioadmin/minioadmin by default)
- Grafana: http://localhost:3000 (admin/admin by default)

Pay attention to the username and password you have defined in your `.env`.
├── field-sensors/ # Sensor data generator
├── analyzer/ # Spark analysis of MinIO data
├── alert-filter/ # Anomaly detection
├── alert-handler/ # Alert management
├── docker/ # Docker configuration and scripts
└── project/ # SBT configuration
Each service can be configured via environment variables defined in docker/.env or directly in docker-compose.yml.
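As a sketch of how a service might pick up these settings, the snippet below reads `KAFKA_BOOTSTRAP_SERVERS` and `FIELD_ID` (both defined in `docker-compose.yml`) from the environment. The `ServiceConfig` name and the fallback defaults are illustrative assumptions, not the project's actual code.

```scala
// Illustrative config loader; variable names match docker-compose.yml,
// defaults are assumptions for local runs outside Docker.
case class ServiceConfig(kafkaBootstrapServers: String, fieldId: Int)

object ServiceConfig {
  // Accepting the env map as a parameter keeps this testable;
  // production code would call fromEnv() and use sys.env.
  def fromEnv(env: Map[String, String] = sys.env): ServiceConfig =
    ServiceConfig(
      kafkaBootstrapServers = env.getOrElse("KAFKA_BOOTSTRAP_SERVERS", "localhost:9092"),
      fieldId = env.getOrElse("FIELD_ID", "1").toInt
    )
}
```

Injecting the environment as a plain `Map` makes it easy to unit-test the parsing without touching the real process environment.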
If you want multiple fields with sensors sending data continuously, add them by copying this block, changing the service name and the `FIELD_ID` value:
```yaml
  field-sensors:
    build:
      context: ../
      dockerfile: field-sensors/Dockerfile
    container_name: field-sensors
    depends_on:
      - kafka
    environment:
      - KAFKA_BOOTSTRAP_SERVERS=kafka:9092
      - FIELD_ID=1 # Example field ID, can be set via environment variable
    restart: unless-stopped
    networks:
      - kafka-network
```

- Services not starting: check that the `.env` file exists in `docker/`
- Data not visible: wait for the Kafka connectors to finish initializing (1-2 minutes)
- Connection errors: check that all infrastructure services are started before the applications
```bash
cd docker/
docker compose down -v # Removes volumes and data
docker compose up -d
```