- Create an `.env` file from the `.env.example` contents.
- Build and start the Docker containers:

  ```sh
  make docker-restart
  ```

  This restarts the Docker Compose stack and creates the entire project, including the MySQL database and the Kafka queue. If you instead want to exec into a debug container and run the code yourself:

  ```sh
  make docker-dev-restart
  ```
- Generate an API key from Webshare.
- Copy the contents of `.env.example` to a new `.env` file.
- Replace `<api key>` in the `.env` file with your API key: `PROXY_API_KEY="<api key>"`
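The last two steps can also be scripted. A minimal sketch using only the standard library; the template line and the key value below are placeholders, not the real `.env.example` contents:

```python
from pathlib import Path

# Hypothetical template line, standing in for the real .env.example
Path(".env.example").write_text('PROXY_API_KEY="<api key>"\n')

# Copy the template to .env and substitute your Webshare key
contents = Path(".env.example").read_text()
Path(".env").write_text(contents.replace("<api key>", "my-webshare-key"))

print(Path(".env").read_text())  # PROXY_API_KEY="my-webshare-key"
```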
The Polylith architecture is a modular approach to organizing codebases, aimed at improving maintainability, reducing duplication, and providing better oversight of projects. It is particularly well-suited for managing large, complex applications.
- Reduce Duplication: With many repositories, schemas and functionalities are often replicated, leading to inconsistencies and maintenance challenges. Polylith consolidates shared code into reusable components.
- Improve Oversight: Managing multiple repositories can obscure the overall project structure. Polylith centralizes the architecture, making it easier to navigate and understand.
- Streamline Onboarding: New developers can quickly understand the project structure without needing to navigate numerous repositories.
For an in-depth guide on Polylith architecture, visit the Polylith Documentation.
Below are the essential commands for working with Polylith in your project.
A base serves as the foundation for your architecture, often containing shared logic or configurations.

```sh
uv run poly create base --name <base_name>
```

A component is a reusable, self-contained module that encapsulates specific functionality.

```sh
uv run poly create component --name <component_name>
```

A project is the entry point for your application, built using the base and components.

```sh
uv run poly create project --name <project_name>
```
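Once created, bases and projects consume components through the workspace's shared namespace package. A minimal sketch of how that import works, assuming a namespace `app` and a component `greeter` (both hypothetical names, not from this repo); the first half just fakes the on-disk layout that `poly create component` would produce:

```python
import sys
from pathlib import Path

# Mimic the layout `poly create component` produces:
#   components/greeter/src/app/greeter/core.py
src = Path("components/greeter/src")
pkg = src / "app" / "greeter"
pkg.mkdir(parents=True, exist_ok=True)
(src / "app" / "__init__.py").touch()
(pkg / "__init__.py").touch()
(pkg / "core.py").write_text("def hello(name):\n    return f'hello {name}'\n")

# A base (or project) then imports the component via the shared namespace:
sys.path.insert(0, str(src))
from app.greeter import core

print(core.hello("world"))  # hello world
```

Because every component lives under the same namespace, moving code between projects is an import-path no-op, which is what lets Polylith deduplicate shared logic.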
```mermaid
flowchart TD
    subgraph Ingestion
        JavaPlugin(Java Plugin)
        PublicAPI(Public API)
        JavaPlugin --> PublicAPI
        PublicAPI --> KafkaReports[/"Kafka: reports.to_insert"/]
    end
    subgraph Scheduling
        TaskScheduler(Task Scheduler)
        TaskScheduler --> KafkaToScrape[/"Kafka: players.to_scrape"/]
    end
    subgraph Scraping
        KafkaToScrape --> HighscoreScraper(Highscore Scraper)
        HighscoreScraper --> KafkaNotFound[/"Kafka: players.not_found"/]
        HighscoreScraper --> KafkaScraped[/"Kafka: players.scraped"/]
        KafkaNotFound --> RunemetricsScraper(Runemetrics Scraper)
        RunemetricsScraper --> KafkaScraped
    end
    subgraph Processing
        KafkaScraped --> HighscoreWorker(Highscore Worker)
        HighscoreWorker --> KafkaForML[/"Kafka: players.to_score"/]
    end
    subgraph ML
        KafkaForML --> MLServing(ML-Serving)
        MLServing --> KafkaPredictions[/"Kafka: players.scored"/]
    end
    subgraph Storage
        KafkaPredictions --> PredictionWorker(Prediction Worker)
        KafkaReports --> ReportWorker(Report Worker)
        HighscoreWorker --> MySQL[(MySQL)]
        PredictionWorker --> MySQL
        ReportWorker --> MySQL
    end
```
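The happy path through the diagram (schedule → scrape → process → score → store) can be traced with a small in-memory sketch. No real Kafka is involved: topics are plain queues, each worker is a function, and the payload fields and probability value are made up for illustration:

```python
from collections import deque

# In-memory stand-ins for the Kafka topics in the diagram
topics = {
    "players.to_scrape": deque(),
    "players.scraped": deque(),
    "players.to_score": deque(),
    "players.scored": deque(),
}
mysql = []  # stand-in for the MySQL sink

def task_scheduler(name):
    topics["players.to_scrape"].append({"player": name})

def highscore_scraper():
    msg = topics["players.to_scrape"].popleft()
    topics["players.scraped"].append({**msg, "stats": {"attack": 99}})  # hypothetical payload

def highscore_worker():
    msg = topics["players.scraped"].popleft()
    mysql.append(msg)                       # persist the scraped highscores
    topics["players.to_score"].append(msg)  # hand off to ML

def ml_serving():
    msg = topics["players.to_score"].popleft()
    topics["players.scored"].append({**msg, "bot_probability": 0.03})  # made-up score

def prediction_worker():
    mysql.append(topics["players.scored"].popleft())  # persist the prediction

task_scheduler("some_player")
highscore_scraper()
highscore_worker()
ml_serving()
prediction_worker()
print(len(mysql))  # 2 rows: the scraped stats and the prediction
```

The sketch mirrors the diagram's key property: workers only talk to topics, never to each other, so any stage can be scaled or replaced independently.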