diff --git a/docs/decisions/README.md b/docs/decisions/README.md index b3cf7bdd..207b8ca9 100644 --- a/docs/decisions/README.md +++ b/docs/decisions/README.md @@ -6,6 +6,7 @@ This directory contains architectural decision records for the Torrust Tracker D | Status | Date | Decision | Summary | | ------------- | ---------- | --------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ | +| ✅ Accepted | 2026-02-24 | [Docker Compose Local Validation Placement](./docker-compose-local-validation-placement.md) | Validate rendered docker-compose.yml in the infrastructure generator, not the app layer | | ✅ Accepted | 2026-02-17 | [Application-Layer Progress Reporting Trait](./application-layer-progress-reporting-trait.md) | Use trait-based progress reporting with Dependency Inversion for testable, reusable design | | ✅ Accepted | 2026-02-13 | [Concurrent Docker Image Builds in Tests](./concurrent-docker-image-builds-in-tests.md) | Treat "already exists" tagging error as success when parallel tests build same image | | ✅ Accepted | 2026-02-06 | [Agent Skills Content Strategy](./skill-content-strategy-duplication-vs-linking.md) | Three-tier content strategy: self-contained workflows, progressive disclosure, linked docs | diff --git a/docs/decisions/docker-compose-local-validation-placement.md b/docs/decisions/docker-compose-local-validation-placement.md new file mode 100644 index 00000000..616f34f8 --- /dev/null +++ b/docs/decisions/docker-compose-local-validation-placement.md @@ -0,0 +1,155 @@ +# Decision: Docker Compose Local Validation Placement in the Infrastructure Layer + +## Status + +Accepted + +## Date + +2026-02-24 + +## Context + +After rendering the `docker-compose.yml.tera` Tera template, the resulting +`docker-compose.yml` file may contain structural errors that Docker Compose will +reject at `run` time with a non-descriptive 
failure deep in the deployment +workflow. Issue [#382](../issues/382-docker-compose-template-empty-networks-key.md) +was a concrete example: an empty `networks:` key rendered when no optional +services were configured, which caused `docker compose up` to fail with +`services.tracker.networks must be a list` on the remote VM — far from the +point of origin. + +To catch such errors early and provide actionable feedback, a validation step was +added that runs `docker compose config --quiet` against the rendered output +directory immediately after template rendering, before any files are uploaded to +the VM. + +### The Question + +The rendering pipeline has multiple candidate locations for the validation call: + +```text +RenderDockerComposeTemplatesStep::execute() ← application/steps layer + └─ DockerComposeTemplateRenderingService::render() ← application service layer + └─ DockerComposeProjectGenerator::render() ← infrastructure layer ← chosen + ├─ EnvRenderer::render() ← writes .env + └─ DockerComposeRenderer::render() ← writes docker-compose.yml +``` + +Alternatives explicitly considered: + +1. **`DockerComposeRenderer::render()`** — the lowest-level renderer, inside + infrastructure, writes only `docker-compose.yml` +2. **`DockerComposeProjectGenerator::render()`** — the infrastructure-layer + generator, writes both `docker-compose.yml` and `.env` into the build + directory +3. **`DockerComposeTemplateRenderingService::render()`** — the application-layer + service that orchestrates the generator +4. **`RenderDockerComposeTemplatesStep::execute()`** — the application-layer step + invoked by the command handler + +### Constraints + +1. **DDD layering**: Infrastructure operations (spawning a child process, invoking a + CLI tool) must not leak upward into the application layer. The application layer + should remain free of concrete system-call dependencies. +2. 
**Complete artifact**: `docker compose config` reads variable substitutions from + the `.env` file in the same directory. Validating before `.env` exists produces + false failures or incomplete results. +3. **Single responsibility alignment**: `DockerComposeRenderer::render()` is + responsible for template rendering only; embedding a shell-command invocation + there mixes two distinct concerns inside an already-focused collaborator. + +## Decision + +Call `validate_docker_compose_file()` inside +**`DockerComposeProjectGenerator::render()`**, immediately after both +`EnvRenderer` and `DockerComposeRenderer` have written their output files. + +```rust +// project_generator.rs — after both render() calls succeed: +validate_docker_compose_file(&build_compose_dir) + .map_err(|source| DockerComposeProjectGeneratorError::DockerComposeValidationFailed { source })?; +``` + +This is the earliest point in the pipeline where: + +- the **complete artifact** (both `docker-compose.yml` and `.env`) is available, + making the validation meaningful and accurate, and +- the code is **still inside the infrastructure layer**, respecting the DDD + boundary that keeps process-spawning concerns out of the application layer. + +### Why not `DockerComposeRenderer::render()`? + +`DockerComposeRenderer` writes only `docker-compose.yml`. At that point, `.env` +does not exist yet. `docker compose config` depends on `.env` for variable +substitution, so validation would be incomplete or outright wrong for +configurations that use `.env` values. + +Validating inside a renderer whose responsibility is purely template-to-file +conversion also violates single responsibility. + +### Why not the application service or step? + +`DockerComposeTemplateRenderingService` and +`RenderDockerComposeTemplatesStep` live in the application layer. 
Embedding a +call to `validate_docker_compose_file()` — which spawns a `docker` child process +— there would: + +- introduce an OS-level, infrastructure dependency into the application layer, + violating DDD layering rules, +- make the application layer harder to test in isolation without a real `docker` + binary present. + +The application layer should orchestrate business flows; it should not know how +to invoke `docker compose`. + +## Consequences + +### Positive + +- Validation runs at the earliest correct moment: once and only once, as soon as + the full artifact exists. +- The infrastructure layer encapsulates all `docker` CLI concerns; the application + layer remains free of process-spawning logic. +- `DockerComposeProjectGeneratorError` gains a typed `DockerComposeValidationFailed` + variant, surfacing the exact `docker compose config` output as an actionable + error message to the user at `configure` time. +- Future template changes that introduce structural errors are caught immediately + during development without needing a full deployment cycle. + +### Negative / Trade-offs + +- `DockerComposeProjectGenerator::render()` is now responsible for more than + pure rendering — it also validates. This is a deliberate widening of scope: + the generator is the "assembly" step that produces the complete artifact, and + post-assembly validation is a natural responsibility for the assembler. +- Validation requires `docker` to be installed on the machine running the + deployer. This is already a project dependency (Docker is listed as a required + tool), so it adds no new requirement but does make the dependency explicit in + the infrastructure code. 
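If stricter preflight checks are ever wanted, the Docker requirement noted in the trade-offs above can be probed cheaply before rendering. A minimal sketch — the `binary_available` helper is hypothetical and not part of the codebase; the deployer itself simply surfaces `CommandExecutionFailed` when `docker` is missing:

```rust
use std::process::Command;

/// Hypothetical preflight helper: returns true when `<name> --version`
/// can be spawned and exits successfully. Illustration only.
fn binary_available(name: &str) -> bool {
    Command::new(name)
        .arg("--version")
        .output()
        .map(|out| out.status.success())
        .unwrap_or(false)
}

fn main() {
    // A name that cannot resolve to any binary is reported as unavailable.
    assert!(!binary_available("no-such-binary-qxzv"));
}
```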
+ +## Alternatives Considered + +| Location | Rejected Reason | +| ------------------------------------------------- | ------------------------------------------------------------------------ | +| `DockerComposeRenderer::render()` | Artifact incomplete (no `.env`); single-responsibility violation | +| `DockerComposeTemplateRenderingService::render()` | Application layer calling infrastructure (process-spawning) violates DDD | +| `RenderDockerComposeTemplatesStep::execute()` | Same DDD violation; too high up the call stack for an infra concern | + +## Related Decisions + +- [Tera Minimal Templating Strategy](./tera-minimal-templating-strategy.md) — + philosophy behind how templates are structured and what they are responsible for +- [Actionable Error Messages](./actionable-error-messages.md) — + pattern used by `DockerComposeLocalValidationError::help()` to surface + clear, solution-oriented errors + +## References + +- Issue [#382](../issues/382-docker-compose-template-empty-networks-key.md) — + the concrete bug that motivated this validation step +- `src/infrastructure/templating/docker_compose/local_validator.rs` — + implementation of `validate_docker_compose_file()` +- `src/infrastructure/templating/docker_compose/template/renderer/project_generator.rs` — + call site within `DockerComposeProjectGenerator::render()` diff --git a/docs/issues/382-docker-compose-template-empty-networks-key.md b/docs/issues/382-docker-compose-template-empty-networks-key.md index cb9d4fad..8a5389e7 100644 --- a/docs/issues/382-docker-compose-template-empty-networks-key.md +++ b/docs/issues/382-docker-compose-template-empty-networks-key.md @@ -21,10 +21,10 @@ that catches this invalid template output.** ## Goals -- [ ] Fix the template to conditionally render the `networks:` key only when networks exist -- [ ] Ensure consistency across all service blocks in the template -- [ ] Add a unit test that renders the template with a minimal config (no optional services) and validates the 
output -- [ ] Validate rendered docker-compose.yml with `docker compose config --quiet` after template rendering to fail early +- [x] Fix the template to conditionally render the `networks:` key only when networks exist +- [x] Ensure consistency across all service blocks in the template +- [x] Add a unit test that renders the template with a minimal config (no optional services) and validates the output +- [x] Validate rendered docker-compose.yml with `docker compose config --quiet` after template rendering to fail early ## Root Cause Analysis @@ -94,14 +94,14 @@ Result: `tracker.networks` is an empty `Vec`, and the template renders an invali The tracker is the **only service that is always present AND can have zero networks**: -| Service | Always Present? | Can Have Zero Networks? | Affected? | -| ---------- | ----------------- | ------------------------ | ------------ | -| Tracker | Yes | Yes | **Yes** | -| Caddy | No (needs domain) | No (always has proxy) | No | -| Prometheus | No (needs domain) | No (always has metrics) | No | -| Grafana | No (needs domain) | No (always has metrics) | No | -| MySQL | No (needs mysql) | No (always has database) | No | -| Backup | Yes | Yes | No (guarded) | +| Service | Always Present? | Can Have Zero Networks? | Affected? | +| ---------- | ----------------- | ------------------------ | --------------- | +| Tracker | Yes | Yes | Fixed (guarded) | +| Caddy | No (needs domain) | No (always has proxy) | Fixed (guarded) | +| Prometheus | No (needs domain) | No (always has metrics) | Fixed (guarded) | +| Grafana | No (needs domain) | No (always has metrics) | Fixed (guarded) | +| MySQL | No (needs mysql) | No (always has database) | Fixed (guarded) | +| Backup | Yes | Yes | No (guarded) | The **backup** service correctly handles this case with a guard (line 238): @@ -200,9 +200,9 @@ validated until Docker Compose processes it. 
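The backup guard described above boils down to one rule: emit the `networks:` key only when the list is non-empty. An illustrative mirror of that rule in Rust (the real fix is the Tera guard in the template, not Rust code):

```rust
/// Illustrative mirror of the Tera guard
/// `{%- if tracker.networks | length > 0 %}`: the `networks:` key is
/// emitted only when at least one network is assigned, so a service with
/// zero networks contributes no key at all.
fn render_networks_block(networks: &[&str]) -> String {
    if networks.is_empty() {
        return String::new(); // no empty `networks:` key in the output
    }
    let mut block = String::from("    networks:\n");
    for network in networks {
        block.push_str("      - ");
        block.push_str(network);
        block.push('\n');
    }
    block
}

fn main() {
    // Zero networks: the key is omitted entirely.
    assert_eq!(render_networks_block(&[]), "");
    // One network: the key appears with a proper list item.
    assert!(render_networks_block(&["metrics_network"]).contains("- metrics_network"));
}
```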
### Phase 1: Fix the Template -- [ ] Add a conditional guard around the tracker's `networks:` block in - `templates/docker-compose/docker-compose.yml.tera` (lines 103-106), matching - the pattern already used by the backup service: +- [x] Add a conditional guard around the tracker's `networks:` block in + `templates/docker-compose/docker-compose.yml.tera`, matching the pattern + already used by the backup service: ```tera {%- if tracker.networks | length > 0 %} @@ -213,17 +213,19 @@ validated until Docker Compose processes it. {%- endif %} ``` -- [ ] Audit all other service blocks in the template for the same pattern. - Currently, `caddy` (line 80) also renders `networks:` unconditionally, but is - safe because caddy always has at least the proxy network when enabled. Add a - guard anyway for defensive coding. +- [x] Audit all other service blocks in the template for the same pattern. + Guards added for all five service blocks: `tracker`, `caddy`, `prometheus`, + `grafana`, and `mysql`. ### Phase 2: Add Test Coverage -- [ ] Add a unit test that renders the docker-compose template with a minimal - config (SQLite, no domains, no Prometheus) and verifies the output is valid YAML -- [ ] Add a unit test that verifies the rendered output does not contain empty - `networks:` keys for any service +- [x] Add a unit test that renders the docker-compose template with a minimal + config (SQLite, no domains, no Prometheus) and verifies the output does not + contain an empty `networks:` key + (`it_should_not_render_empty_networks_key_for_tracker_when_no_optional_services_are_configured`) +- [x] Add a unit test that verifies the rendered output contains the correct + `networks:` key when Prometheus is enabled + (`it_should_render_networks_key_for_tracker_when_prometheus_is_enabled`) ### Phase 3: Validate Rendered docker-compose.yml After Rendering @@ -248,14 +250,19 @@ descriptive error on failure. 
**Implementation approach:** -- [ ] After template rendering in the `configure` step (where docker-compose.yml +- [x] After template rendering in the `configure` step (where docker-compose.yml is generated), run `docker compose config --quiet` on the output file -- [ ] If validation fails, report a clear error to the user indicating the + (`validate_docker_compose_file()` in `local_validator.rs`, called from + `DockerComposeProjectGenerator::render()`) +- [x] If validation fails, report a clear error to the user indicating the rendered template is invalid, including the docker-compose error output -- [ ] This validation should run locally against the `build//docker-compose/` + (`DockerComposeValidationFailed` error variant with `help()` message) +- [x] This validation should run locally against the `build//docker-compose/` directory before files are uploaded to the VM -- [ ] Requires `docker` to be available on the machine running the deployer (it +- [x] Requires `docker` to be available on the machine running the deployer (it already is, since Docker is a project dependency) +- [x] Add a unit/integration test covering the validation step + (4 tests in `src/infrastructure/templating/docker_compose/local_validator.rs`) **Benefits:** @@ -266,22 +273,25 @@ descriptive error on failure. 
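Phase 3's "include the docker-compose error output" item reduces to choosing whichever stream actually carried the message, since Compose versions differ in whether errors land on stderr or stdout. A pure-function sketch of that fallback chain — the real logic lives in `local_validator.rs`; this standalone version is for illustration only:

```rust
/// Illustrative fallback chain for a finished `docker compose config` run:
/// prefer stderr, then stdout, then a synthesized exit-code message.
fn combined_message(stderr: &str, stdout: &str, exit_code: Option<i32>) -> String {
    let stderr = stderr.trim();
    let stdout = stdout.trim();
    if !stderr.is_empty() {
        stderr.to_string()
    } else if !stdout.is_empty() {
        stdout.to_string()
    } else {
        format!("docker compose exited with code {}", exit_code.unwrap_or(-1))
    }
}

fn main() {
    assert_eq!(combined_message("boom", "noise", Some(1)), "boom");
    assert_eq!(combined_message("", "warning", Some(1)), "warning");
    assert_eq!(
        combined_message("", "", Some(15)),
        "docker compose exited with code 15"
    );
}
```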
### Phase 4: Documentation -- [ ] Update the template documentation comments if needed +- [x] Update the template documentation comments if needed (no changes required; + guards are self-documenting) ## Acceptance Criteria **Quality Checks**: -- [ ] Pre-commit checks pass: `./scripts/pre-commit.sh` +- [x] Pre-commit checks pass: `./scripts/pre-commit.sh` **Task-Specific Criteria**: -- [ ] Minimal config (SQLite, no domains, no Prometheus) produces valid docker-compose.yml -- [ ] The `run` command succeeds with the configuration that previously failed -- [ ] Template guards are consistent across all service blocks -- [ ] Unit test covers the minimal-config rendering scenario -- [ ] Rendered docker-compose.yml is validated with `docker compose config --quiet` after template rendering -- [ ] Invalid templates produce a clear, actionable error at `configure` time (not `run` time) +- [x] Minimal config (SQLite, no domains, no Prometheus) produces valid docker-compose.yml +- [x] Template guards are consistent across all service blocks +- [x] Unit test covers the minimal-config rendering scenario +- [x] Rendered docker-compose.yml is validated with `docker compose config --quiet` after template rendering +- [x] Invalid templates produce a clear, actionable error at `configure` time (not `run` time) +- [x] The `run` command succeeds with the configuration that previously failed + - Verified with `envs/minimal-fix-test.json` (SQLite, no domains, no Prometheus): + `create → provision → configure → release → run → test` all passed ✅ ## Related Documentation @@ -289,13 +299,14 @@ descriptive error on failure. 
- Network derivation: `src/domain/tracker/config/mod.rs` (lines 560-585) - Network topology: `src/domain/topology/network.rs` - Template context builder: `src/infrastructure/templating/docker_compose/template/wrappers/docker_compose/context/builder.rs` +- Local validator: `src/infrastructure/templating/docker_compose/local_validator.rs` +- ADR: [Docker Compose Local Validation Placement](../decisions/docker-compose-local-validation-placement.md) ## Notes -- The `caddy` service block (line 80) also renders `networks:` unconditionally but - is currently safe because caddy is only included when domains are configured, and - caddy always has the proxy network. Adding a guard there too would be defensive - but recommended for consistency. +- All service `networks:` blocks (`tracker`, `caddy`, `prometheus`, `grafana`, + `mysql`) are now guarded with `{%- if .networks | length > 0 %}`, + matching the pre-existing `backup` guard. The template is now consistent. - This bug affects any "minimal" deployment configuration without optional services. As the project adds more minimal deployment examples, this will become a more common failure mode. diff --git a/src/infrastructure/templating/docker_compose/local_validator.rs b/src/infrastructure/templating/docker_compose/local_validator.rs new file mode 100644 index 00000000..061d7f21 --- /dev/null +++ b/src/infrastructure/templating/docker_compose/local_validator.rs @@ -0,0 +1,244 @@ +//! Local `docker compose config` validator +//! +//! This module validates a rendered `docker-compose.yml` by running +//! `docker compose config --quiet` locally **before** the file is uploaded to +//! the remote VM. Catching structural errors at render time provides a faster +//! failure loop and a more actionable error message than waiting for Docker +//! Compose to reject the file at `run` time. +//! +//! ## Why validate locally? +//! +//! `docker compose config --quiet` validates the file structure (YAML syntax, +//! 
required field types, etc.) without starting any services. It exits with +//! code 0 on success and a non-zero exit code with a descriptive message on +//! failure. This is a cheap check that adds no new dependency: `docker` is +//! already a project requirement. +//! +//! ## Example failure caught by this validator +//! +//! An empty `networks:` key (no list items) produces: +//! +//! ```console +//! $ docker compose config --quiet +//! services.tracker.networks must be a list +//! ``` +//! +//! ## Usage +//! +//! ```rust,ignore +//! use std::path::Path; +//! use torrust_tracker_deployer_lib::infrastructure::templating::docker_compose::local_validator::validate_docker_compose_file; +//! +//! validate_docker_compose_file(Path::new("build/my-env/docker-compose")).unwrap(); +//! ``` + +use std::path::Path; +use std::process::Command; + +use thiserror::Error; + +/// Errors that can occur when validating a rendered `docker-compose.yml` +#[derive(Error, Debug)] +pub enum DockerComposeLocalValidationError { + /// The `docker` binary could not be executed (not installed or not in PATH) + #[error( + "Failed to run 'docker compose config --quiet' — is Docker installed and in PATH?\n\ + Source: {source}" + )] + CommandExecutionFailed { + /// The underlying OS error + #[source] + source: std::io::Error, + }, + + /// `docker compose config` ran but reported validation errors + #[error( + "Rendered docker-compose.yml failed validation.\n\ + Fix the template or environment configuration and re-render.\n\ + Docker Compose error:\n{output}" + )] + InvalidDockerComposeFile { + /// Combined stdout+stderr from `docker compose config` + output: String, + }, +} + +impl DockerComposeLocalValidationError { + /// Returns a help message with steps the user can take to resolve the issue + #[must_use] + pub fn help(&self) -> String { + match self { + Self::CommandExecutionFailed { .. } => { + "Install Docker and ensure it is added to your PATH, then re-run the command."
+ .to_string() + } + Self::InvalidDockerComposeFile { .. } => { + "Review the docker-compose.yml in your build directory.\n\ + Common causes:\n\ + - A service has an empty `networks:` key (no networks assigned)\n\ + - A required field is missing or has the wrong type\n\ + Check the error output above for the exact location of the problem." + .to_string() + } + } + } +} + +/// Validates a rendered `docker-compose.yml` by running `docker compose config --quiet` +/// +/// The command is executed in `compose_dir` so that it picks up the +/// `docker-compose.yml` (and optionally `.env`) that live there. +/// +/// # Arguments +/// +/// * `compose_dir` — directory containing the rendered `docker-compose.yml` +/// +/// # Errors +/// +/// - [`DockerComposeLocalValidationError::CommandExecutionFailed`] if `docker` +/// cannot be executed (not installed, not in PATH, OS error, …) +/// - [`DockerComposeLocalValidationError::InvalidDockerComposeFile`] if the +/// file fails structural validation +/// +/// # Example +/// +/// ```rust,ignore +/// validate_docker_compose_file(Path::new("build/my-env/docker-compose"))?; +/// ``` +pub fn validate_docker_compose_file( + compose_dir: &Path, +) -> Result<(), DockerComposeLocalValidationError> { + tracing::debug!( + compose_dir = %compose_dir.display(), + "Validating rendered docker-compose.yml with 'docker compose config --quiet'" + ); + + let output = Command::new("docker") + .args(["compose", "config", "--quiet"]) + .current_dir(compose_dir) + .output() + .map_err(|source| DockerComposeLocalValidationError::CommandExecutionFailed { source })?; + + if output.status.success() { + tracing::debug!( + compose_dir = %compose_dir.display(), + "docker-compose.yml passed local validation" + ); + return Ok(()); + } + + // Collect any output produced by docker compose (errors may appear on + // stdout or stderr depending on the Docker version). 
+ let stderr = String::from_utf8_lossy(&output.stderr).trim().to_string(); + let stdout = String::from_utf8_lossy(&output.stdout).trim().to_string(); + let combined = match (stderr.is_empty(), stdout.is_empty()) { + (false, _) => stderr, + (true, false) => stdout, + (true, true) => format!( + "docker compose exited with code {}", + output.status.code().unwrap_or(-1) + ), + }; + + tracing::warn!( + compose_dir = %compose_dir.display(), + error = %combined, + "docker-compose.yml failed local validation" + ); + + Err(DockerComposeLocalValidationError::InvalidDockerComposeFile { output: combined }) +} + +#[cfg(test)] +mod tests { + use std::fs; + + use tempfile::TempDir; + + use super::*; + + fn write_compose_file(dir: &TempDir, content: &str) { + fs::write(dir.path().join("docker-compose.yml"), content) + .expect("Failed to write docker-compose.yml"); + } + + #[test] + fn it_should_pass_validation_for_a_valid_docker_compose_file() { + let dir = TempDir::new().expect("Failed to create temp dir"); + write_compose_file( + &dir, + r" +services: + tracker: + image: torrust/tracker:develop + container_name: tracker + networks: + - metrics_network + +networks: + metrics_network: + driver: bridge +", + ); + + let result = validate_docker_compose_file(dir.path()); + assert!( + result.is_ok(), + "Valid docker-compose.yml should pass validation; error: {result:?}" + ); + } + + #[test] + fn it_should_fail_validation_for_a_docker_compose_file_with_empty_networks_key() { + let dir = TempDir::new().expect("Failed to create temp dir"); + // This is the exact invalid output the template used to produce: + // an empty `networks:` key followed by another key. + write_compose_file( + &dir, + r" +services: + tracker: + image: torrust/tracker:develop + container_name: tracker + networks: + ports: + - '6969:6969/udp' +", + ); + + let result = validate_docker_compose_file(dir.path()); + assert!( + matches!( + result, + Err(DockerComposeLocalValidationError::InvalidDockerComposeFile { .. 
}) + ), + "Empty networks: key should fail validation; got: {result:?}" + ); + } + + #[test] + fn it_should_return_help_message_for_invalid_file_error() { + let error = DockerComposeLocalValidationError::InvalidDockerComposeFile { + output: "services.tracker.networks must be a list".to_string(), + }; + let help = error.help(); + assert!(!help.is_empty(), "Help message must not be empty"); + assert!( + help.contains("networks"), + "Help should mention the networks key" + ); + } + + #[test] + fn it_should_return_help_message_for_command_execution_failed() { + let error = DockerComposeLocalValidationError::CommandExecutionFailed { + source: std::io::Error::new(std::io::ErrorKind::NotFound, "not found"), + }; + let help = error.help(); + assert!(!help.is_empty(), "Help message must not be empty"); + assert!( + help.contains("Docker"), + "Help should mention Docker installation" + ); + } +} diff --git a/src/infrastructure/templating/docker_compose/mod.rs b/src/infrastructure/templating/docker_compose/mod.rs index 6f83359f..0aed1170 100644 --- a/src/infrastructure/templating/docker_compose/mod.rs +++ b/src/infrastructure/templating/docker_compose/mod.rs @@ -6,11 +6,13 @@ //! ## Components //! //! - `template` - Template rendering functionality for Docker Compose files +//! - `local_validator` - Local `docker compose config` validation after rendering //! //! Note: Unlike Ansible and Tofu, Docker Compose currently only uses static templates //! (no Tera variable substitution). If dynamic templates are needed in the future, //! the template module can be extended similar to Ansible. 
+pub mod local_validator; pub mod template; pub use template::{DockerComposeProjectGenerator, DockerComposeProjectGeneratorError}; diff --git a/src/infrastructure/templating/docker_compose/template/renderer/project_generator.rs b/src/infrastructure/templating/docker_compose/template/renderer/project_generator.rs index 872b0ec3..c0be0dbb 100644 --- a/src/infrastructure/templating/docker_compose/template/renderer/project_generator.rs +++ b/src/infrastructure/templating/docker_compose/template/renderer/project_generator.rs @@ -16,6 +16,9 @@ use std::sync::Arc; use thiserror::Error; use crate::domain::template::TemplateManager; +use crate::infrastructure::templating::docker_compose::local_validator::{ + validate_docker_compose_file, DockerComposeLocalValidationError, +}; use crate::infrastructure::templating::docker_compose::template::renderer::docker_compose::{ DockerComposeRenderer, DockerComposeRendererError, }; @@ -49,6 +52,13 @@ pub enum DockerComposeProjectGeneratorError { #[source] source: DockerComposeRendererError, }, + + /// Rendered docker-compose.yml failed `docker compose config --quiet` validation + #[error("Rendered docker-compose.yml failed local validation: {source}")] + DockerComposeValidationFailed { + #[source] + source: DockerComposeLocalValidationError, + }, } /// Renders Docker Compose templates to a build directory @@ -133,6 +143,14 @@ impl DockerComposeProjectGenerator { }, )?; + // Validate the rendered docker-compose.yml locally before uploading to the VM. + // This catches structural errors (e.g. empty `networks:` key) at render time, + // providing a fast failure with a clear error message rather than waiting for + // Docker Compose to reject the file at `run` time on the remote host. 
+ validate_docker_compose_file(&build_compose_dir).map_err(|source| { + DockerComposeProjectGeneratorError::DockerComposeValidationFailed { source } + })?; + tracing::debug!( template_type = "docker_compose", output_dir = %build_compose_dir.display(), diff --git a/src/infrastructure/templating/docker_compose/template/wrappers/docker_compose/template.rs b/src/infrastructure/templating/docker_compose/template/wrappers/docker_compose/template.rs index dd5e55fe..1d77a709 100644 --- a/src/infrastructure/templating/docker_compose/template/wrappers/docker_compose/template.rs +++ b/src/infrastructure/templating/docker_compose/template/wrappers/docker_compose/template.rs @@ -223,4 +223,98 @@ services: assert!(result.is_err()); } + + // Template snippet that mirrors the tracker service `networks:` block from the real template. + // Used to verify that the conditional guard prevents an empty `networks:` key. + // Only covers the environment → networks → ports transition, which is the exact + // sequence that previously produced invalid YAML. + const TRACKER_NETWORKS_TEMPLATE: &str = r#"services: + tracker: + environment: + - USER_ID=1000 +{%- if tracker.networks | length > 0 %} + networks: +{%- for network in tracker.networks %} + - {{ network }} +{%- endfor %} +{%- endif %} +{%- if tracker.ports | length > 0 %} + ports: +{%- for port in tracker.ports %} + - "{{ port.binding }}" +{%- endfor %} +{%- endif %} +"#; + + #[test] + fn it_should_not_render_empty_networks_key_for_tracker_when_no_optional_services_are_configured( + ) { + // Arrange: minimal config — SQLite, no Prometheus, no Caddy, no MySQL. + // tracker.networks will be an empty Vec under these conditions. 
+ let template_file = File::new( + "docker-compose.yml.tera", + TRACKER_NETWORKS_TEMPLATE.to_string(), + ) + .unwrap(); + + let tracker = test_tracker_config(); + let context = DockerComposeContext::builder(tracker).build(); + + // Act + let template = DockerComposeTemplate::new(&template_file, context).unwrap(); + + // Assert: an empty `networks:` key must not appear in the output. + // Before the fix this rendered: + // networks: + // ports: + // which is invalid YAML rejected by Docker Compose. + assert!( + !template.content.contains(" networks:"), + "Empty `networks:` key must not be rendered when tracker has no networks; \ + actual output:\n{}", + template.content + ); + } + + #[test] + fn it_should_render_networks_key_for_tracker_when_prometheus_is_enabled() { + use std::num::NonZeroU32; + + use crate::domain::prometheus::PrometheusConfig; + + // Arrange: Prometheus enabled → tracker gets the metrics network. + let template_file = File::new( + "docker-compose.yml.tera", + TRACKER_NETWORKS_TEMPLATE.to_string(), + ) + .unwrap(); + + let tracker = { + let domain_config = test_domain_tracker_config(); + let enabled = EnabledServices::from(&[crate::domain::topology::Service::Prometheus]); + TrackerServiceContext::from_domain_config(&domain_config, &enabled) + }; + let prometheus_config = + PrometheusConfig::new(NonZeroU32::new(15).expect("non-zero scrape interval")); + let context = DockerComposeContext::builder(tracker) + .with_prometheus(prometheus_config) + .build(); + + // Act + let template = DockerComposeTemplate::new(&template_file, context).unwrap(); + + // Assert: `networks:` must appear and list the metrics network. 
+ assert!( + template.content.contains(" networks:"), + "`networks:` key must be present when tracker has networks; \ + actual output:\n{}", + template.content + ); + assert!( + template.content.contains("metrics_network"), + "metrics_network must appear in tracker networks; \ + actual output:\n{}", + template.content + ); + } } diff --git a/templates/docker-compose/docker-compose.yml.tera b/templates/docker-compose/docker-compose.yml.tera index 9cbfcba1..450ec10a 100644 --- a/templates/docker-compose/docker-compose.yml.tera +++ b/templates/docker-compose/docker-compose.yml.tera @@ -70,10 +70,12 @@ services: - ./storage/caddy/etc/Caddyfile:/etc/caddy/Caddyfile:ro - ./storage/caddy/data:/data # TLS certificates (MUST persist!) - ./storage/caddy/config:/config +{%- if caddy.networks | length > 0 %} networks: {%- for network in caddy.networks %} - {{ network }} {%- endfor %} +{%- endif %} healthcheck: test: ["CMD", "caddy", "validate", "--config", "/etc/caddy/Caddyfile"] interval: 10s @@ -100,10 +102,12 @@ services: - TORRUST_TRACKER_CONFIG_OVERRIDE_CORE__DATABASE__DRIVER=${TORRUST_TRACKER_CONFIG_OVERRIDE_CORE__DATABASE__DRIVER} - TORRUST_TRACKER_CONFIG_TOML_PATH=${TORRUST_TRACKER_CONFIG_TOML_PATH} - TORRUST_TRACKER_CONFIG_OVERRIDE_HTTP_API__ACCESS_TOKENS__ADMIN=${TORRUST_TRACKER_CONFIG_OVERRIDE_HTTP_API__ACCESS_TOKENS__ADMIN} +{%- if tracker.networks | length > 0 %} networks: {%- for network in tracker.networks %} - {{ network }} {%- endfor %} +{%- endif %} {%- if tracker.ports | length > 0 %} ports: {%- for port in tracker.ports %} @@ -121,10 +125,12 @@ services: <<: *defaults image: prom/prometheus:v3.5.0 container_name: prometheus +{%- if prometheus.networks | length > 0 %} networks: {%- for network in prometheus.networks %} - {{ network }} {%- endfor %} +{%- endif %} {%- if prometheus.ports | length > 0 %} ports: {%- for port in prometheus.ports %} @@ -151,10 +157,12 @@ services: <<: *defaults image: grafana/grafana:12.3.1 container_name: grafana +{%- if 
grafana.networks | length > 0 %} networks: {%- for network in grafana.networks %} - {{ network }} {%- endfor %} +{%- endif %} {%- if grafana.ports | length > 0 %} ports: {%- for port in grafana.ports %} @@ -189,10 +197,12 @@ services: - MYSQL_DATABASE=${MYSQL_DATABASE} - MYSQL_USER=${MYSQL_USER} - MYSQL_PASSWORD=${MYSQL_PASSWORD} +{%- if mysql.networks | length > 0 %} networks: {%- for network in mysql.networks %} - {{ network }} {%- endfor %} +{%- endif %} # SECURITY: MySQL port is NOT exposed to the host/external network. # - Only the tracker container can access MySQL via Docker's internal database_network # - The healthcheck runs inside the container, so no external port is needed
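For review purposes, the invalid shape the unguarded template used to produce can also be spotted with a plain string scan. A heuristic sketch — illustration only, not the project's validator, which delegates to `docker compose config`:

```rust
/// Heuristic: an empty `networks:` key shows up as a line ending in
/// `networks:` that is NOT followed by a `- item` list entry. Good enough
/// for the service blocks in this template; `docker compose config`
/// remains the authoritative check.
fn has_empty_networks_key(rendered: &str) -> bool {
    let lines: Vec<&str> = rendered.lines().collect();
    lines.iter().enumerate().any(|(i, line)| {
        line.trim_end().ends_with("networks:")
            && lines
                .get(i + 1)
                .map_or(true, |next| !next.trim_start().starts_with("- "))
    })
}

fn main() {
    // The shape issue #382 produced: `networks:` immediately followed by
    // another mapping key.
    let broken = "services:\n  tracker:\n    networks:\n    ports:\n      - '6969:6969/udp'\n";
    // The guarded output: `networks:` followed by a list item.
    let fixed = "services:\n  tracker:\n    networks:\n      - metrics_network\n";
    assert!(has_empty_networks_key(broken));
    assert!(!has_empty_networks_key(fixed));
}
```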