
Torrust Tracker Deployer - Roadmap

GitHub Issue: #1 - Roadmap

This document outlines the development roadmap for the Torrust Tracker Deployer project. Each task is marked with:

  • [ ] - Not completed
  • [x] - Completed

Development Process

When starting work on a new feature:

  1. Create the feature documentation in the docs/features/ folder and commit it
  2. Open an issue on GitHub linking to the feature folder in the repository
  3. Add the new issue as a child issue of the main EPIC issue

Note: See docs/features/README.md for detailed conventions and process guide for creating new features.


Roadmap

1. Add scaffolding for main app

Epic Issue: #2 - Scaffolding for main app

  • 1.1 Setup logging - Issue #3 ✅ Completed
    • Setup logging for production CLI - PR #4
    • Remove ANSI codes from file logging - Issue #5, PR #7
  • 1.2 Create command torrust-tracker-deployer destroy to destroy an environment ✅ Completed
  • 1.3 Refactor: extract shared code between testing and production for app bootstrapping ✅ Completed
  • 1.4 Improve commands to use a better abstraction for the presentation layer ✅ Completed
    • User output architecture improvements implemented
    • Epic #102 completed
    • Message trait system, sink abstraction, and theme support added
    • Folder module structure with focused submodules
  • 1.5 Create command torrust-tracker-deployer create to create a new environment ✅ Completed
  • 1.6 Create command torrust-tracker-deployer provision to provision VM infrastructure (UI layer only) ✅ Completed - Issue #174
    • Note: The App layer ProvisionCommand is already implemented; this task focuses on the console subcommand interface
    • Implementation should call the existing ProvisionCommand business logic
    • Handle user input, validation, and output presentation
  • 1.7 Create command torrust-tracker-deployer configure to configure provisioned infrastructure (UI layer only) ✅ Completed - Issue #180
    • Note: The App layer ConfigureCommand is already implemented; this task focuses on the console subcommand interface
    • Implementation should call the existing ConfigureCommandHandler business logic
    • Handle user input, validation, and output presentation
    • Enables transition from "provisioned" to "configured" state via CLI
  • 1.8 Create command torrust-tracker-deployer test to verify deployment infrastructure (UI layer only) ✅ Completed - Issue #188
    • Note: The App layer TestCommandHandler is already implemented; this task focuses on the console subcommand interface
    • Implementation should call the existing TestCommandHandler business logic
    • Handle user input, validation, and output presentation
    • Enables verification of deployment state via CLI (cloud-init, Docker, Docker Compose)

Note: See docs/research/UX/ for detailed UX research that will be useful to implement the features in this section.

Future Enhancement: The torrust-tracker-deployer deploy porcelain command (intelligent orchestration of plumbing commands) will be implemented after the core plumbing commands are stable. See docs/features/hybrid-command-architecture/ for the complete specification.

2. Add new infrastructure provider: Hetzner

Epic Issue: #205 - Add Hetzner Provider Support ✅ Completed

  • 2.1 Add Hetzner provider support (Phase 1: Make LXD Explicit) ✅ Completed
    • 2.1.1 Add Provider enum and ProviderConfig types - Issue #206 ✅ Completed
    • 2.1.2 Update UserInputs to use ProviderConfig - Issue #207 ✅ Completed
    • 2.1.3 Update EnvironmentCreationConfig DTO - Issue #208 ✅ Completed
    • 2.1.4 Parameterize TofuTemplateRenderer by provider - Issue #212 ✅ Completed
    • 2.1.5 Update environment JSON files and E2E tests ✅ Completed (part of #212)
    • 2.1.6 Update user documentation - Issue #214 ✅ Completed
  • 2.2 Add Hetzner provider support (Phase 2: Add Hetzner) ✅ Completed
    • Hetzner OpenTofu templates implemented
    • Full deployment workflow tested with Hetzner Cloud

3. Continue adding more application commands

Note: These are internal app layer commands (like ProvisionCommand or ConfigureCommand), not console commands. The approach is to slice by functional services rather than deployment stages - we fully deploy a working stack from the beginning and incrementally add new services.

  • 3.1 Finish ConfigureCommand ✅ Completed - Epic #16

    • System security configuration added (automatic updates, UFW firewall)
    • Ansible templates refactored to centralized variables pattern
    • Tasks completed: #17, #18, #19
  • 3.2 Implement ReleaseCommand and RunCommand with vertical slices - Epic #216

    Strategy: Build incrementally with working deployments at each step. Each slice adds a new service to the docker-compose stack.

    • 3.2.1 Hello World slice (scaffolding) - Issue #217 ✅ Completed
      • Create release and run commands with minimal docker-compose template
      • Deploy and run a simple hello-world container to validate the full pipeline
    • 3.2.2 Torrust Tracker slice - Issue #220 ✅ Completed
      • Replace hello-world with Torrust Tracker service
      • Add tracker configuration template (start with hardcoded defaults, then progressively expose configuration options)
    • 3.2.3 MySQL slice - Issue #232 ✅ Completed
      • Add MySQL service to docker-compose stack
      • Allow user to choose between SQLite and MySQL in environment config
    • 3.2.4 Prometheus slice - Issue #238 ✅ Completed
      • Add Prometheus service for metrics collection
    • 3.2.5 Grafana slice - Issue #246 ✅ Completed
      • Add Grafana service for metrics visualization

    Notes:

    • Each slice delivers a working deployment
    • Configuration complexity grows incrementally (hardcoded → environment config → full flexibility)
    • Detailed implementation tasks will be defined in EPIC issues
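    As a rough illustration of the first slice, a minimal hello-world compose file might look like the sketch below. This is not the project's actual template; the image and layout are assumptions, shown only to convey how small the initial slice is:

    ```yaml
    # Illustrative sketch only: not the project's actual docker-compose template.
    services:
      hello-world:
        # Assumed image; the point of the slice is to validate the full
        # release/run pipeline end to end with a trivial container.
        image: hello-world:latest
    ```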

4. Create a Docker image for the deployer

  • 4.1 Create a Docker image for the deployer so it can be used without installing the dependencies (OpenTofu, Ansible, etc.) - Issue #264 ✅ Completed
    • Docker image published to Docker Hub
    • CI/CD workflow for automated builds
    • Security scanning with Trivy

5. Add extra console app commands

  • 5.1 torrust-tracker-deployer show - Display environment information and current state - Issue #241 ✅ Completed
  • 5.2 torrust-tracker-deployer test - Run application tests ✅ Completed
  • 5.3 torrust-tracker-deployer list - List environments or deployments - Issue #260 ✅ Completed

Note: The test console subcommand is already partially implemented. The show command displays stored environment data (read-only, no remote verification). A future status command may be added for service health checks.

6. Add HTTPS support ✅ COMPLETED

  • 6.1 Add HTTPS support with Caddy for all HTTP services - Issue #272 ✅ Completed
    • Implemented Caddy TLS termination proxy
    • Added HTTPS support for HTTP tracker
    • Added HTTPS support for tracker API
    • Added HTTPS support for Grafana
    • Research Complete: Issue #270 - Caddy evaluation successful, production deployment verified

7. Add backup support ✅ COMPLETED

Epic Issue: #309 - Add backup support

  • 7.1 Research database backup strategies - Issue #310 ✅ Completed
    • Investigated SQLite and MySQL backup approaches
    • Recommended maintenance-window hybrid approach (container + crontab)
    • Built and tested POC with 58 unit tests
    • Documented findings in docs/research/backup-strategies/
  • 7.2 Implement backup support - Issue #315 ✅ Completed
    • Added backup container templates (Dockerfile, backup.sh) - Published to Docker Hub
    • Added backup service to Docker Compose template with profile-based enablement
    • Extended environment configuration schema with backup settings
    • Deployed backup artifacts via Ansible playbooks
    • Installed crontab for scheduled maintenance-window backups (3 AM daily)
    • Supports: MySQL dumps, SQLite file copy, config archives
    • Backup retention cleanup (configurable days, default 7)
    • Note: Volume management is out of scope - user provides a mounted location
    • Implementation Details: Phase 1-4 completed (container, service integration, crontab scheduling, documentation)
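The scheduled maintenance-window backup could look roughly like the crontab entry below. This is illustrative only: the roadmap states "3 AM daily" and "profile-based enablement", but the working directory, profile name, and service name are assumptions.

```
# Illustrative crontab entry (assumed form): run the profile-enabled backup
# service daily at 3 AM.
0 3 * * * cd /path/to/stack && docker compose --profile backup run --rm backup
```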

8. Add levels of verbosity ✅ COMPLETED

Epic Issue: #362 - Add levels of verbosity

Add graduated verbosity levels (-v, -vv, -vvv) to the most complex and time-consuming commands to give users control over the amount of progress detail displayed during operations.

  • 8.1 Add verbosity levels to provision command ✅ Completed - PR #361
    • Four graduated levels: Normal (default), Verbose (-v), VeryVerbose (-vv), Debug (-vvv)
    • CommandProgressListener trait for application-layer progress reporting
    • Comprehensive user guide documentation
    • See docs/research/UX/ for UX research
  • 8.2 Add verbosity levels to configure command ✅ Completed - Issue #363, PR #364
    • Applied same verbosity pattern established in provision command
    • Shows configuration steps (Docker, Docker Compose, Security Updates, Firewall) at different detail levels
    • Reuses CommandProgressListener infrastructure
    • Comprehensive user guide documentation with live-tested examples
  • 8.3 Add verbosity levels to release command ✅ Completed - Issue #367, PR #368
    • Applied same verbosity pattern established in provision command
    • Shows all 7 service-specific release steps at different detail levels
    • Reuses CommandProgressListener infrastructure
    • Comprehensive user guide documentation with verified output examples

Note: Focus on the three most complex and time-consuming commands (provision, configure, release). Other commands may be enhanced with verbosity levels based on user demand.
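The mapping from repeated -v flags to the four graduated levels can be sketched as follows. This is illustrative Python, not the project's Rust implementation; the names are hypothetical, only the flag-to-level scheme is taken from the bullets above:

```python
# Illustrative sketch of the four-level verbosity scheme described above.
# The deployer itself is written in Rust; all names here are hypothetical.
from enum import IntEnum

class Verbosity(IntEnum):
    NORMAL = 0        # default (no flag)
    VERBOSE = 1       # -v
    VERY_VERBOSE = 2  # -vv
    DEBUG = 3         # -vvv

def verbosity_from_flag_count(count: int) -> Verbosity:
    # Clamp anything beyond -vvv to the maximum level.
    return Verbosity(min(count, Verbosity.DEBUG))

print(verbosity_from_flag_count(0).name)  # NORMAL
print(verbosity_from_flag_count(2).name)  # VERY_VERBOSE
print(verbosity_from_flag_count(5).name)  # DEBUG
```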

9. Extend deployer usability

Add new commands that let users take advantage of the deployer even if they do not want to use all of its functionality. This enables partial adoption of the tool.

These commands complete a trilogy of "lightweight" entry points:

  • register - For users with pre-provisioned instances
  • validate - For users who only want to validate a deployment configuration
  • render - For users who only want to build artifacts and handle deployment manually

This makes the deployer more versatile for different scenarios and more AI-agent friendly (dry-run commands provide feedback without side effects).

  • 9.1 Implement validate command (✅ Completed in 272847e3)
  • 9.2 Implement artifact generation command (✅ Completed in 37cbe240) - Issue #326
    • Command name: render - Generates deployment artifacts without provisioning infrastructure
    • Dual input modes: --env-name (from Created state environment) or --env-file (from config file)
    • Requires --instance-ip parameter for Ansible inventory generation
    • Generates all 8 service artifacts: OpenTofu, Ansible, Docker Compose, Tracker, Prometheus, Grafana, Caddy, Backup
    • Output to user-specified directory via --output-dir <PATH> parameter (prevents conflicts with provision artifacts)
    • No remote operations - purely local artifact generation
    • Use cases: Preview before provisioning, manual deployment workflows, configuration inspection
    • User documentation: docs/user-guide/commands/render.md
    • Manual testing guide: docs/e2e-testing/manual/render-verification.md
    • All templates always rendered (no conditional logic)
    • Specification: docs/issues/326-implement-artifact-generation-command.md

10. Improve usability (UX)

Minor changes to improve the output of some commands and overall user experience.

  • 10.1 Add DNS setup reminder in provision command output ✅ Completed
    • Display reminder when any service has a domain configured
    • Issue: #332
    • Implemented in PR #333
  • 10.2 Improve run command output with service URLs ✅ Completed
    • Show service URLs immediately after services start
    • Include hint about show command for full details
    • Issue: #334
    • Implemented in PR #337
  • 10.3 Add DNS resolution check to test command ✅ Completed
  • 10.4 Add purge command to remove local environment data - Issue #322 ✅ Completed
    • Removes data/{env}/ and build/{env}/ for destroyed environments
    • Allows reusing environment names after destruction
    • Users don't need to know internal storage details
    • Added confirmation prompt with --force flag
    • Added comprehensive user documentation

11. Improve AI agent experience

Add features and documentation that make operating the deployer with AI agents easier, more efficient, and more reliable, and less prone to hallucinations.

Context: We assume users will increasingly interact with the deployer indirectly via AI agents (GitHub Copilot, Cursor, etc.) rather than running commands directly. This section ensures AI agents have the best possible experience when working with the deployer.

  • 11.1 Consider using agentskills.io for AI agent capabilities

    • Agent Skills is an open format for extending AI agent capabilities with specialized knowledge and workflows
    • Developed by Anthropic, adopted by Claude Code, OpenAI Codex, Amp, and others
    • Provides progressive disclosure: metadata at startup, instructions on activation, resources on demand
    • Skills can bundle scripts, templates, and reference materials
    • Evaluate compatibility with current AGENTS.md approach
    • See issue: #274
    • See spec: docs/issues/274-consider-using-agentskills-io.md
  • 11.2 Add AI-discoverable documentation headers to template files ✅ Completed

    • Templates generate production config files (docker-compose, tracker.toml, Caddyfile, etc.)
    • Documentation is moving from templates to Rust wrapper types (published on docs.rs)
    • Problem: AI agents in production only see rendered output, not the source repo
    • Solution: Add standardized header to templates with links to repo, wrapper path, and docs.rs
    • Enables AI agents to find documentation even when working with deployed configs
    • See draft: docs/issues/drafts/add-ai-discoverable-documentation-headers-to-templates.md
  • 11.3 Provide configuration examples and questionnaire for AI agent guidance ✅ Completed

    • Problem: AI agents struggle with the many valid configuration combinations
    • Questionnaire template: structured decision tree to gather all required user information
    • Example dataset: real-world scenarios mapping requirements to validated configs
    • Covers: provider selection, database type, tracker protocols, HTTPS, monitoring, etc.
    • Benefits: few-shot learning for agents, reduced hallucination, training/RAG dataset
    • Can integrate with create-environment-config skill from task 11.1
    • See specification: Issue #339, docs/issues/339-provide-config-examples-and-questionnaire-for-ai-agents.md
    • Deliverables: Questionnaire template (494 lines), 15 validated example configs, comprehensive README (469 lines), integration test suite
    • Components: docs/ai-training/questionnaire.md, docs/ai-training/dataset/environment-configs/*.json, docs/ai-training/README.md, tests/validate_ai_training_examples.rs

12. Add JSON output format support ✅ COMPLETED

Epic Issue: #348 - Add JSON output format support

Add machine-readable JSON output format to all commands to improve automation and AI agent integration. Once all commands support JSON output, JSON will become the default format (replacing text) because the deployer is primarily used by AI agents and automation workflows. Use --output-format text for human-friendly terminal output.

Context: JSON output enables programmatic parsing, making it easier for scripts and AI agents to extract specific information (like IP addresses, service URLs, environment names) without parsing human-readable text. We are prioritizing AI UX over human UX because human UX is increasingly the use of AI agents.

Decision: JSON as Future Default Format

Once all commands have JSON output implemented (Phase 2 complete), the default output format will switch to JSON (--output-format json). Users who prefer human-readable output will need to explicitly pass --output-format text. This decision prioritizes AI agent UX since AI agents can more easily process structured JSON than natural language output. The switch cannot happen before all commands support JSON; otherwise the application would panic for commands with no JSON implementation.

Phase 1 - High-Value Commands (Completed):

  • 12.1 Add JSON output to create command ✅ Completed - Issue #349, PR #351

    • Rationale: Contains info about where to find more detailed information (paths, configuration references)
    • Structured output helps automation track environment artifacts
  • 12.2 Add JSON output to provision command ✅ Completed - Issue #352, PR #353

    • Rationale: Contains the provisioned instance IP address - critical for automation workflows
    • Easier to parse and extract IP than regex matching console output
    • Enables SSH automation, DNS updates, and CI/CD pipeline integration
  • 12.3 Add JSON output to show command ✅ Completed - Issue #355, PR #356

    • Rationale: Contains the instance IP and comprehensive environment state
    • Structured format makes it simple to query specific fields programmatically
  • 12.4 Add JSON output to run command ✅ Completed - Issue #357, PR #358

    • Rationale: Contains the list of enabled services and their URLs
    • Allows automation to verify which services are running and how to access them
  • 12.5 Add JSON output to list command ✅ Completed - Issue #359, PR #360

    • Rationale: Shows full environment names without truncation, enabling unambiguous identification
    • Table format truncates long names - JSON provides complete information
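As a sketch of the intended automation workflow, a script consuming the provision command's JSON output could extract the instance IP as shown below. The exact output schema is not specified in this roadmap, so the field names here are assumptions:

```python
# Illustrative only: the real output schema may differ. "environment" and
# "instance_ip" are assumed field names standing in for the actual schema.
import json

# Stand-in for output captured from: torrust-tracker-deployer provision ...
sample_output = '{"environment": "demo", "instance_ip": "203.0.113.10"}'

data = json.loads(sample_output)
instance_ip = data["instance_ip"]
print(instance_ip)  # 203.0.113.10
```

Compared with regex matching on console text, this kind of structured lookup fails loudly (a KeyError) when the schema changes, which is easier for scripts and AI agents to handle.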

Phase 2 - Remaining Commands:

  • 12.6 Add JSON output to configure command ✅ Completed - Issue #371

    • Rationale: Contains the list of installed/configured components (Docker, security updates, firewall) and their status
    • Allows automation to verify successful configuration before proceeding to release
  • 12.7 Add JSON output to release command ✅ Completed - Issue #377

    • Rationale: Contains the list of deployed artifacts and their remote paths on the VM
    • Allows automation to verify all files were correctly transferred before running
  • 12.8 Add JSON output to test command ✅ Completed - Issue #380, PR #383

    • Rationale: Contains test results for each verified component (cloud-init, Docker, Docker Compose)
    • Structured pass/fail per component enables CI/CD pipelines to gate on specific checks
  • 12.9 Add JSON output to destroy command ✅ Completed - Issue #386, PR #387

    • Rationale: Confirms which resources were destroyed and the final environment state
    • Enables automation to verify cleanup before proceeding
  • 12.10 Add JSON output to validate command ✅ Completed - Issue #390, PR #391

    • Rationale: Contains validation results per field with error details
    • Allows automation and AI agents to surface configuration errors programmatically
  • 12.11 Add JSON output to render command ✅ Completed - Issue #392, PR #393

    • Rationale: Contains the list of generated artifact paths and their destinations
    • Enables automation to locate and process the generated files
  • 12.12 Add JSON output to purge command ✅ Completed - Issue #394, PR #395

    • Rationale: Lists the directories and files that were removed
    • Provides a machine-readable record of what was cleaned up
  • 12.13 Add JSON output to register command ✅ Completed - Issue #396, PR #397

    • Rationale: Confirms the registered instance details (IP, SSH port, state transition)
    • Enables automation to verify successful registration before proceeding to configure
  • 12.14 Switch default output format from text to json ✅ Completed - Issue #398, PR #399

    • Changed #[default] in OutputFormat enum from Text to Json
    • Updated default_value = "text" to default_value = "json" in CLI args
    • Updated all doctests referencing the old default
    • Added per-command it_should_produce_json_by_default E2E tests for all commands
    • Rationale: Prioritize AI agent UX — JSON is easier for agents to parse than human-readable text

Deferred Features

Features considered valuable but out of scope for v1. We want to release the first version and wait for user acceptance before investing more time. These can be revisited based on user feedback.

  • MCP (Model Context Protocol) server
    • Rationale: Native AI integration without shell commands
    • Notes: Would let AI agents call the deployer as MCP tools directly
  • Structured error format for AI agents
    • Rationale: Already improving errors in section 10
    • Notes: Could formalize with error codes and fix suggestions in JSON
  • Dry-run mode for all commands
    • Rationale: Hard to implement; alternatives already exist
    • Notes: The validate and render commands cover preview needs; LXD local environments allow safe testing before deploying to production

Notes

  • This roadmap will be linked to an EPIC issue on GitHub for tracking progress
  • Each major feature should have corresponding documentation in docs/features/ before implementation begins