This guide explains the structured logging implementation in the Torrust Tracker Deployer project, which uses a hierarchical, span-based approach to structured logging.
The main CLI application (src/main.rs → src/app.rs) initializes logging at startup with user-configurable options. This provides a consistent logging infrastructure for all operations.
The main application uses LoggingBuilder with CLI arguments for configuration:
```rust
// src/app.rs
use clap::Parser;
use tracing::info;
use torrust_tracker_deployer_lib::logging::{LogFormat, LogOutput, LoggingBuilder};

pub fn run() {
    let cli = Cli::parse();

    // Initialize logging FIRST before any other logic
    LoggingBuilder::new(&cli.log_dir)
        .with_file_format(cli.log_file_format)
        .with_stderr_format(cli.log_stderr_format)
        .with_output(cli.log_output)
        .init();

    // Log startup with context
    info!(
        app = "torrust-tracker-deployer",
        version = env!("CARGO_PKG_VERSION"),
        log_dir = %cli.log_dir.display(),
        log_file_format = ?cli.log_file_format,
        log_stderr_format = ?cli.log_stderr_format,
        log_output = ?cli.log_output,
        "Application started"
    );

    // ... application logic ...

    info!("Application finished");
}
```

Users can configure logging via CLI arguments, with independent format control for file and stderr outputs:
```bash
# Default (production): file-only output, compact format for files, pretty for stderr
torrust-tracker-deployer

# Development: stderr output, pretty format for both
torrust-tracker-deployer --log-output file-and-stderr

# Custom log directory
torrust-tracker-deployer --log-dir /var/log/deployer

# JSON format for files (log aggregation), pretty for stderr (debugging)
torrust-tracker-deployer --log-file-format json --log-stderr-format pretty --log-output file-and-stderr

# Compact format for both file and stderr
torrust-tracker-deployer --log-file-format compact --log-stderr-format compact --log-output file-and-stderr
```

ANSI Code Handling:
- File output: ANSI color codes are automatically disabled for clean, parseable logs
- Stderr output: ANSI color codes are automatically enabled for colored terminal output
This ensures log files can be easily processed with standard text tools (grep, awk, sed) while maintaining colored output for real-time terminal viewing.
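To illustrate what ANSI disabling buys you, here is a minimal std-only sketch of removing ANSI CSI color sequences from a line. `strip_ansi` is a hypothetical helper for illustration only; in the project, file logs are simply written without ANSI codes in the first place, so no post-processing is needed.

```rust
/// Remove ANSI CSI escape sequences (e.g. "\x1b[32m") from a log line.
/// Hypothetical helper for illustration; real log files are written
/// without ANSI codes in the first place.
fn strip_ansi(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    let mut chars = input.chars().peekable();
    while let Some(c) = chars.next() {
        if c == '\x1b' && chars.peek() == Some(&'[') {
            chars.next(); // consume '['
            // Skip parameter bytes until the final letter that ends the sequence
            while let Some(&n) = chars.peek() {
                chars.next();
                if n.is_ascii_alphabetic() {
                    break;
                }
            }
        } else {
            out.push(c);
        }
    }
    out
}

fn main() {
    let colored = "\x1b[32mINFO\x1b[0m provision_command: started";
    assert_eq!(strip_ansi(colored), "INFO provision_command: started");
    println!("{}", strip_ansi(colored));
}
```

Because the file layer never emits these sequences, `grep INFO app.log` matches the plain text directly, with no escape bytes in the way.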
See User Guide: Logging for complete user documentation.
When using logging::init_json() or LogFormat::Json, logs are output in JSON format suitable for log aggregation:
```json
{
  "timestamp": "2024-09-16T17:00:00.000Z",
  "level": "INFO",
  "fields": {
    "command_type": "provision"
  },
  "target": "torrust_tracker_deploy::commands::provision",
  "span": {
    "name": "provision_command"
  },
  "spans": [{ "name": "provision_command", "command_type": "provision" }]
}
```

The logging uses hierarchical spans to mirror our three-level architecture.
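As a quick illustration of why this flat JSON shape is convenient for aggregation, even naive string handling can pull a field out of a line. This is a sketch only — a real pipeline would use `jq` or `serde_json` — and `json_str_field` is a hypothetical helper, not project code.

```rust
/// Naively extract a top-level string value from a JSON log line.
/// Breaks on escaped quotes or keys embedded in values; illustration only.
fn json_str_field(line: &str, key: &str) -> Option<String> {
    let needle = format!("\"{key}\"");
    let start = line.find(&needle)? + needle.len();
    let rest = &line[start..];
    // Skip the colon and whitespace, then read the quoted value
    let open = rest.find('"')? + 1;
    let close = open + rest[open..].find('"')?;
    Some(rest[open..close].to_string())
}

fn main() {
    let line = r#"{"timestamp": "2024-09-16T17:00:00.000Z", "level": "INFO", "target": "torrust_tracker_deploy::commands::provision"}"#;
    assert_eq!(json_str_field(line, "level").as_deref(), Some("INFO"));
    println!("{:?}", json_str_field(line, "target"));
}
```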
## Architecture Overview
Our structured logging follows a three-level hierarchy that mirrors the application architecture:
```text
Level 1: Commands (Top-level orchestration)
├── Level 2: Steps (Mid-level execution units)
│       └── Level 3: Remote Actions (Leaf-level operations)
```

For development, the logging module provides a direct initializer with pretty-printed output:

```rust
use torrust_tracker_deploy::logging;

fn main() {
    // Initialize pretty-printed logging for development
    logging::init();

    // Your application code here...
}
```

Other output formats have dedicated initializers:

```rust
use torrust_tracker_deploy::logging;
use torrust_tracker_deploy::logging::LogFormat;

fn main() {
    // For production (JSON output)
    logging::init_json();

    // For compact output
    logging::init_compact();

    // Using the format helper
    logging::init_with_format(&LogFormat::Json);
}
```

For CLI applications that want to support multiple formats:
```rust
use torrust_tracker_deployer_lib::logging::{self, LogFormat, LogOutput, LoggingBuilder};
use clap::Parser;
use std::path::Path;

#[derive(Parser)]
struct Cli {
    #[arg(long, default_value = "compact")]
    log_file_format: LogFormat,

    #[arg(long, default_value = "pretty")]
    log_stderr_format: LogFormat,

    #[arg(long, default_value = "file-only")]
    log_output: LogOutput,
}

fn main() {
    let cli = Cli::parse();

    // Use LoggingBuilder for independent format control
    LoggingBuilder::new(Path::new("./data/logs"))
        .with_file_format(cli.log_file_format)
        .with_stderr_format(cli.log_stderr_format)
        .with_output(cli.log_output)
        .init();

    // Your application code...
}
```

Backward Compatibility:
If you want to apply the same format to both file and stderr (old behavior), use .with_format():
```rust
// Apply same format to both outputs (backward compatible)
LoggingBuilder::new(Path::new("./data/logs"))
    .with_format(cli.log_format)
    .with_output(cli.log_output)
    .init();
```

When you execute operations, you'll see nested spans in your logs:
```text
2024-09-16T17:00:00.000Z TRACE provision_command: Starting infrastructure provisioning
2024-09-16T17:00:00.100Z TRACE provision_command:render_opentofu_templates: Rendering OpenTofu templates
2024-09-16T17:00:00.200Z TRACE provision_command:initialize_infrastructure: Initializing infrastructure
2024-09-16T17:00:00.300Z TRACE provision_command:plan_infrastructure: Planning infrastructure deployment
2024-09-16T17:00:00.400Z TRACE provision_command:apply_infrastructure: Applying infrastructure changes
2024-09-16T17:00:00.500Z TRACE provision_command:get_instance_info: Retrieving instance information
2024-09-16T17:00:00.600Z INFO  provision_command: Infrastructure provisioned successfully

2024-09-16T17:01:00.000Z TRACE configure_command: Starting system configuration
2024-09-16T17:01:00.100Z TRACE configure_command:render_ansible_templates: Rendering Ansible templates
2024-09-16T17:01:00.200Z TRACE configure_command:wait_ssh_connectivity: Waiting for SSH connectivity
2024-09-16T17:01:00.300Z TRACE configure_command:wait_cloud_init: Waiting for cloud-init completion
2024-09-16T17:01:00.400Z TRACE configure_command:wait_cloud_init:cloud_init_validation: Validating cloud-init status
2024-09-16T17:01:00.500Z TRACE configure_command:install_docker: Installing Docker
2024-09-16T17:01:00.600Z TRACE configure_command:validate_docker_installation: Validating Docker
2024-09-16T17:01:00.700Z TRACE configure_command:validate_docker_installation:docker_validation: Checking Docker version
2024-09-16T17:01:00.800Z INFO  configure_command: System configured successfully
```
- `command_type`: The type of command being executed. Values: `"provision"`, `"configure"`, `"test"`
- `step_type`: The category of step being executed. Values: `"infrastructure"`, `"rendering"`, `"connectivity"`, `"system"`, `"software"`, `"validation"`
- `operation`: The specific operation being performed. Examples: `"init"`, `"plan"`, `"apply"`, `"info"`
- `template_type`: For rendering steps. Values: `"opentofu"`, `"ansible"`
- `component`: For software/validation steps. Values: `"docker"`, `"docker_compose"`, `"cloud_init"`
- `action_type`: The type of remote action. Values: `"validation"`
- `component`: The component being acted upon. Values: `"cloud_init"`, `"docker"`, `"docker_compose"`
- `server_ip`: The target server IP address
The application supports multi-environment deployments (e.g., e2e-full, e2e-config, e2e-provision). The environment field identifies which environment a log entry belongs to, which is especially important when debugging multiple environments concurrently.
All commands that operate on environments must include the environment field in their #[instrument] macro:
```rust
#[instrument(
    name = "provision_command",
    skip_all,
    fields(
        command_type = "provision",
        environment = %environment.name() // ✅ Required
    )
)]
pub async fn execute(
    &self,
    environment: Environment<Created>,
) -> Result<Environment<Provisioned>, ProvisionCommandError> {
    // Command implementation...
}
```

Commands that require the environment field:
- `ProvisionCommand` ✅
- `ConfigureCommand` ✅
- `TestCommand` ✅
- `DestroyCommand` (when implemented)
- `CreateCommand` (when implemented - use the name being created)
Commands that do NOT require environment field:
- `CheckCommand` - Generic system checks, no specific environment
- Internal tools - Linters, formatters, etc.
Add environment field to important logs in the application layer where environment context is available and adds value:
```rust
// Command-level logs
info!(
    command = "provision",
    environment = %environment.name(), // ✅ Include for visibility
    "Starting complete infrastructure provisioning workflow"
);

// Step-level logs with significant operations
info!(
    step = "install_docker",
    environment = %environment.name(), // ✅ Useful for debugging
    "Installing Docker via Ansible"
);
```

Good candidates for the environment field:
- Command start/completion messages
- Step-level operations where environment provides context
- Error logs where environment helps identify the issue
- State transition logs
Infrastructure layer components should remain environment-agnostic to maintain proper abstraction:
```rust
// Infrastructure adapter - NO environment field
impl TofuClient {
    pub fn apply(&self, working_dir: &Path) -> Result<Output> {
        info!(
            working_dir = %working_dir.display(),
            // ❌ NO environment field - adapter is generic
            "Applying infrastructure changes"
        );
    }
}

// SSH client - NO environment field
impl SshClient {
    pub fn execute(&self, host: &str, command: &str) -> Result<Output> {
        info!(
            host = %host,
            command = %command,
            // ❌ NO environment field - client is generic
            "Executing SSH command"
        );
    }
}
```

Never include environment in:
- External tool adapters (`TofuClient`, `AnsibleClient`, `SshClient`)
- Infrastructure clients and wrappers
- Shared utilities (SSH, file operations, etc.)
- Generic helpers that don't operate on environments
```text
┌─────────────────────────────────────────────────────────┐
│ Application Layer (Environment-Aware)                   │
│ - Commands: provision, configure, test                  │
│ - Steps: infrastructure setup, software installation    │
│ ✅ Include environment field in spans and key logs      │
├─────────────────────────────────────────────────────────┤
│ Domain Layer (Business Logic)                           │
│ - Environment, State, Repository abstractions           │
│ ✅ Include environment field where it makes sense       │
├─────────────────────────────────────────────────────────┤
│ Infrastructure Layer (Environment-Agnostic)             │
│ - Adapters: TofuClient, AnsibleClient, SshClient        │
│ - External tool wrappers                                │
│ ❌ NEVER include environment field                      │
└─────────────────────────────────────────────────────────┘
```
```rust
// src/application/commands/provision.rs
#[instrument(
    name = "provision_command",
    skip_all,
    fields(
        command_type = "provision",
        environment = %environment.name() // ✅ Correct
    )
)]
pub async fn execute(
    &self,
    environment: Environment<Created>,
) -> Result<Environment<Provisioned>, ProvisionCommandError> {
    info!(
        command = "provision",
        environment = %environment.name(), // ✅ Explicit for visibility
        "Starting complete infrastructure provisioning workflow"
    );
    // ...
}
```

```rust
// src/infrastructure/external_tools/tofu/adapter/client.rs
impl OpenTofuClient {
    pub fn apply(&self, working_dir: &Path, auto_approve: bool) -> Result<Output> {
        info!(
            working_dir = %working_dir.display(),
            auto_approve = %auto_approve,
            // ✅ No environment - stays generic
            "Applying infrastructure changes"
        );
        // ...
    }
}
```

```rust
// src/infrastructure/external_tools/tofu/adapter/client.rs
impl OpenTofuClient {
    pub fn apply(
        &self,
        working_dir: &Path,
        environment: &str // ❌ Wrong - breaks abstraction
    ) -> Result<Output> {
        info!(
            working_dir = %working_dir.display(),
            environment = %environment, // ❌ Wrong - adapter should be generic
            "Applying infrastructure changes"
        );
        // ...
    }
}
```

Remember: Logs within command spans automatically inherit environment context. You don't need to add the environment field to every log if the span hierarchy provides it:
```text
2025-10-08T09:35:40.731158Z INFO torrust_tracker_deploy::application::steps::software::docker: Installing Docker via Ansible
    at src/application/steps/software/docker.rs:62
    in torrust_tracker_deploy::application::steps::software::docker::install_docker with step_type: "software", component: "docker"
    in torrust_tracker_deploy::application::commands::configure::configure_command with command_type: "configure", environment: e2e-full
```
When to be explicit:
- High-level command logs (start/completion)
- Error logs where environment is critical for diagnosis
- Logs that might be viewed outside span context (JSON aggregation)
When to rely on span inheritance:
- Nested step logs within command execution
- Infrastructure layer operations (no environment at all)
- Debug/trace logs where span context is sufficient
Command handlers (src/application/command_handlers/) follow consistent logging patterns to ensure observability across the application. All handlers use minimal logging focusing on command lifecycle events (start/completion) rather than step-by-step progress.
All command handlers should log:
- Start of execution: When the command begins
- Completion/Error: When the command finishes successfully or encounters an error
Command handlers should NOT log individual step execution - the step functions themselves handle their own logging through span instrumentation.
All command handler logs must include:
- `command`: The command name (e.g., `"provision"`, `"configure"`, `"destroy"`, `"create"`)
- `environment`: The environment name (using `%environment.name()` for Display formatting)
```rust
// src/application/command_handlers/provision/handler.rs
#[instrument(
    name = "provision_command",
    skip_all,
    fields(
        command_type = "provision",
        environment = %environment.name()
    )
)]
pub async fn execute(
    &self,
    environment: Environment<Created>,
) -> Result<Environment<Provisioned>, ProvisionCommandError> {
    // ✅ Log at start with structured fields
    info!(
        command = "provision",
        environment = %environment.name(),
        "Starting complete infrastructure provisioning workflow"
    );

    // Execute steps (they handle their own logging)
    let result = self.execute_steps(environment).await;

    match result {
        Ok(provisioned) => {
            // ✅ Log successful completion
            info!(
                command = "provision",
                environment = %provisioned.name(),
                "Infrastructure provisioning completed successfully"
            );
            Ok(provisioned)
        }
        Err(e) => {
            // Error logging is handled by error types and propagation
            Err(e)
        }
    }
}
```

```rust
// src/application/command_handlers/configure/handler.rs
#[instrument(
    name = "configure_command",
    skip_all,
    fields(
        command_type = "configure",
        environment = %environment.name()
    )
)]
pub async fn execute(
    &self,
    environment: Environment<Provisioned>,
) -> Result<Environment<Configured>, ConfigureCommandError> {
    // ✅ Log at start
    info!(
        command = "configure",
        environment = %environment.name(),
        "Starting complete infrastructure configuration workflow"
    );

    // Execute steps
    let result = self.execute_steps(environment).await;

    match result {
        Ok(configured) => {
            // ✅ Log successful completion
            info!(
                command = "configure",
                environment = %configured.name(),
                "Infrastructure configuration completed successfully"
            );
            Ok(configured)
        }
        Err(e) => Err(e),
    }
}
```

```rust
// src/application/command_handlers/destroy/handler.rs
#[instrument(
    name = "destroy_command",
    skip_all,
    fields(
        command_type = "destroy",
        environment = %environment.name()
    )
)]
pub async fn execute(
    &self,
    environment: Environment<AnyState>,
) -> Result<Environment<Destroyed>, DestroyCommandError> {
    // ✅ Log at start
    info!(
        command = "destroy",
        environment = %environment.name(),
        "Starting complete infrastructure destruction workflow"
    );

    // Check if already destroyed (special case)
    if let EnvironmentState::Destroyed(_) = environment.state() {
        info!(
            command = "destroy",
            environment = %environment.name(),
            "Environment is already destroyed, skipping destruction"
        );
        return Ok(environment.into_destroyed());
    }

    // Execute steps
    let result = self.execute_steps(environment).await;

    match result {
        Ok(destroyed) => {
            // ✅ Log successful completion
            info!(
                command = "destroy",
                environment = %destroyed.name(),
                "Infrastructure destruction completed successfully"
            );
            Ok(destroyed)
        }
        Err(e) => Err(e),
    }
}
```

```rust
// src/application/command_handlers/create/handler.rs
#[instrument(
    name = "create_command",
    skip_all,
    fields(
        command_type = "create",
        environment = %name
    )
)]
pub async fn execute(
    &self,
    name: EnvironmentName,
    config: Config,
) -> Result<Environment<Created>, CreateCommandError> {
    // ✅ Log at start
    info!(
        command = "create",
        environment = %name,
        "Starting environment creation workflow"
    );

    // Execute steps
    let result = self.execute_steps(name, config).await;

    match result {
        Ok(created) => {
            // ✅ Log successful completion
            info!(
                command = "create",
                environment = %created.name(),
                "Environment created successfully"
            );
            Ok(created)
        }
        Err(e) => Err(e),
    }
}
```

- Minimal Logging: Only log command lifecycle (start/completion), not individual steps
- Consistent Fields: Always use `command` and `environment` fields with the same naming
- Environment Field Format: Use `%environment.name()` for Display formatting (not `environment_name`)
- Rely on Span Hierarchy: Step-level logging is handled by step functions within their own spans
- Let Errors Propagate: Error details are logged by error types and context builders, not in handlers
- Match Block Pattern: Use explicit `match` with logging in the success arm, not `.map()` or `.and_then()`
```rust
// Don't log individual steps - they have their own logging
pub async fn execute(&self, environment: Environment<Created>) -> Result<...> {
    info!("Starting provision");

    // ❌ Don't log each step
    info!("Converting configuration");
    let config = self.convert_config();
    info!("Checking uniqueness");
    self.check_uniqueness();
    info!("Creating entity");
    let entity = self.create_entity();
    info!("Persisting");
    self.persist(entity);

    info!("Provision complete");
}
```

```rust
// Don't use environment_name - use environment
info!(
    command = "create",
    environment_name = %name, // ❌ Wrong field name
    "Starting environment creation workflow"
);
```

```rust
// Always include command and environment fields
info!("Starting infrastructure provisioning"); // ❌ Missing structured context
```

```rust
// Within a command span, don't repeat environment in every log
pub async fn execute(&self, environment: Environment<Created>) -> Result<...> {
    info!(command = "provision", environment = %environment.name(), "Starting");

    // ❌ Redundant - span already has environment context
    debug!(environment = %environment.name(), "Step 1");
    debug!(environment = %environment.name(), "Step 2");
    debug!(environment = %environment.name(), "Step 3");
}
```

- Observability: Start/completion logs provide clear boundaries for command execution
- Consistency: All handlers follow the same pattern, making logs predictable
- Low Noise: Minimal logging reduces clutter while maintaining visibility
- Separation of Concerns: Step functions handle their own logging, handlers orchestrate
- Span Hierarchy: Tracing's span system provides context without redundant logging
Control logging behavior with environment variables:
```bash
# Show all trace-level logs for development
export RUST_LOG=torrust_tracker_deploy=trace

# Production logging (info and above)
export RUST_LOG=torrust_tracker_deploy=info

# Only errors and warnings
export RUST_LOG=torrust_tracker_deploy=warn

# Detailed logging for specific modules
export RUST_LOG=torrust_tracker_deploy::commands=trace,torrust_tracker_deploy::steps=debug
```

- TRACE: Detailed flow information (span entry/exit)
- DEBUG: Detailed information for debugging
- INFO: General information about application progress
- WARN: Warning messages about potential issues
- ERROR: Error messages about failures
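The RUST_LOG module directives shown above are matched by path prefix, with the most specific directive winning. The following is a simplified, std-only sketch of that selection logic, assuming only `target=level` directives; the real `EnvFilter` grammar in `tracing-subscriber` is richer (bare levels, span filters, etc.), and `level_for` is a hypothetical helper.

```rust
/// Pick the level for a log target from comma-separated
/// "module::path=level" directives; the longest matching prefix wins.
/// Simplified sketch of tracing-subscriber's EnvFilter prefix matching.
fn level_for(rust_log: &str, target: &str) -> Option<String> {
    rust_log
        .split(',')
        .filter_map(|d| d.split_once('='))
        .filter(|(prefix, _)| {
            target == *prefix || target.starts_with(&format!("{prefix}::"))
        })
        .max_by_key(|(prefix, _)| prefix.len())
        .map(|(_, level)| level.to_string())
}

fn main() {
    let cfg = "torrust_tracker_deploy=info,torrust_tracker_deploy::commands=trace";
    // The more specific directive wins for the commands module
    assert_eq!(
        level_for(cfg, "torrust_tracker_deploy::commands::provision").as_deref(),
        Some("trace")
    );
    // Other modules fall back to the crate-wide directive
    assert_eq!(
        level_for(cfg, "torrust_tracker_deploy::steps").as_deref(),
        Some("info")
    );
    // Unlisted crates get no directive at all
    assert_eq!(level_for(cfg, "other_crate"), None);
    println!("ok");
}
```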
The hierarchical spans automatically provide context. You don't need to repeat information that's already captured in the span fields.
```rust
// Good: Span fields provide context
#[instrument(name = "docker_validation", fields(component = "docker"))]
pub async fn validate_docker() {
    info!("Starting validation"); // Context is implicit from span
}

// Avoid: Redundant context information
#[instrument(name = "docker_validation", fields(component = "docker"))]
pub async fn validate_docker() {
    info!(component = "docker", "Starting Docker validation"); // Redundant
}
```

Use environment-specific filtering to reduce noise in production:
```bash
# Development: Show everything
export RUST_LOG=trace

# Production: Focus on important events (info already includes warn and error)
export RUST_LOG=torrust_tracker_deploy=info
```

To see the full span hierarchy, use trace-level logging:
```bash
RUST_LOG=torrust_tracker_deploy=trace cargo run --bin your-binary
```

Filter logs for specific operations:
```bash
# Only infrastructure operations
RUST_LOG=torrust_tracker_deploy=info cargo run | grep infrastructure

# Only validation operations
RUST_LOG=torrust_tracker_deploy=trace cargo run | grep validation
```

The span timings help identify slow operations:
```text
2024-09-16T17:00:00.000Z TRACE provision_command: entered
2024-09-16T17:00:05.234Z TRACE provision_command: exited // 5.234 seconds total
```
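A span's duration can be recovered from an entered/exited pair by subtracting the timestamps. Here is a minimal sketch, assuming same-day spans and the timestamp format shown above; `time_ms` is a hypothetical helper, not project code.

```rust
/// Parse the time-of-day portion of "2024-09-16T17:00:05.234Z"
/// into milliseconds since midnight. Assumes same-day spans.
fn time_ms(ts: &str) -> Option<u64> {
    // Take the part after 'T' and drop the trailing 'Z'
    let time = ts.split('T').nth(1)?.trim_end_matches('Z');
    let (hms, millis) = time.split_once('.')?;
    let mut parts = hms.split(':');
    let h: u64 = parts.next()?.parse().ok()?;
    let m: u64 = parts.next()?.parse().ok()?;
    let s: u64 = parts.next()?.parse().ok()?;
    let ms: u64 = millis.parse().ok()?;
    Some(((h * 60 + m) * 60 + s) * 1000 + ms)
}

fn main() {
    let entered = "2024-09-16T17:00:00.000Z";
    let exited = "2024-09-16T17:00:05.234Z";
    let duration_ms = time_ms(exited).unwrap() - time_ms(entered).unwrap();
    assert_eq!(duration_ms, 5234); // matches the 5.234 seconds in the log above
    println!("provision_command took {duration_ms} ms");
}
```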