This guide explains how to extend and modify E2E tests when adding new features or making changes.
For OpenTofu, LXD, or cloud-init modifications:

- Update infrastructure lifecycle tests in `src/bin/e2e_infrastructure_lifecycle_tests.rs`
- Add validation methods for new infrastructure components
- Test locally: `cargo run --bin e2e-infrastructure-lifecycle-tests`
- Verify CI passes on `.github/workflows/test-e2e-infrastructure.yml`
```rust
// In e2e_infrastructure_lifecycle_tests.rs
async fn validate_new_cloud_init_feature(
    ssh_client: &SshClient,
) -> Result<(), Box<dyn std::error::Error>> {
    // Add your validation logic
    let output = ssh_client.execute("check-new-feature")?;
    assert!(output.contains("expected-result"));
    Ok(())
}
```

For Ansible playbooks or software installation modifications:
- Update deployment workflow tests in `src/bin/e2e_deployment_workflow_tests.rs`
- Add validation methods for new software components
- Update the Docker image in `docker/provisioned-instance/` if needed
- Test locally: `cargo run --bin e2e-deployment-workflow-tests`
- Verify CI passes on `.github/workflows/test-e2e-deployment.yml`
```rust
// In e2e_deployment_workflow_tests.rs
async fn validate_new_software(
    ssh_client: &SshClient,
) -> Result<(), Box<dyn std::error::Error>> {
    // Validate software is installed
    let version_output = ssh_client.execute("new-software --version")?;
    assert!(version_output.contains("v1.2.3"));

    // Validate software is configured correctly
    let config_output = ssh_client.execute("cat /etc/new-software/config")?;
    assert!(config_output.contains("expected-config"));
    Ok(())
}
```

For comprehensive changes affecting multiple components:
- Test with the complete workflow suite: `cargo run --bin e2e-complete-workflow-tests`
- Verify both infrastructure and deployment suites pass independently
- Update documentation to reflect changes
- Consider a split approach: can the change be tested in isolated suites?
When adding or modifying E2E tests, follow these principles.

Infrastructure lifecycle tests:

- Focus: Infrastructure readiness and basic VM setup
- Network Dependencies: Minimize network-heavy operations inside the VM
- Validation: Verify infrastructure state, not application behavior
- Cleanup: Always ensure proper resource cleanup
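The cleanup principle can be sketched as follows. This is a minimal illustration, not the suite's real code: `provision`, `validate_instance`, and `destroy` are hypothetical stand-ins for the actual lifecycle steps.

```rust
// Hypothetical stand-in: creates the test VM and returns its name.
fn provision() -> Result<String, Box<dyn std::error::Error>> {
    Ok("vm-under-test".to_string())
}

// Hypothetical stand-in: checks the infrastructure state of the VM.
fn validate_instance(vm: &str) -> Result<(), Box<dyn std::error::Error>> {
    assert!(!vm.is_empty());
    Ok(())
}

// Hypothetical stand-in: tears the test VM down.
fn destroy(vm: &str) {
    println!("destroying {vm}");
}

fn run_with_cleanup() -> Result<(), Box<dyn std::error::Error>> {
    let vm = provision()?;
    // Capture the validation result instead of returning early with `?`,
    // so the teardown below runs on both the success and failure paths.
    let result = validate_instance(&vm);
    destroy(&vm);
    result
}
```

The key point is that validation errors are captured rather than propagated immediately, so `destroy` always runs before the result is returned.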
Deployment workflow tests:

- Focus: Software functionality and deployment workflow
- Network Access: Reliable network access via Docker containers
- Validation: Verify application installation, configuration, and operation
- State: Sequential commands build on previous state
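The sequential-state principle can be sketched with a minimal in-memory stand-in for the suite's `SshClient`. The mock type, the command strings, and the output format below are illustrative assumptions, not the project's real API.

```rust
// Mock stand-in for the real SshClient: records commands instead of
// running them over SSH.
struct SshClient {
    history: Vec<String>,
}

impl SshClient {
    fn execute(&mut self, cmd: &str) -> Result<String, Box<dyn std::error::Error>> {
        self.history.push(cmd.to_string());
        Ok(format!("ok: {cmd}"))
    }
}

fn validate_sequential_deploy(
    client: &mut SshClient,
) -> Result<(), Box<dyn std::error::Error>> {
    // Each command assumes the previous one succeeded on the same
    // instance: install first, then configure, then check the result.
    client.execute("install-new-software")?;
    client.execute("write-config /etc/new-software/config")?;
    let output = client.execute("new-software --version")?;
    assert!(output.contains("new-software"));
    Ok(())
}
```

Because each step depends on the one before it, a failure early in the sequence should abort the test rather than let later steps run against a half-configured instance; the `?` operator gives exactly that behavior.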
Complete workflow tests:

- Focus: Comprehensive validation for development workflows
- Environment: Local only (not CI-compatible)
- Use Cases: Integration testing, debugging complex issues
- Coverage: Full end-to-end deployment pipeline

Across all suites:

- Each suite should be runnable independently
- No shared state between test suites
- Each test should clean up after itself
- Tests should not depend on specific execution order
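One way to avoid shared state between suites is to derive a unique resource name per test run. The sketch below uses only the standard library; the naming scheme is an assumption for illustration, not the project's actual convention.

```rust
use std::process;
use std::time::{SystemTime, UNIX_EPOCH};

// Build a name like "e2e-infra-12345-1700000000000" that is unique per
// suite, per process, and per run, so concurrent or repeated runs never
// collide on the same VM or container.
fn unique_instance_name(suite: &str) -> String {
    let millis = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is before the Unix epoch")
        .as_millis();
    format!("e2e-{suite}-{}-{millis}", process::id())
}
```

Naming resources this way also makes leftover resources easy to attribute to the suite and run that leaked them, which helps the cleanup principle above.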
When adding new E2E tests or modifying existing ones:

- Update relevant documentation files:
  - `test-suites.md` - if adding new test suites or changing validation
  - `running-tests.md` - if adding new prerequisites or commands
  - `troubleshooting.md` - if introducing new common issues
  - `architecture.md` - if changing the testing architecture
  - `README.md` - if changing the quick start or overview
- Update cross-references to related documentation
- Add examples for new features or complex changes
For related project guidelines, see:
- Contributing Guide - General contribution guidelines
- Testing Conventions - Unit testing standards
- Error Handling - Error handling patterns
- Logging Guide - Logging best practices
Before submitting changes to E2E tests:
- All relevant test suites pass locally
- CI tests pass on GitHub Actions
- Documentation is updated
- Code follows project conventions
- Commit messages follow conventional commits
- Pre-commit checks pass (`./scripts/pre-commit.sh`)