@llbbl commented Jun 20, 2025

Set up Python Testing Infrastructure

Summary

This PR establishes a comprehensive testing infrastructure for the SeFa (Closed-Form Factorization of Latent Semantics in GANs) Python project. The setup provides a solid foundation for writing and maintaining tests, with Poetry for dependency management, pytest for testing, and coverage reporting.

Changes Made

Package Management

  • Poetry Setup: Created pyproject.toml with Poetry configuration as the package manager
  • Dependencies: Migrated core dependencies (torch, numpy, streamlit, tqdm) to Poetry
  • Testing Dependencies: Added pytest, pytest-cov, and pytest-mock as development dependencies
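
  The resulting dependency layout in pyproject.toml would look roughly like this (section names follow Poetry conventions; version constraints are illustrative, not taken from the PR):

  ```toml
  [tool.poetry]
  name = "sefa"
  version = "0.1.0"
  description = "Closed-Form Factorization of Latent Semantics in GANs"

  [tool.poetry.dependencies]
  python = "^3.8"
  torch = "*"
  numpy = "*"
  streamlit = "*"
  tqdm = "*"

  [tool.poetry.group.dev.dependencies]
  pytest = "*"
  pytest-cov = "*"
  pytest-mock = "*"
  ```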

Testing Infrastructure

  • Directory Structure:
    tests/
    ├── __init__.py
    ├── conftest.py          # Shared fixtures and configuration
    ├── test_setup_validation.py  # Validation tests
    ├── unit/
    │   └── __init__.py
    └── integration/
        └── __init__.py
    

Configuration

  • pytest Configuration in pyproject.toml:

    • Test discovery patterns for test_*.py and *_test.py files
    • Coverage reporting with HTML and XML outputs
    • Custom markers: @pytest.mark.unit, @pytest.mark.integration, @pytest.mark.slow
    • Strict mode with verbose output and short traceback format
  • Coverage Settings:

    • Source directories: sefa, utils, interface
    • Excluded: test files, __init__.py, SessionState.py
    • Report formats: terminal, HTML, XML
    • Currently set to 0% threshold (to be increased as tests are added)
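
  Taken together, the pytest and coverage settings described above would correspond to pyproject.toml sections along these lines (exact option values are assumptions based on the bullet points, not copied from the PR):

  ```toml
  [tool.pytest.ini_options]
  testpaths = ["tests"]
  python_files = ["test_*.py", "*_test.py"]
  addopts = "--strict-markers -v --tb=short --cov --cov-report=term --cov-report=html --cov-report=xml"
  markers = [
      "unit: unit tests",
      "integration: integration tests",
      "slow: slow-running tests",
  ]

  [tool.coverage.run]
  source = ["sefa", "utils", "interface"]
  omit = ["tests/*", "*/__init__.py", "*/SessionState.py"]

  [tool.coverage.report]
  fail_under = 0  # to be raised as test coverage grows
  ```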

Test Fixtures (conftest.py)

  • temp_dir: Temporary directory for test files
  • mock_model_config: Mock configuration for model testing
  • sample_latent_codes: Sample PyTorch tensors for latent code testing
  • sample_numpy_array: Sample numpy arrays for data testing
  • mock_generator_output: Mock generator model outputs
  • device: PyTorch device fixture (CPU by default)
  • test_data_dir: Temporary data directory with test files
  • reset_torch_seed: Auto-fixture to ensure reproducible tests
  • mock_streamlit_session: Mock Streamlit session state
  • capture_stdout: Stdout capture for testing print statements
  • Custom markers for GPU tests and model-dependent tests
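
  A couple of the fixtures above could be sketched as follows (the return values are illustrative placeholders; the actual definitions live in conftest.py):

  ```python
  import shutil
  import tempfile

  import pytest


  def _make_mock_model_config():
      # Illustrative keys/values only; the real config is project-specific.
      return {"gan_type": "stylegan", "resolution": 256, "latent_dim": 512}


  @pytest.fixture
  def mock_model_config():
      # Mock configuration for model testing.
      return _make_mock_model_config()


  @pytest.fixture
  def temp_dir():
      # Yield a scratch directory and clean it up after the test finishes.
      path = tempfile.mkdtemp()
      yield path
      shutil.rmtree(path, ignore_errors=True)
  ```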

Other Updates

  • Updated .gitignore with:
    • Testing artifacts: .pytest_cache/, .coverage, htmlcov/, coverage.xml
    • Claude settings: .claude/*
    • Virtual environments and IDE files
    • Note: poetry.lock is intentionally NOT ignored

Running Tests

After pulling this branch, install dependencies and run tests:

```shell
# Install Poetry if not already installed
curl -sSL https://install.python-poetry.org | python3 -

# Install project dependencies
poetry install

# Run tests (both commands work)
poetry run test
poetry run tests

# Run specific test categories
poetry run pytest -m unit        # Run only unit tests
poetry run pytest -m integration # Run only integration tests
poetry run pytest -m "not slow"  # Skip slow tests
```
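
For `poetry run test` and `poetry run tests` to work, pyproject.toml needs script entry points; the wiring is presumably something like the following (the module path and function name here are assumptions, not the PR's actual entry point):

```toml
[tool.poetry.scripts]
test = "scripts.run_tests:main"
tests = "scripts.run_tests:main"
```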

Validation

The setup includes validation tests (test_setup_validation.py) that verify:

  • All dependencies are properly installed
  • The testing infrastructure is correctly configured
  • All fixtures work as expected
  • Directory structure is properly created
  • Coverage reporting is functional

All 20 validation tests pass.

Next Steps

With this infrastructure in place, developers can now:

  1. Write unit tests for individual model components in tests/unit/
  2. Create integration tests for end-to-end workflows in tests/integration/
  3. Add more specialized fixtures as needed
  4. Gradually increase the coverage threshold as more tests are added
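
For instance, a first unit test in tests/unit/ might look like this (`normalize_codes` is an invented placeholder, not an actual project function):

```python
import numpy as np
import pytest


def normalize_codes(codes):
    # Placeholder: L2-normalize each latent code (row-wise).
    return codes / np.linalg.norm(codes, axis=1, keepdims=True)


@pytest.mark.unit
def test_normalized_codes_have_unit_norm():
    codes = np.ones((4, 512), dtype=np.float32)
    norms = np.linalg.norm(normalize_codes(codes), axis=1)
    assert np.allclose(norms, 1.0)
```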

Notes

  • The project uses Poetry for modern Python dependency management
  • Coverage is configured but currently excludes model files from the report (can be adjusted later)
  • The testing setup follows pytest best practices with clear separation of unit and integration tests
  • All standard pytest features and plugins are available for use

- Added Poetry configuration with testing dependencies (pytest, pytest-cov, pytest-mock)
- Created test directory structure with unit/integration subdirectories
- Configured pytest with coverage reporting, custom markers, and test discovery
- Added comprehensive test fixtures in conftest.py for common testing scenarios
- Created validation tests to ensure testing infrastructure works correctly
- Updated .gitignore with testing artifacts and Claude settings
- Configured test commands accessible via `poetry run test` or `poetry run tests`