Goal: use frozen example outputs so that the test suite can catch an otherwise silent 0.01 shift in a log2 ratio.
Add a test that runs the core analytical pipeline (fix → segment → call) on the existing test fixtures and compares .cnr and .cns outputs against frozen example files checked into the repo.
Coordinates and integer fields must match exactly; float fields (log2, weight, etc.) must match within a small tolerance (e.g. 1e-6). The test should fail with a clear diff if any value changes.
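A minimal sketch of such a comparison helper, assuming tab-separated .cnr/.cns files with named columns (the column names `log2` and `weight` are taken from the text above; everything else here is illustrative, not existing CNVkit code):

```python
import csv
import math

def compare_tabular(expected_path, actual_path,
                    float_cols=("log2", "weight"), tol=1e-6):
    """Compare two tab-separated output files (e.g. .cnr or .cns).

    Columns named in `float_cols` must agree within `tol`; all other
    columns (coordinates, integer fields) must match exactly. Returns a
    list of human-readable diffs; an empty list means the files match.
    """
    with open(expected_path) as fe, open(actual_path) as fa:
        exp_rows = list(csv.DictReader(fe, delimiter="\t"))
        act_rows = list(csv.DictReader(fa, delimiter="\t"))
    if len(exp_rows) != len(act_rows):
        return [f"row count: expected {len(exp_rows)}, got {len(act_rows)}"]
    diffs = []
    for i, (exp, act) in enumerate(zip(exp_rows, act_rows)):
        for col, e in exp.items():
            a = act.get(col)
            if col in float_cols:
                # Float fields: compare within an absolute tolerance.
                if a is None or not math.isclose(float(e), float(a), abs_tol=tol):
                    diffs.append(f"row {i}, {col}: expected {e}, got {a}")
            elif e != a:
                # Everything else must match exactly.
                diffs.append(f"row {i}, {col}: expected {e!r}, got {a!r}")
    return diffs
```

In a test, `assert not compare_tabular(...)` then fails with the collected diffs in the assertion message, giving the "clear diff" behavior described above.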
Include a documented procedure (e.g. a pytest flag or script) for regenerating the frozen references when a numerical change is intentional; the existing test/Makefile shows how to do this. For example, the sample .cnn and Picard CSV files in the repo are binned but otherwise unanalyzed coverage depths; the reference.cnn, sample.cnr, and sample.cns files generated from these exercise the critical path of analytical functions, so we'd expect them to change if and only if the analytical method is intentionally changed.
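One way to wire up the regeneration procedure is a small helper that either compares against the frozen file or overwrites it, switched by an environment variable or a pytest option. This is a sketch only; the `REGENERATE` variable, function names, and file paths are assumptions, not an existing CNVkit or test/Makefile interface:

```python
import os
import shutil

def check_or_regenerate(expected_path, actual_path, compare, regenerate=None):
    """Compare a fresh output against its frozen reference, or refresh it.

    `compare` is any callable taking (expected_path, actual_path) and
    returning a list of diffs (empty list = match). When `regenerate` is
    true (or the hypothetical REGENERATE env var is set, e.g. via
    `REGENERATE=1 pytest`), the frozen reference is overwritten with the
    current output instead of being compared.
    """
    if regenerate is None:
        regenerate = bool(os.environ.get("REGENERATE"))
    if regenerate:
        # Intentional numerical change: freeze the new output.
        shutil.copyfile(actual_path, expected_path)
        return []
    return compare(expected_path, actual_path)
```

The same switch could instead be exposed as a `--regenerate` command-line option registered in a conftest.py via `pytest_addoption`; either way, the point is that refreshing the frozen files is a deliberate, documented step rather than an ad-hoc copy.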