
Conversation

@engmohamedsalah
Contributor

Summary

Fixes #42890

SAM-HQ integration tests were failing due to non-deterministic positional embeddings. The positional_embedding parameter in SamHQPositionalEmbedding is:

  1. Randomly initialized using torch.randn() (line 418 in modeling_sam_hq.py)
  2. Excluded from checkpoint loading via _keys_to_ignore_on_load_missing (line 1236)

This meant every test run generated different random values, causing flaky test failures.
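
For illustration only, here is a minimal, self-contained sketch of that failure mode; the class, argument names, and shapes are simplified stand-ins, not the actual code in modeling_sam_hq.py:

```python
import torch
from torch import nn


class ToyPositionalEmbedding(nn.Module):
    """Simplified stand-in for SamHQPositionalEmbedding (names and shapes are illustrative)."""

    def __init__(self, num_pos_feats=128, scale=1.0):
        super().__init__()
        # Drawn fresh from torch.randn() every time the module is instantiated.
        self.register_buffer("positional_embedding", scale * torch.randn((2, num_pos_feats)))


# Because "positional_embedding" is excluded from checkpoint loading, it is never
# restored from pretrained weights, so without a fixed seed every test run sees
# different values:
torch.manual_seed(0)                                 # seeded -> reproducible
a = ToyPositionalEmbedding().positional_embedding
torch.manual_seed(0)
b = ToyPositionalEmbedding().positional_embedding
print(torch.equal(a, b))                             # True

c = ToyPositionalEmbedding().positional_embedding    # no seed reset -> different values
print(torch.equal(a, c))                             # False (with overwhelming probability)
```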

Solution

Added set_seed(0) to the setUp() method of the SamHQModelIntegrationTest class to ensure deterministic random initialization across all integration tests.

This approach:

  • KISS: a single line in the setUp() method
  • DRY: One fix applies to all 14 integration test methods
  • Clean: Follows existing transformers testing patterns
  • Minimal: Only 8 lines added (including comments)

Changes

  • Added the import from transformers.trainer_utils import set_seed
  • Added a setUp() method with set_seed(0) to the SamHQModelIntegrationTest class (see the sketch after this list)
  • Added clear comments explaining why the seed is needed
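
A hedged sketch of what the test-side addition might look like; the decorators and the placeholder test method are typical of transformers integration tests and may not match the actual file exactly:

```python
import unittest

from transformers.testing_utils import require_torch, slow
from transformers.trainer_utils import set_seed


@slow
@require_torch
class SamHQModelIntegrationTest(unittest.TestCase):
    def setUp(self):
        super().setUp()
        # SAM-HQ's positional_embedding is randomly initialized with torch.randn()
        # and excluded from checkpoint loading via _keys_to_ignore_on_load_missing,
        # so fix the seed to make every integration test deterministic.
        set_seed(0)

    def test_dummy_inference(self):
        # Placeholder for the real integration tests; each of them now runs with
        # the deterministic initialization set up above.
        ...
```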

Test Plan

  • Python syntax check passed
  • Integration tests will be run by CI

SAM-HQ's positional_embedding is randomly initialized with torch.randn()
and excluded from checkpoint loading via _keys_to_ignore_on_load_missing.
This causes non-deterministic test results. Fixed by setting seed in
setUp() to ensure reproducible integration tests.

Fixes huggingface#42890
@github-actions
Contributor

[For maintainers] Suggested jobs to run (before merge)

run-slow: sam_hq

