
Conversation

enourbakhsh (Contributor)

No description provided.

@enourbakhsh force-pushed the tickets/DM-48936 branch 9 times, most recently from fc29f51 to beddd77 (July 19, 2025 06:28)
Comment on lines +316 to +321
expected1 = np.array([-0.11130552, -0.46470672, -0.90978149, -0.55470151])
expected2 = np.array([0.91131311, -2.30220238, -1.22224883, 0.69578745])
try:
    self.assertTrue(np.allclose(table["base_PsfFlux_apCorr"], expected1))
except AssertionError:
    self.assertTrue(np.allclose(table["base_PsfFlux_apCorr"], expected2))
@enourbakhsh (Contributor Author) commented Jul 19, 2025

It’s a mystery to me why we get two different outcomes for the aperture correction in this specific test; it always alternates between just those two. I ran a quick test and found that there’s some randomness beyond the seeds we set in _makeTestData. That said, the image arrays themselves appear to be identical between runs. If I save a test exposure and reuse it, the problem goes away, i.e., I get the same base_PsfFlux_apCorr every time I run the unit test. Any idea why we don't get a unique value here?

Contributor

My best guess would be that there's a seeded random generator being called N times in the first case and N+1 in the second, though I'm not sure what would cause that. This should be a follow-up ticket.
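
For illustration only, here is a minimal sketch (hypothetical, not the test code under review) of how a single extra draw from a shared seeded generator shifts every subsequent value and yields exactly two alternating outcomes:

import numpy as np

def downstream_values(extra_draw):
    rng = np.random.RandomState(565)  # seeded once, shared by the whole code path
    if extra_draw:
        rng.standard_normal()  # one extra consumption of the random stream
    return rng.standard_normal(4)  # the values the assertion would then check

downstream_values(False)  # always one fixed outcome ("expected1"-like)
downstream_values(True)   # always the other fixed outcome ("expected2"-like)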

@taranu (Contributor) left a comment

Looks good. See comments/suggestions. If you don't figure out what's causing the one alternating result, please file another ticket for it.

# the driver, see the next test.)
driver = SingleBandMeasurementDriverTask()

# Access the configuration through the driver instance to modify it.
Contributor

Aren't a lot of these settings defaults? Do you want these specific values tested rather than whatever the defaults are (which might change in the future)? I can see the value in having one test that makes minimal changes to the defaults and another that tries to change as many as possible for maximal coverage.
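
As a sketch of that two-test split, mirroring the "access the configuration through the driver instance" pattern above (the overridden field names are placeholders, not necessarily the driver's real options, and exposure stands in for whatever inputs the existing test passes):

# Minimal-change test: rely entirely on the defaults.
driver = SingleBandMeasurementDriverTask()
result = driver.run(exposure)  # same inputs as the existing test

# Maximal-coverage test: override as many settings as practical.
driver = SingleBandMeasurementDriverTask()
driver.config.doApCorr = False   # placeholder field name
driver.config.doDeblend = False  # placeholder field name
result = driver.run(exposure)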

bgValues = result.catalog.get(self.localBgAlgName + "_instFlux")
bgStdevs = result.catalog.get(self.localBgAlgName + "_instFluxErr")
for bgValue, bgStdev in zip(bgValues, bgStdevs):
    self.assertFloatsAlmostEqual(bgValue, 0.0, atol=3.0)
Contributor

Is this in nJy or the pre-calibrated flux which is ~0.02 nJy? If it's nJy then 3.0 seems quite large for atol.

)
)

# Re-run the driver with deblending disabled.
Contributor

Some of these should probably just be separate tests so you can run them in parallel, if desired. I suppose it's useful to run at least a pair serially to ensure there isn't some unintended state saved between runs, but not every time.
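
For example, the re-runs could become separate test methods sharing setUp, with one combined method kept to catch state carried over between consecutive runs (the helper name below is hypothetical):

def testRunWithDeblending(self):
    self._runDriverAndCheck(deblend=True)

def testRunWithoutDeblending(self):
    self._runDriverAndCheck(deblend=False)

def testBackToBackRuns(self):
    # One serial pairing to catch unintended state shared between runs.
    self._runDriverAndCheck(deblend=True)
    self._runDriverAndCheck(deblend=False)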

result = driver.run(mExposure=mExposure, mDeconvolved=mDeconvolved, bands=bands, refBand="i")

# Run again without mDeconvolved to check that the driver generates
# its own deconvolved exposures internally.
Contributor

Does it persist them or just discard them after generating? If the former, should they be checked?


# Some key assertions to verify the result.
self.assertIsInstance(result, pipeBase.Struct, "Expected a Struct result")
self.assertTrue(hasattr(result, "catalogs"), "Expected 'catalogs' in result")
Contributor

It would be nice to just loop over a list of expected output names, but I suppose they're only specified in the docs.
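
A sketch of that loop; the list itself remains an assumption since the Struct members are only enumerated in the documentation:

expected_outputs = ["catalogs"]  # extend with other documented Struct members
for name in expected_outputs:
    self.assertTrue(hasattr(result, name), f"Expected '{name}' in result")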

self.assertIsInstance(result.catalogs[band], afwTable.SourceCatalog, "Incorrect catalog type")
cat = result.catalogs[band]
self.assertEqual(cat.schema, driver.schema, "Catalog schema mismatch")
self.assertEqual(len(cat), 6, "Expected 6 sources in catalog")
Contributor

It might be helpful to define some of these expected values in setUp for consistency, and to make it easier to look up what they are.
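
For instance, the source count could be hoisted into setUp so it is defined once and easy to look up (the attribute name is illustrative):

def setUp(self):
    # ... existing setup ...
    self.expectedNumSources = 6

# and later in the test:
self.assertEqual(len(cat), self.expectedNumSources, "Unexpected number of sources in catalog")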

"base_PsfFlux_apCorr",
"slot_PsfFlux_apCorrErr",
]
missing = [col for col in some_expected_columns if col not in table.colnames]
Contributor

If you don't care about the ordering you could make some_expected_columns a set and write missing = some_expected_columns - set(table.colnames), though it doesn't really matter.
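
In that form the membership check could read (equivalent to the list comprehension, minus ordering):

some_expected_columns = {"base_PsfFlux_apCorr", "slot_PsfFlux_apCorrErr"}
missing = some_expected_columns - set(table.colnames)
self.assertFalse(missing, f"Missing columns: {sorted(missing)}")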

# Store datasets per band.
self.datasets = {}

rng = np.random.RandomState(565)
Contributor

This is a legacy interface and shouldn't be used. Use e.g. numpy.random.default_rng(seed=565) instead.

I wonder if this is the cause of your semi-random result above? It shouldn't be, but getting two different results suggests a test-order dependency.
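
A sketch of the suggested swap, assuming nothing downstream relies on RandomState-specific behavior:

rng = np.random.default_rng(seed=565)  # new-style Generator with the same seed
noise = rng.normal(loc=0.0, scale=1.0, size=(100, 100))  # example draw; the shape is illustrative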
