Conversation
@jamesturner246 @Aurashk @miruna this is the PR to clarify which tests should be performed for PR merges. It will update the PR template so the testing requirements are clearer; this is something I would value feedback on. The tests that should be performed are:
Looking forward to feedback from you.
Thanks Pawel, that's very helpful.
There are no hardware restrictions, but this code has been written to run on AlmaLinux 9. If I remember correctly, @jamesturner246 got CVMFS set up on his laptop and should be able to run these tests locally.
This looks good and it'll help standardize the workflow. However, running
Thanks Miruna. Yes, it is expected that this test takes a long time, but it should only have to be run when major changes are made to the core codebase, prior to merging a PR. I will make this clearer in the list of testing requirements.
In case it is useful, I will contribute a couple of notes regarding where we can run the
Thanks Kurt, that's very helpful; it's possible 2. explains some test failures I was seeing locally the last time I tried. Without these checks for computational resources, is what happens in the tests totally unpredictable, or is there a common point of failure? One other, more general thing that might be useful to know: what makes the tests run long? Is it a timed simulation of everything working meaningfully together, or is it doing a lot of computational work?
Also, another thing came to mind: what's the situation with the MQST test in the CI, https://github.com/DUNE-DAQ/drunc/actions/workflows/run_mqst.yml? Judging by the Actions runs, it seems to have been abandoned some time earlier in the year. Is this something we want to get working again?
When a session is running on a host with insufficient resources, it will likely throw errors with some number of missing/empty data products. The suite takes time because we run a variety of configurations with many runs: there are 9 tests, and some of them have multiple configurations. Supposing each test takes 3 minutes, that gives the approximate half hour of running time.
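As a back-of-the-envelope check of the half-hour figure above (the 9 tests and ~3 minutes per test are the rough numbers quoted in this thread, not measured values):

```shell
# Lower-bound runtime estimate for the regression suite:
# 9 tests at roughly 3 minutes each. Some tests run multiple
# configurations, so the real total is somewhat higher.
n_tests=9
minutes_per_test=3
echo "Lower bound: ~$(( n_tests * minutes_per_test )) minutes"
# prints: Lower bound: ~27 minutes
```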
This is something held back by the development of the Subprocess process manager, in this PR |
In principle, the time that each regression test takes to run is dominated by the amount of time spent waiting in each FSM state (e.g. trigger-enabled), which is however long the writer of the test chose. Of course, if lots of failures happen and/or a process stalls or crashes, run control transitions can take longer than usual (e.g. some of the "stop-run" transitions), and those can add a noticeable amount of extra time.
Hi all. As discussed last meeting, I think a special cluster account just for testing PRs would be invaluable for this workflow. It is something we could perhaps hook into CI -- e.g. manually trigger the full integration test suite on the cluster once the PR is marked ready for review (auto-triggering is possible too, but maybe too noisy).
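To make the idea concrete, a manual trigger could look something like the sketch below. This is purely hypothetical: the workflow file name and branch are placeholder assumptions (no such cluster-backed workflow exists yet), and the command is only echoed here rather than executed.

```shell
# Hypothetical: trigger a full integration-test workflow on a PR
# branch via the GitHub CLI. "integration_tests.yml" and
# "my-pr-branch" are placeholders, not real names in this repo.
cmd='gh workflow run integration_tests.yml --ref my-pr-branch'
echo "$cmd"  # echo the command; run it by hand once such a workflow exists
```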
@jamesturner246 @Aurashk @miruuna @bieryAtFnal |
Hi @PawelPlesniak, the updated template looks reasonable to me. I've made a note to myself to revisit the template once the global bundle script is generally available. When I do that, I will update the template to reference the new script.
Looking nice and very useful, thanks @PawelPlesniak. I have a couple of small suggestions.
> _Please include a summary of the change and which issue is fixed (if any). Please also include relevant motivation and context. List any dependencies that are required for this change._
> Addresses issue # _Fill this in with the relevant issue number so the relevant issue can be closed._
Suggested change: replace "Addresses issue #" with "Fixes issue #" (keeping the rest of the line the same).
I would suggest changing `Addresses` to `Fixes`, as I don't think `Addresses` will close the issue. See:
https://docs.github.com/en/issues/tracking-your-work-with-issues/using-issues/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword
> # Reviewer checklist
> _Note - if a reviewer requests changes and those changes are implemented, this block should be re-checked._
>
> ## Further checks
> - [ ] Pre-commit hooks run successfully if applicable (e.g. `pre-commit run --all-files`)
> - [ ] Unit tests pass (`pytest`) - note please use the broadest marker possible
> - [ ] Suggested manual tests pass as described above
> - [ ] Integration tests pass (`daqsystemtest_integtest_bundle.sh`)
This block here (except the manual tests) seems best placed in a CI workflow rather than among a reviewer's manual actions. As we discussed in the meeting, we should look into this (I'm quite fond of @jamesturner246's suggestion about getting a cluster account to test these PRs).
It should be fine for this PR, but it's something to revisit later on.
> - [ ] Code is commented, particularly in hard-to-understand areas
> - [ ] Tests added or an issue has been opened to tackle that in the future.
>   (Indicate issue here: # (issue))
>
> Once the features are validated and both the unit and integration tests pass, the PRs can be merged.
Something that might be useful to make explicit for this repo is who has the responsibility to merge into develop after a PR passes review: is it the author (/ = assignee), or the reviewer?
(I've noticed in several repos in LHCb / DUNE that this responsibility changes, so it would be good to know here.)
Description
Changes the structure of the PR template to prioritize testing, and introduces the requirements from other repos as a field.
No tests or further checks have been run, as this PR only changes the template and does not affect the core code.
Type of change
Key checklist
- [ ] Unit tests pass (`python -m pytest`)
- [ ] Pre-commit hooks run successfully (`pre-commit run --all-files`)

Further checks
(Indicate issue here: # (issue))