
Fix intermittent crash with multiprocessing-aware logging#89

Merged
JoeZiminski merged 15 commits into neuroinformatics-unit:main from AlgoFoe:fix-mp
Jan 16, 2026

Conversation

@AlgoFoe
Contributor

@AlgoFoe AlgoFoe commented Jan 4, 2026

Description

What is this PR

  • Bug fix
  • Addition of a new feature
  • Other

Why is this PR needed?

  • On Windows, setting multiprocessing_aware=True can intermittently trigger shutdown errors when multiprocessing_logging is installed, because background logging threads may access closed pipe handles even in single-process execution.

What does this PR do?

  • It avoids installing the multiprocessing_logging handler in the main process and only enables it when running inside child processes.
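The child-process check described above can be sketched with the standard library (the function name is hypothetical, not fancylog's actual API):

```python
import multiprocessing


def in_child_process() -> bool:
    """Return True when running inside a multiprocessing child.

    The main process is always named "MainProcess", so the
    multiprocessing-aware handler can be installed only when this
    returns True.
    """
    return multiprocessing.current_process().name != "MainProcess"
```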

References

How has this PR been tested?

  • Tested locally on Windows by running start_logging() with multiprocessing_aware=True both with and without spawning child processes.

Is this a breaking change?

  • No.

Does this PR require an update to the documentation?

  • No.

Checklist:

  • The code has been tested locally
  • Tests have been added to cover all new functionality
  • The documentation has been updated to reflect any changes
  • The code has been formatted with pre-commit

@adamltyson adamltyson requested a review from JoeZiminski January 8, 2026 16:59
@JoeZiminski
Member

Thanks for this @AlgoFoe! Could you please send the code you used to test locally? I'll have a quick playaround!

If I understand correctly, this will skip setting up multiprocessing in the MainProcess, such that when multiprocessing_aware=True but only a single process is used, we won't get the errors from #78. However, when multiprocessing is actually used, I believe this should still be run on the MainProcess, because the logger on the main process should also be a MultiProcessingHandler so that child processes inherit it and all logs can be added to the same internal queue. It may be that the deeper problem is that multiprocessing_logging does not work on Windows: Windows uses spawn, and the child process does not inherit the logger in the same way as fork, so all customisation is stripped. If this is correct, maybe it is safer just to warn/error if multiprocessing_aware=True and we are on Windows?

@AlgoFoe
Contributor Author

AlgoFoe commented Jan 12, 2026

> Thanks for this @AlgoFoe! Could you please send the code you used to test locally? I'll have a quick playaround!
>
> If I understand correctly, this will skip setting up multiprocessing in the MainProcess, such that when multiprocessing_aware=True but only a single process is used, we won't get the errors from #78. However, when multiprocessing is actually used, I believe this should still be run on the MainProcess, because the logger on the main process should also be a MultiProcessingHandler so that child processes inherit it and all logs can be added to the same internal queue. It may be that the deeper problem is that multiprocessing_logging does not work on Windows: Windows uses spawn, and the child process does not inherit the logger in the same way as fork, so all customisation is stripped. If this is correct, maybe it is safer just to warn/error if multiprocessing_aware=True and we are on Windows?

Thanks for the review @JoeZiminski! You are completely right. I tested it locally on Windows, and with my proposed fix the child process logs are silently dropped (they never appear in the file or console) because the MainProcess listener isn't running.

Since multiprocessing-logging has known issues with the spawn start method used by Windows, I agree that the safest approach is to detect the OS and disable the feature rather than letting it fail silently.

I will update the PR accordingly.
Here is the reproduction script I used to confirm the silent failure:

import logging
import multiprocessing
import time
from pathlib import Path

import fancylog
from fancylog import start_logging

def child_process_task():
    logging.info("Child process in")
    time.sleep(1)
    logging.info("Child process out")

if __name__ == "__main__":
    output_dir = Path("test_logs")
    output_dir.mkdir(exist_ok=True)

    print("Initializing logger...")
    start_logging(
        output_dir=output_dir,
        package=fancylog,
        multiprocessing_aware=True,
        verbose=True
    )

    logging.info("Main process started.")

    p = multiprocessing.Process(target=child_process_task)
    p.start()
    p.join()

    logging.info("Main process finished.")

@codecov

codecov bot commented Jan 12, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 77.72%. Comparing base (51ee788) to head (500d64f).
⚠️ Report is 3 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main      #89      +/-   ##
==========================================
+ Coverage   76.81%   77.72%   +0.91%     
==========================================
  Files           3        3              
  Lines         207      211       +4     
==========================================
+ Hits          159      164       +5     
+ Misses         48       47       -1     


@AlgoFoe
Contributor Author

AlgoFoe commented Jan 13, 2026

Hey @JoeZiminski, pushed an update adding a user warning and fixing the validation order.

@JoeZiminski
Member

Thanks @AlgoFoe! I think that looks good to me, nice test! My only thought is it might be better to error out and let the user change to multiprocessing_aware=False, so we can be sure they know what's going on, rather than warn and change it ourselves. It is quite common to miss warnings, so the user may not realise the setting has been turned off, causing confusion later. I'll let @adamltyson make the final call; if he agrees, it would just mean changing the warning to an error and using pytest.raises in the test.
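If the error route were taken, the guard and its test might look like this sketch (names are hypothetical; it simulates Windows with a mock so it runs anywhere, where a real test suite would wrap the call in pytest.raises):

```python
import platform
from unittest import mock


def require_multiprocessing_support(multiprocessing_aware: bool) -> None:
    """Hypothetical strict variant: raise instead of warning on Windows."""
    if multiprocessing_aware and platform.system() == "Windows":
        raise RuntimeError(
            "multiprocessing_aware=True is not supported on Windows; "
            "please set multiprocessing_aware=False."
        )


# Simulate Windows to exercise the guard; in a real test suite this
# would be `with pytest.raises(RuntimeError): ...`
with mock.patch("platform.system", return_value="Windows"):
    try:
        require_multiprocessing_support(multiprocessing_aware=True)
        raised = False
    except RuntimeError:
        raised = True
```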

@adamltyson
Member

I'm not sure about throwing an error. I think we (and others) use fancylog as a dependency, and have multiprocessing_aware set to True. Turning this into an error would mean we need to go and add some logic to deal with this on Windows within multiple tools.

Member

@JoeZiminski JoeZiminski left a comment


Sure that makes sense, thanks @AlgoFoe!

@AlgoFoe
Contributor Author

AlgoFoe commented Jan 16, 2026

Hi @JoeZiminski! Quick ping in case this slipped through, happy to make any further changes if needed. Thanks!

@JoeZiminski
Member

Sorry for the delay @AlgoFoe, LGTM! Thanks again for the contribution.

@JoeZiminski JoeZiminski merged commit c2deb8f into neuroinformatics-unit:main Jan 16, 2026
12 checks passed


Development

Successfully merging this pull request may close these issues.

Crash with multiprocessing_aware
