fix compute/communication overlap for gloo #240

Open
tushar00jain wants to merge 5 commits into main from pr240
Conversation

@tushar00jain (Contributor) commented Jul 22, 2025

Summary:

  • we currently wait for the pg work's future when preparing for a fragment
  • if we use gloo, this blocks the CPU
  • move the wait call to when we perform the actual sync of the fragment (see the sketch below)
  • since we still call `work.wait()` in the allreduce call itself, this doesn't completely fix the problem
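
A minimal sketch of the intended flow, using hypothetical names (`FragmentSyncer`, `prepare`, `perform_sync`, `_allreduce_futures` are illustrative, not the actual torchft API): the futures returned by the process group are stashed while preparing and only waited on when the fragment is actually synced.

```python
# Illustrative sketch only -- names are hypothetical, not the torchft API.
import torch


class FragmentSyncer:
    def __init__(self) -> None:
        # Futures returned by the process group's allreduce calls for this fragment.
        self._allreduce_futures: list[torch.futures.Future] = []

    def prepare(self) -> None:
        # Previously the wait happened here; with gloo, fut.wait() blocks the
        # CPU and destroys compute/communication overlap.
        pass

    def perform_sync(self) -> None:
        # Wait only when the reduced values are actually needed.
        for fut in self._allreduce_futures:
            fut.wait()
        self._allreduce_futures.clear()
```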

Stack created with Sapling. Best reviewed with ReviewStack.

@facebook-github-bot added the CLA Signed label Jul 22, 2025
@tushar00jain marked this pull request as draft July 22, 2025 22:11
@tushar00jain force-pushed the pr240 branch 4 times, most recently from dbec11e to 5228ee8, July 24, 2025 21:53
@tushar00jain changed the title from "use block_current_stream work api" to "wait for futures while syncing fragments" Jul 24, 2025
@tushar00jain changed the title from "wait for futures while syncing fragments" to "use block_current_stream work api" Jul 24, 2025
@tushar00jain changed the title from "use block_current_stream work api" to "wait for futures while syncing fragments" Jul 24, 2025
@tushar00jain force-pushed the pr240 branch 14 times, most recently from c93ad11 to bfb92ff, July 25, 2025 21:20
@tushar00jain marked this pull request as ready for review July 25, 2025 21:21
@tushar00jain requested a review from d4l3k July 25, 2025 21:21
@@ -411,7 +412,7 @@ def callback(
         fut = fut.then(callback)

         fut = self.wrap_future(fut, tensor)
-        return fut
+        return (work, fut)
Member comment on this diff:
this makes me a bit nervous since calling only `work.wait()` means that the code in the future callback may not have run -- i.e. `tensor /= num_participants` may execute out of order

The advantage of future objects here is that the work runs on the completing thread, whereas `.wait()` runs on the waiting thread. I'm wondering if we should actually wrap these futures back into Work objects so we can do the stream dependency correctly in `.wait()`
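
A minimal sketch of that suggestion, assuming a hypothetical wrapper class (`FutureWork` is not torchft code): `.wait()` on the wrapper blocks on the chained future, so the callback's side effects are guaranteed to have run before the caller proceeds.

```python
# Hypothetical sketch of wrapping the chained Future back into a Work-like
# object; class and method names are illustrative, not torchft's.
import torch


class FutureWork:
    """Work-like wrapper whose wait() also waits for the future's callbacks."""

    def __init__(self, work, fut: torch.futures.Future) -> None:
        self._work = work  # underlying c10d Work from the collective
        self._fut = fut    # chained future (e.g. runs `tensor /= num_participants`)

    def wait(self) -> bool:
        # Waiting on the future, not just the work, ensures the callback's
        # side effects are visible (and, for CUDA backends, this is where the
        # stream dependency could be established).
        self._fut.wait()
        return True

    def get_future(self) -> torch.futures.Future:
        return self._fut
```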

Summary:
use http transport instead of pg transport -- pg transport fails to resolve the address when running locally
@tushar00jain changed the title from "wait for futures while syncing fragments" to "fix compute/communication overlap for gloo" Jul 26, 2025
@tushar00jain force-pushed the pr240 branch 3 times, most recently from 4964b72 to 15b0cb0, July 26, 2025 02:48
@tushar00jain force-pushed the pr240 branch 3 times, most recently from 405dc6e to 91207a2, July 26, 2025 17:48
Summary:
deep copy the state dict when sending a checkpoint because, if the replica moves to the next step, the state dict can change before the checkpoint is sent
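
A minimal sketch of the fix, with a hypothetical helper name; the point is that the snapshot handed to the checkpoint sender must not alias the live state dict.

```python
import copy


def snapshot_for_checkpoint(state_dict: dict) -> dict:
    # Deep-copy so that, if the replica advances to the next step and mutates
    # its parameters or optimizer state, the in-flight checkpoint still holds
    # the values from the step it was taken at.
    return copy.deepcopy(state_dict)
```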
Summary:
- call `future.wait` in callbacks to make sure the continuation executes after the future has completed
- set the stream correctly to execute the callback scheduled by the bucketized allreduce
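
A hedged sketch of both points, with hypothetical names (`make_callback`, `stream`, `num_participants`); it assumes the future resolves to the reduced tensor (for c10d futures the value may instead be a list of tensors).

```python
import contextlib

import torch


def make_callback(stream: "torch.cuda.Stream | None", num_participants: int):
    def callback(fut: torch.futures.Future) -> torch.Tensor:
        # Waiting inside the callback guarantees the continuation only runs
        # after the allreduce backing this future has completed.
        tensor = fut.wait()
        # Run the continuation on the stream the bucketized allreduce was
        # issued on, so it is ordered after the communication kernels.
        ctx = torch.cuda.stream(stream) if stream is not None else contextlib.nullcontext()
        with ctx:
            tensor /= num_participants
        return tensor

    return callback
```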
Summary:
return the work object so we can be more flexible with how it is used
Summary:
- we currently wait for the pg work's future when preparing for a fragment
- if we use gloo, this blocks the CPU
- move the wait call to when we perform the actual sync of the fragment
- since we still call `work.wait()` in the allreduce call itself, this doesn't completely fix the problem
Labels
CLA Signed
3 participants