[Do not review] Increase timeout for JAX GPU tests to 90 minutes #21909
Conversation
- Remove complex JAX abstract pytree logic that was causing 'free(): invalid pointer' errors
- Use preservation mode for all backends to avoid state structure mismatches
- This prevents memory corruption when loading checkpoints with different optimizer states
- Replace bare 'except:' with specific 'except (ImportError, AttributeError):' for distribution import patterns
- This improves error handling by only catching expected exceptions
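To illustrate that pattern, here is a minimal sketch (the module and attribute names are hypothetical placeholders, not the PR's actual imports): a bare except: also swallows KeyboardInterrupt and SystemExit, while the narrowed clause only catches the failures an optional import is expected to raise.

# Minimal sketch; "hypothetical_distribution_lib" is an invented placeholder.
# Before: a bare except hides every failure, including unrelated ones.
#     try:
#         from hypothetical_distribution_lib import get_distribution
#     except:
#         get_distribution = None
# After: only the expected import failures are caught.
try:
    from hypothetical_distribution_lib import get_distribution
except (ImportError, AttributeError):
    get_distribution = None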
- Extract duplicated tensor conversion logic into _to_numpy() helper method
- Replace duplicated code blocks in optimizer and metrics variable comparisons
- Improves maintainability and reduces code duplication
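As a rough illustration only (a hedged sketch, not the helper actually added in the PR, and written as a free function for brevity), such a conversion helper can lean on keras.ops.convert_to_numpy so the same comparison code works on every backend:

import numpy as np
from keras import ops

def _to_numpy(value):
    # Convert a backend tensor or Keras variable to a NumPy array so that
    # optimizer and metric variables can be compared with np.testing helpers.
    return ops.convert_to_numpy(value)

# Illustrative usage: compare two values element-wise after conversion.
np.testing.assert_allclose(_to_numpy(ops.ones((2,))), _to_numpy(ops.ones((2,))))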
Summary of Changes

Hello @abheesht17, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances Keras's JAX backend capabilities by addressing test stability and improving distributed training infrastructure. It increases JAX GPU test timeouts to accommodate longer runs and introduces comprehensive multi-host checkpointing support via Orbax. A new

Highlights
Code Review
This pull request increases the timeout for JAX GPU tests and adds multi-host checkpointing support for JAX using Orbax, along with a new model.load() method for Orbax checkpoints. Although the title says 'Do not review', I've reviewed the changes and they are generally well-implemented and include comprehensive tests. I have a few minor suggestions to improve code clarity and efficiency.
# Set timeout to 60 mins from default 180 mins
- timeout_mins: 60
+ timeout_mins: 90
zero_grads = [
    backend.convert_to_tensor(np.zeros_like(v.numpy()))
    for v in self.trainable_variables
]
The current implementation for creating zero gradients involves converting each variable to a NumPy array, creating a zero-filled NumPy array, and then converting it back to a backend tensor. This is inefficient as it involves unnecessary data transfers between devices (e.g., GPU) and the host CPU. You can achieve the same result more efficiently and concisely by using backend.zeros_like() directly on the variable.
zero_grads = [backend.zeros_like(v) for v in self.trainable_variables]
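An equivalent spelling, assuming the surrounding test module imports keras rather than the private backend package, goes through the public ops namespace and likewise keeps the zero tensors on the device:

zero_grads = [keras.ops.zeros_like(v) for v in self.trainable_variables]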
Codecov Report
❌ Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## master #21909 +/- ##
==========================================
- Coverage 76.30% 76.30% -0.01%
==========================================
Files 580 580
Lines 60029 60112 +83
Branches 9432 9450 +18
==========================================
+ Hits 45803 45866 +63
- Misses 11750 11758 +8
- Partials 2476 2488 +12
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
This was an experimental PR. Closing this now.
The JAX GPU tests time out in PR #21903.