Skip to content

Conversation

@abheesht17
Copy link
Collaborator

The JAX GPU tests time out in this PR: #21903.

amitsrivastava78 and others added 8 commits December 8, 2025 10:51
- Remove complex JAX abstract pytree logic that was causing 'free(): invalid pointer' errors
- Use preservation mode for all backends to avoid state structure mismatches
- This prevents memory corruption when loading checkpoints with different optimizer states
- Replace bare 'except:' with specific 'except (ImportError, AttributeError):'
  for distribution import patterns
- This improves error handling by only catching expected exceptions
- Extract duplicated tensor conversion logic into _to_numpy() helper method
- Replace duplicated code blocks in optimizer and metrics variable comparisons
- Improves maintainability and reduces code duplication
@abheesht17 abheesht17 changed the title Increase timeout for JAX GPU tests to 90 minutes [Do not review] Increase timeout for JAX GPU tests to 90 minutes Dec 9, 2025
@gemini-code-assist
Copy link
Contributor

Summary of Changes

Hello @abheesht17, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances Keras's JAX backend capabilities by addressing test stability and improving distributed training infrastructure. It increases JAX GPU test timeouts to accommodate longer runs and introduces comprehensive multi-host checkpointing support via Orbax. A new Model.load() method simplifies loading full model states from Orbax checkpoints, with automatic handling of distributed data resharding. These changes are backed by thorough testing to ensure reliability and correctness in distributed JAX environments.

Highlights

  • JAX GPU Test Timeout Increase: The timeout for JAX GPU continuous and presubmit tests has been increased from 60 minutes to 90 minutes to prevent premature test failures due to long execution times.
  • Multi-host Orbax Checkpointing Support: The OrbaxCheckpoint callback now includes robust support for multi-host distributed training environments, specifically for the JAX backend. This involves new internal methods for detecting multi-host initialization, identifying the primary host, and synchronizing processes during checkpointing operations.
  • New Model.load() Method for Orbax Checkpoints: A new load method has been added to the Model class, allowing users to load complete model states (weights, optimizer state, and metrics state) directly from Orbax checkpoint directories. This method intelligently handles finding the latest checkpoint and adapts to JAX's distribution context for automatic data resharding.
  • Orbax Checkpoint Utility Functions: New utility functions (_is_orbax_checkpoint and _find_latest_orbax_checkpoint) have been introduced in keras/src/saving/saving_api.py to facilitate the identification and retrieval of Orbax checkpoints.
  • Comprehensive Orbax Checkpointing Tests: Extensive new tests have been added to keras/src/callbacks/orbax_checkpoint_test.py to validate the Model.load() method, verify the directory structure of distributed Orbax checkpoints, and ensure the correct functionality of multi-host synchronization mechanisms.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Copy link
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request increases the timeout for JAX GPU tests and adds multi-host checkpointing support for JAX using Orbax, along with a new model.load() method for Orbax checkpoints. Although the title says 'Do not review', I've reviewed the changes and they are generally well-implemented and include comprehensive tests. I have a few minor suggestions to improve code clarity and efficiency.

Comment on lines 15 to 16
# Set timeout to 60 mins from default 180 mins
timeout_mins: 60
timeout_mins: 90
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The comment on line 15 is outdated. It should be updated to reflect the new timeout value of 90 minutes.

# Set timeout to 90 mins from default 180 mins
timeout_mins: 90

Comment on lines 15 to 16
# Set timeout to 60 mins from default 180 mins
timeout_mins: 60
timeout_mins: 90
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The comment on line 15 is outdated. It should be updated to reflect the new timeout value of 90 minutes.

# Set timeout to 90 mins from default 180 mins
timeout_mins: 90

Comment on lines +532 to +535
zero_grads = [
backend.convert_to_tensor(np.zeros_like(v.numpy()))
for v in self.trainable_variables
]
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The current implementation for creating zero gradients involves converting each variable to a NumPy array, creating a zero-filled NumPy array, and then converting it back to a backend tensor. This is inefficient as it involves unnecessary data transfers between devices (e.g., GPU) and the host CPU. You can achieve the same result more efficiently and concisely by using backend.zeros_like() directly on the variable.

            zero_grads = [backend.zeros_like(v) for v in self.trainable_variables]

@codecov-commenter
Copy link

codecov-commenter commented Dec 9, 2025

Codecov Report

❌ Patch coverage is 75.58140% with 21 lines in your changes missing coverage. Please review.
✅ Project coverage is 76.30%. Comparing base (f0a48a6) to head (b8544b1).

Files with missing lines Patch % Lines
keras/src/models/model.py 64.51% 4 Missing and 7 partials ⚠️
keras/src/saving/saving_api.py 65.38% 4 Missing and 5 partials ⚠️
keras/src/callbacks/orbax_checkpoint.py 96.55% 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master   #21909      +/-   ##
==========================================
- Coverage   76.30%   76.30%   -0.01%     
==========================================
  Files         580      580              
  Lines       60029    60112      +83     
  Branches     9432     9450      +18     
==========================================
+ Hits        45803    45866      +63     
- Misses      11750    11758       +8     
- Partials     2476     2488      +12     
Flag Coverage Δ
keras 76.16% <74.41%> (-0.01%) ⬇️
keras-jax 62.14% <73.25%> (+0.01%) ⬆️
keras-numpy 57.26% <13.95%> (-0.06%) ⬇️
keras-openvino 34.27% <13.95%> (-0.03%) ⬇️
keras-torch 63.21% <56.97%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

🚀 New features to boost your workflow:
  • ❄️ Test Analytics: Detect flaky tests, report on failures, and find test suite problems.

@abheesht17
Copy link
Collaborator Author

This was an experimental PR. Closing this now.

@abheesht17 abheesht17 closed this Dec 10, 2025
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

Projects

None yet

Development

Successfully merging this pull request may close these issues.

5 participants