Conversation

Jintao-Huang (Collaborator)

No description provided.

gemini-code-assist bot (Contributor)

Summary of Changes

Hello @Jintao-Huang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request modifies the default behavior for dataset caching by setting load_from_cache_file to False by default. This change aims to give developers more explicit control over when cached datasets are used. To support this, the implementation now explicitly defines cache directories for dataset loading and preprocessing operations, ensuring that caching still functions correctly when enabled. The accompanying documentation has been updated to guide users on the recommended settings for this parameter in different scenarios.

Highlights

  • Default Cache Behavior: The default value for the load_from_cache_file parameter has been changed from True to False in both the configuration and documentation, providing more control over dataset caching.
  • Explicit Cache Directory Management: Explicit cache directories are now set for Hugging Face dataset loading (hf_load_dataset) and mapping operations (dataset.map) using get_cache_dir(), ensuring consistent and predictable caching behavior (a short sketch follows this list).
  • Documentation Updates: The documentation (both Chinese and English versions) has been updated to reflect the new default value for load_from_cache_file and provides recommendations for its usage during actual runs versus debugging.
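
The combined effect of the first two changes can be illustrated with a minimal sketch. This is not the PR's actual code: the temporary-directory stand-in for get_cache_dir(), the placeholder JSONL file, and the strip() preprocessing step are assumptions made for the example; cache_dir, load_from_cache_file, and cache_file_name are standard Hugging Face datasets parameters.

    import json
    import os
    import tempfile

    from datasets import load_dataset

    # Stand-in for swift's get_cache_dir(); the real helper resolves swift's cache root.
    cache_root = os.path.join(tempfile.gettempdir(), 'swift_cache', 'datasets')
    os.makedirs(os.path.join(cache_root, 'map_cache'), exist_ok=True)

    # Tiny placeholder data file; in swift, loading is handled by hf_load_dataset.
    data_file = os.path.join(tempfile.gettempdir(), 'train.jsonl')
    with open(data_file, 'w') as f:
        f.write(json.dumps({'text': '  hello  '}) + '\n')

    # An explicit cache_dir keeps downloaded and processed arrow files in a predictable place.
    dataset = load_dataset('json', data_files=data_file, cache_dir=cache_root)['train']

    # load_from_cache_file=False (the new default) forces the map to recompute instead of
    # silently reusing an earlier cache entry; cache_file_name still controls where the
    # freshly computed arrow file is written. See the updated docs for the recommended
    # setting during actual runs versus debugging.
    dataset = dataset.map(
        lambda ex: {'text': ex['text'].strip()},
        load_from_cache_file=False,
        cache_file_name=os.path.join(cache_root, 'map_cache', f'{dataset._fingerprint}.arrow'),
    )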


gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request updates the default value of load_from_cache_file to False and adjusts the related documentation accordingly. It also centralizes the cache directory for datasets. My review identifies a significant issue in swift/llm/dataset/preprocessor/core.py where a cache file name is incorrectly reused across different map operations, which could lead to cache corruption or incorrect behavior. I have provided a refactoring suggestion to address this bug and eliminate code duplication, improving both correctness and maintainability.

Jintao-Huang (Collaborator, Author)

/gemini review


gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request updates the default behavior of load_from_cache_file to False, which is a good change for encouraging more explicit control over dataset caching, especially during debugging. The extensive updates to documentation and example scripts to align with this new default are thorough and well-executed. I've identified a potential bug in the updated caching logic within swift/llm/dataset/preprocessor/core.py, where a variable for a cache file name might be incorrectly reused across different map operations. I've provided a specific comment with a suggested fix to address this. Overall, this is a valuable pull request that improves the framework's robustness.

Comment on swift/llm/dataset/preprocessor/core.py, lines +341 to +343
                if not dataset.cache_files:
                    cache_file_name = os.path.join(get_cache_dir(), 'datasets', 'map_cache',
                                                   f'{dataset._fingerprint}.arrow')

gemini-code-assist bot (Contributor), severity: high

The cache_file_name variable is being reused from a previous map operation. If the first map operation (for __#solution) runs and populates the cache, this variable will hold the cache file path from that operation. When the code reaches this second map operation, if the dataset object's cache_files attribute is already populated, the if condition will be false, and the stale cache_file_name will be passed to this map call. This is incorrect as it could lead to using a wrong cache file. To fix this, cache_file_name should be reset to None before this block to ensure it's handled correctly for this specific map operation.

                cache_file_name = None
                if not dataset.cache_files:
                    cache_file_name = os.path.join(get_cache_dir(), 'datasets', 'map_cache',
                                                   f'{dataset._fingerprint}.arrow')
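
One way to act on this suggestion, and to remove the duplication mentioned in the earlier review, is to compute the cache file name fresh before every map call. The sketch below is illustrative rather than the PR's final code: the map_cache_file_name helper, the stand-in get_cache_dir(), and the toy dataset are assumptions; dataset.cache_files, dataset._fingerprint, and the cache_file_name parameter of Dataset.map are existing Hugging Face datasets features.

    import os
    import tempfile

    from datasets import Dataset

    def get_cache_dir() -> str:
        # Stand-in for swift's get_cache_dir(); the real helper lives in the swift code base.
        return os.path.join(tempfile.gettempdir(), 'swift_cache')

    def map_cache_file_name(dataset: Dataset):
        # Hypothetical helper mirroring the `if not dataset.cache_files` check above:
        # return an explicit cache path only when the dataset has no cache files of its own.
        if dataset.cache_files:
            return None
        path = os.path.join(get_cache_dir(), 'datasets', 'map_cache', f'{dataset._fingerprint}.arrow')
        os.makedirs(os.path.dirname(path), exist_ok=True)
        return path

    ds = Dataset.from_dict({'text': ['  a  ', '  b  ']})

    # Each map call computes its own cache_file_name, so a value left over from an earlier
    # operation can never be passed to a later one by mistake.
    ds = ds.map(lambda ex: {'text': ex['text'].strip()}, cache_file_name=map_cache_file_name(ds))
    ds = ds.map(lambda ex: {'length': len(ex['text'])}, cache_file_name=map_cache_file_name(ds))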

Jintao-Huang merged commit b32e9f3 into modelscope:main on Sep 25, 2025 (1 of 2 checks passed).