Add Function Calling Fine-tuning LLMs on xLAM Dataset notebook #321
Conversation
Thanks for your contribution! My first impression is that it's very code-heavy, without any supporting text explaining what is happening or the rationale behind certain decisions. Breaking up these code blocks will make it easier for users to digest. Also pinging @sergiopaniego, our recipe chef, for any additional suggestions ❤️
Hi @stevhliu, Thank you for the feedback.
Sorry I wasn't clear! Yes, a general explanation for each step would be nice. You don't have to go too in-depth explaining why you selected specific parameters (unless it's important), but the user should be able to read a paragraph and have a good idea of what is happening at a step.
No worries. I'll make the updates based on your comments and submit the pull request soon. :)
Thanks for the effort!! 😃 Following the same ideas suggested by @stevhliu, and similar to #319:
- Code blocks should be divided into smaller sections and explained. We don't need an in-depth breakdown of every parameter, but rather an explanation of the problem we're trying to solve and why each function or block of code is necessary.
- A recipe should be aimed at readers who want to learn more about a specific technique or package, so the focus should be more educational rather than simply presenting a complete project with a lot of code. You can also reference other recipes to provide additional context and insights.
- Dense code blocks that need breaking up
- Missing explanatory text between sections
- Large import block needs splitting
- ModelConfig/TrainingConfig needs simplification
- Indentation issues in process_xlam_sample function
- Need to remove `<small>` tags and subsections
- Change max_seq_length to max_length parameter in SFTConfig instantiation
- Resolves TypeError when running train_qlora_model function
- Maintains compatibility with TRL library API requirements
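For context, a minimal sketch of the fix described above, assuming a recent TRL release in which SFTConfig's `max_seq_length` keyword was renamed to `max_length`; the output directory and batch-size values here are illustrative, not the notebook's exact settings:

```python
from trl import SFTConfig

# Newer TRL releases accept `max_length` instead of the older `max_seq_length`;
# passing the old keyword raises a TypeError at instantiation.
training_args = SFTConfig(
    output_dir="./llama3-8b-xlam-qlora",  # illustrative path
    max_length=2048,                      # previously: max_seq_length=2048
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
)
```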
Force-pushed from 26ee1bc to a74ce9a (compare)
Thanks for the iteration! We'd need to update the toctree and index with the new notebook too
…tputs
- Add detailed explanations for dataset processing functions (process_xlam_sample, load_and_process_xlam_dataset, preview_dataset_sample)
- Document rationale for Llama 3-8B-Instruct model selection with performance/resource balance reasoning
- Include execution outputs showing successful environment setup and model testing
- Add .env to gitignore for environment variable security
- Update package installation commands to use uv pip for faster dependency management
- Demonstrate complete workflow from setup through testing with comprehensive function calling examples

The notebook now provides clearer guidance on the xLAM dataset processing pipeline and model selection rationale while maintaining full functionality for QLoRA fine-tuning.
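As a rough illustration of the processing step mentioned above, here is a hypothetical sketch of what process_xlam_sample might look like; it assumes the field names of Salesforce's xlam-function-calling-60k dataset (`query`, `tools`, and `answers` stored as JSON strings) and is not the notebook's exact implementation:

```python
import json

def process_xlam_sample(sample):
    """Convert one raw xLAM record into a chat-formatted training example."""
    # `tools` and `answers` are JSON-encoded strings in the raw dataset
    tools = json.loads(sample["tools"])
    answers = json.loads(sample["answers"])
    return {
        "messages": [
            {"role": "system",
             "content": "You have access to these tools:\n" + json.dumps(tools, indent=2)},
            {"role": "user", "content": sample["query"]},
            # Target: the model should emit the function call(s) as JSON
            {"role": "assistant", "content": json.dumps(answers)},
        ]
    }
```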
Thanks for the update!
Could you update the toctree and index files?
- Add function calling fine-tuning notebook to _toctree.yml under LLM Recipes section
- Feature notebook in index.md latest notebooks section for discoverability
- Enables users to find the xLAM dataset function calling tutorial through cookbook navigation

The notebook is now properly integrated into the cookbook structure and discoverable through standard navigation paths.
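For reference, the toctree entry would look roughly like this (a hypothetical sketch; the exact title and section layout in the cookbook's _toctree.yml may differ):

```yaml
# Under the "LLM Recipes" section of _toctree.yml (illustrative placement)
- local: function_calling_fine_tuning_llms_on_xlam
  title: Fine-tuning LLMs for Function Calling on the xLAM Dataset
```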
Done. :)
Clean up latest notebooks list to highlight most recent additions while maintaining focus on current relevant content.
Ensure both function calling notebook and existing T5 PEFT notebook are properly listed in latest notebooks section to maintain compatibility with main branch while adding new content.
Remove redundant entries and improve formatting consistency in the latest notebooks list.
Include both function calling notebook and T5 PEFT notebook in the latest notebooks section for complete coverage of recent additions.
Keep function calling notebook and existing structure for clean merge compatibility.
Maintain current version without T5 PEFT entry as requested.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thanks! 🚀
Summary
This notebook demonstrates how to fine-tune language models for function calling capabilities using the xLAM dataset from Salesforce and the QLoRA technique.
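As a quick orientation, a minimal sketch of the QLoRA setup such a notebook typically uses, assuming the standard transformers/peft APIs; the hyperparameter values are illustrative rather than the notebook's exact configuration:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization of the base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",  # gated model; requires HF access approval
    quantization_config=bnb_config,
    device_map="auto",
)

# Low-rank adapters trained on top of the frozen 4-bit base model
peft_config = LoraConfig(
    r=16,                 # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```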
Key Features
Technical Details
Contribution Guidelines Compliance
- function_calling_fine_tuning_llms_on_xlam.ipynb
- _toctree.yml in LLM Recipes section
- index.md in Latest notebooks section

Test Plan
✅ All contribution guidelines followed according to the README
@merveenoyan, @stevhliu