Fix tool calling compatibility for Llama 3.2 and Phi-4 #4038
VedantMadane wants to merge 5 commits into unslothai:main from
Conversation
Summary of Changes (Gemini Code Assist)

This pull request enhances Unsloth's tool calling capabilities by resolving compatibility issues with several large language models, including Llama 3.2, Phi-4, and Mistral. It achieves this by improving how grammar-constrained generation is handled and by integrating a new, more robust generation helper, ensuring broader support and stability for advanced generation tasks.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 2238aabb83
unsloth/grammars.py (outdated)

```
    ("," ws value)*
  )? "]" ws
string ::=
  """ (
```
Escape triple quotes in JSON_ARR_GBNF
`JSON_ARR_GBNF` contains a raw triple-quoted Python string with an unescaped `"""` token sequence, so the parser treats the string as terminated early and `unsloth/grammars.py` raises `SyntaxError: unterminated string literal`. Since `unsloth/__init__.py` now imports this module unconditionally, any `import unsloth` fails even when grammar-constrained generation is never used.
Code Review
This pull request introduces compatibility fixes for tool calling with newer models like Llama 3.2 and Phi-4. It achieves this by monkey-patching transformers-cfg for better model detection and adding a new generate_with_grammar helper function. The changes are well-implemented. I've provided a couple of suggestions to improve robustness: one to enhance the fallback logic in the new grammar generation function to better preserve user settings, and another to make exception handling more specific in the patching logic. Overall, this is a great contribution to improve model compatibility.
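The fallback behavior described here can be sketched as a small, self-contained pattern. Everything below is hypothetical for illustration: the function name, the stub model, and the `cache_implementation` kwarg are assumptions, not code from the PR.

```python
# Hypothetical sketch of the retry-on-rejected-kwargs pattern: try a
# full generate() call and, if the model rejects unknown model_kwargs
# with a ValueError, retry with only the universally accepted set.
def generate_with_fallback(model, input_ids, max_new_tokens,
                           logits_processor, **kwargs):
    try:
        return model.generate(input_ids=input_ids,
                              max_new_tokens=max_new_tokens,
                              logits_processor=logits_processor,
                              **kwargs)
    except ValueError as e:
        if "model_kwargs" not in str(e):
            raise  # unrelated error: propagate
        # Model-specific kwargs were rejected: retry with a minimal set.
        return model.generate(input_ids=input_ids,
                              max_new_tokens=max_new_tokens,
                              logits_processor=logits_processor)

class _StubModel:
    """Stand-in that rejects an unexpected kwarg, like some HF models."""
    def generate(self, **kw):
        if "cache_implementation" in kw:
            raise ValueError("model_kwargs not used: cache_implementation")
        return list(kw)  # echo accepted argument names

m = _StubModel()
out = generate_with_fallback(m, [1, 2], 8, [], cache_implementation="static")
assert "cache_implementation" not in out
```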
unsloth/grammars.py (outdated)

```python
logger.warning("Unsloth: Generation failed with model_kwargs error. Retrying with minimal parameters...")
minimal_kwargs = {
    "input_ids": input_ids,
    "max_new_tokens": max_new_tokens,
    "logits_processor": [grammar_processor],
}
return model.generate(**minimal_kwargs)
```
The fallback logic for `model.generate` is a bit too aggressive in simplifying the parameters. When a `ValueError` with `'model_kwargs'` occurs, the current implementation falls back to a very minimal set of arguments, discarding the user's sampling configuration (like `do_sample`, `temperature`, `top_p`, `top_k`, `repetition_penalty`, and `num_return_sequences`). This forces greedy decoding, which might not be the user's intent.
A better approach would be to retry generation with all the original parameters except for the extra `**kwargs` that caused the issue. This preserves the intended generation strategy (sampling vs. greedy) while working around the model-specific parameter incompatibility.
```python
logger.warning("Unsloth: Generation failed with model_kwargs error. Retrying without extra keyword arguments...")
minimal_kwargs = {
    "input_ids": input_ids,
    "max_new_tokens": max_new_tokens,
    "do_sample": do_sample,
    "repetition_penalty": repetition_penalty,
    "num_return_sequences": num_return_sequences,
    "logits_processor": [grammar_processor],
}
if do_sample:
    if temperature is not None: minimal_kwargs["temperature"] = temperature
    if top_p is not None: minimal_kwargs["top_p"] = top_p
    if top_k is not None: minimal_kwargs["top_k"] = top_k
return model.generate(**minimal_kwargs)
```
danielhanchen left a comment
Tested on 1x NVIDIA B200 (CUDA 12.8, torch 2.9.1, transformers 4.57.6) with 5 notebooks:
| Notebook | Result |
|---|---|
| Llama3.1_8B_Alpaca | PASS |
| Llama3.2_1B_and_3B_Conversational | PASS |
| Phi_4_Conversational | PASS |
| Mistral_v0.3_7B_Conversational | PASS |
| Qwen3_14B_Reasoning_Conversational | PASS |
All 5 notebooks pass. No regressions from the tool calling changes.
Code review notes:
- Dead import: `AutoTokenizer` is imported in `grammars.py` but never used.
- The GBNF grammar has over-escaped characters (e.g. `\\\"` where `\"` would suffice). Functionally correct but could be simplified.
- Model matching via `name_or_path` substring is fragile -- consider checking the model's `model_type` from config instead.
- Unused typing imports (`Optional`, `Union`) in `grammars.py`.
These are all non-blocking style issues. The core functionality works.
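The `model_type` suggestion above can be illustrated with a small sketch. The supported-type set, helper names, and stub configs here are assumptions for illustration, not Unsloth's actual detection code:

```python
# Sketch of config-based model detection (assumed API shapes): match
# on config.model_type rather than substrings of name_or_path, which
# breaks for local paths and renamed checkpoints.
from types import SimpleNamespace

SUPPORTED_MODEL_TYPES = {"llama", "phi3", "mistral"}  # hypothetical set

def supports_grammar(model) -> bool:
    # Robust: model_type is a stable, config-declared identifier.
    model_type = getattr(model.config, "model_type", None)
    return model_type in SUPPORTED_MODEL_TYPES

def supports_grammar_fragile(model) -> bool:
    # Fragile: substring matching fails for e.g. "./ckpts/my-finetune".
    name = getattr(model.config, "name_or_path", "") or ""
    return any(key in name.lower() for key in ("llama", "phi", "mistral"))

# A locally saved fine-tune keeps its model_type but loses the name hint:
local = SimpleNamespace(config=SimpleNamespace(
    model_type="llama", name_or_path="./ckpts/my-finetune"))
assert supports_grammar(local)
assert not supports_grammar_fragile(local)
```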
…ai#3092)
- Added compatibility patch for transformers-cfg in `import_fixes.py`
- Added `generate_with_grammar` helper in `grammars.py`
- Exported `generate_with_grammar` in `__init__.py`

…c, and use specific exceptions
for more information, see https://pre-commit.ci
Hey @VedantMadane, can you please undo all the styling changes (spaces etc.) so that it's easier to review?
Fixes #3092.
This PR addresses the tool calling compatibility issues reported with Llama 3.2, Phi-4, and Mistral models.
Key Changes: