Fix tool calling compatibility for Llama 3.2 and Phi-4 #4038

Open

VedantMadane wants to merge 5 commits into unslothai:main from VedantMadane:fix-tool-calling-compat

Conversation

@VedantMadane

Fixes #3092.

This PR addresses the tool calling compatibility issues reported with Llama 3.2, Phi-4, and Mistral models.

Key Changes:

  1. Compatibility Patch: Added patch_transformers_cfg() in import_fixes.py to monkey-patch transformers-cfg with better model detection and a fallback to auto-inference.
  2. New Helper: Introduced generate_with_grammar() in unsloth/grammars.py to provide a robust, model-agnostic way to use grammar-constrained generation.
  3. Integration: Automatically applies the patch and exports the helper function when unsloth is imported.
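The monkey-patch-with-fallback approach described above can be sketched in isolation. This is an illustrative sketch only: Token2ByteMapping here is a hypothetical stand-in, not the real transformers-cfg class, and the "auto-inferred" fallback is a placeholder for the PR's actual inference logic.

```python
# Hypothetical stand-in for a transformers-cfg class whose stock
# implementation rejects unrecognized models.
class Token2ByteMapping:
    @classmethod
    def from_hf_tokenizer(cls, tokenizer):
        raise ValueError("Unsupported tokenizer")

def patch_transformers_cfg():
    # Grab the underlying function before replacing the classmethod.
    original = Token2ByteMapping.from_hf_tokenizer.__func__

    @classmethod
    def patched(cls, tokenizer):
        try:
            return original(cls, tokenizer)
        except ValueError:
            # Fallback: auto-infer the mapping instead of failing hard.
            return "auto-inferred-mapping"

    Token2ByteMapping.from_hf_tokenizer = patched

patch_transformers_cfg()
print(Token2ByteMapping.from_hf_tokenizer(None))
```

The patched classmethod still tries the original implementation first, so supported models are unaffected; only the previously failing path changes behavior.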
@gemini-code-assist
Contributor

Summary of Changes

Hello @VedantMadane, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances Unsloth's tool calling capabilities by resolving compatibility issues with several large language models, including Llama 3.2, Phi-4, and Mistral. It achieves this by improving how grammar-constrained generation is handled and integrating a new, more robust generation helper, ensuring broader support and stability for advanced generation tasks.

Highlights

  • Compatibility Patch for transformers-cfg: Introduced patch_transformers_cfg() in import_fixes.py to monkey-patch transformers-cfg, improving model detection and providing fallback to auto-inference for models like Llama 3.2, Phi-4, and Mistral.
  • New Grammar-Constrained Generation Helper: Added generate_with_grammar() in unsloth/grammars.py to offer a robust, model-agnostic method for grammar-constrained text generation.
  • Automatic Integration: The new patch and helper function are automatically applied and exported upon importing unsloth.


Changelog
  • unsloth/__init__.py
    • Imported patch_transformers_cfg from import_fixes.py.
    • Called patch_transformers_cfg() during initialization.
    • Removed patch_transformers_cfg from the global namespace after execution.
    • Imported all functions from the new grammars module.
  • unsloth/grammars.py
    • Added a new module for grammar-constrained generation.
    • Defined JSON_ARR_GBNF for a simple JSON array grammar.
    • Implemented generate_with_grammar function to provide model-agnostic grammar-constrained text generation using transformers-cfg.
  • unsloth/import_fixes.py
    • Added patch_transformers_cfg function to monkey-patch transformers-cfg's Token2ByteMapping.from_hf_tokenizer.
    • Implemented logic within the patch to provide better detection and fallback for Phi, Llama 3.x, and Mistral/Mixtral models.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 2238aabb83


("," ws value)*
)? "]" ws
string ::=
""" (


P0: Escape triple quotes in JSON_ARR_GBNF

JSON_ARR_GBNF contains a raw triple-quoted Python string with an unescaped """ token sequence, so the parser treats the string as terminated early and unsloth/grammars.py raises SyntaxError: unterminated string literal. Since unsloth/__init__.py now imports this module unconditionally, any import unsloth fails even when grammar-constrained generation is never used.
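The failure mode is easy to reproduce in miniature. In this sketch the grammar text and variable name are illustrative, not the actual contents of unsloth/grammars.py: a triple-quoted literal whose body contains an unescaped """ is terminated early, while escaping the quotes keeps the module importable.

```python
# A module whose triple-quoted string body contains an unescaped """:
# Python ends the literal at that point and the rest fails to parse.
broken = 'G = """\nrule ::= """ chars\n"""\n'
# Escaping each quote (\") keeps the literal intact.
fixed = 'G = """\nrule ::= \\"\\"\\" chars\n"""\n'

def compiles(src):
    try:
        compile(src, "<grammar>", "exec")
        return True
    except SyntaxError:
        return False

print(compiles(broken), compiles(fixed))
```

Because compilation happens at import time, the broken form fails on any `import unsloth`, exactly as the review notes.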


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces compatibility fixes for tool calling with newer models like Llama 3.2 and Phi-4. It achieves this by monkey-patching transformers-cfg for better model detection and adding a new generate_with_grammar helper function. The changes are well-implemented. I've provided a couple of suggestions to improve robustness: one to enhance the fallback logic in the new grammar generation function to better preserve user settings, and another to make exception handling more specific in the patching logic. Overall, this is a great contribution to improve model compatibility.

Comment on lines 139 to 145:

            logger.warning("Unsloth: Generation failed with model_kwargs error. Retrying with minimal parameters...")
            minimal_kwargs = {
                "input_ids": input_ids,
                "max_new_tokens": max_new_tokens,
                "logits_processor": [grammar_processor],
            }
            return model.generate(**minimal_kwargs)
Contributor


Severity: medium

The fallback logic for model.generate is a bit too aggressive in simplifying the parameters. When a ValueError with 'model_kwargs' occurs, the current implementation falls back to a very minimal set of arguments, discarding the user's sampling configuration (like do_sample, temperature, top_p, top_k, repetition_penalty, and num_return_sequences). This forces greedy decoding, which might not be the user's intent.

A better approach would be to retry generation with all the original parameters except for the extra **kwargs that caused the issue. This preserves the intended generation strategy (sampling vs. greedy) while working around the model-specific parameter incompatibility.

            logger.warning("Unsloth: Generation failed with model_kwargs error. Retrying without extra keyword arguments...")
            minimal_kwargs = {
                "input_ids": input_ids,
                "max_new_tokens": max_new_tokens,
                "do_sample": do_sample,
                "repetition_penalty": repetition_penalty,
                "num_return_sequences": num_return_sequences,
                "logits_processor": [grammar_processor],
            }
            if do_sample:
                if temperature is not None: minimal_kwargs["temperature"] = temperature
                if top_p is not None: minimal_kwargs["top_p"] = top_p
                if top_k is not None: minimal_kwargs["top_k"] = top_k
            return model.generate(**minimal_kwargs)
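The retry idea in the suggestion can be shown self-contained. Here generate() is a hypothetical stand-in for model.generate that rejects unrecognized keyword arguments; the fallback drops only the extra kwargs and keeps the core arguments, rather than discarding the whole configuration.

```python
# Stand-in for model.generate: rejects any unrecognized kwargs with a
# ValueError mentioning model_kwargs, like transformers does.
def generate(input_ids, max_new_tokens, **kwargs):
    if kwargs:
        raise ValueError(
            f"The following model_kwargs are not used: {sorted(kwargs)}"
        )
    return input_ids + list(range(max_new_tokens))

def generate_with_fallback(input_ids, max_new_tokens, **extra):
    try:
        return generate(input_ids, max_new_tokens, **extra)
    except ValueError as e:
        if "model_kwargs" not in str(e):
            raise  # unrelated error: do not mask it
        # Retry without the extra kwargs the model rejected, keeping
        # the core arguments intact.
        return generate(input_ids, max_new_tokens)

print(generate_with_fallback([7], 3, custom_flag=True))
```

Checking the exception message before retrying keeps genuinely unrelated ValueErrors visible instead of silently swallowing them.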
@VedantMadane VedantMadane force-pushed the fix-tool-calling-compat branch from 943e9e2 to 50c1804 on February 12, 2026 at 09:09
Contributor

@danielhanchen danielhanchen left a comment


Tested on 1x NVIDIA B200 (CUDA 12.8, torch 2.9.1, transformers 4.57.6) with 5 notebooks:

Notebook Result
Llama3.1_8B_Alpaca PASS
Llama3.2_1B_and_3B_Conversational PASS
Phi_4_Conversational PASS
Mistral_v0.3_7B_Conversational PASS
Qwen3_14B_Reasoning_Conversational PASS

All 5 notebooks pass. No regressions from the tool calling changes.

Code review notes:

  1. Dead import: AutoTokenizer is imported in grammars.py but never used.
  2. The GBNF grammar has over-escaped characters (e.g. \\\" where \" would suffice). Functionally correct but could be simplified.
  3. Model matching via name_or_path substring is fragile -- consider checking the model's model_type from config instead.
  4. Unused typing imports (Optional, Union) in grammars.py.

These are all non-blocking style issues. The core functionality works.
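Note 3 above can be sketched concretely: match on config.model_type rather than name_or_path substrings. Config here is a minimal stand-in for a transformers config object, and the model_type values in the mapping are illustrative examples, not an exhaustive list.

```python
# Minimal stand-in for a transformers PretrainedConfig.
class Config:
    def __init__(self, model_type):
        self.model_type = model_type

# Illustrative mapping from model_type to model family.
FAMILY_BY_MODEL_TYPE = {
    "llama": "llama",
    "phi3": "phi",
    "mistral": "mistral",
    "mixtral": "mistral",
}

def detect_family(config):
    # model_type is set by the model's config class, so it survives
    # renamed or locally saved checkpoints, where matching substrings
    # of name_or_path would silently fail.
    return FAMILY_BY_MODEL_TYPE.get(config.model_type, "unknown")

print(detect_family(Config("mixtral")))
print(detect_family(Config("my-custom-arch")))
```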

…ai#3092)

- Added compatibility patch for transformers-cfg in import_fixes.py
- Added generate_with_grammar helper in grammars.py
- Exported generate_with_grammar in __init__.py
@VedantMadane VedantMadane force-pushed the fix-tool-calling-compat branch from bee49bf to b92bc8f on February 28, 2026 at 13:51
@Datta0
Collaborator

Datta0 commented Mar 2, 2026

Hey @VedantMadane, can you please undo all the styling changes (spaces, etc.) so that it's easier to review?

