
Add Qwen 3.5 to FORCE_FLOAT32 #4134

Open

Etherll wants to merge 2 commits into unslothai:main from Etherll:patch22826

Conversation

@Etherll (Collaborator) commented Mar 1, 2026

No description provided.

Add dtype casting for vision encoder when using FORCE_FLOAT32.
@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances model compatibility and stability by addressing data type inconsistencies for specific models, particularly Qwen 3.5. It ensures that models prone to numerical issues with float16 are correctly handled by forcing float32 precision where necessary, and resolves dtype mismatches in vision encoders to maintain operational integrity.

Highlights

  • Qwen 3.5 Compatibility: Added Qwen 3.5 to the FORCE_FLOAT32 list in unsloth/models/loader.py to address potential RMSNorm overflow issues when using float16 precision, similar to Gemma3 models.
  • Vision Encoder Type Mismatch Fix: Implemented a mechanism in unsloth/models/vision.py to explicitly cast vision encoder parameters back to float32 when FORCE_FLOAT32 is enabled. This prevents dtype mismatches in layers like nn.LayerNorm or nn.Linear for models such as Qwen 3.5 and Gemma3, which receive float32 pixel values while other parameters might be float16.


Changelog
  • unsloth/models/loader.py
    • Added 'qwen3_5' to the FORCE_FLOAT32 list.
  • unsloth/models/vision.py
    • Introduced a conditional block to cast vision encoder parameters to torch.float32 if do_forced_float32 is active, preventing dtype mismatches.
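
The two changes described above can be sketched together as a minimal, hypothetical example: a name-based check against the FORCE_FLOAT32 list. The `needs_float32` helper and its name normalization are illustrative only, not Unsloth's actual implementation; only the `qwen3_5` / `gemma3` entries come from this PR.

```python
# Hypothetical sketch of a FORCE_FLOAT32 name check; the helper name and the
# normalization logic are illustrative, not Unsloth's actual code.
FORCE_FLOAT32 = ["gemma3", "qwen3_5"]  # 'qwen3_5' is the entry this PR adds

def needs_float32(model_name: str) -> bool:
    """True if the model family is known to overflow (e.g. in RMSNorm) in float16."""
    normalized = model_name.lower().replace("-", "_").replace(".", "_")
    return any(key in normalized for key in FORCE_FLOAT32)

print(needs_float32("Qwen3.5-VL-7B"))  # True
print(needs_float32("Llama-3-8B"))     # False
```

With a check like this, the loader can force float32 (or a bfloat16/float32 mixed setup) for matching models while leaving other families untouched.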

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request adds support for Qwen 3.5 to use float32 precision to prevent overflows, which is a necessary change. The implementation correctly adds the model to the FORCE_FLOAT32 list and includes a patch for vision models to ensure the vision encoder remains in float32. The logic is sound. I have one minor suggestion in unsloth/models/vision.py to improve code readability by using more descriptive variable names.

Comment on lines +1062 to +1068 (unsloth/models/vision.py):

```python
_vision_attrs = ("visual", "vision_tower", "vision_model", "vision_encoder")
_inner = model.model if hasattr(model, "model") else model
for _va in _vision_attrs:
    _ve = getattr(_inner, _va, None)
    if _ve is not None:
        _ve.to(torch.float32)
        break
```

Severity: medium
For improved code readability and long-term maintainability, it would be beneficial to use more descriptive variable names. For instance, _vision_attrs could be renamed to vision_tower_attributes, _inner to inner_model, _va to attr_name, and _ve to vision_tower. This makes the code's intent clearer at a glance.

Suggested change:

```python
vision_tower_attributes = ("visual", "vision_tower", "vision_model", "vision_encoder")
inner_model = model.model if hasattr(model, "model") else model
for attr_name in vision_tower_attributes:
    vision_tower = getattr(inner_model, attr_name, None)
    if vision_tower is not None:
        vision_tower.to(torch.float32)
        break
```
@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 52769a6c9a


```python
# mismatches in nn.LayerNorm / nn.Linear (e.g. Qwen3.5, Gemma3).
if do_forced_float32:
    _vision_attrs = ("visual", "vision_tower", "vision_model", "vision_encoder")
    _inner = model.model if hasattr(model, "model") else model
```


P1: Resolve vision encoder from both wrapper and inner model

The forced-float32 fix only looks under model.model when that attribute exists. For VLM wrappers that keep the vision tower on the outer module (while .model is text-only), no vision submodule gets recast, so the original float32-input/float16-weight mismatch still occurs and the new Qwen3.5 forced-float32 path can fail at runtime on those wrappers. The search should therefore cover both the outer model and the inner model before giving up.
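
A minimal sketch of the fix the reviewer is suggesting: search the outer wrapper first, then the inner `.model`, and cast the first vision submodule found. The helper name and the `dtype` parameter are hypothetical (in Unsloth the cast target would be `torch.float32`); the attribute names come from the diff above.

```python
# Hypothetical sketch of the suggested fix: check both the outer wrapper and
# the inner .model for a vision tower before giving up. Helper name and the
# dtype parameter are illustrative; Unsloth would pass torch.float32.
def cast_vision_tower(model, dtype):
    vision_attrs = ("visual", "vision_tower", "vision_model", "vision_encoder")
    for candidate in (model, getattr(model, "model", None)):
        if candidate is None:
            continue
        for attr in vision_attrs:
            tower = getattr(candidate, attr, None)
            if tower is not None:
                tower.to(dtype)  # nn.Module.to casts parameters in place
                return tower
    return None
```

This covers both layouts: wrappers that keep the vision tower on the outer module, and models that nest it under `.model`.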



Labels: None yet

Participants: 1