Fix DDP "marked ready twice" for VLMs with CPU offload + TiledMLP #4077
nepfaff wants to merge 4 commits into unslothai:main
Conversation
Summary of Changes: Hello @nepfaff, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request resolves critical DDP compatibility issues for Vision Language Models (VLMs) that utilize CPU offloading and TiledMLP. The changes ensure that Unsloth's memory optimizations are maintained while enabling stable multi-GPU training by preventing the "marked ready twice" error. These fixes are applied only in distributed (multi-GPU) settings.
Code Review
The pull request introduces two critical fixes for Distributed Data Parallel (DDP) compatibility in Vision-Language Models (VLMs). First, it implements non-reentrant gradient checkpointing with CPU activation offloading using saved_tensors_hooks. This restores memory efficiency that was previously lost when switching to non-reentrant checkpointing for DDP safety. Second, it patches the TiledMLP backward pass to be DDP-safe by using functional torch.autograd.grad for all but the last sequence chunk, ensuring that DDP hooks (which trigger gradient reduction) are only fired once during the final .backward() call. These changes are correctly gated behind is_distributed() to avoid impacting single-GPU performance.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 93c5aca876
Hi @nepfaff, thanks for the PR. We've been working on a non-reentrant checkpointer which should perform similarly to the reentrant version. I haven't started the TiledMLP portion yet. It would be great to collaborate on that piece if you're interested. That would be a change to unsloth-zoo/tiled_mlp.py, and ultimately there would need to be some coordination to match the checkpointing and TiledMLP reentrant approaches.
That sounds exciting! I'm unsure how much time I'll have, but I'd be happy to help out.
@nepfaff Are you on our discord? If you could ping me there I'm doublemathew and we can discuss further. |
Improves the existing DDP compatibility block (from PR #3751) with two targeted fixes that preserve Unsloth's memory optimizations:
Non-reentrant checkpointing with CPU activation offloading via saved_tensors_hooks. PR #3751 (Fix VLM + DDP checkpointing) switched to non-reentrant checkpointing but dropped CPU offloading entirely.
DDP-safe TiledMLP backward: uses functional torch.autograd.grad() for all but the last sequence chunk (no DDP hooks fired), then .backward() for the final chunk (fires hooks exactly once).
Both fixes are gated behind is_distributed(), so single-GPU training is completely unaffected.
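The first fix can be illustrated with a minimal sketch of CPU activation offloading via torch.autograd.graph.saved_tensors_hooks. This is not the PR's actual implementation (the names and the toy MLP are illustrative): the pack hook parks each activation that autograd saves on CPU during the forward pass, and the unpack hook moves it back to its original device when backward needs it. In the PR this context is combined with non-reentrant checkpointing (use_reentrant=False).

```python
import torch
import torch.nn as nn


def pack_to_cpu(t: torch.Tensor):
    # Called during forward for each tensor autograd saves: park it on CPU.
    return (t.device, t.cpu())


def unpack_from_cpu(packed):
    # Called during backward: move the saved tensor back to its original device.
    device, t = packed
    return t.to(device)


mlp = nn.Sequential(nn.Linear(8, 32), nn.GELU(), nn.Linear(32, 8))
x = torch.randn(2, 8, requires_grad=True)

with torch.autograd.graph.saved_tensors_hooks(pack_to_cpu, unpack_from_cpu):
    out = mlp(x)  # saved activations live on CPU until backward

out.sum().backward()  # activations are fetched back from CPU here
```

On a CPU-only machine the transfers are no-ops, but the gradients are identical either way; the savings show up when the model runs on GPU and activations would otherwise occupy VRAM between forward and backward.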
Tested on Qwen3-VL-4B + LoRA with 8x L40S GPUs. These changes successfully enabled multi-GPU training.
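The second fix can be sketched in a single-process toy (this is not the actual TiledMLP patch; the linear layer and chunk count are illustrative). The key idea: torch.autograd.grad() computes gradients functionally, without touching .grad or firing the autograd hooks DDP uses to trigger reduction, so all but the last chunk accumulate manually; only the final chunk's .backward() populates .grad and would fire each DDP hook exactly once.

```python
import torch
import torch.nn as nn

mlp = nn.Linear(4, 4)
x = torch.randn(6, 4)
chunks = x.chunk(3, dim=0)
params = list(mlp.parameters())

# All but the last chunk: functional grads, accumulated by hand.
# No .grad is written and no DDP hooks would fire here.
acc = [torch.zeros_like(p) for p in params]
for chunk in chunks[:-1]:
    loss = mlp(chunk).sum()
    grads = torch.autograd.grad(loss, params)
    for a, g in zip(acc, grads):
        a += g

# Seed .grad with the accumulated portion, then let the final chunk's
# .backward() add its contribution (and, under DDP, fire reduction once).
for p, a in zip(params, acc):
    p.grad = a
mlp(chunks[-1]).sum().backward()
```

Because the per-chunk losses sum to the full-batch loss, the final .grad matches an unchunked backward up to floating-point error, while DDP only ever sees each parameter "ready" once.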