- I'm a second-year master's student in IIGROUP at Tsinghua University, supervised by Prof. Yujiu Yang.
- I'm now a research intern at ByteDance Seed, focusing on Multimodal Post-Training.
- I work closely with Dr. Ling Yang and Prof. Mengdi Wang from the AI Lab at Princeton University.
- My research interests lie in Reinforcement Learning and Unified Multimodal Models.
Multimodal Researcher @ByteDance-Seed
- Tsinghua University
- Beijing (UTC +08:00)
- https://cominclip.github.io/
Pinned
- OmniVerifier: Generative Universal Verifier as Multimodal Meta-Reasoner
- YangLing0818/IterComp: [ICLR 2025] IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation
- YangLing0818/RPG-DiffusionMaster: [ICML 2024] Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs (RPG)
- YangLing0818/RealCompo: [NeurIPS 2024] RealCompo: Balancing Realism and Compositionality Improves Text-to-Image Diffusion Models
- Gen-Verse/HermesFlow: [NeurIPS 2025] HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation
- BoxDiff-XL: Extend BoxDiff to SDXL (SDXL-based layout-to-image generation)
