This repository accompanies our survey, A Comprehensive Survey on World Models for Embodied AI. World models function as internal simulators of environmental dynamics, enabling forward and counterfactual rollouts that unify perception, prediction, and control across tasks and domains. For a brief overview of the survey, please refer to the two slide decks: English (PDF) · Chinese (PDF)
Icon legend: 🚗 Autonomous Driving · 🤖 Robotic Manipulation · 🧭 Navigation · 🎬 Video Generation. (Icons indicate the predominant domain; categories are non-exclusive, e.g., robotics and driving papers may also involve generative modeling.)
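The common loop behind the papers below can be summarized as: encode an observation into a latent state, roll the latent dynamics forward under candidate actions, and pick the action sequence with the best imagined return. A minimal sketch in NumPy, where every matrix is a hypothetical stand-in for a learned network (encoder, latent dynamics, reward head), not any specific paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for learned components (assumptions, not a real model):
LATENT, ACT = 8, 2
W_dyn = rng.normal(scale=0.3, size=(LATENT, LATENT))  # latent transition
W_act = rng.normal(scale=0.3, size=(ACT, LATENT))     # action effect on latent
w_rew = rng.normal(size=LATENT)                       # linear reward head

def step(z, a):
    """One imagined step of latent dynamics."""
    return np.tanh(z @ W_dyn + a @ W_act)

def rollout_return(z0, actions):
    """Imagined cumulative reward of an action sequence from latent z0."""
    z, ret = z0, 0.0
    for a in actions:
        z = step(z, a)
        ret += float(w_rew @ z)
    return ret

def plan(z0, horizon=5, candidates=64):
    """Random-shooting planning: sample action sequences, keep the best."""
    best_seq, best_ret = None, -np.inf
    for _ in range(candidates):
        seq = rng.normal(size=(horizon, ACT))
        r = rollout_return(z0, seq)
        if r > best_ret:
            best_seq, best_ret = seq, r
    return best_seq, best_ret

z0 = rng.normal(size=LATENT)  # pretend this came from an observation encoder
seq, ret = plan(z0)
print(seq.shape, ret)
```

The surveyed works differ mainly in how each stand-in is realized (RSSMs, transformers, diffusion models, occupancy grids, Gaussian splats) and in whether the rollout drives planning, policy learning, or data generation.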
- 🤖 DisWM: Disentangled World Models: Learning to Transfer Semantic Knowledge from Distracting Videos for Reinforcement Learning. [ICCV'25] [Paper] [Project Page] [Code] [Dataset]
- 🤖 FOUNDER: Grounding Foundation Models in World Models for Open-Ended Embodied Decision Making. [ICML'25] [Paper] [Project Page]
- 🤖 SENSEI: Semantic Exploration Guided by Foundation Models to Learn Versatile World Models. [ICML'25] [Paper] [Project Page] [Code]
- 🤖 SR-AIF: Solving Sparse-Reward Robotic Tasks From Pixels with Active Inference and World Models. [ICRA'25] [Paper] [Code]
- 🤖 LUMOS: Language-Conditioned Imitation Learning with World Models. [ICRA'25] [Paper] [Project Page] [Code]
- 🤖 WMP: World Model-Based Perception for Visual Legged Locomotion. [ICRA'25] [Paper] [Project Page] [Code]
- 🧭 X-MOBILITY: End-to-end generalizable navigation via world modeling. [ICRA'25] [Paper] [Project Page] [Code]
- 🚗 AdaWM: Adaptive World Model based Planning for Autonomous Driving. [ICLR'25] [Paper]
- 🤖 DreamerV3: Mastering diverse control tasks through world models. [Nature'25] [Paper] [Project Page] [Code]
- 🤖 GLAM: Global-Local Variation Awareness in Mamba-based World Model. [AAAI'25] [Paper] [Code]
- 🤖 WMR: Learning Humanoid Locomotion with World Model Reconstruction. [arXiv'25] [Paper]
- 🚗 VL-SAFE: Vision-Language Guided Safety-Aware Reinforcement Learning with World Models for Autonomous Driving. [arXiv'25] [Paper] [Project Page] [Code] [Poster] [Video]
- 🚗 CALL: Ego-centric Learning of Communicative World Models for Autonomous Driving. [arXiv'25] [Paper]
- 🤖 Latent Policy Steering with Embodiment-Agnostic Pretrained World Models. [arXiv'25] [Paper]
- 🤖 ReDRAW: Adapting World Models with Latent-State Dynamics Residuals. [arXiv'25] [Paper] [Project Page]
- 🤖 OSVI-WM: One-Shot Visual Imitation for Unseen Tasks using World-Model-Guided Trajectory Generation. [arXiv'25] [Paper]
- 🤖 Robotic World Model: A Neural Network Simulator for Robust Policy Optimization in Robotics. [arXiv'25] [Paper]
- 🤖 PreLAR: World Model Pre-training with Learnable Action Representation. [ECCV'24] [Paper] [Code] [Video]
- 🤖 DWL: Advancing Humanoid Locomotion: Mastering Challenging Terrains with Denoising World Model Learning. [RSS'24] [Paper]
- 🤖 HRSSM: Learning Latent Dynamic Robust Representations for World Models. [ICML'24] [Paper] [Project Page] [Code] [Poster]
- 🚗 SEM2: Enhance Sample Efficiency and Robustness of End-to-End Urban Autonomous Driving via Semantic Masked World Model. [TITS'24] [Paper]
- 🚗 Mitigating Covariate Shift in Imitation Learning for Autonomous Vehicles Using Latent Space Generative World Models. [arXiv'24] [Paper] [Video]
- 🤖 DayDreamer: World Models for Physical Robot Learning. [CoRL'22] [Paper] [Project Page] [Code]
- 🤖 TransDreamer: Reinforcement Learning with Transformer World Models. [arXiv'22] [Paper] [Code]
- 🚗 MILE: Model-Based Imitation Learning for Urban Driving. [NeurIPS'22] [Paper] [Code]
- 🤖 Iso-Dream: Isolating and Leveraging Noncontrollable Visual Dynamics in World Models. [NeurIPS'22] [Paper] [Code]
- 🤖 DreamerPro: Reconstruction-Free Model-Based Reinforcement Learning with Prototypical Representations. [ICML'22] [Paper] [Project Page] [Code]
- 🤖 Dreaming: Model-based Reinforcement Learning by Latent Imagination without Reconstruction. [ICRA'21] [Paper]
- 🤖 DreamerV2: Mastering Atari with Discrete World Models. [ICLR'21] [Paper] [Project Page] [Code] [Blog] [Poster]
- 🤖 GLAMOR: Planning from Pixels using Inverse Dynamics Models. [ICLR'21] [Paper] [Code]
- 🤖 Dreamer: Dream to Control: Learning Behaviors by Latent Imagination. [ICLR'20] [Paper] [Project Page] [Code] [Blog] [Poster]
- 🤖 PlaNet: Learning Latent Dynamics for Planning from Pixels. [ICML'19] [Paper] [Project Page] [Code] [Blog] [Poster]
- 🤖 Recurrent World Models Facilitate Policy Evolution. [NeurIPS'18] [Paper] [Project Page] [Video]
- 🤖 EgoAgent: A Joint Predictive Agent Model in Egocentric Worlds. [ICCV'25] [Paper] [Project Page] [Code] [Video]
- 🧭 NavMorph: A Self-Evolving World Model for Vision-and-Language Navigation in Continuous Environments. [ICCV'25] [Paper] [Code]
- 🤖 DyWA: Dynamics-adaptive World Action Model for Generalizable Non-prehensile Manipulation. [ICCV'25] [Paper] [Project Page] [Code]
- 🚗 Epona: Autoregressive Diffusion World Model for Autonomous Driving. [ICCV'25] [Paper] [Project Page] [Code]
- 🤖 MineDreamer: Learning to Follow Instructions via Chain-of-Imagination for Simulated-World Control. [IROS'25] [Paper] [Project Page] [Code] [Dataset]
- 🤖 $\text{D}^2\text{PO}$: World Modeling Makes a Better Planner: Dual Preference Optimization for Embodied Task Planning. [ACL'25] [Paper] [Code] [Dataset]
- 🤖 ReOI: Reimagination with Test-time Observation Interventions: Distractor-Robust World Model Predictions for Visual Model Predictive Control. [RSSW'25] [Paper]
- 🤖 WoMAP: World Models For Embodied Open-Vocabulary Object Localization. [RSSW'25] [Paper]
- 🤖 TWM: Improving Transformer World Models for Data-Efficient RL. [ICML'25] [Paper]
- 🤖 TrajWorld: Trajectory World Models for Heterogeneous Environments. [ICML'25] [Paper] [Code] [Dataset]
- 🚗 SceneDiffuser++: City-Scale Traffic Simulation via a Generative World Model. [CVPR'25] [Paper]
- 🧭 NWM: Navigation World Models. [CVPR'25] [Paper] [Project Page] [Code]
- 🚗 Learning to Drive from a World Model. [CVPRW'25] [Paper]
- 🚗 LatentDriver: Learning Multiple Probabilistic Decisions from Latent World Model in Autonomous Driving. [ICRA'25] [Paper] [Project Page] [Code]
- 🚗 Planning with Adaptive World Models for Autonomous Driving. [ICRA'25] [Paper]
- 🤖 TWISTER: Learning Transformer-based World Models with Contrastive Predictive Coding. [ICLR'25] [Paper] [Code]
- 🤖 DCWM: Discrete Codebook World Models for Continuous Control. [ICLR'25] [Paper] [Project Page] [Code] [Video]
- 🤖 Object-Centric World Model for Language-Guided Manipulation. [ICLRW'25] [Paper]
- 🧭 NavCoT: Boosting LLM-Based Vision-and-Language Navigation via Learning Disentangled Reasoning. [TPAMI'25] [Paper] [Code]
- 🤖 Dyn-O: Building Structured World Models with Object-Centric Representations. [arXiv'25] [Paper]
- 🤖 MineWorld: a Real-Time and Open-Source Interactive World Model on Minecraft. [arXiv'25] [Paper] [Code]
- 🤖 EvoAgent: Self-evolving Agent with Continual World Model for Long-Horizon Tasks. [arXiv'25] [Paper]
- 🤖 RoboHorizon: An LLM-Assisted Multi-View World Model for Long-Horizon Robotic Manipulation. [arXiv'25] [Paper]
- 🤖 WorldVLA: Towards Autoregressive Action World Model. [arXiv'25] [Paper] [Code]
- 🚗 FutureSightDrive: Thinking Visually with Spatio-Temporal CoT for Autonomous Driving. [arXiv'25] [Paper]
- 🤖 Dyna-Think: Synergizing Reasoning, Acting, and World Model Simulation in AI Agents. [arXiv'25] [Paper]
- 🤖 RIG: Synergizing Reasoning and Imagination in End-to-End Generalist Policy. [arXiv'25] [Paper]
- 🤖 Language Agents Meet Causality -- Bridging LLMs and Causal World Models. [ICLR'25] [Paper] [Project Page] [Code]
- 🤖 ECoT: Robotic Control via Embodied Chain-of-Thought Reasoning. [CoRL'24] [Paper] [Project Page] [Code]
- 🤖 PIVOT-R: Primitive-Driven Waypoint-Aware World Model for Robotic Manipulation. [NeurIPS'24] [Paper] [Project Page] [Code]
- 🚗 CarFormer: Self-driving with Learned Object-Centric Representations. [ECCV'24] [Paper] [Project Page] [Code]
- 🤖 $\Delta$-IRIS: Efficient World Models with Context-Aware Tokenization. [ICML'24] [Paper] [Code]
- 🤖 Statler: State-Maintaining Language Models for Embodied Reasoning. [ICRA'24] [Paper] [Project Page] [Code]
- 🚗 DrivingWorld: Constructing World Model for Autonomous Driving via Video GPT. [arXiv'24] [Paper] [Project Page] [Code] [Video]
- 🚗 Doe-1: Closed-Loop Autonomous Driving with Large World Model. [arXiv'24] [Paper] [Project Page] [Code]
- 🚗 DrivingGPT: Unifying Driving World Modeling and Planning with Multi-modal Autoregressive Transformers. [arXiv'24] [Paper] [Project Page] [Code]
- 🤖 TWM: Transformer-based World Models Are Happy With 100k Interactions. [ICLR'23] [Paper] [Code]
- 🤖 IRIS: Transformers are Sample-Efficient World Models. [ICLR'23] [Paper] [Code]
- 🤖 Inner Monologue: Embodied Reasoning through Planning with Language Models. [CoRL'22] [Paper] [Project Page] [Video]
- 🤖 MWM: Masked World Models for Visual Control. [CoRL'22] [Paper] [Project Page] [Code]
- 🤖 ParticleFormer: A 3D Point Cloud World Model for Multi-Object, Multi-Material Robotic Manipulation. [CoRL'25] [Paper] [Project Page]
- 🚗 WoTE: End-to-End Driving with Online Trajectory Evaluation via BEV World Model. [ICCV'25] [Paper] [Code]
- 🧭 WMNav: Integrating Vision-Language Models into World Models for Object Goal Navigation. [IROS'25] [Paper] [Project Page] [Code] [Video]
- 🤖 DINO-WM: World Models on Pre-trained Visual Features enable Zero-shot Planning. [ICML'25] [Paper] [Project Page] [Code] [Dataset]
- 🚗 RenderWorld: World Model with Self-Supervised 3D Label. [ICRA'25] [Paper]
- 🚗 PreWorld: Semi-Supervised Vision-Centric 3D Occupancy World Model for Autonomous Driving. [ICLR'25] [Paper] [Code]
- 🚗 SSR: Navigation-Guided Sparse Scene Representation for End-to-End Autonomous Driving. [ICLR'25] [Paper] [Code]
- 🚗 LAW: Enhancing End-to-End Autonomous Driving with Latent World Model. [ICLR'25] [Paper] [Code]
- 🚗 Drive-OccWorld: Driving in the Occupancy World: Vision-Centric 4D occupancy forecasting and planning via world models for autonomous driving. [AAAI'25] [Paper] [Project Page] [Code]
- 🚗 Raw2Drive: Reinforcement Learning with Aligned World Models for End-to-End Autonomous Driving (in CARLA v2). [arXiv'25] [Paper]
- 🚗 FASTopoWM: Fast-Slow Lane Segment Topology Reasoning with Latent World Models. [arXiv'25] [Paper] [Code]
- 🤖 RoboOccWorld: Occupancy World Model for Robots. [arXiv'25] [Paper]
- 🤖 EnerVerse: Envisioning Embodied Future Space for Robotics Manipulation. [arXiv'25] [Paper] [Project Page]
- 🚗 DriveDreamer: Towards Real-world-driven World Models for Autonomous Driving. [ECCV'24] [Paper] [Project Page] [Code]
- 🚗 GenAD: Generative End-to-End Autonomous Driving. [ECCV'24] [Paper] [Code] [Dataset]
- 🚗 OccWorld: Learning a 3D Occupancy World Model for Autonomous Driving. [ECCV'24] [Paper] [Code]
- 🚗 NeMo: Neural Volumetric World Models for Autonomous Driving. [ECCV'24] [Paper]
- 🚗 DriveWorld: 4D pre-trained scene understanding via world models for autonomous driving. [CVPR'24] [Paper]
- 🚗 OccLLaMA: An Occupancy-Language-Action Generative World Model for Autonomous Driving. [arXiv'24] [Paper]
- 🤖 ManiGaussian++: General Robotic Bimanual Manipulation with Hierarchical Gaussian World Model. [IROS'25] [Paper] [Code]
- 🤖 PIN-WM: Learning physics-informed world models for non-prehensile manipulation. [RSS'25] [Paper] [Project Page] [Code]
- 🤖 PWTF: Prompting with the Future: Open-World Model Predictive Control with Interactive Digital Twins. [RSS'25] [Paper] [Project Page] [Code]
- 🤖 DreMa: Dream to Manipulate: Compositional World Models Empowering Robot Imitation Learning with Imagination. [ICLR'25] [Paper] [Project Page] [Code]
- 🤖 GAF: Gaussian Action Field as a 4D Representation for Dynamic World Modeling in Robotic Manipulation. [arXiv'25] [Paper] [Project Page]
- 🚗 DTT: Delta-Triplane Transformers as Occupancy World Models. [arXiv'25] [Paper]
- 🤖 Physically Embodied Gaussian Splatting: A Realtime Correctable World Model for Robotics. [CoRL'24] [Paper] [Project Page] [Code] [Dataset]
- 🤖 ManiGaussian: Dynamic Gaussian Splatting for Multi-task Robotic Manipulation. [ECCV'24] [Paper] [Project Page] [Code]
- 🤖 $\text{DexSim2Real}^{2}$: Building Explicit World Model for Precise Articulated Object Dexterous Manipulation. [arXiv'24] [Paper] [Project Page] [Code] [Video]
- 🤖 LaDi-WM: A Latent Diffusion-based World Model for Predictive Manipulation. [CoRL'25] [Paper] [Project Page] [Code]
- 🤖 FLARE: Robot Learning with Implicit World Modeling. [RSSW'25] [Paper] [Project Page] [Code]
- 🚗 GeoDrive: 3D Geometry-Informed Driving World Model with Precise Action Control. [arXiv'25] [Paper] [Code]
- 🤖 villa-X: Enhancing Latent Action Modeling in Vision-Language-Action Models. [arXiv'25] [Paper] [Project Page] [Code]
- 🤖 VidMan: Exploiting Implicit Dynamics from Video Diffusion Model for Effective Robot Manipulation. [NeurIPS'24] [Paper]
- 🚗 TOKEN: Tokenize the World into Object-level Knowledge to Address Long-tail Events in Autonomous Driving. [CoRL'24] [Paper] [Project Page]
- 🤖 TesserAct: Learning 4D Embodied World Models. [ICCV'25] [Paper] [Project Page] [Code]
- 🚗 World4Drive: End-to-End Autonomous Driving via Intention-aware Physical Latent World Model. [ICCV'25] [Paper] [Code]
- 🚗 Imagine-2-Drive: Leveraging High-Fidelity World Models via Multi-Modal Diffusion Policies. [IROS'25] [Paper] [Project Page] [Video]
- 🤖 COMBO: Compositional World Models for Embodied Multi-Agent Cooperation. [ICLR'25] [Paper] [Project Page] [Code]
- 🤖 EmbodieDreamer: Advancing Real2Sim2Real Transfer for Policy Training via Embodied World Modeling. [arXiv'25] [Paper] [Project Page] [Code]
- 🤖 ManipDreamer: Boosting Robotic Manipulation World Model with Action Tree and Visual Guidance. [arXiv'25] [Paper]
- 🤖 3DFlowAction: Learning Cross-Embodiment Manipulation from 3D Flow World Model. [arXiv'25] [Paper] [Code]
- 🤖 RoboDreamer: Learning Compositional World Models for Robot Imagination. [ICML'24] [Paper] [Project Page] [Code]
- 🚗 Drive-WM: Driving into the Future: Multiview Visual Forecasting and Planning with World Model for Autonomous Driving. [CVPR'24] [Paper] [Project Page] [Code]
- 🚗 DFIT-OccWorld: An Efficient Occupancy World Model via Decoupled Dynamic Flow and Image-assisted Training. [arXiv'24] [Paper]
- 🚗 Orbis: Overcoming Challenges of Long-Horizon Prediction in Driving World Models. [NeurIPS'25] [Paper] [Project Page] [Code]
- 🤖 RLVR-World: Training World Models with Reinforcement Learning. [NeurIPS'25] [Paper] [Project Page] [Code] [Dataset]
- 🚗 DriVerse: Navigation World Model for Driving Simulation via Multimodal Trajectory Prompting and Motion Alignment. [ACMMM'25] [Paper] [Code]
- 🤖 Long-Context State-Space Video World Models. [ICCV'25] [Paper] [Project Page]
- 🚗 World model-based end-to-end scene generation for accident anticipation in autonomous driving. [Nat. Commun. Eng.'25] [Paper] [Code] [Dataset]
- 🤖 EVA: Empowering World Models with Reflection for Embodied Video Prediction. [ICML'25] [Paper] [Project Page]
- 🤖 AdaWorld: Learning Adaptable World Models with Latent Actions. [ICML'25] [Paper] [Project Page] [Code]
- 🎬 DINO-World: Back to the Features: DINO as a Foundation for Video World Models. [arXiv'25] [Paper]
- 🤖 RoboScape: Physics-informed Embodied World Model. [arXiv'25] [Paper] [Code]
- 🎬 Yume: An Interactive World Generation Model. [arXiv'25] [Paper] [Project Page] [Code] [Video] [Dataset]
- 🤖 World4Omni: A Zero-Shot Framework from Image Generation World Model to Robotic Manipulation. [arXiv'25] [Paper] [Project Page]
- 🤖 Vid2World: Crafting Video Diffusion Models to Interactive World Models. [arXiv'25] [Paper] [Project Page]
- 🎬 Geometry Forcing: Marrying Video Diffusion and 3D Representation for Consistent World Modeling. [arXiv'25] [Paper] [Project Page] [Code]
- 🎬 DeepVerse: 4D Autoregressive Video Generation as a World Model. [arXiv'25] [Paper] [Project Page] [Code]
- 🤖 VRAG: Learning World Models for Interactive Video Generation. [arXiv'25] [Paper]
- 🤖 StateSpaceDiffuser: Bringing Long Context to Diffusion World Models. [arXiv'25] [Paper]
- 🚗 LongDWM: Cross-Granularity Distillation for Building a Long-Term Driving World Model. [arXiv'25] [Paper] [Project Page] [Code]
- 🚗 MiLA: Multi-view Intensive-fidelity Long-term Video Generation World Model for Autonomous Driving. [arXiv'25] [Paper] [Code]
- 🤖 S2-SSM: Learning Local Causal World Models with State Space Models and Attention. [arXiv'25] [Paper]
- 🤖 WorldGym: World Model as An Environment for Policy Evaluation. [arXiv'25] [Paper] [Project Page] [Code] [Demo]
- 🤖 WorldEval: World Model as Real-World Robot Policies Evaluator. [arXiv'25] [Paper] [Project Page] [Code]
- 🤖 World-in-World: World Models in a Closed-Loop World. [arXiv'25] [Paper] [Project Page] [Code] [Dataset]
- 🤖 iVideoGPT: Interactive VideoGPTs are Scalable World Models. [NeurIPS'24] [Paper] [Project Page] [Code] [Poster]
- 🎬 Genie: Generative Interactive Environments. [ICML'24] [Paper] [Code]
- 🚗 GenAD: Generalized Predictive Model for Autonomous Driving. [CVPR'24] [Paper] [Dataset] [Poster] [Video]
- 🎬 Owl-1: Omni World Model for Consistent Long Video Generation. [arXiv'24] [Paper] [Code]
- 🎬 Pandora: Towards General World Model with Natural Language Actions and Video States. [arXiv'24] [Paper] [Project Page] [Code] [Video]
- 🚗 InfinityDrive: Breaking Time Limits in Driving World Models. [arXiv'24] [Paper] [Project Page]
- 🧭 PACT: Perception-Action Causal Transformer for Autoregressive Robotics Pre-Training. [IROS'23] [Paper]
- 🚗 STAGE: A Stream-Centric Generative World Model for Long-Horizon Driving-Scene Simulation. [IROS'25] [Paper] [Project Page]
- 🎬 GEM: A Generalizable Ego-Vision Multimodal World Model for Fine-Grained Ego-Motion, Object Dynamics, and Scene Composition Control. [CVPR'25] [Paper] [Project Page] [Code]
- 🚗 LidarDM: Generative LiDAR Simulation in a Generated World. [ICRA'25] [Paper] [Project Page] [Code]
- 🎬 FOLIAGE: Towards Physical Intelligence World Models Via Unbounded Surface Evolution. [arXiv'25] [Paper]
- 🧭 MindJourney: Test-Time Scaling with World Models for Spatial Reasoning. [arXiv'25] [Paper] [Project Page] [Code]
- 🧭 Learning 3D Persistent Embodied World Models. [arXiv'25] [Paper]
- 🚗 Vista: A Generalizable Driving World Model with High Fidelity and Versatile Controllability. [NeurIPS'24] [Paper] [Project Page] [Code]
- 🚗 Copilot4D: Learning Unsupervised World Models for Autonomous Driving via Discrete Diffusion. [ICLR'24] [Paper]
- 🚗 ViDAR: Visual Point Cloud Forecasting enables Scalable Autonomous Driving. [CVPR'24] [Paper] [Code]
- 🚗 DOME: Taming Diffusion Model into High-Fidelity Controllable Occupancy World Model. [arXiv'24] [Paper] [Project Page] [Code]
- 🚗 Delphi: Unleashing Generalization of End-to-End Autonomous Driving with Controllable Long Video Generation. [arXiv'24] [Paper] [Project Page] [Code]
- 🎬 PhyDNet: Disentangling Physical Dynamics From Unknown Factors for Unsupervised Video Prediction. [CVPR'20] [Paper] [Code]
- 🚗 InfiniCube: Unbounded and Controllable Dynamic 3D Driving Scene Generation with World-Guided Video Models. [ICCV'25] [Paper] [Project Page] [Code]
- 🚗 GaussianWorld: Gaussian World Model for Streaming 3D Occupancy Prediction. [CVPR'25] [Paper] [Code]
- 🎬 Video World Models with Long-term Spatial Memory. [arXiv'25] [Paper] [Project Page]
- 🎬 MarsGen: Martian World Models: Controllable Video Synthesis with Physically Accurate 3D Reconstructions. [NeurIPS'25] [Paper] [Project Page] [Code] [Dataset]
- 🚗 MaskGWM: A Generalizable Driving World Model with Video Mask Reconstruction. [CVPR'25] [Paper] [Code] [Video]
- 🎬 EchoWorld: Learning Motion-Aware World Models for Echocardiography Probe Guidance. [CVPR'25] [Paper] [Code]
- 🎬 V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning. [arXiv'25] [Paper] [Project Page] [Code] [Blog]
- 🚗 AD-L-JEPA: Self-Supervised Representation Learning with Joint Embedding Predictive Architecture for Automotive LiDAR Object Detection. [arXiv'25] [Paper]
- 🎬 AirScape: An Aerial Generative World Model with Motion Controllability. [arXiv'25] [Paper] [Project Page]
- 🤖 ForeDiff: Consistent World Models via Foresight Diffusion. [arXiv'25] [Paper]
- 🎬 V-JEPA: Revisiting Feature Prediction for Learning Visual Representations from Video. [TMLR'24] [Paper] [Code] [Blog] [Video]
- 🎬 WorldDreamer: Towards General World Models for Video Generation via Predicting Masked Tokens. [arXiv'24] [Paper] [Project Page] [Code]
- 🎬 Sora: Video generation models as world simulators. [OpenAI'24] [Project Page]
- 🚗 HERMES: A Unified Self-Driving World Model for Simultaneous 3D Scene Understanding and Generation. [ICCV'25] [Paper] [Project Page] [Code]
- 🎬 Aether: Geometric-Aware Unified World Modeling. [ICCV'25] [Paper] [Project Page] [Code]
- 🚗 PosePilot: Steering Camera Pose for Generative World Models with Self-supervised Depth. [IROS'25] [Paper]
- 🚗 DynamicCity: Large-Scale 4D Occupancy Generation from Dynamic Scenes. [ICLR'25] [Paper] [Project Page] [Code]
- 🚗 DriveDreamer-2: LLM-Enhanced World Models for Diverse Driving Video Generation. [AAAI'25] [Paper] [Project Page] [Code]
- 🚗 UniFuture: Seeing the Future, Perceiving the Future: A Unified Driving World Model for Future Generation and Perception. [arXiv'25] [Paper] [Project Page] [Code]
- 🚗 Towards foundational LiDAR world models with efficient latent flow matching. [arXiv'25] [Paper] [Project Page]
- 🚗 COME: Adding Scene-Centric Forecasting Control to Occupancy World Model. [arXiv'25] [Paper] [Code]
- 🤖 Geometry-aware 4D Video Generation for Robot Manipulation. [arXiv'25] [Paper] [Project Page] [Code] [Dataset]
- 🚗 EOT-WM: Other Vehicle Trajectories Are Also Needed: A Driving World Model Unifies Ego-Other Vehicle Trajectories in Video Latent Space. [arXiv'25] [Paper]
- 🤖 ORV: 4D Occupancy-centric Robot Video Generation. [arXiv'25] [Paper] [Project Page] [Code]
- 🚗 Cam4DOcc: Benchmark for Camera-Only 4D Occupancy Forecasting in Autonomous Driving Applications. [CVPR'24] [Paper] [Code]
- 🚗 BEVWorld: A Multimodal World Simulator for Autonomous Driving via Scene-Level BEV Latents. [arXiv'24] [Paper]
- 🚗 OccSora: 4D Occupancy Generation Models as World Simulators for Autonomous Driving. [arXiv'24] [Paper] [Code]
- 🚗 DrivePhysica: Physical Informed Driving World Model. [arXiv'24] [Paper] [Code]
- 🚗 Point Cloud Forecasting as a Proxy for 4D Occupancy Forecasting. [CVPR'23] [Paper] [Project Page] [Code] [Video]
- 🚗 Differentiable Raycasting for Self-Supervised Occupancy Forecasting. [ECCV'22] [Paper] [Project Page] [Code] [Video]
- 🚗 Self-supervised Point Cloud Prediction Using 3D Spatio-temporal Convolutional Networks. [CoRL'21] [Paper] [Code]
- 🚗 DriveDreamer4D: World Models Are Effective Data Machines for 4D Driving Scene Representation. [CVPR'25] [Paper] [Project Page] [Code]
- 🚗 ReconDreamer: Crafting World Models for Driving Scene Reconstruction via Online Restoration. [CVPR'25] [Paper] [Project Page] [Code]
- 🚗 UnO: Unsupervised Occupancy Fields for Perception and Forecasting. [CVPR'24] [Paper]
- 🚗 MagicDrive3D: Controllable 3D Generation for Any-View Rendering in Street Scenes. [arXiv'24] [Paper] [Project Page] [Code]