NVIDIA - Santa Clara, CA
https://rubenohana.github.io/ - @oharub - in/rubenohana

Stars
Code for the paper "Multiple Physics Pretraining for Physical Surrogate Models"
Official implementation of "Lost in Latent Space: An Empirical Study of Latent Diffusion Models for Physics Emulation"
Code for "Physics Steering: Causal Control of Cross-Domain Concepts in a Physics Foundation Model" paper.
Cosmos-Transfer2.5, built on top of Cosmos-Predict2.5, produces high-quality world simulations conditioned on multiple spatial control inputs.
Polymathic's Large Omnimodal Model for Astronomy
Cosmos-Predict2.5, the latest version of the Cosmos World Foundation Models (WFMs) family, specialized for simulating and predicting the future state of the world in the form of video.
Post-training scripts and samples for the NVIDIA Cosmos ecosystem
A minimal implementation of DeepMind's Genie world model
The 3D Object Reconstruction project is a workflow that takes a set of stereo images and camera info and outputs a textured mesh (i.e., an .OBJ file). The purpose is to translate physical items into the d…
Official implementation for the paper: InterTrack
Codebase for the ICCV 2025 paper "One Trajectory, One Token: Grounded Video Tokenization via Panoptic Sub-object Trajectory"
A 15TB Collection of Physics Simulation Datasets
MMPS Benchmark
Schedule-Free Optimization in PyTorch (a usage sketch follows this list)
General Relativistic Neutrino Radiation Magnetohydrodynamics for Neutron Star Merger Disks
Design and analyze optimal deep learning models.
VideoSys: An easy and efficient system for video generation
Code of the paper "Listening to the Noise: Blind Denoising with Gibbs Diffusion"
Image Restoration Toolbox (PyTorch). Training and testing code for DPIR, USRNet, DnCNN, FFDNet, SRMD, DPSR, BSRGAN, SwinIR
Simple, minimal implementation of the Mamba SSM in one file of PyTorch (the core recurrence is sketched after this list)
Machine Learning Engineering Open Book
Fast & Simple repository for pre-training and fine-tuning T5-style models
Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023)
The simplest, fastest repository for training/finetuning medium-sized GPTs.
Official repo for consistency models.
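
Of the starred projects above, Schedule-Free Optimization has the smallest API surface and is the easiest to drop into an existing training loop. A minimal usage sketch, assuming the schedulefree package from PyPI; the model and data below are placeholders:

    import torch
    import schedulefree  # pip install schedulefree

    model = torch.nn.Linear(10, 1)  # placeholder model
    # Schedule-Free AdamW: iterate averaging replaces the learning-rate
    # schedule, so no scheduler object is needed.
    optimizer = schedulefree.AdamWScheduleFree(model.parameters(), lr=1e-3)

    optimizer.train()  # switch the optimizer to training mode
    for _ in range(100):
        x, y = torch.randn(32, 10), torch.randn(32, 1)
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()

    optimizer.eval()  # swap in the averaged weights before evaluation
                      # or checkpointing

The train()/eval() calls are the one non-standard part of the interface: the optimizer tracks two sequences of iterates, and evaluation should use the averaged one.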
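
The Mamba repository above boils down to one recurrence. Below is a simplified sketch of that core, a diagonal linear state-space scan with fixed (non-selective) parameters; the function name ssm_scan and the shape conventions are illustrative, not the repo's exact code:

    import torch

    def ssm_scan(x, A, B, C, delta):
        """Sequential scan of a diagonal linear SSM.
        x:     (L, D) input sequence
        A:     (D, N) diagonal state matrix (continuous-time)
        B:     (D, N) input projection
        C:     (D, N) output projection
        delta: (L, D) per-step discretization step sizes
        """
        L, D = x.shape
        N = A.shape[1]
        h = torch.zeros(D, N)  # hidden state, one N-dim state per channel
        ys = []
        for t in range(L):
            dA = torch.exp(delta[t].unsqueeze(-1) * A)  # ZOH-discretized A
            dB = delta[t].unsqueeze(-1) * B             # Euler-approximated B
            h = dA * h + dB * x[t].unsqueeze(-1)        # h_t = A_bar h_{t-1} + B_bar x_t
            ys.append((h * C).sum(-1))                  # y_t = C h_t, per channel
        return torch.stack(ys)                          # (L, D)

In Mamba proper, B, C, and delta are computed from the input at every step (the "selective" part), and the scan is parallelized; the fixed parameters and Python loop here keep the sketch short.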