- Los Angeles
- https://bell0o.github.io/
- in/ma-siyu
Starred repositories
PointWorld: Scaling 3D World Models for In-The-Wild Robotic Manipulation
[CoRL 2025 Best Paper Award] Fabrica: Dual-Arm Assembly of General Multi-Part Objects via Integrated Planning and Learning
The repository provides code for running inference with the SAM 3D Body Model (3DB), links for downloading the trained model checkpoints and datasets, and example notebooks that show how to use the…
Repository for our papers: Robotic World Model: A Neural Network Simulator for Robust Policy Optimization in Robotics and Uncertainty-Aware Robotic World Model Makes Offline Model-Based Reinforceme…
Lightwheel-YCB is a high-quality simulation asset benchmark built upon the YCB Benchmarks - Object and Model Set. It features 125 meticulously crafted simulation-ready assets across three categorie…
A comprehensive list of papers on the definition of World Models and on using World Models for General Video Generation, Embodied AI, and Autonomous Driving, including papers, code, and related webs…
Learning Dexterous Manipulation Skills from Imperfect Simulations
NVIDIA Isaac Sim™ is an open-source application on NVIDIA Omniverse for developing, simulating, and testing AI-driven robots in realistic virtual environments.
A library for creating Gym environments with unified API to various physics simulators
Tactile Sensing • Simulation • Representation • Manipulation • IL/RL/VLA • Open Source
Official code release for CoRL'25 paper: VT-Refine: Learning Bimanual Assembly with Visuo-Tactile Feedback via Simulation Fine-Tuning
[RSS 2025] "ASAP: Aligning Simulation and Real-World Physics for Learning Agile Humanoid Whole-Body Skills"
RLinf: Reinforcement Learning Infrastructure for Embodied and Agentic AI
Collection of MuJoCo robotics environments equipped with both vision and tactile sensing
A curated list of state-of-the-art research in embodied AI, focusing on vision-language-action (VLA) models, vision-language navigation (VLN), and related multimodal learning approaches.
Benchmarking Knowledge Transfer in Lifelong Robot Learning
[ICLR 2026] SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning
Reference PyTorch implementation and models for DINOv3
[IROS 2025] Official repository of GRIP: A General Robotic Incremental Potential Contact Simulation Dataset for Unified Deformable-Rigid Coupled Grasping.
The code for controlling the Piper robotic arm