Tongji University / Tsinghua University
Beijing
https://zz7379.github.io/
Stars
Software stack for loco-manipulation experiments across multiple humanoid platforms, with primary support for the Unitree G1. This repository provides whole-body control policies, a teleoperation s…
A comprehensive collection of resources on robot manipulation, including papers, code, and related websites.
RLinf is a flexible and scalable open-source infrastructure designed for post-training foundation models (LLMs, VLMs, VLAs) via reinforcement learning.
[Supports 0.49.x] Reset Cursor AI machine ID & bypass the trial token limit: automatically resets the machine ID for a free upgrade to Pro features, working around "You've reached your trial request limit. / Too many free trial accounts used on this machi…"
openvla / openvla
Forked from TRI-ML/prismatic-vlms. OpenVLA: an open-source vision-language-action model for robotic manipulation.
ROS2-Control implementations for quadruped robots, including sim2real support.
ROS2 description packages for humanoid, quadruped, and manipulator robots.
Official implementation of Spatial-Forcing: Implicit Spatial Representation Alignment for Vision-Language-Action Models
🌐 3D and 4D World Modeling: A Survey
An awesome list of framework (architecture) figures from papers.
Official implementation of WorldForge: Unlocking Emergent 3D/4D Generation in Video Diffusion Model via Training-Free Guidance
Simulation benchmarks of GR1 Tabletop Tasks for GR00T N1
Benchmarking GR00T N1 policy in Isaac Lab
Code for Streaming 4D Visual Geometry Transformer
[arXiv 2025] GMR: General Motion Retargeting. Retargets human motion onto diverse humanoid robots in real time on CPU. The retargeter used by TWIST.
This is the official implementation of the paper "ConRFT: A Reinforced Fine-tuning Method for VLA Models via Consistency Policy".
A comprehensive list of excellent research papers, models, datasets, and other resources on Vision-Language-Action (VLA) models in robotics.
NVIDIA Isaac GR00T N1.6 - A Foundation Model for Generalist Robots.
CALVIN - A benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks
OpenHelix: An Open-source Dual-System VLA Model for Robotic Manipulation
A curated list of state-of-the-art research in embodied AI, focusing on vision-language-action (VLA) models, vision-language navigation (VLN), and related multimodal learning approaches.
Wan: Open and Advanced Large-Scale Video Generative Models
Official PyTorch Implementation of Unified Video Action Model (RSS 2025)
Generating Immersive, Explorable, and Interactive 3D Worlds from Words or Pixels with Hunyuan3D World Model
A generative and self-guided robotic agent that endlessly proposes and masters new skills.
[CVPR 2025 Best Paper Award] VGGT: Visual Geometry Grounded Transformer
Official implementation of Continuous 3D Perception Model with Persistent State