Sun Yat-sen University (中山大学)
- Zhuhai (UTC +08:00)
- http://moonknight.cn/MyPage
Stars
TextOp: Real-time Interactive Text-Driven Humanoid Robot Motion Generation and Control
MultiModalWBC is a fully open-source, IsaacLab-based framework for multi-modal whole-body control, designed for motion imitation, motion tracking, and task-conditioned control in legged robots. The…
Official implementation of OpenTrack.
Implementation of One-step Latent-free Image Generation with Pixel Mean Flows (Lu et al., 2026)
Codebase for SIGGRAPH 2023 Paper: Simulation and Retargeting of Complex Multi-Character Interactions
Bringing Characters to Life with Computer Brains in Unity
Code for ICLR 2024 paper "Duolando: Follower GPT with Off-Policy Reinforcement Learning for Dance Accompaniment"
Efficient Learning on Point Clouds with Basis Point Sets
Official Implementation of the Paper: Object Motion Guided Human Motion Synthesis (SIGGRAPH Asia 2023 (TOG))
PyTorch implementation of Pointnet2/Pointnet++
Python package for rendering 3D scenes and animations using blender.
Rokoko Studio Live plugin for Blender
Add-on for Blender, provides importing robots from URDF format
[ICCV 2023] Official PyTorch implementation of the paper "InterDiff: Generating 3D Human-Object Interactions with Physics-Informed Diffusion"
Official Code for CVPR2025 Paper: LatentHOI: On the Generalizable Hand Object Motion Generation with Latent Hand Diffusion
Text2HOI: Text-guided 3D Motion Generation for Hand-Object Interaction
This repository collects papers on human-interaction motion generation. New papers are added irregularly.
HY-Motion: a model for generating 3D human motion and 3D character animation.
Expressive Body Capture: 3D Hands, Face, and Body from a Single Image
Official repository of "CORE4D: A 4D Human-Object-Human Interaction Dataset for Collaborative Object REarrangement".
Code release for the paper "DartControl: A Diffusion-Based Autoregressive Motion Model for Real-Time Text-Driven Motion Control"
[CVPR 2023] Executing your Commands via Motion Diffusion in Latent Space, a fast and high-quality motion diffusion model
[ECCV 2024] MotionLCM: This repo is the official implementation of "MotionLCM: Real-time Controllable Motion Generation via Latent Consistency Model"
A system for generating diverse, physically compliant 3D human motions across multiple motion types, guided by plot contexts to streamline creative workflows in anime and game design.