A curated list of state-of-the-art research in embodied AI, focusing on vision-language-action (VLA) models, vision-language navigation (VLN), and related multimodal learning approaches.
Official code for "FeatUp: A Model-Agnostic Frameworkfor Features at Any Resolution" ICLR 2024
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
Curated list of papers and resources focused on 3D Gaussian Splatting, intended to keep pace with the anticipated surge of research in the coming months.
[CVPR 2024 Highlight] Official PyTorch implementation of SpatialTracker: Tracking Any 2D Pixels in 3D Space
An open-source implementation of Large Reconstruction Models
Python code to fuse multiple RGB-D images into a TSDF voxel volume.
Computer vision utilities for Blender (generate instance annotations, depth, and 6D poses with one line of code)
[ICLR 2024 Oral] Generative Gaussian Splatting for Efficient 3D Content Creation
Official open-source code for "Masked Autoencoders As Spatiotemporal Learners"
The first open Federated Learning framework implemented in C++ and Python.
An open-source project dedicated to tracking and segmenting any objects in videos, either automatically or interactively. The primary algorithms utilized include the Segment Anything Model (SAM) fo…
PHYRE is a benchmark for physical reasoning.
This repo is a PyTorch implementation of the paper "MovingParts: Motion-based Part Discovery in Dynamic Radiance Field"
Code for "AutoRecon: Automated 3D Object Discovery and Reconstruction" CVPR 2023 (Highlight)
Google Research
Personalize Segment Anything Model (SAM) with 1 shot in 10 seconds
DS-SLAM: Real-Time Dynamic SLAM Using Semantic Segmentation Methods
DynaSLAM is a SLAM system robust in dynamic environments for monocular, stereo and RGB-D setups
PyBullet keyboard shortcut/hotkey list
Official JAX Implementation of Monocular Dynamic View Synthesis: A Reality Check (NeurIPS 2022)
PyTorch code and models for the DINOv2 self-supervised learning method.