Starred repositories
Diffusion generative model
[NeurIPS 2022] Official code repository for "TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition"
Official code for "RealFusion: 360° Reconstruction of Any Object from a Single Image" (CVPR 2023)
This repository contains code for the paper 'Texture Fields: Learning Texture Representations in Function Space'.
The implementation of "In-Place Scene Labelling and Understanding with Implicit Scene Representation" [ICCV 2021].
Official implementation of "Let 2D Diffusion Model Know 3D-Consistency for Robust Text-to-3D Generation"
Official Implementation for "Latent-NeRF for Shape-Guided Generation of 3D Shapes and Textures"
Latent Point Diffusion Models for 3D Shape Generation
Text-to-3D, image-to-3D, and mesh export with NeRF + Diffusion.
Car parts dataset for object detection and semantic segmentation tasks, provided by DSMLR lab, IT-KMITL.
Stable Diffusion web UI
[CVPR 2021] Multi-Stage Progressive Image Restoration. SOTA results for image deblurring, deraining, and denoising.
A curated list of awesome 3D generation papers
PyTorch3D is FAIR's library of reusable components for deep learning with 3D data
[ICCV2019] Pixel2Mesh++: Multi-View 3D Mesh Generation via Deformation.
This repository contains implementations and illustrative code to accompany DeepMind publications
Cross-platform, customizable ML solutions for live and streaming media.
VQGAN+CLIP Colab Notebook with user-friendly interface.
3D mesh stylization driven by a text input in PyTorch
[ICCV 2023] PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning
[CVPR 2022] PointCLIP: Point Cloud Understanding by CLIP
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training.
OpenMMLab Detection Toolbox and Benchmark
Language-Driven Semantic Segmentation
Official implementation of CLIP-Mesh: Generating textured meshes from text using pretrained image-text models
Awesome list for research on CLIP (Contrastive Language-Image Pre-Training).