Ajou University - Suwon, KR (UTC +09:00)
LinkedIn: in/hankyul-kang-85825b152
Highlights: Pro

Lists (18)
- 🎭 DataAugmentation: Data Augmentation
- 🤪 Diffusion: Diffusion
- embodied AI
- 🔔 ImageClassification: Image Classification Task
- 🍥 ImageSegmentation: Image Segmentation Task
- LLM: Large-Language-Model
- 📊 Long-tailed & Continual: Long-tailed & Continual Learning
- 🐥 Medical: Medical Vision
- 🐾 ObjectDetection: Object Detection
- 🐐 Pruning
- 🖥️ quantum computing: quantum computing
- 👻 SelfSupervisedLearning: Self supervised learning
- 🕶️ semi supervised & d.a: semi supervised & domain adaptation
- 😎 something funny: something funny & cool idea
- 🎃 Tools & Framework: Useful tools & Framework
- 🤔 Uncertainty Learning: uncertainty learning
- 😶‍🌫️ VAE: Variational Auto Encoder
- VLM: vision & language

Stars
Official code repository for "Understanding the Performance Behaviors of End-to-End Protein Design Pipelines on GPUs [IEEE CAL 25]"
Code repo for the paper "SpinQuant: LLM quantization with learned rotations"
BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment.
An official implementation of "Scheduling Weight Transitions for Quantization-Aware Training" (ICCV 2025) in PyTorch.
[WACV'26] ForestSplats: Deformable transient field for Gaussian Splatting in the Wild
PyTorch implementation of quantization-aware matrix factorization (QMF) for data compression
CUDA Templates and Python DSLs for High-Performance Linear Algebra
Activation-aware Singular Value Decomposition for Compressing Large Language Models
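Stars like ASVD and QMF above use low-rank matrix factorization to compress weights. As a generic illustration only (not ASVD's activation-aware weighting, and with made-up sizes), a rank-r SVD truncation of a weight matrix looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
# A 64x64 weight matrix that is exactly rank 8 (product of two thin factors),
# so a rank-8 truncation can reconstruct it almost perfectly.
W = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))

U, S, Vt = np.linalg.svd(W, full_matrices=False)
r = 8                       # kept rank (hypothetical choice)
A = U[:, :r] * S[:r]        # (64, r) factor, singular values folded in
B = Vt[:r]                  # (r, 64) factor
W_hat = A @ B               # rank-r reconstruction of W
```

Storing the two factors takes 64*8 + 8*64 = 1024 parameters instead of 64*64 = 4096; real weights are only approximately low-rank, so the kept rank trades compression against reconstruction error.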
[ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs
Code for paper: "QuIP: 2-Bit Quantization of Large Language Models With Guarantees"
PB-LLM: Partially Binarized Large Language Models
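Several of the starred projects above (SpinQuant, BitBLAS, BiLLM, QuIP, PB-LLM) revolve around low-bit weight quantization. A minimal round-to-nearest (RTN) baseline, purely illustrative and not any of these repos' actual algorithms:

```python
import numpy as np

def quantize_rtn(w: np.ndarray, bits: int = 4):
    """Symmetric per-tensor round-to-nearest quantization.

    Returns signed integer codes plus the scale needed to dequantize.
    """
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for 4-bit signed
    scale = np.abs(w).max() / qmax        # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.array([0.9, -0.35, 0.05, -0.7], dtype=np.float32)
q, s = quantize_rtn(w, bits=4)
w_hat = dequantize(q, s)                  # lossy reconstruction of w
```

The papers above improve on this baseline in different ways, e.g. rotating weights to suppress outliers (SpinQuant) or pushing below 2 bits with error guarantees (QuIP, BiLLM).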
Official code repository for "Pimba: A Processing-in-Memory Acceleration for Post-Transformer Large Language Model Serving [MICRO'25]"
Implementation of LPLR algorithm for matrix compression
Less is Enough: Training-Free Video Diffusion Acceleration via Runtime-Adaptive Caching
PyTorch Code for Energy-Based Transformers paper -- generalizable reasoning and scalable learning
Official code repository for "Déjà Vu: Efficient Video-Language Query Engine with Learning-based Inter-Frame Computation Reuse [VLDB 25]"
A python library for self-supervised learning on images.
[ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models
[ICLR 2025] Train Small, Infer Large: Memory-Efficient LoRA Training for Large Language Models
[LPCVC2025] Official PyTorch implementation of the 2025 IEEE Low-Power Computer Vision Challenge Track1 Winner at the CVPR 2025 Workshop.
[EMNLP 2024] RoLoRA: Fine-tuning Rotated Outlier-free LLMs for Effective Weight-Activation Quantization
[ICML 2024 Oral] This project is the official implementation of our Accurate LoRA-Finetuning Quantization of LLMs via Information Retention
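The LoRA-related stars above (LoRA-Pro-style memory-efficient training, RoLoRA, IR-QLoRA) all build on the same low-rank adapter idea: freeze the pretrained weight and train only a small rank-r update. A bare-bones NumPy sketch of a LoRA forward pass, with hypothetical names and sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 16, 16, 2                 # output dim, input dim, LoRA rank (all hypothetical)

W0 = rng.standard_normal((d, k))    # frozen pretrained weight, never updated
A = rng.standard_normal((r, k))     # trainable down-projection, random init
B = np.zeros((d, r))                # trainable up-projection, zero init
alpha = 4.0                         # LoRA scaling hyperparameter

def lora_forward(x: np.ndarray) -> np.ndarray:
    # Base path plus scaled low-rank update; only A and B would get gradients.
    return x @ W0.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((1, k))
# With B initialised to zero, lora_forward(x) equals the frozen base output.
y0 = lora_forward(x)
```

Because B starts at zero, training begins exactly at the pretrained model; the quantization-aware variants above additionally store W0 in low precision.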
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM.