Stars
Music client for macOS. Uses WebKit with JavaScript injection to add custom scripting and styling to your music experience.
[ICLR 2025, ICML 2025, NeurIPS 2025 Spotlight] Quantized attention that achieves a 2-5x speedup over FlashAttention without losing end-to-end metrics across language, image, and video models.
Fast and memory-efficient exact attention
Democratizing AlphaFold3: a PyTorch reimplementation to accelerate protein structure prediction
Master the command line, in one page
Transformer-based protein function Annotation with joint feature-Label Embedding
[ICLR 2022] OntoProtein: Protein Pretraining With Gene Ontology Embedding
Awesome Protein Representation Learning
Code for generating model and predictions for CAFA5 competition 2023 (4th place solution)
YuE: Open Full-song Music Generation Foundation Model, something similar to Suno.ai but open
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
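CLIP scores candidate text snippets against an image by cosine similarity of their embeddings, followed by a temperature-scaled softmax. A minimal sketch of that scoring rule, using random stand-in vectors (in real use these would come from CLIP's image and text encoders):

```python
import numpy as np

def clip_rank(image_emb, text_embs, temperature=0.07):
    """Rank candidate text embeddings against one image embedding, CLIP-style."""
    # Normalize to unit length so the dot product is cosine similarity
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    # Temperature-scaled similarities, softmaxed over the candidate texts
    logits = txt @ img / temperature
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)          # stand-in for an image embedding
text_embs = rng.normal(size=(3, 512))     # stand-ins for 3 text embeddings
probs = clip_rank(image_emb, text_embs)
best = int(np.argmax(probs))              # index of the most relevant snippet
```

The temperature of 0.07 mirrors the learned value reported for CLIP; the function itself is only an illustration of the similarity-plus-softmax scoring, not the library's API.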
ProtMamba: a homology-aware but alignment-free protein state space model
Official implementation of Phi-Mamba. A MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Models)
Scalable Protein Language Model Finetuning with Distributed Learning and Advanced Training Techniques such as LoRA.
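LoRA, named in the entry above, fine-tunes a model by freezing each pretrained weight matrix W and learning only a low-rank correction B @ A, scaled by alpha/rank. A minimal NumPy sketch of one such layer (class and parameter names are illustrative, not from the repository):

```python
import numpy as np

class LoRALinear:
    """Linear layer with a frozen base weight and a trainable low-rank update."""
    def __init__(self, in_dim, out_dim, rank=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(out_dim, in_dim))       # frozen pretrained weight
        self.A = rng.normal(size=(rank, in_dim)) * 0.01   # trainable down-projection
        self.B = np.zeros((out_dim, rank))                # trainable up-projection, zero-init
        self.scale = alpha / rank                         # standard LoRA scaling

    def __call__(self, x):
        # Base projection plus the scaled low-rank correction (x A^T) B^T
        return x @ self.W.T + (x @ self.A.T) @ self.B.T * self.scale
```

Because B starts at zero, the layer initially behaves exactly like the frozen base model; training only A and B keeps the trainable parameter count at rank * (in_dim + out_dim) per layer.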
[ICML-23 ORAL] ProtST: Multi-Modality Learning of Protein Sequences and Biomedical Texts
Evolutionary Scale Modeling (esm): Pretrained language models for proteins
[NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Measuring Massive Multitask Language Understanding | ICLR 2021
This project uses the Firefly training framework to fine-tune the LLaMA-2 model on multiple-choice machine reading comprehension (Multiple Choice MRC) tasks, achieving significant improvements.
Get up and running with GLM-4.7, DeepSeek, OpenAI gpt-oss, Qwen, Gemma and other models.