Stars
Replication of the KDD '23 paper "Text Is All You Need: Learning Language Representations for Sequential Recommendation".
Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek-R1, Qwen3, Gemma 3, TTS 2x faster with 70% less VRAM.
Agentic components of the Llama Stack APIs
This repository showcases various advanced techniques for Retrieval-Augmented Generation (RAG) systems. RAG systems combine information retrieval with generative models to provide accurate and cont…
MTEB: Massive Text Embedding Benchmark
Universal LLM Deployment Engine with ML Compilation
Retrieval and Retrieval-augmented LLMs
RankLLM is a Python toolkit for reproducible information retrieval research using rerankers, with a focus on listwise reranking.
Retrieval Augmented Generation (RAG) framework and context engine powered by Pinecone
Code and documentation to train Stanford's Alpaca models, and generate the data.
Interact with, analyze, and structure massive text, image, embedding, audio, and video datasets
streamlit-shap provides a wrapper to display SHAP plots in Streamlit.
🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal domains, for both inference and training.
Code for the paper "Query-Key Normalization for Transformers"
A Unified Library for Parameter-Efficient and Modular Transfer Learning
SPECTER: Document-level Representation Learning using Citation-informed Transformers
Conditional Transformer Language Model for Controllable Generation