Stars
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use th…
An open source approach to locally record and enable searching everything you view on your Mac.
A Python script that automatically checks in to your Southwest flight 24 hours beforehand.
Accessible large language models via k-bit quantization for PyTorch.
mPLUG-Owl: The Powerful Multi-modal Large Language Model Family
A framework to evaluate LLM tasks
QLoRA: Efficient Finetuning of Quantized LLMs
Fast Open-Source Search & Clustering engine × for Vectors & Arbitrary Objects × in C++, C, Python, JavaScript, Rust, Java, Objective-C, Swift, C#, GoLang, and Wolfram 🔍
Hurl, run and test HTTP requests with plain text.
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath
CodeTF: One-stop Transformer Library for State-of-the-art Code LLM
A guidance language for controlling large language models.
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
ChatRWKV is like ChatGPT but powered by RWKV (100% RNN) language model, and open source.
antimatter15 / alpaca.cpp
Forked from ggml-org/llama.cpp
Locally run an Instruction-Tuned Chat-Style LLM
Original Implementation of Prompt Tuning from Lester, et al, 2021
Long Range Arena for Benchmarking Efficient Transformers
A concrete syntax tree parser and serializer library for Python that preserves many aspects of Python's abstract syntax tree
Pythonic AI generation of images and videos
An extremely fast Python linter and code formatter, written in Rust.
The simplest, fastest repository for training/finetuning medium-sized GPTs.