- https://www.tempus.com/
- Washington, DC
- @gabrielaltay
- in/gabriel-altay
Stars
Run, manage, and scale AI workloads on any AI infrastructure. Use one system to access & manage all AI compute (Kubernetes, 20+ clouds, or on-prem).
A Python implementation of the DESeq2 pipeline for bulk RNA-seq differential expression analysis (DEA).
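For reference, a minimal sketch of running that pipeline with the pydeseq2 package, assuming the DeseqDataSet / DeseqStats interface shown in its README (argument names have shifted somewhat across releases); the toy counts matrix here is purely illustrative.

```python
# A minimal pydeseq2 sketch, assuming its README's DeseqDataSet / DeseqStats
# interface; the synthetic data below is illustrative only.
import numpy as np
import pandas as pd
from pydeseq2.dds import DeseqDataSet
from pydeseq2.ds import DeseqStats

rng = np.random.default_rng(0)
# Toy counts: 6 samples x 100 genes (DESeq2 expects raw integer counts).
counts = pd.DataFrame(
    rng.poisson(10, size=(6, 100)),
    index=[f"sample{i}" for i in range(6)],
    columns=[f"gene{i}" for i in range(100)],
)
metadata = pd.DataFrame(
    {"condition": ["A", "A", "A", "B", "B", "B"]}, index=counts.index
)

dds = DeseqDataSet(counts=counts, metadata=metadata, design_factors="condition")
dds.deseq2()                      # size factors, dispersions, LFCs, Wald tests

stats = DeseqStats(dds, contrast=["condition", "B", "A"])
stats.summary()                   # prints the per-gene results table
print(stats.results_df.head())    # log2FoldChange, pvalue, padj per gene
```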
The Python library for real-time communication
Easily fine-tune, evaluate and deploy gpt-oss, Qwen3, DeepSeek-R1, or any open source LLM / VLM!
Unofficial Python client library for Semantic Scholar APIs.
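A quick usage sketch, assuming the `semanticscholar` package's documented SemanticScholar client and its get_paper / search_paper methods; the specific paper id and attribute names are taken as assumptions from its docs.

```python
# A sketch of the `semanticscholar` client; ids and attribute names assumed
# from the package docs.
from semanticscholar import SemanticScholar

sch = SemanticScholar()

paper = sch.get_paper("arXiv:1706.03762")     # accepts DOI, arXiv id, or S2 id
print(paper.title, paper.year)

results = sch.search_paper("retrieval augmented generation")
for i, item in enumerate(results):            # paginated iterator
    print("-", item.title)
    if i == 2:
        break
```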
aider is AI pair programming in your terminal
A robust, efficient, low-latency speech-to-text library with advanced voice activity detection, wake word activation and instant transcription.
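A minimal usage sketch for that library, assuming RealtimeSTT's AudioToTextRecorder class as shown in its README; it requires a working microphone and the package's audio dependencies.

```python
# A usage sketch assuming RealtimeSTT's AudioToTextRecorder (per its README);
# needs a microphone and the library's audio back ends installed.
from RealtimeSTT import AudioToTextRecorder

if __name__ == "__main__":
    recorder = AudioToTextRecorder()   # starts listening with built-in VAD
    print("speak now...")
    while True:
        print(recorder.text())         # blocks until an utterance is transcribed
```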
Repository for StripedHyena, a state-of-the-art beyond Transformer architecture
The code used to train and run inference with the ColVision models, e.g. ColPali, ColQwen2, and ColSmol.
A scientific instrument for investigating latent spaces
Creation of interactive networks using the d3 JavaScript library
Agent framework and applications built upon Qwen>=3.0, featuring Function Calling, MCP, Code Interpreter, RAG, Chrome extension, etc.
Qwen3 is the large language model series developed by Qwen team, Alibaba Cloud.
The official repo of Qwen (通义千问), the chat and pretrained large language model proposed by Alibaba Cloud.
Training LLMs with QLoRA + FSDP
A library for generating and evaluating synthetic tabular data for privacy, fairness and data augmentation.
Low-code framework for building custom LLMs, neural networks, and other AI models
QLoRA: Efficient Finetuning of Quantized LLMs
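Since several of the starred repos revolve around this technique, here is a sketch of the usual QLoRA recipe via the transformers, peft, and bitsandbytes libraries (which the released code builds on); the model name and LoRA hyperparameters below are illustrative assumptions, not the paper's exact settings.

```python
# A sketch of the standard QLoRA setup with transformers + peft + bitsandbytes;
# model name and LoRA hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",           # NormalFloat4 from the QLoRA paper
    bnb_4bit_use_double_quant=True,      # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",          # any causal LM works; this is an example
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)      # only the LoRA adapters are trainable
model.print_trainable_parameters()
```

The frozen base weights stay in 4-bit NF4 while gradients flow only through the small bf16 LoRA adapters, which is what makes single-GPU fine-tuning of large models feasible.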
High-accuracy RAG for answering questions from scientific documents, with citations
Basaran is an open-source alternative to the OpenAI text completion API. It provides a compatible streaming API for your Hugging Face Transformers-based text generation models.
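Because the API is OpenAI-compatible, it can be exercised with the stock OpenAI Python client; the port and model name below are assumptions from a typical Basaran docker invocation, not fixed values.

```python
# A sketch of streaming completions from a local Basaran server through the
# OpenAI Python client; base_url port and model name are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="unused")
stream = client.completions.create(
    model="user/llama7b-4bit-128g",   # whatever model the server was started with
    prompt="once upon a time,",
    max_tokens=32,
    stream=True,                      # Basaran's selling point: token streaming
)
for chunk in stream:
    print(chunk.choices[0].text, end="", flush=True)
```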
A community-built library of data loaders for LLMs, to be used with LlamaIndex and/or LangChain
INSIGHT is an autonomous AI that can do medical research!
🦜🔗 The platform for reliable agents.
A playbook for systematically maximizing the performance of deep learning models.
Fast and memory-efficient exact attention
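For context on what that one line means, a minimal call sketch assuming the flash-attn package's flash_attn_func, which expects CUDA tensors in fp16/bf16 shaped (batch, seqlen, nheads, headdim); the sizes here are arbitrary.

```python
# A minimal FlashAttention call, assuming flash-attn's flash_attn_func;
# requires a CUDA device and fp16/bf16 inputs. Tensor sizes are arbitrary.
import torch
from flash_attn import flash_attn_func

q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.float16)

# Exact (not approximate) attention, computed tile-by-tile in SRAM so the
# full seqlen x seqlen score matrix is never materialized in HBM.
out = flash_attn_func(q, k, v, causal=True)   # -> (2, 1024, 8, 64)
print(out.shape)
```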