Intel Corporation
- San Francisco Bay Area
- http://www.intel.com
Lists (4)
Stars
ToolOrchestra is an end-to-end RL training framework for orchestrating tools and agentic workflows.
TurboDiffusion: 100–200× Acceleration for Video Diffusion Models
LLaDA2.0 is the diffusion language model series developed by the InclusionAI team at Ant Group.
Helpful kernel tutorials and examples for tile-based GPU programming
The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.
Miles is an enterprise-facing reinforcement learning framework for LLM and VLM post-training, forked from and co-evolving with slime.
LLM Council works together to answer your hardest questions
SGLang is a high-performance serving framework for large language models and multimodal models.
Simple framework for training and evaluating math reasoning agents using local models, GRPO and vLLM.
[NeurIPS'25] dKV-Cache: The Cache for Diffusion Language Models
A collection of tricks and tools to speed up transformer models
A toolkit for developing and comparing reinforcement learning algorithms.
Open diffusion language model for code generation — releasing pretraining, evaluation, inference, and checkpoints.
Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels
Example of how to use SmolAgent with the diffusion-based Inception model
MMaDA - Open-Sourced Multimodal Large Diffusion Language Models (dLLMs with mixed-CoT and unified GRPO)
Oneflow-Inc / xDiT-flux-fp8
Forked from xdit-project/xDiT. xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) on multi-GPU clusters
Collection of awesome LLM apps with AI Agents and RAG using OpenAI, Anthropic, Gemini, and open-source models.
The official GitHub repository for the Mathematics of Machine Learning book!
Intel® AI for Enterprise Inference optimizes AI inference services on Intel hardware using Kubernetes orchestration. It automates LLM model deployment for faster inference, resource provisioning, a…
Mirage Persistent Kernel: Compiling LLMs into a MegaKernel
An interface library for RL post-training with environments.
TorchGeo: datasets, samplers, transforms, and pre-trained models for geospatial data
Use PEFT or Full-parameter to CPT/SFT/DPO/GRPO 600+ LLMs (Qwen3, Qwen3-MoE, DeepSeek-R1, GLM4.5, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL, Qwen3-Omni, InternVL3.5, Ovis2.5, GLM4.5v, Llava, …
Open Source Continuous Inference Benchmarking - GB200 NVL72 vs MI355X vs B200 vs GB300 NVL72 vs H100 (and soon™ TPUv6e/v7/Trainium2/3) - DeepSeek 670B MoE, GPT-OSS