
Model Catalog

New & noteworthy local models you can run on your own computer.


LFM2-24B-A2B
24B
LFM2 is a family of hybrid models designed for on-device deployment. LFM2-24B-A2B is the largest model in the family, scaling the architecture to 24 billion parameters while keeping inference efficient.
19.2K
3
Updated 6 days ago
Qwen3-Coder-Next
80B
Qwen3 Coder Next is an 80B MoE with 3B active parameters designed for coding agents and local development. Excels at long-horizon reasoning, complex tool usage, and recovery from execution failures.
125.2K
34
Updated 24 days ago
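The total-versus-active split is what makes MoE models like this practical on local hardware: all 80B weights must fit in memory, but only about 3B participate in any single forward pass, so per-token speed is closer to that of a small dense model. A back-of-envelope sketch, with illustrative (not measured) figures:

```python
# Back-of-envelope memory/speed estimate for a Mixture-of-Experts model.
# Figures and quantization levels are illustrative assumptions; real usage
# also depends on KV cache size and runtime overhead.

def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate memory needed just for the weights."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

total_params = 80.0   # all experts must be resident in memory
active_params = 3.0   # parameters used per token (drives per-token compute)

print(f"Weights at 4-bit: ~{weight_memory_gb(total_params, 4):.0f} GB")
print(f"Weights at 8-bit: ~{weight_memory_gb(total_params, 8):.0f} GB")
# Per-token compute scales with the ~3B active parameters, so generation
# speed lands closer to a 3B dense model than to an 80B one.
```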
GLM-4.7
30B
Open-source coding models from Z.ai, built on a new base model and specialized in coding and tool calling.
190.9K
57
Updated 1 month ago
FunctionGemma
270M
FunctionGemma is a lightweight, open model from Google, built as a foundation for creating your own specialized function calling models.
3.3K
35
Updated 2 months ago
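Function calling models like FunctionGemma are typically driven through an OpenAI-compatible tools array. A minimal sketch against a local server such as the one LM Studio exposes on port 1234 by default; the model identifier and the get_weather tool are illustrative assumptions:

```python
# Minimal tool-calling sketch against a local OpenAI-compatible server
# (LM Studio serves one at http://localhost:1234/v1 by default).
# The model name and the get_weather tool are illustrative assumptions.
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "functiongemma",  # whatever identifier you loaded it under
        "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
        "tools": tools,
    },
)
# A function-calling model should reply with a tool_calls entry naming
# get_weather and JSON arguments like {"city": "Oslo"}.
print(resp.json()["choices"][0]["message"])
```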
Nemotron 3
30B
General purpose reasoning and chat model trained from scratch by NVIDIA. Contains 30B total parameters with only 3.5B active at a time for low-latency MoE inference.
132.3K
44
Updated 2 months ago
GLM-4.6V-Flash
9B
GLM 4.6V Flash is a 9B vision-language model optimized for local deployment and low-latency applications.
244.3K
43
Updated 2 months ago
Devstral 2
24B
123B
Second-generation Devstral for agentic coding. Built for tool use to explore codebases, edit multiple files, and power software engineering agents with newly added vision support.
160.1K
45
2
Updated 2 months ago
Rnj-1
8B
Rnj-1 is a family of 8B parameter open-weight, dense models trained from scratch by Essential AI.
48.6K
15
Updated 2 months ago
Ministral 3
3B
8B
14B
The Ministral 3 series, available in three model sizes: 3B, 8B, and 14B parameters. Provides a best-in-class cost-to-performance ratio.
509.3K
84
6
Updated 3 months ago
Qwen3 Next
80B
An 80B Mixture-of-Experts model (3B active) with a hybrid attention architecture and high expert sparsity.
62K
25
Updated 3 months ago
Olmo 3
7B
32B
Olmo 3 is a family of open language models designed to enable the science of language models.
36.7K
28
3
Updated 3 months ago
olmOCR 2
7B
The olmOCR 2 model is a Vision Language Model (VLM) from Allen AI, built to convert document pages into clean, readable text.
60.2K
14
Updated 3 months ago
minimax-m2
230B
MiniMax M2 is a 230B MoE (10B active) model built for coding and agentic workflows.
21.6K
29
Updated 3 months ago
gpt-oss-safeguard
20B
120B
gpt-oss-safeguard-20b and gpt-oss-safeguard-120b are open safety models from OpenAI built on gpt-oss, trained to classify text content against customizable policies.
7.8K
33
2
Updated 4 months ago
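The point of policy-based classification is that the policy travels with the request, so it can change without retraining. A hedged sketch of the pattern; the policy wording and label set are made up for illustration and are not OpenAI's official prompt format:

```python
# Sketch of policy-driven content classification with a safety model.
# The policy text and labels are illustrative assumptions, and the
# endpoint is LM Studio's default local OpenAI-compatible server.
import requests

policy = (
    "Policy: flag any text that asks for help committing fraud.\n"
    "Answer with exactly one label: VIOLATES or ALLOWED."
)
content = "How do I write a polite refund request email?"

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "gpt-oss-safeguard-20b",
        "messages": [
            {"role": "system", "content": policy},  # the customizable policy
            {"role": "user", "content": content},   # the text to classify
        ],
    },
)
print(resp.json()["choices"][0]["message"]["content"])  # expected: ALLOWED
```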
Qwen3-VL
2B
4B
8B
30B
32B
Qwen's latest vision-language model. Includes comprehensive upgrades to visual perception, spatial reasoning, and image understanding.
713.9K
104
5
Updated 4 months ago
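Vision-language models like these usually accept images through OpenAI-compatible chat requests as image_url content parts, with local files passed as base64 data URLs. A minimal sketch; the endpoint and model identifier are assumptions:

```python
# Sending a local image to a vision-language model via the
# OpenAI-compatible chat format. Endpoint and model name are assumptions.
import base64
import requests

# Encode a local file as a base64 data URL, the usual way to pass
# images to a local server.
with open("photo.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "qwen3-vl-8b",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```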
Granite 4.0
3B
7B
32B
Granite 4.0 language models are lightweight, state-of-the-art open models that natively support multilingual capabilities, coding tasks, RAG, tool use, and JSON output.
59.5K
48
4
Updated 4 months ago
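Native JSON output is typically exercised through a structured-output constraint. A sketch using the json_schema response format that many OpenAI-compatible servers, LM Studio included, accept; the schema and model identifier are illustrative:

```python
# Constraining a model to emit JSON matching a schema, via the
# response_format field of an OpenAI-compatible server.
# Schema and model identifier are illustrative assumptions.
import json
import requests

schema = {
    "type": "object",
    "properties": {
        "language": {"type": "string"},
        "loc": {"type": "integer"},
    },
    "required": ["language", "loc"],
}

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "granite-4.0-7b",
        "messages": [{"role": "user",
                      "content": "Summarize: a 1200-line Go service."}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {"name": "summary", "schema": schema},
        },
    },
)
# The constrained output should parse cleanly against the schema.
print(json.loads(resp.json()["choices"][0]["message"]["content"]))
```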
seed-oss
36B
Advanced reasoning model from ByteDance with a flexible "thinking budget" control and the ability to reflect on the length of its own reasoning.
46.5K
22
Updated 4 months ago
Qwen3
4B
30B
235B
The latest version of the Qwen3 model family, featuring 4B, 30B, and 235B dense and MoE models, both thinking and non-thinking variants.
425.5K
147
6
Updated 4 months ago
gpt-oss
20B
120B
OpenAI's first open-weight LLM since GPT-2. Comes in two sizes: 20B and 120B. Supports configurable reasoning effort (low, medium, high). Trained for tool use. Apache 2.0 licensed.
1.6M
290
2
Updated 4 months ago
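Reasoning effort for gpt-oss is chosen at request time rather than fixed in the weights: with the model's harmony chat template, a "Reasoning: low|medium|high" line in the system message is the documented control, and some servers also expose a dedicated reasoning_effort parameter. A minimal sketch; the endpoint and model identifier are assumptions:

```python
# Selecting gpt-oss reasoning effort at request time via the system
# message, per the model's harmony chat format. Endpoint and model
# identifier are assumptions about a default local LM Studio server.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "gpt-oss-20b",
        "messages": [
            {"role": "system", "content": "Reasoning: high"},
            {"role": "user", "content": "Prove that 2^10 > 10^3."},
        ],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```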
Qwen3-Coder
30B
480B
State-of-the-art, Mixture-of-Experts local coding model with native support for 256K context length. Available in 30B (3B active) and 480B (35B active) sizes.
308.1K
121
2
Updated 4 months ago
Ernie-4.5
21B
Medium-size Mixture-of-Experts model from Baidu's new Ernie 4.5 line of foundation models.
16.7K
11
Updated 4 months ago
LFM2
350M
700M
1.2B
LFM2 is a new generation of hybrid models developed by Liquid AI, specifically designed for edge AI and on-device deployment. It sets a new standard in terms of quality, speed, and memory efficiency.
60.8K
42
3
Updated 4 months ago
Devstral
23.6B
24B
Devstral is a coding model from Mistral AI. It excels at using tools to explore codebases, editing multiple files, and powering software engineering agents.
74.9K
38
2
Updated 2 months ago
gemma-3n
4.5B
6.9B
Gemma 3n is a generative AI model optimized for use in everyday devices, such as phones, laptops, and tablets.
205.3K
83
2
Updated 4 months ago
Mistral Small
24B
Mistral Small is a 'knowledge-dense' 24B multi-modal (image input) local model that supports up to a 128K-token context length.
73K
19
Updated 4 months ago
Magistral
23.6B
24B
Mistral AI's open-weight reasoning model. A 24B dense transformer supporting up to a 128K-token context window. The model can produce long chains of reasoning traces before providing answers.
127.9K
49
2
Updated 4 months ago
mistral-nemo
12B
General purpose dense transformer designed for multilingual use cases. Built in collaboration between Mistral AI and NVIDIA.
35.6K
7
Updated 4 months ago
qwen2.5-vl
3B
7B
32B
72B
Qwen2.5-VL is a performant vision-language model capable of recognizing common objects and text. Supports a 128K-token context length and a wide variety of human languages.
118.7K
24
4
Updated 4 months ago
gemma-3
270M
1B
4B
12B
27B
State-of-the-art image + text input models from Google, built from the same research and tech used to create the Gemini models.
1.4M
169
5
Updated 4 months ago
phi-4-reasoning
3.8B
14.7B
Phi-4-mini-reasoning is a lightweight open model built upon synthetic data with a focus on high-quality, reasoning-dense data.
138.6K
41
3
Updated 4 months ago
phi-4
3B
14B
phi-4 is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets.
28.5K
14
2
Updated 4 months ago
Codestral
22B
Mistral AI's latest coding model, Codestral can handle both instructions and code completions with ease in over 80 programming languages.
44.2K
26
Updated 4 months ago
Mistral
7B
One of the most popular open-source LLMs, Mistral's 7B Instruct model's balance of speed, size, and performance makes it a great general-purpose daily driver.
108.5K
39
Updated 4 months ago
Qwen3 (1st Generation)
4B
8B
14B
30B
32B
235B
The first batch of Qwen3 models (Qwen3-2504), a collection of dense and MoE models ranging from 4B to 235B. These are general purpose models that score highly on benchmarks.
467.7K
44
6
Updated 4 months ago
deepseek-r1
7B
8B
14B
32B
70B
Distilled versions of the DeepSeek-R1-0528 model, created by continuing the post-training process on smaller base models (such as Qwen3 8B Base) using Chain-of-Thought (CoT) traces from DeepSeek-R1-0528.
663.6K
161
6
Updated 4 months ago