
Modern desktop application (Rust + Tauri v2 + Svelte 5 + Candle (HF)) for communicating with AI models, running completely locally on your computer. No subscriptions, no data sent to the internet: just you and your personal AI assistant.


English Русский Português


Oxide Lab Logo

Private AI chat desktop application with local LLM support.
All inference happens on your machine: no cloud, no data sharing.

GitHub Stars Awesome Tauri Awesome Svelte

Oxide Lab Chat Interface

✨ What is this?

Oxide Lab is a native desktop application for running large language models locally. Built with Rust and Tauri v2, it provides a fast, private chat interface without requiring internet connectivity or external API services.

🎬 Demo

dem1.mp4
dem2.mp4
dem3.mp4

πŸš€ Key Features

  • 100% local inference: your data never leaves your machine
  • Multi-architecture support: Llama, Qwen2, Qwen2.5, Qwen3, Qwen3 MoE, Mistral, Mixtral, DeepSeek, Yi, SmolLM2
  • GGUF and SafeTensors model formats
  • Hardware acceleration: CPU, CUDA (NVIDIA), Metal (Apple Silicon), Intel MKL, Apple Accelerate
  • Streaming text generation
  • Multi-language UI: English, Russian, Brazilian Portuguese
  • Modern interface built with Svelte 5 and Tailwind CSS

πŸ› οΈ Installation & Setup

Prerequisites

  • Node.js (for frontend build)
  • Rust toolchain (for backend)
  • For CUDA: NVIDIA GPU with CUDA toolkit
  • For Metal: macOS with Apple Silicon
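Before installing, you can sanity-check that the toolchain is in place. A minimal sketch, assuming a POSIX shell (the tool names are the standard ones, not project-specific):

```shell
# Report which prerequisite tools are available on PATH
for tool in node npm rustc cargo; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $("$tool" --version | head -n 1)"
  else
    echo "$tool: NOT FOUND"
  fi
done
```

For CUDA builds you would additionally check for `nvcc` (the CUDA toolkit compiler).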

Development

# Install dependencies
npm install

# Run with CPU backend
npm run tauri:dev:cpu

# Run with CUDA backend (NVIDIA GPU)
npm run tauri:dev:cuda

# Platform-aware development
npm run app:dev

Build

# Build with CPU backend
npm run tauri:build:cpu

# Build with CUDA backend
npm run tauri:build:cuda

Quality Checks

npm run lint          # ESLint
npm run lint:fix      # ESLint with auto-fix
npm run check         # Svelte type checking
npm run format        # Prettier formatting
npm run test          # Vitest tests

Rust-specific (from src-tauri/)

cargo clippy          # Linting
cargo test            # Unit tests
cargo audit           # Security audit

πŸ“– How to Start Using

  1. Build or download the application
  2. Download a compatible GGUF or SafeTensors model (e.g., from Hugging Face)
  3. Launch Oxide Lab
  4. Load your model through the interface
  5. Start chatting
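Step 2 can be done from the command line with the Hugging Face Hub CLI. A sketch, assuming you have Python and pip available; the repository and file names below are illustrative examples, not project defaults:

```shell
# One-time: install the Hugging Face Hub CLI
pip install -U huggingface_hub

# Example: download a quantized GGUF model into ./models
# (repo and file names are illustrative; any compatible GGUF model works)
huggingface-cli download Qwen/Qwen2.5-1.5B-Instruct-GGUF \
  qwen2.5-1.5b-instruct-q4_k_m.gguf \
  --local-dir ./models
```

Smaller quantizations (q4_k_m and similar) trade some output quality for a much lower RAM footprint, which matters on 8 GB machines.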

πŸ–₯️ System Requirements

  • Windows, macOS, or Linux
  • Minimum 8 GB RAM (16+ GB recommended for larger models)
  • For GPU acceleration:
    • NVIDIA: CUDA-compatible GPU
    • Apple: M1/M2/M3 chip (Metal)
    • Intel: CPU with MKL support

πŸ€– Supported Models

Architectures with full support:

  • Llama (1, 2, 3, 4), Mistral, Mixtral, DeepSeek, Yi, SmolLM2, CodeLlama
  • Qwen2, Qwen2.5, Qwen2 MoE
  • Qwen3, Qwen3 MoE

Formats:

  • GGUF (quantized models)
  • SafeTensors

πŸ›‘οΈ Privacy and Security

  • All processing happens locally on your device
  • No telemetry or data collection
  • No internet connection required for inference
  • Content Security Policy (CSP) enforced
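In Tauri v2, the CSP is declared in `tauri.conf.json`. A minimal sketch of what such a policy can look like (the actual policy shipped by Oxide Lab may differ):

```json
{
  "app": {
    "security": {
      "csp": "default-src 'self'; img-src 'self' asset: http://asset.localhost"
    }
  }
}
```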

πŸ™ Acknowledgments

This project is built on top of excellent open-source work:

  • Candle: ML framework for Rust (HuggingFace)
  • Tauri: Desktop application framework
  • Svelte: Frontend framework
  • Tokenizers: Fast tokenization (HuggingFace)

See THIRD_PARTY_LICENSES.md for full dependency attribution.

πŸ“„ License

Apache-2.0 (see LICENSE)

Copyright (c) 2025 FerrisMind
