
#deep-learning #neural-network #machine-learning #autograd

torsh (no-std)

A blazingly fast, production-ready deep learning framework written in pure Rust

3 releases

0.1.0-alpha.2 Dec 22, 2025
0.1.0-alpha.1 Sep 30, 2025
0.0.0 Jun 30, 2025

#2151 in Machine learning

MIT/Apache

15MB
329K SLoC

torsh

The main crate for ToRSh, a blazingly fast, production-ready deep learning framework written in pure Rust.

Overview

This is the primary entry point for the ToRSh framework, providing convenient access to all functionality through a unified API.

Installation

Add to your Cargo.toml:

[dependencies]
torsh = "0.1.0-alpha.2"

Quick Start

use torsh::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create tensors
    let x = tensor![[1.0, 2.0], [3.0, 4.0]];
    let y = tensor![[5.0, 6.0], [7.0, 8.0]];
    
    // Matrix multiplication
    let z = x.matmul(&y)?;
    println!("Result: {:?}", z);
    
    // Automatic differentiation
    let a = tensor![2.0].requires_grad_(true);
    let b = a.pow(2.0)? + &a * 3.0; // borrow `a` so it isn't moved and can still be queried below
    b.backward()?;
    println!("Gradient: {:?}", a.grad()?);
    
    Ok(())
}
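
For reference, the autograd example computes b = a² + 3a, so db/da = 2a + 3; with a = 2.0 the printed gradient should be 7.0.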

Available Features

  • default: Includes std, nn, optim, and data
  • std: Standard library support (enabled by default)
  • nn: Neural network modules
  • optim: Optimization algorithms
  • data: Data loading utilities
  • cuda: CUDA backend support
  • wgpu: WebGPU backend support
  • metal: Metal backend support (Apple Silicon)
  • serialize: Serialization support
  • full: All features
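
For example, to opt out of the defaults and enable only selected features, the standard Cargo feature syntax applies (feature names taken from the list above):

[dependencies]
torsh = { version = "0.1.0-alpha.2", default-features = false, features = ["std", "nn", "optim"] }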

Module Structure

The crate re-exports functionality from specialized sub-crates:

  • Core (torsh::core): Basic types and traits
  • Tensor (torsh::tensor): Tensor operations
  • Autograd (torsh::autograd): Automatic differentiation
  • NN (torsh::nn): Neural network layers
  • Optim (torsh::optim): Optimizers
  • Data (torsh::data): Data loading
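
As a quick orientation sketch, the re-exported paths can be imported directly; the paths and descriptions below come from the list above, and nothing is assumed about what each sub-module exposes beyond that:

use torsh::core;     // basic types and traits
use torsh::tensor;   // tensor operations
use torsh::autograd; // automatic differentiation
use torsh::nn;       // neural network layers
use torsh::optim;    // optimizers
use torsh::data;     // data loading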

F Namespace

Similar to PyTorch's torch.nn.functional, ToRSh provides functional operations in the F namespace:

use torsh::F;

// `input` and `logits` are tensors defined elsewhere
let output = F::relu(&input);
let output = F::softmax(&logits, -1)?;

License

Licensed under either of

  • Apache License, Version 2.0
  • MIT License

at your option.

Dependencies

~87–145MB
~2.5M SLoC