Hodu, a user-friendly ML framework built in Rust.

Hodu (호두) is a Korean word meaning "walnut".

About Hodu

Hodu is a machine learning library built with user convenience at its core, designed for both rapid prototyping and seamless production deployment, including embedded environments.

Core Differentiators

Built on Rust's foundation of memory safety and zero-cost abstractions, Hodu offers unique advantages:

  • Hybrid Execution Model: Seamlessly switch between dynamic execution for rapid prototyping and static computation graphs for optimized production deployment
  • Memory Safety by Design: Leverage Rust's ownership system to eliminate common ML deployment issues like memory leaks and data races
  • Embedded-First Architecture: Full no_std support enables ML inference on microcontrollers and resource-constrained devices
  • Zero-Cost Abstractions: High-level APIs that compile down to efficient machine code without runtime overhead

Dual Backend Architecture

  • HODU Backend: Pure Rust implementation with no_std support for embedded environments
    • CPU operations with SIMD optimization
    • CUDA GPU acceleration (with cuda feature)
    • Metal GPU support for macOS (with metal feature)
  • XLA Backend: JIT compilation via OpenXLA/PJRT (requires std)
    • Advanced graph-level optimizations
    • CPU and CUDA device support
    • Production-grade performance for static computation graphs

Warning

This is a personal learning and development project. As such:

  • The framework is under active development
  • Features may be experimental or incomplete
  • Functionality is not guaranteed for production use

It is recommended to use the latest version.

Caution

Current Development Status:

  • CUDA GPU support is not yet fully implemented and is under active development
  • SIMD optimizations are not yet implemented and are under active development

Get started

Here are some examples that demonstrate matrix multiplication using both dynamic execution and static computation graphs.

Dynamic Execution

This example shows direct tensor operations that are executed immediately:

use hodu::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Set the runtime device (CPU, CUDA, Metal)
    set_runtime_device(Device::CPU);

    // Create random tensors
    let a = Tensor::randn(&[2, 3], 0f32, 1.)?;
    let b = Tensor::randn(&[3, 4], 0f32, 1.)?;

    // Matrix multiplication
    let c = a.matmul(&b)?;

    println!("{}", c);
    println!("{:?}", c);

    Ok(())
}

With the cuda feature enabled, you can use CUDA in dynamic execution with the following setting:

- set_runtime_device(Device::CPU);
+ set_runtime_device(Device::CUDA(0));
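
If you build the same source both with and without the cuda feature, the device can also be chosen at compile time. A minimal sketch, assuming your own crate declares a cuda feature that forwards to Hodu's and that Device::CUDA is only available when that feature is on:

// Inside main(), in place of the set_runtime_device call above.
#[cfg(feature = "cuda")]
set_runtime_device(Device::CUDA(0));
#[cfg(not(feature = "cuda"))]
set_runtime_device(Device::CPU);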

Static Computation Graphs

For more complex workflows or when you need reusable computation graphs, you can use the Builder pattern:

use hodu::prelude::*;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a new computation graph
    let builder = Builder::new("matmul_example".to_string());
    builder.start()?;

    // Define input placeholders
    let a = Tensor::input("a", &[2, 3])?;
    let b = Tensor::input("b", &[3, 4])?;

    // Define computation
    let c = a.matmul(&b)?;

    // Mark output
    builder.add_output("result", c)?;
    builder.end()?;

    // Build and execute script
    let mut script = builder.build()?;

    // Provide actual data
    let a_data = Tensor::randn(&[2, 3], 0f32, 1.)?;
    let b_data = Tensor::randn(&[3, 4], 0f32, 1.)?;
    script.add_input("a", a_data);
    script.add_input("b", b_data);

    // Execute and get results
    let output = script.run()?;
    println!("{}", output["result"]);
    println!("{:?}", output["result"]);

    Ok(())
}
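
Because the graph is built once, the compiled script can be run again with fresh data instead of being rebuilt. A short sketch, assuming add_input simply rebinds a placeholder when called again with the same name:

// Reuse the already-built script with new inputs (no rebuild needed).
script.add_input("a", Tensor::randn(&[2, 3], 0f32, 1.)?);
script.add_input("b", Tensor::randn(&[3, 4], 0f32, 1.)?);
let second = script.run()?;
println!("{}", second["result"]);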

With the cuda feature enabled, you can use CUDA in static computation graphs with the following setting:

let mut script = builder.build()?;
+ script.set_device(Device::CUDA(0));

With the xla feature enabled, you can use XLA in static computation graphs with the following setting:

let mut script = builder.build()?;
+ script.set_backend(Backend::XLA);
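
When both the xla and cuda features are enabled, the two settings should be combinable to run the XLA backend on a CUDA device (the Supported Platforms table below lists std, xla, cuda builds). A sketch, assuming set_backend and set_device compose:

let mut script = builder.build()?;
+ script.set_backend(Backend::XLA);
+ script.set_device(Device::CUDA(0));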

Features

Default Features

Feature | Description                           | Dependencies
std     | Standard library support              | -
serde   | Serialization/deserialization support | -

Optional Features

Feature | Description                 | Dependencies    | Required Features
cuda    | NVIDIA CUDA GPU support     | CUDA toolkit    | -
metal   | Apple Metal GPU support     | Metal framework | std
xla     | Google XLA compiler backend | XLA libraries   | std

XLA Feature Requirements

Building with the xla feature requires:

  • LLVM and Clang installed on your system
  • RAM: 8GB+ free memory
  • Disk Space: 20GB+ free storage

Supported Platforms

Standard Environments

Target Triple             | Backend | Device | Features       | Status
x86_64-unknown-linux-gnu  | HODU    | CPU    | std            | ✅ Stable
                          | HODU    | CUDA   | std, cuda      | 🚧 In Development
                          | XLA     | CPU    | std, xla       | ✅ Stable
                          | XLA     | CUDA   | std, xla, cuda | 🚧 In Development
aarch64-unknown-linux-gnu | HODU    | CPU    | std            | ✅ Stable
                          | XLA     | CPU    | std, xla       | ✅ Stable
x86_64-apple-darwin       | HODU    | CPU    | std            | 🧪 Experimental
                          | XLA     | CPU    | std, xla       | 🚧 In Development
aarch64-apple-darwin      | HODU    | CPU    | std            | ✅ Stable
                          | HODU    | Metal  | std, metal     | 🧪 Experimental
                          | XLA     | CPU    | std, xla       | ✅ Stable
x86_64-pc-windows-msvc    | HODU    | CPU    | std            | ✅ Stable
                          | HODU    | CUDA   | std, cuda      | 🚧 In Development
                          | XLA     | CPU    | std, xla       | 🚧 In Development
                          | XLA     | CUDA   | std, xla, cuda | 🚧 In Development

Embedded Environments

🧪 Experimental: Embedded platforms (ARM Cortex-M, RISC-V, Embedded Linux) are supported through no_std builds, but this support is experimental and has not been extensively tested in production environments.
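
As a rough illustration, the sketch below shows what a no_std integration might look like. It assumes the crate is built with the default std feature disabled and that the tensor API from the examples above is unchanged in no_std builds; a real embedded target will also need a panic handler and, typically, an allocator.

#![no_std]

use hodu::prelude::*;

// Run one inference step on the pure-Rust HODU CPU backend.
// Error handling is reduced to expect() for brevity.
pub fn infer(a: Tensor, b: Tensor) -> Tensor {
    a.matmul(&b).expect("matmul failed")
}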

Docs

CHANGELOG - Project changelog and version history

TODOS - Planned features and improvements

Guide

Inspired by

Hodu draws inspiration from the following amazing projects:

  • maidenx - The predecessor project to Hodu
  • candle - Minimalist ML framework for Rust
  • GoMlx - An Accelerated Machine Learning Framework For Go

Credits

Hodu Character Design: Created by Eira
