
DeepDream: Do neural networks dream? [Inceptionism, Google Research 2015]

πŸ“« Reach me

About me πŸ§”β€β™‚οΈ

  • PhD in Physics and former energy engineer
  • Among other things, I'm interested in solving complex problems, (low-power) AI models, LLM inference & serving engines, and software engineering.
  • I'm a Nix & NixOS enthusiast
  • I'm a cyclist, jogger and traveler
  • I have an amazing wife and two cats

πŸ‘¨β€πŸ”¬ Science

My main research interests are:

  • Deep learning algorithms
  • Large Language Models (LLMs): inference and serving engines
  • Domain-specific compilers
  • Scientific computing and numerical modeling

Damien's GitHub stats

Visitor count



Pinned

  1. nixpkgs Public

    Forked from NixOS/nixpkgs

    Nix Packages collection & NixOS

    Nix

  2. burn Public

    Forked from tracel-ai/burn

    Burn is a next generation Deep Learning Framework that doesn't compromise on flexibility, efficiency and portability.

    Rust

  3. polars Public

    Forked from pola-rs/polars

    Extremely fast Query Engine for DataFrames, written in Rust

    Rust

  4. tauri Public

    Forked from tauri-apps/tauri

    Build smaller, faster, and more secure desktop and mobile applications with a web frontend.

    Rust

  5. vllm Public

    Forked from vllm-project/vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python