Stars
🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transf…
C/C++ library to create formatted ASCII tables for console applications
C Markdown parser. Fast. SAX-like interface. Compliant with the CommonMark specification.
An open continuation of the antiquated and abandoned GEDCOM standard. GEDCOM is alive!
One Warehouse for Analytics, Search, AI. Snowflake + Elasticsearch + Vector DB — rebuilt from scratch. Unified architecture on your S3.
Official Rust implementation of Apache Arrow
High-performance Rust EtherNet/IP driver for Allen-Bradley PLCs. Integration with C#, Go, and Python.
ntrdma / rdma-core
Forked from linux-rdma/rdma-core. RDMA core userspace libraries and daemons.
exFAT for Linux (backport for older kernel versions)
APFS module for Linux, with experimental write support
Debugger for sed: demystify and debug your sed scripts, from the comfort of your terminal.
Control USB power on a port-by-port basis on some USB hubs.
Cargo credential provider that parses your .netrc file to get credentials.
New canonical home of the scheduler microbenchmarking effort.
sched-ext / scx-backports
Forked from sched-ext/scx. Backports of scx for older kernels.
A Datacenter Scale Distributed Inference Serving Framework
Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices mean faster inference.
A high-throughput and memory-efficient inference and serving engine for LLMs