London · https://imfing.com · in/fing · @_imfing
Starred repositories
Modern Python client for Apache Solr with type safety and async support, built with httpx and Pydantic
agent-sandbox enables easy management of isolated, stateful, singleton workloads, ideal for use cases like AI agent runtimes.
Financial Services Interest Group
vlrosa-dev / hextra
Forked from imfing/hextra: Modern, batteries-included Hugo theme for creating beautiful doc, blog and static websites
gpt-oss-120b and gpt-oss-20b are two open-weight language models by OpenAI
The glamourous AI coding agent for your favourite terminal
Extremely fast Query Engine for DataFrames, written in Rust
OCaml implementation of word embedding algorithms with minimal dependencies
TinyGo drivers for sensors, displays, wireless adaptors, and other devices that use I2C, SPI, GPIO, ADC, and UART interfaces.
Specification and documentation for the Model Context Protocol
A lightweight, powerful framework for multi-agent workflows
Minimal reproduction of DeepSeek R1-Zero
[AAAI 2026] Global Compression Commander: Plug-and-Play Inference Acceleration for High-Resolution Large Vision-Language Models
An LLM-powered knowledge curation system that researches a topic and generates a full-length report with citations.
Ghostty is a fast, feature-rich, and cross-platform terminal emulator that uses platform-native UI and GPU acceleration.
GenAI Agent Framework, the Pydantic way
Visualise your CSV files in seconds without sending your data anywhere
wazero: the zero dependency WebAssembly runtime for Go developers
Go compiler for small places. Microcontrollers, WebAssembly (WASM/WASI), and command-line tools. Based on LLVM.
DSPy: The framework for programming, not prompting, language models
Instant is a modern Firebase. We make you productive by giving your frontend a real-time database.
A high-throughput and memory-efficient inference and serving engine for LLMs
High-performance In-browser LLM Inference Engine