- Nubank
- Florida
- http://www.michaelnygard.com/
Stars
Convert documentation websites, GitHub repositories, and PDFs into Claude AI skills with automatic conflict detection
Build Real-Time Knowledge Graphs for AI Agents
Complete package of AmiBlitz3 including all sources.
🌊 The leading agent orchestration platform for Claude. Deploy intelligent multi-agent swarms, coordinate autonomous workflows, and build conversational AI systems. Features enterprise-grade archite…
Emacs major mode for the pikchr diagram markup language
See the shape of your data: point-and-click Clojure(Script) data browser
No fortress, purely open ground. OpenManus is Coming.
Automagically reverse-engineer REST APIs via capturing traffic
superglue (YC W25) builds integrations and tools from natural language. Get production-grade tools for long tail and enterprise systems.
A collection of notebooks/recipes showcasing some fun and effective ways of using Claude.
Web based diagramming app that lets you build interactive diagrams and prototypes
systems is a set of tools for describing, running and visualizing systems diagrams.
A markup-based typesetting system that is powerful and easy to learn.
A curated list of practices that we embrace at Pragmint, and links to further explore those topics.
Code to process many kinds of content by an author into an MCP server
SF3D: Stable Fast 3D Mesh Reconstruction with UV-unwrapping and Illumination Disentanglement
A human readable quasi-concatenative programming language
Open, Multi-modal Catalog for Data & AI
Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets.