- Bangalore, India
- @sujay_shahare
- in/sujay-shahare
- https://sujayshahare.com
Stars
A Python module to repair invalid JSON from LLMs (see the sketch after this list)
Debug, evaluate, and monitor your LLM applications, RAG systems, and agentic workflows with comprehensive tracing, automated evaluations, and production-ready dashboards.
A Python library for extracting structured information from unstructured text using LLMs with precise source grounding and interactive visualization.
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
Implementation of Nougat: Neural Optical Understanding for Academic Documents
Official Implementation of OCR-free Document Understanding Transformer (Donut) and Synthetic Document Generator (SynthDoG), ECCV 2022
Official Code of Memento: Fine-tuning LLM Agents without Fine-tuning LLMs
Build Real-Time Knowledge Graphs for AI Agents
The official repo for “Dolphin: Document Image Parsing via Heterogeneous Anchor Prompting”, ACL, 2025.
Context engineering is the new vibe coding - it's the way to actually make AI coding assistants work. Claude Code is the best for this, so that's what this repo is centered around, but you can apply…
Beta release of Archon OS - the knowledge and task management backbone for AI coding assistants.
A powerful coding agent toolkit providing semantic retrieval and editing capabilities (MCP server & other integrations)
Claude Code is an agentic coding tool that lives in your terminal, understands your codebase, and helps you code faster by executing routine tasks, explaining complex code, and handling git workflo…
A collection of notebooks/recipes showcasing some fun and effective ways of using Claude.
What are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers?
The AI Browser Automation Framework
A configuration framework that enhances Claude Code with specialized commands, cognitive personas, and development methodologies.
[NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards
Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verified research papers.
Implementation of self-certainty as an extension of the ZeroEval project
A list of microgrant programs for your good ideas
Code for the paper: "Learning to Reason without External Rewards"
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
Fully open data curation for reasoning models
A library for mechanistic interpretability of GPT-style language models
Lightweight coding agent that runs in your terminal
An open protocol enabling communication and interoperability between opaque agentic applications.
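
As an illustration of the JSON-repair entry above, here is a minimal sketch of how such a module is typically used. It assumes the starred repo is the json_repair package on PyPI; the repo name is not given in the list, so the package name, import, and repair_json call are assumptions rather than a confirmed API.

```python
# Minimal sketch: repairing malformed JSON emitted by an LLM.
# Assumes the starred repo is the json_repair package (pip install json_repair);
# if it is a different module, the import and function name below will differ.
import json

from json_repair import repair_json

# Typical LLM output problems: single quotes, an unquoted key, a trailing comma.
broken = "{'name': 'Ada', age: 36,}"

fixed = repair_json(broken)   # returns a best-effort valid JSON string
print(fixed)                  # e.g. {"name": "Ada", "age": 36}

data = json.loads(fixed)      # standard json.loads now succeeds
print(data["age"])
```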