
agent-evaluation

Here are 21 public repositories matching this topic...

ai-agents-reality-check

Mathematical benchmark exposing the massive performance gap between real agents and LLM wrappers. Rigorous multi-dimensional evaluation with statistical validation (95% CI, Cohen's h) and reproducible methodology. Separates architectural theater from real systems through stress testing, network resilience, and failure analysis.

  • Updated Aug 8, 2025
  • Python
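
The entry above cites Cohen's h and 95% confidence intervals as its statistical validation. For readers unfamiliar with those measures, here is a minimal, self-contained sketch of how such a comparison can be computed; it is not taken from that repository, and the task counts below are made-up placeholder values.

```python
import math

def cohens_h(p1: float, p2: float) -> float:
    """Cohen's h effect size between two proportions (arcsine-transformed difference)."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

def wilson_ci(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a success proportion."""
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return centre - half, centre + half

if __name__ == "__main__":
    # Hypothetical results: 83/100 tasks solved by a full agent,
    # 41/100 by a thin LLM wrapper.
    agent_ok, wrapper_ok, n = 83, 41, 100
    h = cohens_h(agent_ok / n, wrapper_ok / n)
    print(f"Cohen's h = {h:.2f}")                  # ~0.90, a large effect (> 0.8)
    print("agent 95% CI:", wilson_ci(agent_ok, n))
    print("wrapper 95% CI:", wilson_ci(wrapper_ok, n))
```

With these placeholder numbers the effect size lands in the "large" range by Cohen's conventional thresholds (0.2 small, 0.5 medium, 0.8 large), which is the kind of gap the benchmark's description is referring to.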

🤖 A curated list of resources for testing AI agents - frameworks, methodologies, benchmarks, tools, and best practices for ensuring reliable, safe, and effective autonomous AI systems

  • Updated May 28, 2025

🛠️ Discover and explore over 50 benchmarks for AI agents across key categories, enhancing evaluation of function calling, reasoning, coding, and interactions.

  • Updated Nov 16, 2025
