
Conversation


@yatishdurga yatishdurga commented Dec 7, 2025

Which issue(s) does this pull-request address?

Closes: #

Description

Checklist

General

Code quality checks

  • Code quality checks pass: mise check (mise fix to auto-fix)

Testing

  • Unit tests pass: mise test:unit
  • E2E tests pass: mise test:e2e
  • Tests are included (for bug fixes or new features)

Documentation

  • Documentation is updated
  • Embedme embeds code examples in docs. To update them after edits, run: mise docs:fix

Updated links and references to DeepEval documentation for clarity and accuracy.

Signed-off-by: yatishdurga <[email protected]>
@yatishdurga yatishdurga requested a review from a team as a code owner December 7, 2025 06:47
@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Dec 7, 2025
@github-actions github-actions bot added documentation Improvements or additions to documentation python Python related functionality labels Dec 7, 2025
@gemini-code-assist
Contributor

Summary of Changes

Hello @yatishdurga, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a new, detailed tutorial document that outlines the process of evaluating the RequirementAgent within the BeeAI framework using the DeepEval open-source library. The tutorial provides a step-by-step guide, covering everything from setting up the agent and defining 'Golden' test cases to running evaluations with GEval metrics and interpreting the outcomes. Its purpose is to equip users with the knowledge to ensure the consistent and reliable behavior of their agents through automated testing.

Highlights

  • New DeepEval Tutorial: A comprehensive tutorial has been added that guides users through evaluating the RequirementAgent using the DeepEval framework. This covers the entire evaluation pipeline from defining test cases to interpreting results.
  • Evaluation Architecture Explained: The tutorial details the evaluation architecture, outlining the steps: Golden test cases, running the RequirementAgent, converting responses to a DeepEval dataset, applying GEval metrics, and analyzing results.
  • Agent Configuration for Evaluation: It provides instructions on how to configure a RequirementAgent specifically for evaluation, including setting up the underlying LLM, tools, memory, and behavior notes.
  • Golden Test Case Definition: The tutorial explains the concept of 'Golden' test cases, emphasizing the importance of defining inputs, expected outputs, and expected tool usage for robust evaluation.
  • Data Processing and Metrics: It covers how create_dataset processes Goldens into DeepEval test cases, how run_agent extracts tool calls and reasoning, and how GEval metrics are defined using natural language criteria for LLM-as-a-judge evaluation.
  • Evaluation Execution and Interpretation: Instructions are provided for running the evaluation using evaluate_dataset, interpreting pass/fail thresholds, and understanding score meanings, along with additional topics like caching and debug modes (a minimal sketch of the end-to-end flow follows this list).
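
In outline, the flow those highlights describe looks roughly like the sketch below, written directly against DeepEval's public API. The agent call is stubbed out and the repository's create_dataset / evaluate_dataset helpers are omitted, so treat it as an illustration rather than the tutorial's exact code:

```python
from deepeval import evaluate
from deepeval.dataset import Golden
from deepeval.metrics import GEval
from deepeval.test_case import LLMTestCase, LLMTestCaseParams

# 1. Golden: an input plus the behaviour we expect from the agent.
golden = Golden(input="What is the capital of France?", expected_output="Paris")

# 2. Run the agent (stubbed here) and wrap the result in a test case.
actual_output = "The capital of France is Paris."  # would come from RequirementAgent
test_case = LLMTestCase(
    input=golden.input,
    actual_output=actual_output,
    expected_output=golden.expected_output,
)

# 3. GEval: an LLM-as-a-judge metric described in natural language.
#    It needs a judge model configured (e.g. via OPENAI_API_KEY).
correctness = GEval(
    name="Correctness",
    criteria="Does the actual output answer the question and agree with the expected output?",
    evaluation_params=[
        LLMTestCaseParams.INPUT,
        LLMTestCaseParams.ACTUAL_OUTPUT,
        LLMTestCaseParams.EXPECTED_OUTPUT,
    ],
    threshold=0.7,  # scores below this mark the test case as failed
)

# 4. Evaluate: DeepEval scores every test case against every metric.
evaluate(test_cases=[test_case], metrics=[correctness])
```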
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds a new tutorial for using DeepEval with BeeAI's RequirementAgent. The document is well-structured and provides a clear, step-by-step guide for setting up an evaluation pipeline. My review focuses on ensuring the accuracy and consistency of the provided links to external resources and source code. I've identified a few links that point to personal forks or outdated domains, which should be corrected to provide a better experience for users following the tutorial. Overall, this is a valuable addition to the documentation.

### **Parallel execution**
`create_dataset` uses `asyncio.gather` to run multiple evaluations in parallel, which speeds up testing.

- [`BeeAI eval utils (_utils.py)`](<https://github.com/yatishdurga/beeai-framework/blob/main/python/eval/_utils.py>)
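
For illustration only, the pattern that sentence describes boils down to something like the sketch below; run_agent here is a stand-in coroutine rather than the framework's actual helper:

```python
import asyncio


async def run_agent(prompt: str) -> str:
    # Stand-in for calling RequirementAgent; replace with the real agent invocation.
    await asyncio.sleep(0.1)  # simulate model latency
    return f"answer to: {prompt}"


async def collect_outputs(prompts: list[str]) -> list[str]:
    # All agent runs are started together and awaited as a group, so the total
    # wall-clock time is roughly that of the slowest single run.
    return await asyncio.gather(*(run_agent(p) for p in prompts))


if __name__ == "__main__":
    print(asyncio.run(collect_outputs(["q1", "q2", "q3"])))
```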

high

The link for BeeAI eval utils (_utils.py) points to a personal fork of the repository (yatishdurga/beeai-framework) instead of the main i-am-bee/beeai-framework repository. This can be confusing and lead users to an unofficial or outdated version of the code. Please update the link to point to the correct repository.

- [`BeeAI eval utils (_utils.py)`](<https://github.com/i-am-bee/beeai-framework/blob/main/python/eval/_utils.py>)  

- Include negative cases (ambiguous inputs requiring clarification)
- Add multilingual tests if the agent supports multiple languages

[`DeepEval Golden docs`](https://docs.confident-ai.com/docs/datasets)
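
As an illustration of those two suggestions, Goldens of that kind might look roughly like this (a sketch with made-up values):

```python
from deepeval.dataset import Golden

goldens = [
    # Negative case: an ambiguous request where the agent should ask for
    # clarification instead of guessing.
    Golden(
        input="Book it for next week.",
        expected_output="A clarifying question about what 'it' refers to and which day is meant.",
    ),
    # Multilingual case: the same kind of task phrased in another language.
    Golden(
        input="¿Cuál es la capital de Francia?",
        expected_output="París",
    ),
]
```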

medium

The link to "DeepEval Golden docs" points to an old domain (docs.confident-ai.com). While it may redirect for now, it's best to update it to the current DeepEval domain to avoid future breakage and ensure users land on the most up-to-date documentation.

[`DeepEval Golden docs`](https://docs.deepeval.com/docs/datasets-and-test-cases)

- Include edge cases
- Include “failure modes”

- [`BeeAI environment config docs`](https://agentstack.beeai.dev/introduction/quickstart#configure-the-llm-provider)

medium

The link for "BeeAI environment config docs" uses the agentstack.beeai.dev domain, which is inconsistent with other BeeAI documentation links in this file that use framework.beeai.dev. To maintain consistency and avoid confusion, please verify and use the correct, canonical domain for the documentation. framework.beeai.dev appears to be the current standard.

- [`BeeAI environment config docs`](https://framework.beeai.dev/introduction/quickstart#configure-the-llm-provider)  


@Tomas2D Tomas2D left a comment


Thank you for your work. However, I don't see any code example showcasing how one can actually run an evaluation. This is a crucial part. The tutorial should be practical.

@yatishdurga
Author

Thank you for the feedback, @Tomas2D.

I will update the documentation to include:

  • A complete example using RequirementAgent
  • Golden test definitions
  • Dataset creation via create_dataset
  • GEval metric configuration
  • A pytest-ready evaluation function using evaluate_dataset (see the sketch after this list for the rough shape)
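
A rough sketch of what that pytest-ready piece could look like, using DeepEval's assert_test directly; the run_agent stub below stands in for the real RequirementAgent call, and the repository's create_dataset / evaluate_dataset helpers are not shown, so treat the exact shape as an assumption:

```python
import pytest
from deepeval import assert_test
from deepeval.metrics import GEval
from deepeval.test_case import LLMTestCase, LLMTestCaseParams


def run_agent(prompt: str) -> str:
    """Placeholder for invoking RequirementAgent; replace with the real agent call."""
    return "The capital of France is Paris."


# LLM-as-a-judge metric: GEval needs a judge model configured (e.g. via OPENAI_API_KEY).
correctness = GEval(
    name="Correctness",
    criteria="The actual output should answer the input and agree with the expected output.",
    evaluation_params=[
        LLMTestCaseParams.INPUT,
        LLMTestCaseParams.ACTUAL_OUTPUT,
        LLMTestCaseParams.EXPECTED_OUTPUT,
    ],
    threshold=0.7,
)


@pytest.mark.parametrize(
    ("prompt", "expected"),
    [("What is the capital of France?", "Paris")],
)
def test_requirement_agent(prompt: str, expected: str) -> None:
    test_case = LLMTestCase(
        input=prompt,
        actual_output=run_agent(prompt),
        expected_output=expected,
    )
    # assert_test fails the pytest test if any metric scores below its threshold.
    assert_test(test_case, [correctness])
```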

Expanded the DeepEval tutorial with detailed evaluation steps, examples, and best practices for using RequirementAgent.

Signed-off-by: yatishdurga <[email protected]>
@dosubot dosubot bot added size:XL This PR changes 500-999 lines, ignoring generated files. and removed size:L This PR changes 100-499 lines, ignoring generated files. labels Dec 30, 2025
@yatishdurga
Author

Hi @Tomas2D,
I have updated the tutorial with code evaluation examples: https://github.com/yatishdurga/beeai-framework/blob/main/python/eval/deep_eval%20tutorial.mdx. Please have a look at it.

Thank you.

@Tomas2D
Contributor

Tomas2D commented Dec 31, 2025

Please update links in the tutorial so that they don't point to your repository. Use relative paths.
