Conversation

@neofight78 (Contributor) commented May 15, 2025

Resolves #38

Let me know if you want any changes.

Summary by CodeRabbit

  • New Features

    • Added support for an "eval" command to request evaluation scores from supported chess engines.
    • Introduced a method to retrieve the most recent evaluation score for a given engine.
  • Tests

    • Added tests to verify evaluation command behavior and score retrieval for different engines, including error handling for unsupported engines.

@coderabbitai (coderabbitai bot, Contributor) commented May 15, 2025

Walkthrough

A new non-standard UCI command CmdEval was introduced to support Stockfish's eval command. The Engine struct now includes an eval field and a thread-safe Eval() accessor. A test verifies correct evaluation parsing for Stockfish and error handling for unsupported engines like lc0.

Changes

  • uci/cmd.go: Added CmdEval command for Stockfish's eval, including parsing and error handling logic.
  • uci/engine.go: Added eval field and Eval() method to Engine for storing and retrieving evaluation scores.
  • uci/engine_test.go: Added Test_EngineEval to verify evaluation for Stockfish and error handling for lc0.

Sequence Diagram(s)

sequenceDiagram
    participant Test as Test_EngineEval
    participant Engine as Engine
    participant Stockfish as Stockfish Engine

    Test->>Engine: New()
    Test->>Engine: Run(CmdUCI, CmdIsReady, CmdUCINewGame, CmdPosition, CmdEval)
    Engine->>Stockfish: Send "eval" command
    Stockfish-->>Engine: Output lines
    Engine->>Engine: Parse "Final evaluation" line, update eval
    Test->>Engine: Eval()
    Engine-->>Test: Return eval value
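
As a rough caller-side illustration of the flow above (a sketch only: the import paths and the "stockfish" binary name are assumptions, while the command and accessor names match the test added in this PR):

package main

import (
    "fmt"
    "log"

    // Import paths are illustrative; use the module's actual paths.
    "github.com/corentings/chess/v2"
    "github.com/corentings/chess/v2/uci"
)

func main() {
    // Assumes a Stockfish binary named "stockfish" is on PATH. Engines that
    // don't implement the non-standard "eval" command (e.g. lc0) return an error.
    eng, err := uci.New("stockfish", uci.Debug)
    if err != nil {
        log.Fatal(err)
    }
    defer eng.Close()

    pos := &chess.Position{}
    if err := pos.UnmarshalText([]byte("4k3/8/8/8/8/8/8/4K2R w - - 0 1")); err != nil {
        log.Fatal(err)
    }

    err = eng.Run(uci.CmdUCI, uci.CmdIsReady, uci.CmdUCINewGame,
        uci.CmdPosition{Position: pos}, uci.CmdEval)
    if err != nil {
        log.Fatal(err)
    }

    // Eval() returns the most recently parsed score, scaled to centipawns.
    fmt.Println("static eval (centipawns):", eng.Eval())
}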

Assessment against linked issues

  • Add support for Stockfish's eval command, including a command, result parsing, and accessor (#38): ✅ Addressed

Poem

In the warren of code, a new path unfurled,
Stockfish’s secrets, at last, are twirled.
Eval commands hop in, results now in view,
With thread-safe paws, we measure what’s true.
A test for each engine, a leap for the bestβ€”
πŸ‡ CodeRabbit’s eval, ahead of the rest!


coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
uci/cmd.go (1)

123-144: Improve the CmdEval implementation with better magic number handling and error handling

The implementation works correctly for Stockfish, but has a few areas for improvement:

  1. Magic numbers should be named constants for better readability
  2. Error handling for the ParseFloat operation could be improved
  3. The implementation assumes a Stockfish-specific output format
// CmdEval is a non-standard command that requests the engine's static evaluation of the current position.
+const (
+    // evalIndexInParts is the index of the evaluation value in the space-separated output
+    evalIndexInParts = 2
+    // centipawnScale converts from decimal pawn values to centipawn integer values
+    centipawnScale = 100
+)

CmdEval = cmdNoOptions{Name: "eval", F: func(e *Engine) error {
    scanner := bufio.NewScanner(e.out)
    for scanner.Scan() {
        text := e.readLine(scanner)
        if strings.Contains(text, "error") {
            return errors.New("eval command not supported")
        }
        if strings.HasPrefix(text, "Final evaluation") {
            parts := strings.Fields(text)
-           if len(parts) >= 3 {
+           if len(parts) > evalIndexInParts {
-               evalStr := parts[2]
+               evalStr := parts[evalIndexInParts]
                eval, err := strconv.ParseFloat(evalStr, 64)
                if err == nil {
-                   e.eval = int(math.Round(eval * 100))
+                   e.eval = int(math.Round(eval * centipawnScale))
+               } else {
+                   return fmt.Errorf("failed to parse evaluation value: %w", err)
                }
                break
            }
        }
    }
    return nil
}}
🧰 Tools
πŸͺ› golangci-lint (1.64.8)

137-137: Magic number: 100, in detected

(mnd)


133-133: Magic number: 3, in detected

(mnd)

uci/engine_test.go (1)

22-60: LGTM: Good test coverage for eval functionality

The test thoroughly checks both successful evaluation with Stockfish and proper error handling with lc0, which doesn't support the eval command. The test covers:

  1. Setting up a specific chess position
  2. Running the eval command
  3. Verifying the expected outcomes for different engines

One minor suggestion: consider making the minimum expected evaluation value a named constant for better maintainability.

func Test_EngineEval(t *testing.T) {
+   const minExpectedEval = 500
    for _, name := range engines {
        fenStr := "4k3/8/8/8/8/8/8/4K2R w - - 0 1"

        t.Run("EngineEval_"+name, func(t *testing.T) {
            if !isEngineAvailable(name) {
                t.Skipf("engine %s not available", name)
            }

            pos := &chess.Position{}
            if err := pos.UnmarshalText([]byte(fenStr)); err != nil {
                t.Fatal("failed to parse FEN", err)
            }

            eng, err := uci.New(name, uci.Debug)
            if err != nil {
                t.Fatal(err)
            }
            defer eng.Close()

            cmdPos := uci.CmdPosition{Position: pos}
            err = eng.Run(uci.CmdUCI, uci.CmdIsReady, uci.CmdUCINewGame, cmdPos, uci.CmdEval)

            if name == "stockfish" {
                if err != nil {
                    t.Fatal("failed to run command", err)
                }

-               if eng.Eval() < 500 {
+               if eng.Eval() < minExpectedEval {
-                   t.Errorf("expected an eval greater than or equal to 500, got %d", eng.Eval())
+                   t.Errorf("expected an eval greater than or equal to %d, got %d", minExpectedEval, eng.Eval())
                }
            } else if name == "lc0" {
                if err == nil {
                    t.Fatal("expected an error", err)
                }
            }
        })
    }
}
πŸ“œ Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge Base: Disabled due to data retention organization setting

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between b42d60e and 6b84c16.

πŸ“’ Files selected for processing (3)
  • uci/cmd.go (2 hunks)
  • uci/engine.go (2 hunks)
  • uci/engine_test.go (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
uci/engine_test.go (2)
uci/engine.go (2)
  • New (46-64)
  • Debug (31-33)
uci/cmd.go (3)
  • CmdPosition (181-184)
  • CmdPosition (202-204)
  • CmdEval (124-144)
uci/cmd.go (1)
uci/engine.go (2)
  • Engine (15-27)
  • New (46-64)
πŸͺ› golangci-lint (1.64.8)
uci/cmd.go

137-137: Magic number: 100, in detected

(mnd)


133-133: Magic number: 3, in detected

(mnd)

πŸ”‡ Additional comments (3)
uci/engine.go (2)

25-25: LGTM: Adding evaluation field to Engine struct

The eval field is a good addition to store the evaluation score from chess engines.
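
For context, a minimal sketch of what the struct change amounts to (field names other than eval are assumed here for illustration):

// Sketch only: the real Engine struct in uci/engine.go carries more state
// (process handles, pipes, parsed search results, etc.).
type Engine struct {
    mu   sync.RWMutex // assumed name for the mutex guarding engine state
    eval int          // last parsed static evaluation, in centipawns
}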


128-132: LGTM: Thread-safe accessor method for evaluation score

The Eval() method provides a thread-safe way to access the evaluation score, properly using a read lock on the engine's mutex. This follows the same pattern as other accessor methods in the Engine struct.
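
In other words, the accessor follows the familiar read-locked getter pattern; a sketch, reusing the assumed mu and eval names from above:

// Eval returns the most recent evaluation score in centipawns.
func (e *Engine) Eval() int {
    e.mu.RLock()
    defer e.mu.RUnlock()
    return e.eval
}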

uci/cmd.go (1)

7-7: LGTM: Added math import for rounding

Added math package import to support the math.Round function used for evaluation score processing.

@CorentinGS merged commit 3716e41 into CorentinGS:main May 16, 2025
7 checks passed

@CorentinGS (Owner) commented:

Thank you very much for the contribution.

