API: try to make sure that API for metrics is not confusing#30
Merged
martinfleis merged 3 commits into main (Dec 11, 2025)
Conversation
Contributor
Pull request overview
This PR improves the API for metrics in gwlearn by introducing clearer naming conventions that distinguish between three types of metrics: focal prediction (from single local models), pooled data from local models, and local scores per model. The main changes rename metric attributes to use explicit prefixes (focal_, pooled_, local_) to reduce confusion about how metrics are computed.
- Metric attributes renamed from generic names (e.g., score_) to prefixed versions (e.g., focal_score_, pooled_oob_score_, local_score_)
- Test files updated to reference the new metric names
- Configuration updates to pixi workspace and dependencies
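As a quick before/after of the rename (the attribute names are taken from the summary and file descriptions above; the dict form is only illustrative, not gwlearn API):

```python
# Old generic metric attribute names mapped to the prefixed names this PR
# introduces. The mapping dict itself is just for illustration; only the
# names on each side come from the PR summary and file descriptions.
rename = {
    "score_": "focal_score_",               # score of focal predictions
    "oob_score_": "pooled_oob_score_",      # OOB score on pooled local predictions
    "local_pooled_score_": "local_score_",  # per-model local scores
}
print(rename["score_"])  # focal_score_
```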
Reviewed changes
Copilot reviewed 9 out of 13 changed files in this pull request and generated 5 comments.
Show a summary per file
| File | Description |
|---|---|
| pyproject.toml | Updates pixi configuration from [tool.pixi.project] to [tool.pixi.workspace] and adds osmnx/pandana dependencies |
| gwlearn/tests/test_search.py | Updates custom metrics references to use focal_ prefix |
| gwlearn/tests/test_linear_model.py | Updates test assertions for renamed local_pooled_* metrics to local_* |
| gwlearn/tests/test_ensemble.py | Updates test assertions for renamed oob_* metrics to pooled_oob_* |
| gwlearn/tests/test_base.py | Updates test assertions to use new focal_ prefixed metric names |
| gwlearn/linear_model.py | Updates docstrings and implementation to consistently use focal_, pooled_, and local_ prefixes |
| gwlearn/ensemble.py | Updates docstrings and implementation for OOB metrics to use pooled_oob_ and local_oob_ prefixes |
| gwlearn/base.py | Updates implementation to use focal_ prefix for metrics and documents the naming convention |
| docs/source/mgwr_comparison.ipynb | Notebook execution output updates (sklearn HTML representation changes) |
| .gitignore | Adds docs/source/cache/ to ignored paths |
Co-authored-by: Copilot <[email protected]>
An attempt to mitigate the confusion about how score_ and other metrics are computed.
We have 3 ways of computing a metric:
- focal: from the prediction each single local model makes at its focal location
- pooled: from the predictions of all local models pooled together into one global score
- local: one score per local model
The only question is how to make aliases for score_ and similar. mgwr technically uses what we call focal, but one could also get the prediction using the predict method, which uses the ensemble of local models, not a single one. That would be the most robust way of evaluating the performance, but also quite costly. So I'll make sure this is properly documented and users can do that themselves if they wish.
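To make the distinction between the metric flavours concrete, here is a toy sketch in pure Python. Nothing in it is gwlearn API: the "local models" are plain threshold rules and the data is fabricated, purely to show how per-model (local) scores and a pooled score over all focal predictions can differ.

```python
# Toy illustration of the metric flavours discussed in this PR.
# The "local models" and data are made up; no gwlearn is involved.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Three locations, each with its own fitted local model.
local_models = {
    "a": lambda x: x > 0.3,
    "b": lambda x: x > 0.5,
    "c": lambda x: x > 0.7,
}
X = {"a": [0.2, 0.6], "b": [0.4, 0.9], "c": [0.1, 0.8]}
y = {"a": [False, True], "b": [True, True], "c": [False, True]}

# Focal predictions: each observation predicted by its own local model.
focal_pred = {loc: [local_models[loc](x) for x in X[loc]] for loc in X}

# "local": one score per local model, on its own observations.
local_score = {loc: accuracy(y[loc], focal_pred[loc]) for loc in X}

# "pooled": all local predictions pooled into a single global score.
pooled_true = [t for loc in X for t in y[loc]]
pooled_pred = [p for loc in X for p in focal_pred[loc]]
pooled_score = accuracy(pooled_true, pooled_pred)

print(local_score)             # {'a': 1.0, 'b': 0.5, 'c': 1.0}
print(round(pooled_score, 3))  # 0.833
```

Evaluating via the ensemble predict method discussed above would be a fourth, more robust option, but it requires re-running every local model for every observation, which is why it is left to the user.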