OceanBench is a benchmarking tool for evaluating ocean forecasting systems against reference ocean analysis datasets (such as the 2024 GLORYS reanalysis and the GLO12 analysis) as well as against observations.
The official score table is available on the OceanBench website.
Evaluating a system consists of sequentially executing a Python notebook that runs several evaluation methods against a set of forecasts produced by the system, called the challenger dataset, which is opened as an xarray Dataset.
The OceanBench documentation describes the shape a challenger dataset must have, as well as the definitions of the methods used to evaluate systems.
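As an illustration, a challenger dataset could be structured as in the following sketch. All dimension, coordinate, and variable names below are assumptions made for this example; the OceanBench documentation remains the authoritative reference for the required shape.

```python
import numpy as np
import pandas as pd
import xarray as xr

# Illustrative challenger dataset. Dimension, coordinate, and variable
# names are assumptions for this sketch; see the OceanBench documentation
# for the exact shape a challenger dataset must have.
challenger = xr.Dataset(
    data_vars={
        # Example forecast variables: sea surface height and temperature.
        "zos": (
            ("forecast_reference_time", "lead_time", "latitude", "longitude"),
            np.zeros((2, 10, 180, 360), dtype="float32"),
        ),
        "thetao": (
            ("forecast_reference_time", "lead_time", "latitude", "longitude"),
            np.zeros((2, 10, 180, 360), dtype="float32"),
        ),
    },
    coords={
        "forecast_reference_time": pd.date_range("2024-01-03", periods=2, freq="7D"),
        "lead_time": pd.timedelta_range(start="1D", periods=10, freq="1D"),
        "latitude": np.linspace(-89.5, 89.5, 180),
        "longitude": np.linspace(-179.5, 179.5, 360),
    },
)
print(challenger)
```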
All official challenger notebooks are maintained and remain executable so that scores can be updated with new OceanBench versions: all official challengers are re-evaluated with each new release.
To officially submit your system to OceanBench, please open an issue on this repository attaching one of the following:
- The executed notebook resulting from an interactive or programmatic evaluation.
- A way to access the system output data in a standard format (e.g. Zarr or NetCDF; a minimal export sketch follows this list).
- A way to execute the system code or container along with clear instructions for how to run it (e.g., input/output format, required dependencies, etc.).
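Regarding the Zarr/NetCDF option above: if your forecasts are held in an xarray Dataset, both accepted formats can be written directly. This is a minimal sketch with placeholder names and paths:

```python
import numpy as np
import xarray as xr

# Minimal stand-in for a system forecast; names are illustrative only.
forecast = xr.Dataset(
    {"zos": (("latitude", "longitude"), np.zeros((180, 360), dtype="float32"))},
    coords={
        "latitude": np.linspace(-89.5, 89.5, 180),
        "longitude": np.linspace(-179.5, 179.5, 360),
    },
)

# Either standard format works for submission; paths are placeholders.
forecast.to_zarr("my_system_forecast.zarr", mode="w")
forecast.to_netcdf("my_system_forecast.nc")
```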
In addition, please provide the following metadata:
- The organization that leads the construction or operation of the system.
- A link to the reference paper of the system.
- The system method. For example, "Physics-based", "ML-based" or "Hybrid".
- The system type. For example, "Forecast (deterministic)" or "Forecast (ensemble)".
- The system initial conditions. For example, "GLO12/IFS".
- The approximate horizontal resolution of the system. For example, "1/12°" or "1/4°".
Check out this notebook that evaluates a sample (two forecasts) of the GLONET system on OceanBench. The resulting executed notebook is used as the evaluation report of the system, and its content is used to populate the OceanBench score table.
You can replace the cell that opens the challenger datasets with your code and execute the notebook.
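For example, such a replacement cell could open your own forecast files with xarray. This is only a sketch: the paths and the variable name are assumptions, and the exact way the notebook consumes the opened datasets is defined by the example notebook and the documentation.

```python
import xarray as xr

# Hypothetical replacement for the cell that opens the challenger
# datasets: open your own forecasts instead of the GLONET sample.
# Paths and the variable name are assumptions for this sketch.
challenger_datasets = [
    xr.open_dataset("forecasts/my_system_2024-01-03.nc"),
    xr.open_dataset("forecasts/my_system_2024-01-10.nc"),
]
```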
You will need to install OceanBench manually in your environment.
```
pip install oceanbench
```

Or install from source in editable mode:

```
git clone [email protected]:mercator-ocean/oceanbench.git && cd oceanbench/ && pip install --editable .
```

You can open and manually execute the example notebook in EDITO datalab by clicking here:
Once installed, you can evaluate your system using Python with the following code:

```python
import oceanbench

oceanbench.evaluate_challenger("path/to/file/opening/the/challenger/datasets.py", "notebook_report_name.ipynb")
```

More details in the documentation.
Running OceanBench to evaluate systems with 1/12° resolution uses the Copernicus Marine Toolbox and therefore requires authentication with the Copernicus Marine Service.
If you're running OceanBench non-interactively, please follow the Copernicus Marine Toolbox documentation to log in to the Copernicus Marine Service before running the bench.
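For instance, you can authenticate once beforehand, either with the Toolbox's `copernicusmarine login` command or from Python. The snippet below is a sketch; the placeholder credentials must be replaced with your own, and the Copernicus Marine Toolbox documentation lists all supported authentication methods.

```python
import copernicusmarine

# Store Copernicus Marine Service credentials before running the bench.
# Username and password below are placeholders.
copernicusmarine.login(username="your-username", password="your-password")
```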
Your help to improve OceanBench is welcome. Please first read the contribution instructions here.
Licensed under the EUPL-1.2 license.
Implemented by:
As part of a fruitful collaboration with:
Powered by: