Currently, verifying that backends work correctly and that models trained with previous Annif versions remain compatible is a manual process, typically performed by running the `annif eval` command by hand. This manual approach is time-consuming and error-prone, especially before new version releases.
This issue proposes automating the verification process, covering:
- Automated execution of `annif eval` (and other relevant commands) for all supported backends and algorithms
- Systematic compatibility testing of models trained with previous Annif versions
- Integration of these automated checks into CI pipelines or as dedicated test scripts
- Documentation of the automated verification process for maintainers
Automating these steps will improve reliability, reduce manual effort, and help catch regressions or compatibility issues earlier in the release cycle.
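As a starting point, a minimal sketch of such a verification script is shown below. The project IDs and the evaluation corpus path are hypothetical placeholders; the real setup would use the projects and gold-standard corpora agreed on for release checks, and could equally well be wrapped in pytest or run as a CI job.

```python
"""Sketch of an automated backend verification script (placeholder setup).

Assumptions (hypothetical, to be adapted to the actual release-check setup):
- one pre-configured project exists per backend, e.g. "tfidf-en", "omikuji-en"
- a gold-standard evaluation corpus lives under eval-corpus/
"""
import subprocess
import sys

# Hypothetical project IDs, one per backend/algorithm to verify.
PROJECTS = ["tfidf-en", "fasttext-en", "omikuji-en", "mllm-en"]
EVAL_CORPUS = "eval-corpus/"


def run_eval(project_id: str) -> bool:
    """Run `annif eval` for one project and report success or failure."""
    result = subprocess.run(
        ["annif", "eval", project_id, EVAL_CORPUS],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(f"FAIL {project_id}:\n{result.stderr}", file=sys.stderr)
        return False
    print(f"OK   {project_id}")
    print(result.stdout)
    return True


if __name__ == "__main__":
    failures = [p for p in PROJECTS if not run_eval(p)]
    sys.exit(1 if failures else 0)
```

Pointing the projects' data directories at models trained with an older Annif release would let the same script double as the compatibility check, and its non-zero exit code makes it straightforward to plug into a CI pipeline.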