Daniele Rege Cambrin¹ · Gabriele Scaffidi¹ · Luca Colomba¹
Giovanni Malnati¹ · Daniele Apiletti¹ · Paolo Garza¹
¹Politecnico di Torino, Italy
In this work, we propose an automated game-testing solution to evaluate the quality of game tutorials. Our approach leverages Vision-Language Models (VLMs) to analyze frames from video game tutorials, answer relevant questions to simulate human perception, and provide feedback. This feedback is compared with the expected results to identify confusing or problematic scenes and highlight potential errors for developers.
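The comparison step described above can be sketched in a few lines of Python. Note that the frame identifiers, questions, and string-equality check below are illustrative assumptions for this sketch, not the repository's actual data schema or matching logic:

```python
from dataclasses import dataclass

@dataclass
class SceneCheck:
    frame_id: str   # identifier of the tutorial frame (illustrative)
    question: str   # question posed to the VLM about the frame
    expected: str   # answer a player should be able to give
    predicted: str  # answer returned by the VLM

def flag_problematic(checks):
    """Return frames where the VLM's answer disagrees with the expected one,
    suggesting the tutorial scene may be confusing or problematic."""
    return [c.frame_id for c in checks
            if c.predicted.strip().lower() != c.expected.strip().lower()]

checks = [
    SceneCheck("frame_001", "Which button opens the map?", "Press M", "Press M"),
    SceneCheck("frame_002", "What is the current objective?", "Collect 3 gems", "Unclear"),
]
print(flag_problematic(checks))  # → ['frame_002']
```

Here a mismatch on `frame_002` flags that scene for developer review; the real pipeline would aggregate such mismatches over the whole tutorial.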
Install the dependencies for each model as declared in its respective repository. Use generate.ipynb to generate the answers with the desired model. The models folder contains wrappers for the tested models.
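The wrappers themselves are not reproduced here; the following is a minimal sketch of the kind of common interface such a wrapper might expose, where the class and method names (`VLMWrapper`, `answer`, `EchoWrapper`) are assumptions for illustration rather than the repository's actual API:

```python
from abc import ABC, abstractmethod

class VLMWrapper(ABC):
    """Hypothetical common interface for the model wrappers (names are assumptions)."""

    @abstractmethod
    def answer(self, frame_path: str, question: str) -> str:
        """Return the model's answer to a question about a single frame."""

class EchoWrapper(VLMWrapper):
    """Stub used here only to show how a wrapper would plug into the notebook loop."""
    def answer(self, frame_path: str, question: str) -> str:
        return f"stub answer for {question!r}"

def run(wrapper: VLMWrapper, frames, questions):
    # Iterate over (frame, question) pairs, as a generation loop would
    return {(f, q): wrapper.answer(f, q) for f in frames for q in questions}

answers = run(EchoWrapper(), ["frame_001.png"], ["Which button opens the map?"])
print(len(answers))  # → 1
```

Swapping `EchoWrapper` for a real model wrapper would leave the generation loop unchanged, which is the point of keeping the wrappers behind one interface.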
The dataset is available on HuggingFace. For more information about the data, refer to the HuggingFace page.
The repository was set up by Luca Colomba and Daniele Rege Cambrin.
This project is licensed under the Apache 2.0 license. See LICENSE for more information.
If you find this project useful, please consider citing:
@inbook{RegeCambrin2025,
  author    = {Rege Cambrin, Daniele and Scaffidi Militone, Gabriele and Colomba, Luca and Malnati, Giovanni and Apiletti, Daniele and Garza, Paolo},
  title     = {Level Up Your Tutorials: VLMs for Game Tutorials Quality Assessment},
  booktitle = {Computer Vision -- ECCV 2024 Workshops},
  publisher = {Springer Nature Switzerland},
  year      = {2025},
  pages     = {374--389},
  isbn      = {9783031923876},
  issn      = {1611-3349},
  doi       = {10.1007/978-3-031-92387-6_26},
  url       = {http://dx.doi.org/10.1007/978-3-031-92387-6_26}
}