CORSMAL explored the fusion of multiple sensing modalities (touch, sound, and first- and third-person vision) to accurately and robustly estimate the physical properties of objects in noisy and potentially ambiguous environments. The project aimed to develop a framework for the recognition and manipulation of objects in cooperation with humans, mimicking the human capability of learning and adapting across different manipulators, tasks, sensing configurations, and environments. The focus was on defining learning architectures for multimodal sensory data and for data aggregated from different environments, with the goal of continually improving the adaptability and robustness of the learned models and generalising capabilities across tasks and sites.
Official website: https://corsmal.github.io
The website was originally hosted at http://corsmal.eecs.qmul.ac.uk/, which is still reachable.
- Queen Mary University of London, United Kingdom
- Sorbonne Université, France
- École polytechnique fédérale de Lausanne (EPFL), Switzerland
CORSMAL was a CHIST-ERA project funded under the Call 2017 on the topic Object Recognition and Manipulation by Robots: Data Sharing and Experiment Reproducibility (ORMR). The project was funded by UK EPSRC grant EP/S031715/1, France ANR grant ANR-18-CHR3-0006, and Swiss NSF grant 20CH21_180444. The project ran from February 2019 to December 2022.
More information: https://www.chistera.eu/projects/corsmal