Humanoids with Auditory and Visual Abilities in Populated Spaces

HUMAVIPS was a three-year European project (1 February 2010 – 31 January 2013)


Scientific coordinator: Radu Horaud

Humanoids expected to collaborate with people should be able to interact with them as naturally as possible. This requires significant perceptual and interactive skills, operating in a coordinated fashion. Consider a social gathering, a scenario in which a humanoid is expected to possess certain social skills. It should be able to analyze a populated space, localize people, and determine whether they are looking at the robot and speaking to it. Humans solve these tasks routinely by integrating the often complementary information provided by multiple senses, from 3D object positioning and sound-source localization to gesture recognition. Understanding the world from unrestricted sensory data, recognizing people’s intentions, and behaving like humans do are extremely challenging problems.

The objective of HUMAVIPS has been to endow humanoid robots with audiovisual (AV) abilities: exploration, recognition, and interaction, such that they exhibit adequate behavior when dealing with a group of people. The project’s research and technological developments have emphasized the role played by multimodal perception within principled models of human-robot interaction and of humanoid behavior. A dedicated architecture has implemented auditory and visual skills on a fully programmable humanoid robot (the consumer robot NAO). A free and open-source software platform has been developed to foster dissemination and to ensure exploitation of the outcomes of HUMAVIPS beyond its lifetime.
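One of the audiovisual abilities described above, deciding which person in a group is speaking to the robot, can be illustrated with a minimal fusion sketch: match the sound-source direction estimated by a microphone array against the bearings of faces detected by the camera. The function name, angular tolerance, and nearest-bearing fusion rule below are illustrative assumptions, not the actual HUMAVIPS implementation.

```python
def active_speaker(face_azimuths, sound_azimuth, tolerance_deg=15.0):
    """Hypothetical audiovisual fusion sketch (not the HUMAVIPS code):
    return the index of the detected face whose azimuth (degrees) is
    closest to the sound-source azimuth, or None if no face lies
    within the angular tolerance."""
    best_idx, best_err = None, tolerance_deg
    for i, azimuth in enumerate(face_azimuths):
        # Wrap the angular difference into [-180, 180) before comparing.
        err = abs((azimuth - sound_azimuth + 180.0) % 360.0 - 180.0)
        if err <= best_err:
            best_idx, best_err = i, err
    return best_idx

# Three detected faces; the sound arrives from roughly the second one's direction.
print(active_speaker([-40.0, 5.0, 60.0], 8.0))  # -> 1
# No face close enough to the sound direction.
print(active_speaker([-40.0, 60.0], 0.0))       # -> None
```

A real system would of course replace the two scalar inputs with probabilistic estimates from face tracking and sound-source localization, but the wrap-around comparison of bearings is the core of any such geometric fusion.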

The HUMAVIPS researchers received several awards at ICMI’11, ICMI’12, and MMSP’13.

The technical annex | The project’s final report | EU funding: 2.62M€