For smooth interaction, a robot needs to adapt as well as possible to the group of people it is currently interacting with. This includes adapting to the composition of the visitor group, to the current level of visitor fluctuation, and to changes in the visitors' engagement in the ongoing interaction. Realizing this requires several perceptual and behavioral abilities, including the ability to detect group characteristics such as group size or the age range of its members, and the ability to detect engagement indicators such as the interest the members show towards the robot.
To collect and aggregate the relevant cues and to build and maintain hypotheses about the group's state and composition, Bielefeld University developed a new component in the HUMAVIPS project called the GroupManager. This component receives results from several perception components (such as face detection/tracking, visual focus estimation, or face classification) as input cues, aggregates them (e.g., by maintaining sliding windows of historical information or by combining several cues), and calculates derived measures from this aggregated data. This provides the stabilization and the layer of abstraction that higher-level components need when deciding how the robot should adapt its behavior.
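The aggregation scheme described above can be sketched roughly as follows. This is a minimal illustration, not the actual HUMAVIPS implementation: the class and method names, the window length, and the two derived measures (a stabilized group-size estimate and a simple engagement ratio) are assumptions chosen for the example.

```python
from collections import deque
from statistics import median

class GroupManagerSketch:
    """Illustrative sketch: aggregate per-frame perception cues over
    sliding windows and derive stabilized group measures from them."""

    def __init__(self, window_size=30):
        # One sliding window per cue, keeping only the most recent frames.
        self.face_counts = deque(maxlen=window_size)       # faces detected per frame
        self.attentive_counts = deque(maxlen=window_size)  # faces oriented toward the robot

    def observe(self, n_faces, n_attentive):
        """Feed one frame of perception results, e.g. from face
        detection/tracking and visual focus estimation."""
        self.face_counts.append(n_faces)
        self.attentive_counts.append(n_attentive)

    def group_size(self):
        """Stabilized group-size estimate: the median face count over the
        window smooths out per-frame detection dropouts and false positives."""
        return int(median(self.face_counts)) if self.face_counts else 0

    def engagement(self):
        """Fraction of detected faces oriented toward the robot over the
        window: a simple indicator of the group's interest."""
        total = sum(self.face_counts)
        return sum(self.attentive_counts) / total if total else 0.0
```

For example, feeding five frames of (faces, attentive faces) counts such as (3, 2), (2, 2), (3, 1), (3, 3), (3, 2) yields a stabilized group size of 3 even though one frame briefly lost a face. Higher-level behavior components could then consume such derived measures instead of raw, noisy per-frame detections.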
This video shows some highlights from an interactive demonstration that incorporates the results of the GroupManager component.