HUMAVIPS » Demos
http://humavips.inrialpes.fr

Adapting Robot Behavior to Group Composition & Group Engagement
04 Apr 2013 · jwienke

To achieve a smooth interaction, a robot must adapt as well as possible to the current group of interacting people. This includes adapting to the composition of the visitor group, to the current level of visitor fluctuation, and to changes in the visitors' engagement in the ongoing interaction. Realizing this requires several perceptual and behavioral abilities, including the ability to detect group characteristics such as group size or the age range of the group's members, and the ability to detect indicators of group engagement, such as the interest its members show towards the robot.

To collect and aggregate the relevant cues and to build and maintain hypotheses about the group's state and composition, Bielefeld University developed a new component in the HUMAVIPS project called the GroupManager. This component receives results from several perception components (such as face detection/tracking, visual focus estimation, or face classification) as input cues, aggregates them (e.g., by keeping sliding windows of historical information or by combining several cues), and calculates derived measures from this aggregated data. This provides the stabilization and layer of abstraction needed by higher-level components that decide how the robot should adapt its behavior.
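As an illustration, the following Python sketch shows one way such sliding-window cue aggregation could be structured. All names here (FaceCue, GroupState, the window length) are hypothetical assumptions; the post does not specify the GroupManager's actual interfaces.

```python
from collections import deque
from dataclasses import dataclass
from statistics import mean

@dataclass
class FaceCue:
    """One per-frame observation from the perception components.
    Field names are illustrative, not the real GroupManager format."""
    timestamp: float        # seconds
    face_count: int         # faces detected in this frame
    looking_at_robot: int   # faces whose visual focus is on the robot

class GroupState:
    """Sliding-window aggregation of noisy per-frame cues into
    stabilized, derived group-level measures."""

    def __init__(self, window_seconds=5.0):
        self.window_seconds = window_seconds
        self.history = deque()

    def add_cue(self, cue):
        self.history.append(cue)
        # Drop observations that have fallen out of the time window.
        while self.history and cue.timestamp - self.history[0].timestamp > self.window_seconds:
            self.history.popleft()

    def group_size(self):
        """Smoothed estimate of the number of visitors."""
        return mean(c.face_count for c in self.history) if self.history else 0.0

    def engagement(self):
        """Fraction of observed faces attending to the robot."""
        total = sum(c.face_count for c in self.history)
        return sum(c.looking_at_robot for c in self.history) / total if total else 0.0
```

Higher-level behavior components could then poll group_size() and engagement() at a low rate instead of reacting to every noisy per-frame detection.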

This video shows some highlights from an interactive demonstration incorporating the results from the GroupManager component.


The video is available on our YouTube channel.

Integrated Demonstrator: Engagement-based Multi-party Dialog with Visual Focus of Attention Extraction
19 Apr 2011 · jwienke

When a robot is situated in an environment containing multiple possible interaction partners, it has to decide when to engage specific users and how to detect and react appropriately to user actions that might signal an intention to interact.

In this demonstration we present the integration of an engagement model into a dialog system based on interaction patterns, enabling the humanoid robot Nao to play a quiz game with multiple participants. A user's intention to interact with the system is determined from their visual focus of attention, and processing is based on a spatio-temporal working memory. The demonstrator combines components from two project partners in an integrated scenario using the Nao robot.
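As a rough sketch of how visual focus of attention can gate engagement decisions, consider the following Python fragment. The thresholds and the simple state machine are illustrative assumptions; the demonstrator's actual interaction-pattern-based engagement model is not detailed in this post.

```python
import time

# Illustrative thresholds, not values from the demonstrator.
ATTENTION_THRESHOLD_S = 1.5   # sustained gaze needed to signal intent
DISENGAGE_TIMEOUT_S = 4.0     # gaze absent this long ends engagement

class EngagementTracker:
    """Per-user engagement state driven by visual focus of attention."""

    def __init__(self):
        self.gaze_start = None   # when the current gaze episode began
        self.last_gaze = None    # last moment the user looked at the robot
        self.engaged = False

    def update(self, looking_at_robot, now=None):
        now = time.monotonic() if now is None else now
        if looking_at_robot:
            self.gaze_start = self.gaze_start or now
            self.last_gaze = now
            # Sustained attention is read as an intention to interact.
            if not self.engaged and now - self.gaze_start >= ATTENTION_THRESHOLD_S:
                self.engaged = True
        else:
            self.gaze_start = None
            # A long gap in attention ends the engagement.
            if self.engaged and self.last_gaze is not None \
                    and now - self.last_gaze >= DISENGAGE_TIMEOUT_S:
                self.engaged = False
        return self.engaged
```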

A video of the demonstration is available on our YouTube channel.

WP3: Scene Flow
29 Mar 2011 · Jan Cech

Scene flow is a low-level computer vision method for estimating depth and motion in the scene.

The video shows the resulting disparity and optical flow maps of our GCSFs algorithm [Cech-2011] and a comparison with the spatiotemporal stereo of [Sizintsev-2009] and the variational optical flow of [Brox-2010]. Unlike in the paper, we do not show results of the variational scene flow of [Huguet-2007], since it is extremely computationally expensive.

The maps are color-coded. For disparity, warmer colors are closer to the camera. For optical flow, green denotes zero motion, warmer colors denote motion to the left and up, and colder colors denote motion to the right and down. Black denotes unmatched pixels.
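The following Python sketch (using OpenCV and NumPy, which the project pipeline is not guaranteed to use for this step) shows how such color-coded maps could be produced; the exact palette in the video may differ.

```python
import cv2
import numpy as np

def colorize_disparity(disp, invalid=-1.0):
    """Disparity to color: larger disparity (closer to the camera)
    maps to warmer colors; unmatched pixels become black."""
    valid = disp != invalid
    norm = np.zeros(disp.shape, dtype=np.uint8)
    if valid.any():
        d = disp[valid]
        norm[valid] = (255 * (d - d.min()) / max(float(np.ptp(d)), 1e-6)).astype(np.uint8)
    color = cv2.applyColorMap(norm, cv2.COLORMAP_JET)  # blue (far) .. red (near)
    color[~valid] = 0  # unmatched pixels -> black
    return color

def colorize_flow(flow):
    """Flow (H x W x 2, in pixels) to color: green for zero motion,
    hue shifting warmer for left/up and colder for right/down motion."""
    u, v = flow[..., 0], flow[..., 1]
    signed = -(u + v)  # left (u < 0) and up (v < 0) become positive
    scale = max(float(np.abs(signed).max()), 1e-6)
    hsv = np.zeros((*flow.shape[:2], 3), dtype=np.uint8)
    hsv[..., 0] = np.clip(60 - 60 * signed / scale, 0, 120).astype(np.uint8)
    hsv[..., 1] = 255  # full saturation
    hsv[..., 2] = 255  # full brightness, so zero motion is pure green
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```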

The video contains three sequences: INRIA and INRIA 2, which are from the CAVA dataset of INRIA (http://perception.inrialpes.fr/CAVA_Dataset/), and ETH, which is from a pedestrian dataset of ETH Zurich (http://www.vision.ee.ethz.ch/~aess/dataset/).

For a discussion of our results and the comparison with the other methods, please see Section 3.2 of [Cech-2011].

References

[Cech-2011] J. Cech, J. Sanchez-Riera, and R. Horaud. Scene Flow Estimation by Growing Correspondence Seeds. In CVPR, 2011.

[Sizintsev-2009] M. Sizintsev and R. P. Wildes. Spatiotemporal stereo via spatiotemporal quadratic element (stequel) matching. In CVPR, 2009.

[Brox-2010] T. Brox and J. Malik. Large displacement optical flow: descriptor matching in variational motion estimation. IEEE Trans. on PAMI, 2010. In press.

[Huguet-2007] F. Huguet and F. Devernay. A variational method for scene flow estimation from stereo sequences. In ICCV, 2007.

WP4: Gender Recognition
28 Mar 2011 · Lukáš Cerman

In this demo, Nao serves as a door dispatcher: it directs each person to the right or to the left based on his or her gender, detected from the observed face.
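The post does not describe the detector or classifier used in the demo, but the dispatcher logic itself is simple. A minimal sketch, assuming OpenCV's stock Haar cascade for face detection and a hypothetical classify_gender stand-in (the gender-to-side mapping is arbitrary here):

```python
import cv2

# Placeholder detector; the classifier actually used in WP4 is not
# described in the post, so classify_gender below is hypothetical.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_gender(face_img):
    """Hypothetical stand-in for the WP4 gender classifier;
    expected to return "female" or "male"."""
    raise NotImplementedError

def dispatch(frame):
    """Return the direction command for the first detected face,
    or None when nobody is visible."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    gender = classify_gender(frame[y:y + h, x:x + w])
    return "right" if gender == "female" else "left"
```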


Demo video on YouTube.

WP3: Robot Localization in an Unknown Environment
28 Mar 2011 · Michal Havlena

In the first part of this demo, Nao walks around and captures images of a new environment. At the end of its walk, it uses the acquired images to build a sparse 3D model of the explored environment.

After Nao has learned the new environment, it is taken away from its position and put in an unknown place. Left at this unknown position, Nao looks around until it finds a view that sufficiently overlaps the explored environment. It is then able to calculate its new position.
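A minimal sketch of such a relocalization step, assuming the sparse model stores 3D points together with their feature descriptors (the actual structure-from-motion pipeline used in the demo is not detailed here): match features of the current view against the model and estimate the camera pose with PnP inside RANSAC.

```python
import cv2
import numpy as np

def localize(frame_gray, model_points3d, model_descriptors, K):
    """Estimate the camera pose of one view against the learned sparse
    model. Assumes model_points3d[i] is the 3D point whose descriptor
    is model_descriptors[i]; K is the 3x3 intrinsic matrix."""
    sift = cv2.SIFT_create()
    kpts, descs = sift.detectAndCompute(frame_gray, None)
    if descs is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(descs, model_descriptors, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    if len(good) < 6:
        return None  # view does not overlap the model well enough
    obj = np.float32([model_points3d[m.trainIdx] for m in good])
    img = np.float32([kpts[m.queryIdx].pt for m in good])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)
    return (rvec, tvec) if ok else None
```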


Demo video on YouTube.

Perception-Action Loops with a Spatio-temporal Working Memory
22 Mar 2011 · jwienke

This scenario demonstrates a closed perception-action loop on Nao with the spatio-temporal working memory system m³s. A standard face detection algorithm is used to find the directions of people around the robot. Since a stereo-vision setup is not yet available, distance is estimated from the size of the bounding boxes of detected faces. The detected faces are then maintained in the memory as a basic representation of a person, and the robot tries to follow persons that previously approached it, closing the perception-action loop. This demonstrator shows the integration of different system components through the memory architecture and verifies the applicability of the features provided by the memory system.
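The distance heuristic follows directly from the pinhole camera model: an object of known physical width appears with a pixel width inversely proportional to its distance. A minimal sketch, with assumed constants (the demonstrator's calibration values are not given):

```python
# Both constants are assumptions, not values from the demonstrator.
FOCAL_LENGTH_PX = 560.0   # camera focal length in pixels
FACE_WIDTH_M = 0.16       # typical adult face width in meters

def estimate_distance(bbox_width_px):
    """Rough distance to a detected face from its bounding-box width,
    using the pinhole relation: width_px = f * width_m / distance."""
    return FOCAL_LENGTH_PX * FACE_WIDTH_M / bbox_width_px
```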

A video showing the demonstrator in action is available on our YouTube channel.

Automatic calibration of an audiovisual robotic head
15 Feb 2011 · Radu Horaud

This demonstration shows the ability to calibrate the visual and auditory sensors available on a robotic head. Audio-visual calibration is a prerequisite for higher-order tasks such as audio-visual fusion, event detection, and recognition. In particular, we will demonstrate the calibration of a setup composed of two stereoscopic cameras and four microphones. All the sensors (cameras and microphones) will be described in a common coordinate system: six degrees of freedom for each camera (3D position and 3D orientation) and three degrees of freedom for each microphone (3D position). This scenario will be demonstrated using the POPEYE audio-visual robotic head available at INRIA, which was developed within the FP6 STREP project "Perception on Purpose" (POP).
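In other words, the calibration estimates 2 × 6 + 4 × 3 = 24 parameters in one common frame. A small sketch of this parameterization (the actual calibration code is not shown in this post):

```python
import numpy as np

N_CAMERAS, N_MICROPHONES = 2, 4
N_PARAMS = N_CAMERAS * 6 + N_MICROPHONES * 3   # 24 for the POPEYE head

def unpack(theta):
    """Split a flat calibration vector into per-sensor parameters,
    all expressed in the common coordinate frame."""
    theta = np.asarray(theta)
    assert theta.size == N_PARAMS
    cams = theta[:N_CAMERAS * 6].reshape(N_CAMERAS, 6)      # rx, ry, rz, tx, ty, tz
    mics = theta[N_CAMERAS * 6:].reshape(N_MICROPHONES, 3)  # x, y, z
    return cams, mics
```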

The calibration of an audio-visual robot head
