HUMAVIPS - http://humavips.inrialpes.fr

Best Paper Award at IEEE MMSP’13
http://humavips.inrialpes.fr/2013/10/04/best-paper-award-at-ieee-mmsp13/
Fri, 04 Oct 2013, by Radu Horaud

The article “Alignment of Binocular-Binaural Data Using a Moving Audio-Visual Target” received the Best Paper Award at the IEEE International Workshop on Multimedia Signal Processing (MMSP’13), Pula, Italy, September-October 2013. The paper is authored by Vasil Khalidov (IDIAP), Radu Horaud (INRIA) and Florence Forbes (INRIA).

The paper addresses the problem of aligning visual and auditory data acquired with a sensor composed of a camera pair and a microphone pair. The original contribution of the paper is a method that aligns the audio-visual data by estimating the 3D positions of the microphones in the visually centred coordinate frame defined by the stereo camera pair.

The paper can be downloaded here: Alignment of Binocular-Binaural Data Using a Moving Audio-Visual Target.
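To give a flavour of the idea, the sketch below is our own illustration under simplifying assumptions, not the algorithm from the paper; all function and variable names are hypothetical. It estimates the two microphone positions in the camera frame from the 3D trajectory of a moving audio-visual target, as reconstructed by the stereo pair, together with the time differences of arrival (TDOAs) measured between the two microphones.

# Sketch: recover the two microphone positions in the stereo-camera frame from
# (3D target position, TDOA) pairs collected while an audio-visual target moves.
# Assumptions: target positions come from stereo reconstruction and TDOAs from
# cross-correlating the two microphone signals (neither step is shown here).
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s

def tdoa_residuals(params, targets, tdoas):
    """Difference between predicted and measured TDOAs for candidate
    microphone positions m1 and m2 (stacked in a 6-vector)."""
    m1, m2 = params[:3], params[3:]
    predicted = (np.linalg.norm(targets - m1, axis=1)
                 - np.linalg.norm(targets - m2, axis=1)) / SPEED_OF_SOUND
    return predicted - tdoas

def align_microphones(targets, tdoas, init=None):
    """targets: (N, 3) target positions in the camera frame (metres),
    tdoas: (N,) measured time differences of arrival (seconds)."""
    if init is None:
        init = np.array([-0.1, 0.0, 0.0, 0.1, 0.0, 0.0])  # rough initial guess
    sol = least_squares(tdoa_residuals, init, args=(targets, tdoas))
    return sol.x[:3], sol.x[3:]

# Synthetic check: simulate a moving target heard by two "true" microphones.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_m1, true_m2 = np.array([-0.15, 0.0, 0.0]), np.array([0.15, 0.0, 0.0])
    targets = rng.uniform([-1.0, -1.0, 1.0], [1.0, 1.0, 3.0], size=(200, 3))
    tdoas = (np.linalg.norm(targets - true_m1, axis=1)
             - np.linalg.norm(targets - true_m2, axis=1)) / SPEED_OF_SOUND
    m1, m2 = align_microphones(targets, tdoas)
    print("estimated m1:", m1, "estimated m2:", m2)

The method in the paper is of course more elaborate; the sketch only conveys why observing a moving audio-visual target constrains the microphone positions in the camera frame.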

Adapting Robot Behavior to Group Composition & Group Engagement
http://humavips.inrialpes.fr/2013/04/04/adapting-robot-behavior-to-group-composition-group-engagement/
Thu, 04 Apr 2013, by jwienke

To achieve a smooth interaction, a robot needs to adapt as well as possible to the current group of interacting people. This includes adapting to the composition of the visitor group, to the current level of visitor fluctuation, and to changes in the visitors’ engagement in the ongoing interaction. Realizing this requires several perceptual and behavioral abilities, including the ability to detect group characteristics such as the group size or the age range of its members, and the ability to detect indicators of the group’s engagement, such as the interest its members show towards the robot.

In order to collect and aggregate the relevant cues and to build and maintain hypotheses about the group state and composition, Bielefeld University developed a new component in the HUMAVIPS project called the GroupManager. This component receives the results of several perception components (such as face detection/tracking, visual focus estimation or face classification) as input cues, aggregates them (e.g. by keeping sliding windows of historical information or by combining several cues), and computes several derived measures from the aggregated data. This provides the stabilization and the layer of abstraction needed by higher-level components that decide how the robot should adapt its behavior.
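As a rough sketch of the idea (not the actual GroupManager implementation; the class and cue names below are hypothetical), such a component can keep recent perception results in a sliding window and derive stabilized group-level measures such as group size, age range and engagement:

# Sketch of a cue-aggregating component in the spirit of the GroupManager:
# per-frame perception results are kept in a sliding window and turned into
# stabilized, group-level measures for higher-level behavior decisions.
from collections import deque
from dataclasses import dataclass
from statistics import mean

@dataclass
class FrameCues:
    face_ids: list          # IDs of the faces currently tracked
    looking_at_robot: list  # subset of face_ids whose visual focus is the robot
    estimated_ages: dict    # face_id -> age estimate from face classification

@dataclass
class GroupState:
    group_size: float       # smoothed number of visible people
    age_range: tuple        # (youngest, oldest) age seen in the window
    engagement: float       # fraction of the group attending to the robot

class GroupAggregator:
    """Aggregates perception cues over a sliding window of recent frames."""

    def __init__(self, window_size=30):
        self.window = deque(maxlen=window_size)

    def update(self, cues: FrameCues) -> GroupState:
        self.window.append(cues)
        # Smooth the group size over the window to suppress tracker flicker.
        group_size = mean(len(c.face_ids) for c in self.window)
        # Age range over all faces observed in the window.
        ages = [a for c in self.window for a in c.estimated_ages.values()]
        age_range = (min(ages), max(ages)) if ages else (None, None)
        # Engagement: average fraction of visible people looking at the robot.
        ratios = [len(c.looking_at_robot) / len(c.face_ids)
                  for c in self.window if c.face_ids]
        engagement = mean(ratios) if ratios else 0.0
        return GroupState(group_size, age_range, engagement)

A higher-level behavior component could then, for example, simplify its wording when the age range suggests children are present, or try to re-engage the group when the engagement estimate drops.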

This video shows some highlights from an interactive demonstration incorporating the results from the GroupManager component.

Adapting Robot Behavior to Group Composition & Group Engagement

The video is available on our YouTube channel.

Outstanding Paper Award at ICMI’12
http://humavips.inrialpes.fr/2013/03/22/outstanding-paper-award-at-icmi%e2%80%9912/
Fri, 22 Mar 2013, by Radu Horaud

The article “Linking Speaking and Looking Behavior Patterns with Group Composition, Perception, and Performance” received one of the Outstanding Paper Awards (best papers) at the 14th ACM International Conference on Multimodal Interaction (ICMI’12), Santa Monica, CA, USA, October 2012. The paper is authored by Dineshbabu Jayagopi, Dairazalia Sanchez-Cortes, Kazuhiro Otsuka, Junji Yamato, and Daniel Gatica-Perez (IDIAP).

This paper addresses the task of mining typical behavioral patterns from small-group face-to-face interactions and linking them to social-psychological group variables. The paper can be downloaded here: Linking Speaking and Looking Behavior Patterns with Group Composition, Perception, and Performance.

Emergent leaders through looking and speaking: from audio-visual data to multimodal recognition
http://humavips.inrialpes.fr/2013/03/14/emergent-leaders-through-looking-and-speaking-from-audio-visual-data-to-multimodal-recognition/
Thu, 14 Mar 2013, by odobez

D. Sanchez-Cortes, O. Aran, D. Jayagopi, M. Schmid Mast and D. Gatica-Perez

Journal on Multimodal User Interfaces, Special Issue on Multimodal Corpora, published online Aug. 2012

Emergent leaders through looking and speaking: from audio-visual data to multimodal recognition

Robot-to-Group Interaction in a Vernissage: Architecture and Dataset for Multi-Party Dialog
http://humavips.inrialpes.fr/2013/03/14/robot-to-group-interaction-in-a-vernissage-architecture-and-dataset-for-multi-party-dialog/
Thu, 14 Mar 2013, by odobez

D. Klotz, J. Wienke, B. Wrede, S. Wrede, S. Sheikhi, D. Jayagopi, V. Khalidov and J.-M. Odobez

Proc. of the CogSys Conference, 2012

Robot-to-Group Interaction in a Vernissage: Architecture and Dataset for Multi-Party Dialog

Recognizing the Visual Focus of Attention for Human Robot Interaction
http://humavips.inrialpes.fr/2013/03/14/recognizing-the-visual-focus-of-attention-for-human-robot-interaction/
Thu, 14 Mar 2013, by odobez

S. Sheikhi, V. Khalidov and J.-M. Odobez

IROS Workshop on Human Behavior Understanding, Vilamoura, 2012

Recognizing the Visual Focus of Attention for Human Robot Interaction

Linking speaking and looking behavior patterns with group composition, perception, and performance
http://humavips.inrialpes.fr/2013/03/14/linking-speaking-and-looking-behavior-patterns-with-group-composition-perception-and-performance/
Thu, 14 Mar 2013, by odobez

D. Jayagopi, D. Sanchez-Cortes, K. Otsuka, J. Yamato and D. Gatica-Perez

Outstanding Paper Award, Proceedings of the 14th ACM International Conference on Multimodal Interaction (ICMI 2012), Santa Monica, USA

Linking speaking and looking behavior patterns with group composition, perception, and performance

A Track Creation and Deletion Framework for Long-Term Online Multi-Face Tracking
http://humavips.inrialpes.fr/2013/03/14/a-track-creation-and-deletion-framework-for-long-term-online-multi-face-tracking/
Thu, 14 Mar 2013, by odobez

S. Duffner and J.-M. Odobez

A Track Creation and Deletion Framework for Long-Term Online Multi-Face Tracking

IEEE Transactions on Image Processing, March 2013

Gaze estimation from multimodal Kinect data
http://humavips.inrialpes.fr/2013/03/14/gaze-estimation-from-multimodal-kinect-data/
Thu, 14 Mar 2013, by odobez

K. Funes and J.-M. Odobez

CVPR Workshop on Face and Gesture and Kinect demonstration competition, Providence, USA, 2012

Gaze estimation from multimodal Kinect data

Given that, Should I Respond? Contextual Addressee Estimation in Multi-Party Human-Robot Interactions
http://humavips.inrialpes.fr/2013/03/14/given-that-should-i-respond-contextual-addressee-estimation-in-multi-party-human-robot-interactions/
Thu, 14 Mar 2013, by odobez

D. Jayagopi and J.-M. Odobez

Given that, Should I Respond? Contextual Addressee Estimation in Multi-Party Human-Robot Interactions

Human-Robot Interaction (HRI) Conference, Tokyo, 2013.
