Selecting and commanding individual robots in a multi-robot system can be challenging: interactions typically occur over a conventional human-computer interface or a specialized remote control. Humans, however, can easily select and command one another in large groups using only eye contact and gestures. Can similar non-verbal communication channels be used for human-robot interaction?
In this work, we describe a novel human-robot interface that uses face engagement to select a particular robot from a group of robots. Each robot performs face detection; the resulting face-detection score is used in a distributed leader election algorithm to guarantee that a single robot is selected. Once selected, the robot is assigned tasks via motion-based gestures. In our demonstration, robots are commanded to drive to one of two predefined locations.
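The selection step can be sketched as follows. This is a minimal illustration, not the authors' exact protocol: it assumes each robot broadcasts its face-detection score together with its id, and that every robot applies the same deterministic rule (highest score wins, ties broken by id), so all robots agree on a single leader. The `Robot` class and `elect_leader` function are hypothetical names introduced for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    robot_id: int
    face_score: float  # hypothetical confidence value from this robot's face detector

def elect_leader(robots):
    """Return the id of the single elected robot.

    Assumes every robot has broadcast its (face_score, robot_id) pair, so
    each robot can run this same deterministic rule locally and all robots
    converge on the same leader.
    """
    # Highest face-detection score wins; the lower id breaks ties,
    # guaranteeing exactly one winner even with equal scores.
    winner = max(robots, key=lambda r: (r.face_score, -r.robot_id))
    return winner.robot_id

# Example: robot 2 faces the user most directly (highest score), so it is selected.
robots = [Robot(1, 0.31), Robot(2, 0.87), Robot(3, 0.44)]
print(elect_leader(robots))  # -> 2
```

Because the rule is a pure function of the broadcast scores, no coordinator is needed: each robot can decide independently whether it is the leader.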
Presentation slides
A set of PowerPoint slides, including videos, describing this work:
[ppt slides]
Publications
Alex Couture-Beil, Richard T. Vaughan, and Greg Mori. Selecting and commanding individual robots in a vision-based multi-robot system. Seventh Canadian Conference on Computer and Robot Vision (CRV), 2010.
[pdf]
Alex Couture-Beil, Richard T. Vaughan, and Greg Mori. Selecting and commanding individual robots in a vision-based multi-robot system. HRI '10: Proceedings of the 5th ACM/IEEE International Conference on Human-Robot Interaction (video session), March 2010.
[youtube, mov (27MB), pdf]
Mark Bayazit, Alex Couture-Beil, and Greg Mori. Real-time motion-based gesture recognition using the GPU. In Proceedings of the IAPR Conference on Machine Vision Applications (MVA'09), May 2009.
[More info, pdf]
Vision and Media Lab, Simon Fraser University
TASC 8000 and 8002,
8888 University Drive, Burnaby, BC, V5A 1S6,
Canada