Gesture Recognition

This video is from a live demonstration on June 11, 2018. It shows a person directing an avatar to build block structures (in this case, a staircase). The user (Rahul) can gesture and speak; the avatar (Diana) can gesture, speak, and move blocks. The user's goal is to build the block structure; the avatar's goal is to teach the user how to use the system more effectively. These two goals are related, but not the same. At the beginning of the run, Diana doesn't know what the user knows, so she spends a lot of time asking him for confirmation and showing him alternate ways to accomplish goals. For example, she shows him that "yes" can be spoken or signaled with a "thumbs up" gesture. As the trial progresses, the user's knowledge (and Diana's knowledge of the user's knowledge) improves, and blocks are moved faster. He does have a little trouble with the small purple block at the end, however.

The demonstration was joint work with James Pustejovsky and his lab at Brandeis (who created Diana and her reasoning system), and Jaime Ruiz and his lab at the University of Florida (who elicited the gesture set from naive users). CSU built the real-time gesture recognition system.

More Communication through Gestures, Expressions and Shared Perception

Action Recognition

Play the video below to see an example of earlier work in action recognition. This was joint work in the summer of 2012 with iRobot and U.C. Berkeley as part of the DARPA Mind's Eye project. (Many thanks to Dr. Christopher Geyer and the folks at iRobot for producing this video.)

More Mind's Eye