
From AIRWiki

Revision as of 11:09, 7 July 2011

Human-Robot Gestual Interaction
Image of the project Human-Robot Gestual Interaction
Short Description: Gestural interaction with people at an exhibition
Coordinator: AndreaBonarini (andrea.bonarini@polimi.it)
Tutor: AndreaBonarini (andrea.bonarini@polimi.it)
Collaborator:
Students: DeborahZamponi (deborahzamponi@gmail.com), CristianMandelli (cristianmandelli@gmail.com)
Research Area: Robotics
Research Topic: Robot development
Start: 2011/07/01
End: 2012/03/31
Status: Active

The aim of this project is to develop effective interaction between the robot E-2? and people at an exhibition, in order to convince them to let the robot escort them to its home booth. Since, given the application context and the current technology, the robot can barely speak and cannot understand speech at all, most of the interaction relies on the robot's gestures and movements.

E-2? carries a Kinect sensor on its head, which is the main source of information for this project.
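
As an illustration of how the Kinect's skeleton stream could be used to pick interaction partners, the sketch below estimates whether a tracked person is close enough and roughly facing the robot from a few joint positions. The joint names, the coordinate convention, and the thresholds are assumptions for the example, not part of the project.

```python
# Hypothetical sketch: deciding whether a tracked person is a candidate
# for interaction, from Kinect-style skeleton joints. Joint names, the
# coordinate convention (z = distance from the sensor, in metres) and
# the thresholds below are illustrative assumptions.

def is_interaction_candidate(joints, max_distance=2.5, max_shoulder_skew=0.15):
    """joints maps joint names to (x, y, z) tuples in the sensor frame."""
    head = joints.get("head")
    left = joints.get("shoulder_left")
    right = joints.get("shoulder_right")
    if head is None or left is None or right is None:
        return False  # skeleton not fully tracked
    if head[2] > max_distance:
        return False  # too far away to approach
    # If the person faces the sensor, both shoulders sit at a similar depth.
    return abs(left[2] - right[2]) <= max_shoulder_skew

# A close, frontal person passes the check.
facing = {"head": (0.0, 1.6, 2.0),
          "shoulder_left": (-0.2, 1.4, 2.05),
          "shoulder_right": (0.2, 1.4, 2.0)}
print(is_interaction_candidate(facing))
```

A real node would read the joints from the skeleton-tracking stream and run such a test on every tracked user.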

The implementation will include libraries from the ROS community, together with reasoning modules, both to make the collection of information more robust than with the vision algorithms alone and to select the best actions to take in the different situations.
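
One simple way a reasoning module can make noisy per-frame vision output more robust, sketched here as an assumption rather than the project's actual design, is to require a detection to persist over several frames before acting on it:

```python
from collections import deque

class PersistenceFilter:
    """Report a person as present only if seen in most recent frames.

    A minimal sketch of smoothing noisy per-frame detections; the window
    size and threshold are illustrative values, not taken from the project.
    """

    def __init__(self, window=5, min_hits=4):
        self.history = deque(maxlen=window)
        self.min_hits = min_hits

    def update(self, detected_this_frame):
        self.history.append(bool(detected_this_frame))
        return sum(self.history) >= self.min_hits

# A single missed frame does not make the robot abandon the person.
f = PersistenceFilter()
frames = [True, False, True, True, True, True]
states = [f.update(d) for d in frames]
print(states)
```

The same idea extends to any per-frame cue the vision pipeline produces, trading a short reaction delay for stability.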

Gestural interaction will play an important role in all three phases: approaching, convincing, and escorting.
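
The three phases could be organised as a small state machine; the sketch below, with invented state and event names, only illustrates that structure:

```python
# Hypothetical state machine for the approaching / convincing / escorting
# phases. All state and event names are invented for illustration.

TRANSITIONS = {
    ("approach", "person_engaged"): "convince",
    ("convince", "person_agrees"): "escort",
    ("convince", "person_leaves"): "approach",
    ("escort", "booth_reached"): "done",
    ("escort", "person_lost"): "approach",
}

def step(state, event):
    """Advance the interaction phase; unknown events leave it unchanged."""
    return TRANSITIONS.get((state, event), state)

# A successful interaction walks through all three phases.
state = "approach"
for event in ["person_engaged", "person_agrees", "booth_reached"]:
    state = step(state, event)
print(state)  # -> "done"
```

Each state would be paired with the gestures appropriate to that phase, e.g. beckoning while convincing and glancing back while escorting.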