Multimodal Control for Human-Robot Cooperation
Abstract
For intuitive human-robot collaboration, the robot must quickly adapt to human behavior. To this end, we propose a multimodal sensor-based control framework that enables a robot to recognize human intention and adapt its control strategy accordingly. Our approach is marker-less, relies on a Kinect and an on-board camera, and is based on a unified task formalism. Moreover, we validate it in a mock-up industrial scenario, in which human and robot must collaborate to insert screws in a flank.
Domains
Robotics [cs.RO]
Main file
2013_iros_cherubini-Multimodal_Control_for_Human_Robot_Cooperation.pdf (994.29 KB)
Origin: Files produced by the author(s)