Multimodal Control for Human-Robot Cooperation
Abstract
For intuitive human-robot collaboration, the robot must quickly adapt to human behavior. To this end, we propose a multimodal sensor-based control framework that enables a robot to recognize human intention and adapt its control strategy accordingly. Our approach is marker-less, relies on a Kinect and an on-board camera, and is based on a unified task formalism. We validate it in a mock-up industrial scenario in which a human and a robot collaborate to insert screws in a flank.
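As a rough illustration of how intention recognition can drive a sensor-based control strategy, the sketch below applies the classic task-space law v = -λ L⁺ e (a standard visual-servoing formulation, not necessarily the paper's exact formalism) and switches the gain according to a recognized intention. The intention labels, gain values, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sensor_based_control(L, e, lam=1.0):
    """Classic task-space law v = -lam * pinv(L) @ e,
    driving the sensor-measured task error e toward zero."""
    return -lam * np.linalg.pinv(L) @ e

# Hypothetical intention-to-gain map: the robot slows down when the
# human approaches and speeds up when the human withdraws.
GAINS = {"approach": 0.5, "withdraw": 1.5}

def control_step(L, e, intention):
    # Select the control gain from the recognized human intention.
    return sensor_based_control(L, e, GAINS[intention])

# Toy example: identity interaction matrix and a small task error.
L = np.eye(3)
e = np.array([0.2, -0.1, 0.05])
v = control_step(L, e, "approach")  # velocity command scaled by 0.5
```

In a real system, L and e would come from the Kinect and on-board camera measurements, and the intention classifier would replace the hard-coded label.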
