A unified multimodal control framework for human–robot interaction

Andrea Cherubini, Robin Passama, Philippe Fraisse, André Crosnier
IDH – Interactive Digital Humans
LIRMM – Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier
Abstract: In human–robot interaction, the robot controller must reactively adapt to sudden changes in the environment caused by unpredictable human behaviour. This often requires operating in different modes and managing sudden signal changes from heterogeneous sensor data. In this paper, we present a multimodal sensor-based controller that enables a robot to adapt to changes in the sensor signals (here, changes in the human collaborator's behaviour). Our controller is based on a unified task formalism and, in contrast with classical hybrid vision–force–position control, it enables smooth transitions and weighted combinations of the sensor tasks. The approach is validated in a mock-up industrial scenario, where pose, vision (from both a traditional camera and a Kinect), and force tasks must be realized either exclusively or simultaneously for human–robot collaboration.
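
The abstract does not give the control law itself, but the idea of smoothly weighting heterogeneous sensor tasks can be illustrated with a small sketch. The sketch below assumes a velocity-level kinematic controller in which each sensor task (pose, vision, force) is defined by a task Jacobian, a task error, and a gain, and a smooth activation weight in [0, 1] fades tasks in and out; the function names, the cosine ramp, and the damped least-squares blending are illustrative assumptions, not the formalism used in the paper.

```python
import numpy as np

def smooth_weight(s, s_on, s_off):
    """Illustrative smooth activation in [0, 1] for fading a task in or out.

    s:     scalar signal driving the transition (e.g. human-robot distance)
    s_on:  below this value the task is fully active (weight 1)
    s_off: above this value the task is fully inactive (weight 0)
    """
    if s <= s_on:
        return 1.0
    if s >= s_off:
        return 0.0
    # cosine ramp between the two thresholds gives a continuous transition
    return 0.5 * (1.0 + np.cos(np.pi * (s - s_on) / (s_off - s_on)))

def blended_joint_velocity(tasks, damping=1e-3):
    """Weighted combination of sensor tasks at the velocity level.

    tasks: list of (weight, J, error, gain) tuples, where
      weight: smooth activation in [0, 1],
      J:      task Jacobian (m x n),
      error:  task error in that task space (m,),
      gain:   proportional gain on the error.
    Returns a joint velocity command (n,) obtained by damped least squares
    on the weighted, stacked tasks.
    """
    J_stack = np.vstack([w * J for (w, J, e, k) in tasks])
    e_stack = np.hstack([w * k * e for (w, J, e, k) in tasks])
    n = J_stack.shape[1]
    # damped least squares: (J^T J + lambda I)^-1 J^T e
    return np.linalg.solve(J_stack.T @ J_stack + damping * np.eye(n),
                           J_stack.T @ e_stack)
```

With such weights driven, for instance, by the distance between the human and the robot, the vision task can be faded out while the force task is faded in, so the joint velocity command varies continuously instead of switching abruptly between modes.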
Document type: Journal articles

https://hal-lirmm.ccsd.cnrs.fr/lirmm-01222976

Citation

Andrea Cherubini, Robin Passama, Philippe Fraisse, André Crosnier. A unified multimodal control framework for human–robot interaction. Robotics and Autonomous Systems, Elsevier, 2015, 70, pp. 106-115. ⟨10.1016/j.robot.2015.03.002⟩. ⟨lirmm-01222976⟩
