Using vision and haptic sensing for human-humanoid haptic joint actions
Conference paper, 2013

Abstract

Human-humanoid haptic joint actions are collaborative tasks that require sustained physical interaction between both partners. Most research in this field has therefore concentrated on using only the robot's haptic sensing to infer the human partner's intentions, and interaction controllers are designed from this information. In this paper, the addition of visual sensing is investigated, and a suitable framework is developed to combine both modalities. The framework is then tested on an example of a haptic joint action, namely collaboratively carrying a table, with a visual task implemented on top. In one case, the aim is to keep the table level, taking gravity into account; in the other, a freely moving ball is balanced to keep it from falling off the table. The experimental results show that the framework properly exploits both sources of information to accomplish the task.

Dates and versions

lirmm-00908439, version 1 (22-11-2013)


Cite

Don Joven Agravante, Andrea Cherubini, Abderrahmane Kheddar. Using vision and haptic sensing for human-humanoid haptic joint actions. RAM: Robotics, Automation and Mechatronics, Nov 2013, Manila, Philippines. pp. 13-18, ⟨10.1109/RAM.2013.6758552⟩. ⟨lirmm-00908439⟩