From Perception to Semantics: An Environment Representation Model Based on Human-Robot Interactions - LIRMM - Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier
Conference Paper, 2018


Abstract

A robot, in order to be autonomous, needs some kind of representation of its surrounding environment. From a general point of view, basic robotic tasks (such as localization, mapping, object handling, etc.) can be carried out with only very simple geometric primitives, usually extracted from raw sensor data. But whenever an interaction with a human being is involved, robots must have an understanding of concepts expressed in human natural language. In most approaches, this is done through a prebuilt ontology. In this paper, we try to bridge the gap between data-driven methods and semantic-based approaches by introducing a 3-layer environment model based on “instances”: sensor-data-based observations of concepts stored in a knowledge graph. We focus on our original object-oriented ontology construction and illustrate the flow of our model in a simple showcase.
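The abstract's central idea, an "instance" that ties a sensor-level observation to a concept node in a knowledge graph, can be sketched roughly as follows. This is a minimal illustration under our own assumptions; all class and attribute names (`Concept`, `Instance`, `is_a`, etc.) are hypothetical and not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A node of the knowledge graph (e.g. 'mug'). Illustrative only."""
    name: str
    parents: list = field(default_factory=list)  # is-a links to broader concepts

    def is_a(self, other):
        # True if `other` is reachable by following is-a links upward.
        if self is other:
            return True
        return any(p.is_a(other) for p in self.parents)

@dataclass
class Instance:
    """A sensor-level observation attached to a concept."""
    concept: Concept
    position: tuple      # geometric primitive extracted from raw sensor data
    confidence: float    # perception-layer confidence in the detection

# Minimal knowledge graph: mug is-a container is-a object
obj = Concept("object")
container = Concept("container", parents=[obj])
mug = Concept("mug", parents=[container])

# An observation of a mug at some position in the robot's frame
obs = Instance(concept=mug, position=(1.2, 0.4, 0.8), confidence=0.9)

# A natural-language request such as "bring me a container" can then be
# resolved against observed instances through the concept hierarchy:
print(obs.concept.is_a(container))  # True: a mug counts as a container
```

The point of such a layering is that geometric data (positions, primitives) stays at the perception level, while queries expressed in human terms traverse only the semantic graph.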
Main file
publication_roman_main_correction.pdf (3.61 MB)
Origin: Files produced by the author(s)

Dates and versions

lirmm-01926183 , version 1 (05-10-2020)

Identifiers

Cite

Yohan Breux, Sébastien Druon, René Zapata. From Perception to Semantics: An Environment Representation Model Based on Human-Robot Interactions. RO-MAN: Robot and Human Interactive Communication, Aug 2018, Nanjing, China. pp.672-677, ⟨10.1109/ROMAN.2018.8525527⟩. ⟨lirmm-01926183⟩
143 views
110 downloads

