From Perception to Semantics: An Environment Representation Model Based on Human-Robot Interactions - LIRMM - Laboratoire d’Informatique, de Robotique et de Microélectronique de Montpellier
Conference Papers, Year: 2018


Abstract

To be autonomous, a robot needs some kind of representation of its surrounding environment. From a general point of view, basic robotic tasks (such as localization, mapping, object handling, etc.) can be carried out with only very simple geometric primitives, usually extracted from raw sensor data. But whenever an interaction with a human being is involved, robots must have an understanding of concepts expressed in human natural language. In most approaches, this is done through a prebuilt ontology. In this paper, we try to bridge the gap between data-driven methods and semantics-based approaches by introducing a three-layer environment model based on “instances”: sensor-data-based observations of concepts stored in a knowledge graph. We focus on our original object-oriented ontology construction and illustrate the flow of our model in a simple showcase.
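The abstract's central idea (an "instance" linking a raw sensor observation to a concept node in a knowledge graph) can be sketched as a data structure. This is a hypothetical illustration, not the authors' implementation; all class and field names are invented for this sketch.

```python
# Hypothetical sketch (not the paper's code): a minimal three-layer structure
# linking raw sensor observations ("instances") to concepts in a knowledge graph.
from dataclasses import dataclass, field
from typing import List, Dict, Any


@dataclass
class Concept:
    """Semantic layer: a node in the knowledge graph (e.g. 'mug')."""
    name: str
    parents: List["Concept"] = field(default_factory=list)  # is-a relations


@dataclass
class Instance:
    """Intermediate layer: one observed occurrence of a concept,
    carrying the perception-layer data it was extracted from."""
    concept: Concept
    observation: Dict[str, Any]  # raw geometric/sensor features


# A tiny example: an observed mug, grounded in point-cloud features.
container = Concept("container")
mug = Concept("mug", parents=[container])
obs = Instance(mug, {"centroid": (0.4, 0.1, 0.8), "point_count": 1520})
```

In this reading, natural-language queries resolve against the `Concept` layer, while geometric tasks operate on the `observation` data, with `Instance` bridging the two.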
Main file: publication_roman_main_correction.pdf (3.61 MB)
Origin: Files produced by the author(s)

Dates and versions

lirmm-01926183, version 1 (05-10-2020)

Identifiers

Cite

Yohan Breux, Sébastien Druon, René Zapata. From Perception to Semantics: An Environment Representation Model Based on Human-Robot Interactions. RO-MAN: Robot and Human Interactive Communication, Aug 2018, Nanjing, China. pp.672-677, ⟨10.1109/ROMAN.2018.8525527⟩. ⟨lirmm-01926183⟩