From Perception to Semantics: An Environment Representation Model Based on Human-Robot Interactions
Abstract
To be autonomous, a robot needs some representation of its surrounding environment. From a general point of view, basic robotic tasks (such as localization, mapping, or object handling) can be carried out with only very simple geometric primitives, usually extracted from raw sensor data. But whenever an interaction with a human being is involved, a robot must understand concepts expressed in natural language. In most approaches, this is done through a prebuilt ontology. In this paper, we try to bridge the gap between data-driven methods and semantics-based approaches by introducing a 3-layer environment model based on “instances”: sensor-data-based observations of concepts stored in a knowledge graph. We focus on our original object-oriented ontology construction and illustrate the flow of our model in a simple showcase.
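To make the layered idea concrete, here is a minimal sketch of how an “instance” might tie a geometric observation (perception layer) to a concept node in a knowledge graph (semantic layer). The names `Concept`, `Observation`, `Instance`, and `is_a` are illustrative assumptions, not the paper’s actual implementation:

```python
from dataclasses import dataclass, field

# Semantic layer (assumed structure): a tiny knowledge graph of concepts,
# e.g. built from an object-oriented ontology as described in the paper.
@dataclass
class Concept:
    name: str                                            # e.g. "Cup"
    parents: list["Concept"] = field(default_factory=list)  # is-a links

# Perception layer (assumed structure): a simple geometric primitive
# extracted from raw sensor data.
@dataclass
class Observation:
    centroid: tuple[float, float, float]  # position in the robot frame (m)
    bbox: tuple[float, float, float]      # axis-aligned extent (m)

# Intermediate layer: an "instance" binds one observation to one concept.
@dataclass
class Instance:
    observation: Observation
    concept: Concept
    confidence: float  # how certain the robot is about the labelling

def is_a(instance: Instance, name: str) -> bool:
    """Answer a natural-language-style query ("is this a Container?")
    by walking the concept hierarchy upwards."""
    frontier = [instance.concept]
    while frontier:
        concept = frontier.pop()
        if concept.name == name:
            return True
        frontier.extend(concept.parents)
    return False

# Usage: a detected point cluster becomes an Instance of "Cup", and a
# semantic query then resolves through the ontology's is-a hierarchy.
container = Concept("Container")
cup = Concept("Cup", parents=[container])
obs = Observation(centroid=(0.6, 0.1, 0.8), bbox=(0.08, 0.08, 0.10))
inst = Instance(obs, cup, confidence=0.9)
print(is_a(inst, "Container"))  # True
```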
Domains
Robotics [cs.RO]
Origin
Files produced by the author(s)