Introducing Geometric Constraint Expressions Into Robot Constrained Motion Specification and Control
Abstract
The problem of robotic task definition and execution was pioneered by Mason [1], who defined set-point constraints in which the position, velocity, and/or forces are expressed in one particular task frame for a 6-DOF robot. Later extensions generalized this approach to constraints in i) multiple frames, ii) redundant robots, iii) other sensor spaces such as cameras, and iv) trajectory tracking. Our work extends task definition to i) expressions of constraints, with a focus on expressions between geometric entities (distances and angles), in place of explicit set-point constraints, ii) a systematic composition of constraints, iii) runtime monitoring of all constraints (which allows runtime sequencing of constraint sets via, for example, a Finite State Machine), and iv) formal task descriptions that can be used by symbolic reasoners to plan and analyse tasks. This means that tasks are seen as ordered groups of constraints to be achieved by the robot's motion controller, possibly with a different set of geometric expressions to measure outputs that are not controlled but are relevant to assess the task's evolution. Those monitored expressions may result in events that trigger switching to another ordered group of constraints to execute and monitor. For these task specifications, formal language definitions are introduced in the JSON-schema modeling language.
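As a purely illustrative sketch (the field names below are hypothetical and not taken from the paper), a task specification of this kind could group constraint expressions and monitored expressions per state, with monitor events triggering transitions to the next ordered group of constraints:

  {
    "task": "example_alignment_task",
    "states": [
      {
        "name": "approach",
        "constraints": [
          { "expression": "distance(tool_tip, target_point)", "type": "inequality", "upper": 0.05 },
          { "expression": "angle(tool_axis, target_axis)", "type": "equality", "value": 0.0 }
        ],
        "monitors": [
          { "expression": "distance(tool_tip, target_plane)", "event": "near_contact", "threshold": 0.002 }
        ],
        "transitions": [ { "on": "near_contact", "to": "insert" } ]
      }
    ]
  }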