Sample-Efficient Learning of Soft Task Priorities Through Bayesian Optimization
Abstract
In recent optimization-based task-space controllers, the prioritization among tasks within a given hierarchy level can be either strict or soft. Soft prioritization is realized by weighting the tasks; however, the weights are not tuned automatically but are set in an ad-hoc fashion. This empirical approach can be time-consuming and may even lead to infeasible results. To approximate the evolution of each task weight over a fixed episode, we assign a radial basis function network (RBFN) to each task. We then use Bayesian optimization to tune the RBFNs of the different tasks based on performance indices extracted from that episode. We benchmark the proposed approach in a dual-arm manipulation simulation involving multiple, potentially conflicting tasks. We first show that it outperforms a hand-tuned controller in terms of tracking error. Compared with tuning the weights using another stochastic optimization technique, CMA-ES, the proposed approach requires far fewer sample evaluations.
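The core idea above, per-task soft-weight profiles represented by small RBFNs whose parameters are tuned by Bayesian optimization from episode-level performance indices, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the episode cost is a synthetic stand-in for the simulated tracking error, the fixed RBF centers and width, the amplitude bounds, and the Gaussian-process surrogate with a lower-confidence-bound acquisition are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- RBFN representing one task's soft weight profile over an episode ---
T = np.linspace(0.0, 1.0, 50)        # normalized episode time (assumed)
CENTERS = np.linspace(0.0, 1.0, 5)   # fixed RBF centers (assumed)
WIDTH = 0.15                         # fixed RBF width (assumed)

def rbfn_weights(amplitudes):
    """Task weight w(t) over the episode, parameterized by RBF amplitudes."""
    phi = np.exp(-(T[:, None] - CENTERS[None, :]) ** 2 / (2 * WIDTH ** 2))
    return phi @ amplitudes

# --- Stand-in performance index (the paper extracts it from simulation) ---
IDEAL = np.sin(np.pi * T)            # illustrative "good" weight profile

def episode_cost(amplitudes):
    """Scalar cost of one episode run with the given weight profile."""
    return float(np.mean((rbfn_weights(amplitudes) - IDEAL) ** 2))

# --- Minimal Bayesian optimization: GP surrogate + lower-confidence bound ---
def rbf_kernel(A, B, ell=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell ** 2))

def bayes_opt(n_init=5, n_iter=25, n_cand=200, kappa=2.0):
    dim = len(CENTERS)
    X = rng.uniform(0.0, 1.0, (n_init, dim))       # initial random amplitudes
    y = np.array([episode_cost(x) for x in X])
    for _ in range(n_iter):
        K_inv = np.linalg.inv(rbf_kernel(X, X) + 1e-6 * np.eye(len(X)))
        cand = rng.uniform(0.0, 1.0, (n_cand, dim))
        Ks = rbf_kernel(X, cand)
        mu = Ks.T @ K_inv @ y                      # GP posterior mean
        var = np.maximum(1.0 - np.einsum('ij,ik,kj->j', Ks, K_inv, Ks), 1e-12)
        lcb = mu - kappa * np.sqrt(var)            # acquisition (minimization)
        x_next = cand[np.argmin(lcb)]              # next episode to evaluate
        X = np.vstack([X, x_next])
        y = np.append(y, episode_cost(x_next))
    return X, y

X, y = bayes_opt()
best_amplitudes = X[np.argmin(y)]
```

Because each evaluation of `episode_cost` corresponds to one full simulated episode, the surrogate-guided search is what makes the approach sample-efficient relative to population-based methods such as CMA-ES.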