Algebraic results and bottom-up algorithm for policies generalization in reinforcement learning - LIRMM - Laboratoire d’Informatique, de Robotique et de Microélectronique de Montpellier
Journal article, Nonlinear Analysis: Hybrid Systems, 2008

Algebraic results and bottom-up algorithm for policies generalization in reinforcement learning

Marc Ricordeau (Author, PersonId: 938479)
Michel Liquière (Author, PersonId: 938480)

Abstract

The generalization of policies in reinforcement learning is a central issue, both for the theoretical model and for practical applicability. Generalizing from a set of examples, or searching for regularities, is a problem that has already been studied intensively in machine learning; existing fields such as Inductive Logic Programming have therefore already been linked with reinforcement learning. Our work uses techniques in which generalizations are constrained by a language bias in order to group similar states; such generalizations rest principally on the properties of concept lattices. To guide the possible groupings of similar states of the environment, we propose a general algebraic framework that treats the generalization of policies through a partition of the set of states and uses a language bias as a priori knowledge. As a practical application of this theoretical approach, we propose and experiment with a bottom-up algorithm.
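The core idea of the abstract, learning a policy over a partition of the state space rather than over individual states, can be illustrated with a small sketch. Everything below is an assumption for illustration only: the toy environment, the `abstract` function, and the plain tabular Q-learning loop are not the paper's construction, which relies on concept lattices and a language bias rather than this hand-picked abstraction.

```python
import random

# Toy states are (x, color) pairs on a line 0..4; the color is irrelevant
# to the optimal policy, so a "language bias" that describes states by x
# alone induces a partition grouping states that differ only in color.
def abstract(state):
    x, _color = state
    return x  # the block of the partition containing this state

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning over the quotient state space: one Q-row per
    block of the partition instead of one per concrete state."""
    rng = random.Random(seed)
    actions = (-1, 1)  # move left / right along x; x == 4 is terminal
    q = {}
    for _ in range(episodes):
        state = (rng.randrange(4), rng.choice(["red", "blue"]))
        for _step in range(20):
            block = abstract(state)
            q.setdefault(block, {a: 0.0 for a in actions})
            # epsilon-greedy action choice on the block's Q-values
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(q[block], key=q[block].get)
            x, color = state
            nx = min(max(x + a, 0), 4)
            reward = 1.0 if nx == 4 else 0.0
            done = nx == 4
            nblock = abstract((nx, color))
            nq = 0.0 if done or nblock not in q else max(q[nblock].values())
            q[block][a] += alpha * (reward + gamma * nq - q[block][a])
            state = (nx, color)
            if done:
                break
    return q
```

Because the learned greedy policy is defined on partition blocks, it generalizes automatically to every concrete state inside a block, e.g. to colors never seen during training; choosing a good partition is exactly what the paper's algebraic framework and language bias are meant to guide.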

Dates and versions

lirmm-00378918, version 1 (27-04-2009)

Cite

Marc Ricordeau, Michel Liquière. Algebraic results and bottom-up algorithm for policies generalization in reinforcement learning. Nonlinear Analysis: Hybrid Systems, 2008, 2 (2), pp.684-694. ⟨10.1016/j.nahs.2006.12.001⟩. ⟨lirmm-00378918⟩