Algebraic results and bottom-up algorithm for policies generalization in reinforcement learning - LIRMM - Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier
Journal article in Nonlinear Analysis: Hybrid Systems, Year: 2008

Algebraic results and bottom-up algorithm for policies generalization in reinforcement learning

Marc Ricordeau
  • Role: Author
  • PersonId : 938479
Michel Liquière
  • Role: Author
  • PersonId : 938480

Abstract

The generalization of policies in reinforcement learning is a central issue, both for the theoretical model and for practical applicability. Generalizing from a set of examples, or searching for regularities, is a problem that has already been intensively studied in machine learning; accordingly, existing fields such as Inductive Logic Programming have been linked with reinforcement learning. Our work uses techniques in which generalizations are constrained by a language bias in order to group similar states. Such generalizations are principally based on the properties of concept lattices. To guide the possible groupings of similar environment states, we propose a general algebraic framework that considers the generalization of policies through a partition of the set of states, using a language bias as a priori knowledge. As a practical application of our theoretical approach, we propose and experiment with a bottom-up algorithm.
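To make the idea concrete, here is a minimal sketch (not the authors' algorithm; all state names, attributes, and actions are hypothetical) of bottom-up policy generalization: states are described in a simple attribute-set language, the language bias restricts generalizations to attribute-set intersections (the most specific common generalization, as in concept-lattice approaches), and blocks of states are greedily merged when they share the same greedy action and a non-empty common generalization, yielding a partition of the set of states.

```python
from itertools import combinations

# Hypothetical states, each described by a set of attributes.
# The language bias: a generalization of a group of states is the
# intersection of their attribute sets (their common description).
state_descriptions = {
    "s1": {"wall_north", "goal_east"},
    "s2": {"wall_north", "goal_east", "wall_south"},
    "s3": {"wall_west"},
}

# Hypothetical greedy policy already learned on individual states.
policy = {"s1": "move_east", "s2": "move_east", "s3": "move_north"}

def generalize(states):
    """Most specific common generalization: attribute-set intersection."""
    return set.intersection(*(state_descriptions[s] for s in states))

def bottom_up_partition(states):
    """Greedily merge blocks of states that share the same action
    and admit a non-empty common generalization."""
    blocks = [{s} for s in states]
    merged = True
    while merged:
        merged = False
        for i, j in combinations(range(len(blocks)), 2):
            candidate = blocks[i] | blocks[j]
            actions = {policy[s] for s in candidate}
            if len(actions) == 1 and generalize(candidate):
                blocks[i] = candidate
                del blocks[j]
                merged = True
                break
    return blocks

partition = bottom_up_partition(list(state_descriptions))
# s1 and s2 merge: same action, common description {wall_north, goal_east};
# s3 stays alone, so the partition has two blocks.
print(partition)
```

The merge criterion here is deliberately simplistic; the paper's framework instead characterizes which partitions preserve the policy's value algebraically, with the concept lattice structuring the space of admissible groupings.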

Dates and versions

lirmm-00378918 , version 1 (27-04-2009)

Identifiers

Cite

Marc Ricordeau, Michel Liquière. Algebraic results and bottom-up algorithm for policies generalization in reinforcement learning. Nonlinear Analysis: Hybrid Systems, 2008, 2 (2), pp.684-694. ⟨10.1016/j.nahs.2006.12.001⟩. ⟨lirmm-00378918⟩