Conference paper, Year: 2002

How Situated Agents can Learn to Cooperate by Monitoring their Neighbors' Satisfaction

Abstract

This paper addresses the problem of cooperation between learning situated agents. We present an agent architecture based on a satisfaction measure that ensures altruistic behaviors in the system. Initially, these cooperative behaviors are obtained by reacting to local signals that agents emit according to their satisfaction. We then introduce a reinforcement learning module into this architecture in order to improve individual and collective behaviors. The satisfaction model and the local signals are used to define a compact representation of the agents' interactions and to compute the rewards of the behaviors. Agents thus learn to select behaviors that are well adapted to their neighbors' activities. Finally, simulations of heterogeneous robots working on a foraging problem demonstrate the benefits of the approach.
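
To make the idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of the loop described in the abstract: each agent broadcasts a local satisfaction signal, and a reinforcement learning module uses the neighbors' signals both as a compact interaction state and as the basis of the reward for the selected behavior. All names and parameters below (Agent, the behavior set, alpha, gamma, epsilon) are illustrative assumptions, not elements taken from the paper.

    # Hypothetical sketch of satisfaction-signal-driven Q-learning for a situated agent.
    import random
    from collections import defaultdict

    BEHAVIORS = ["forage", "help_neighbor", "rest"]  # assumed behavior set

    class Agent:
        def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
            self.q = defaultdict(float)   # Q-values indexed by (state, behavior)
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            self.satisfaction = 0.0       # local satisfaction signal, assumed in [-1, 1]

        def state(self, neighbor_signals):
            # Compact representation of the neighbors' activity:
            # the sign of their mean satisfaction signal.
            if not neighbor_signals:
                return 0
            mean = sum(neighbor_signals) / len(neighbor_signals)
            return 1 if mean > 0 else -1 if mean < 0 else 0

        def select(self, state):
            # Epsilon-greedy choice over behaviors.
            if random.random() < self.epsilon:
                return random.choice(BEHAVIORS)
            return max(BEHAVIORS, key=lambda b: self.q[(state, b)])

        def update(self, state, behavior, reward, next_state):
            # One-step Q-learning update; the reward is assumed to be derived
            # from the change in the neighbors' satisfaction signals.
            best_next = max(self.q[(next_state, b)] for b in BEHAVIORS)
            key = (state, behavior)
            self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])

In this sketch, an altruistic behavior such as "help_neighbor" becomes preferred whenever it tends to raise the neighbors' satisfaction signals, which is the intuition behind the cooperative behaviors described above.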
Main file
ecai2002_E0539.pdf (69.37 KB)
Origin: Files produced by the author(s)

Dates and versions

lirmm-00268495, version 1 (21-03-2023)

Identifiers

  • HAL Id: lirmm-00268495, version 1

Cite

Jérôme Chapelle, Olivier Simonin, Jacques Ferber. How Situated Agents can Learn to Cooperate by Monitoring their Neighbors' Satisfaction. ECAI 2002 - 15th European Conference on Artificial Intelligence, Jul 2002, Lyon, France. pp.68-78. ⟨lirmm-00268495⟩
129 views
19 downloads
