How Situated Agents can Learn to Cooperate by Monitoring their Neighbors' Satisfaction
Conference Papers Year : 2002


Abstract

This paper addresses the problem of cooperation between learning situated agents. We present an agent architecture based on a satisfaction measure that ensures altruistic behaviors in the system. Initially, these cooperative behaviors are obtained by reaction to local signals emitted by the agents according to their satisfaction. Then, we introduce into this architecture a reinforcement learning module in order to improve individual and collective behaviors. The satisfaction model and the local signals are used to define a compact representation of agents' interactions and to compute the rewards of the behaviors. Thus agents learn to select behaviors that are well adapted to their neighbors' activities. Finally, simulations of heterogeneous robots working on a foraging problem demonstrate the value of the approach.
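The abstract's mechanism can be illustrated with a minimal sketch: an agent maintains a satisfaction level, broadcasts it as a local signal, reacts altruistically when a neighbor signals dissatisfaction, and otherwise uses reinforcement learning with the change in satisfaction as the reward. All names (`SituatedAgent`, the `"help"` behavior, the update rule) are hypothetical illustrations, not the paper's actual implementation.

```python
import random

class SituatedAgent:
    """Hypothetical sketch of a satisfaction-driven cooperative agent."""

    def __init__(self, behaviors, alpha=0.1, epsilon=0.1):
        self.satisfaction = 0.0               # personal satisfaction measure
        self.q = {b: 0.0 for b in behaviors}  # learned value of each behavior
        self.alpha = alpha                    # learning rate
        self.epsilon = epsilon                # exploration rate

    def emit_signal(self):
        # Agents broadcast a local signal reflecting their satisfaction;
        # a negative value can be read by neighbors as a call for help.
        return self.satisfaction

    def select_behavior(self, neighbor_signals):
        # Reactive altruistic layer: if any neighbor signals
        # dissatisfaction, switch to a helping behavior.
        if neighbor_signals and min(neighbor_signals) < 0:
            return "help"
        # Otherwise, epsilon-greedy selection over learned values.
        if random.random() < self.epsilon:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, behavior, new_satisfaction):
        # The reward is the change in satisfaction produced by the behavior,
        # so behaviors that raise satisfaction are reinforced.
        reward = new_satisfaction - self.satisfaction
        self.q[behavior] += self.alpha * (reward - self.q[behavior])
        self.satisfaction = new_satisfaction
```

In use, an agent whose neighbor emits a negative signal drops its own task to help, which is how altruism emerges from purely local information:

```python
agent = SituatedAgent(["forage", "help"])
agent.select_behavior([-0.5])   # neighbor dissatisfied → returns "help"
agent.update("forage", 0.3)     # satisfaction rose → "forage" reinforced
```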
Origin: Files produced by the author(s)

Dates and versions

lirmm-00268495, version 1 (21-03-2023)

Identifiers

  • HAL Id: lirmm-00268495, version 1

Cite

Jérôme Chapelle, Olivier Simonin, Jacques Ferber. How Situated Agents can Learn to Cooperate by Monitoring their Neighbors' Satisfaction. ECAI 2002 - 15th European Conference on Artificial Intelligence, Jul 2002, Lyon, France. pp.68-78. ⟨lirmm-00268495⟩
