Aspiration learning in coordination games

Georgios C. Chasparis, Jeff S. Shamma, Ari Arapostathis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

10 Scopus citations

Abstract

We consider the problem of distributed convergence to efficient outcomes in coordination games through payoff-based learning dynamics, namely aspiration learning. The proposed learning scheme assumes that players reinforce well-performing actions by continuing to play them; otherwise, they randomize among alternative actions. Our first contribution is the characterization of the asymptotic behavior of the Markov chain induced by the iterated process through an equivalent finite-state Markov chain, which simplifies previously introduced analyses of aspiration learning. We then explicitly characterize the behavior of the proposed aspiration learning in a generalized version of so-called coordination games, an example of which is network formation games. In particular, we show that in coordination games the expected fraction of time that the efficient action profile is played can be made arbitrarily large.
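The scheme described in the abstract, in which satisfied players repeat their action and dissatisfied players experiment, can be illustrated with a minimal sketch. The payoff matrix, aspiration step size `eta`, and tremble probability `eps` below are illustrative assumptions, not the paper's exact parameterization:

```python
import random

# Symmetric 2x2 coordination game: both players receive PAYOFF[a0][a1].
# Profile (1, 1) is the efficient action profile.
PAYOFF = [[1.0, 0.0],
          [0.0, 2.0]]

def aspiration_learning(steps=20000, eps=0.05, eta=0.01, seed=0):
    """Return the fraction of time the efficient profile (1, 1) is played."""
    rng = random.Random(seed)
    actions = [0, 1]
    a = [rng.choice(actions), rng.choice(actions)]  # current action profile
    asp = [0.0, 0.0]                                # aspiration levels
    efficient = 0
    for _ in range(steps):
        u = PAYOFF[a[0]][a[1]]                      # realized payoff
        for i in range(2):
            if u < asp[i] or rng.random() < eps:
                # Dissatisfied players randomize; a small tremble eps
                # keeps the induced Markov chain irreducible.
                a[i] = rng.choice(actions)
            # Aspiration tracks a running average of experienced payoffs.
            asp[i] += eta * (u - asp[i])
        if a == [1, 1]:
            efficient += 1
    return efficient / steps
```

Under this dynamic, aspirations at the efficient profile climb toward its higher payoff, so deviations leave players dissatisfied and the efficient profile tends to be revisited, consistent with the long-run behavior the paper characterizes.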

Original language: English (US)
Title of host publication: 2010 49th IEEE Conference on Decision and Control, CDC 2010
Pages: 5756-5761
Number of pages: 6
State: Published - Dec 1 2010
Event: 2010 49th IEEE Conference on Decision and Control, CDC 2010 - Atlanta, GA, United States
Duration: Dec 15 2010 to Dec 17 2010

Publication series

Name: Proceedings of the IEEE Conference on Decision and Control
ISSN (Print): 0191-2216


ASJC Scopus subject areas

  • Control and Systems Engineering
  • Modeling and Simulation
  • Control and Optimization

