Abstract
The notion of stochastic stability is used in game theoretic learning to characterize which joint actions of players exhibit high probabilities of occurrence in the long run. This paper examines the impact of two types of errors on stochastic stability: i) small unstructured uncertainty in the game parameters and ii) slow time variations of the game parameters. In the first case, we derive a continuity result that bounds the effects of small uncertainties. In the second case, we show that game play tracks drifting stochastically stable states under sufficiently slow time variations. The analysis is in terms of Markov chains and hence is applicable to a variety of game theoretic learning rules. Nonetheless, the approach is illustrated on the widely studied rule of log-linear learning. Finally, the results are applied in both simulation and laboratory experiments to distributed area coverage with mobile robots.
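For readers unfamiliar with log-linear learning, the following is a minimal illustrative sketch (not taken from the paper): a uniformly chosen player revises its action with probabilities proportional to exp(utility/τ), and as the temperature τ shrinks the induced Markov chain concentrates on the stochastically stable joint actions. The two-player coordination game and the parameter values below are assumptions made purely for illustration.

```python
import numpy as np

def log_linear_step(actions, utilities, action_sets, tau, rng):
    """One revision step of log-linear learning: a uniformly chosen player
    updates its action with probability proportional to exp(utility / tau),
    holding all other players' actions fixed."""
    i = rng.integers(len(actions))
    trial = list(actions)
    payoffs = []
    for a in action_sets[i]:
        trial[i] = a
        payoffs.append(utilities[i](tuple(trial)))
    payoffs = np.array(payoffs)
    # Softmax with temperature tau (shifted for numerical stability).
    weights = np.exp((payoffs - payoffs.max()) / tau)
    probs = weights / weights.sum()
    new_actions = list(actions)
    new_actions[i] = action_sets[i][rng.choice(len(probs), p=probs)]
    return tuple(new_actions)

# Hypothetical 2-player coordination game: both players earn the common
# payoff below, so joint action (1, 1) maximizes the potential.
payoff_matrix = np.array([[1.0, 0.0],
                          [0.0, 2.0]])
utilities = [lambda a: payoff_matrix[a[0], a[1]]] * 2
action_sets = [[0, 1], [0, 1]]

rng = np.random.default_rng(0)
actions = (0, 0)
counts = {}
for t in range(20000):
    actions = log_linear_step(actions, utilities, action_sets, tau=0.1, rng=rng)
    counts[actions] = counts.get(actions, 0) + 1

print(counts)  # (1, 1) should dominate the empirical frequencies
```

At small τ the empirical frequencies concentrate on (1, 1), the stochastically stable state of this toy game; the paper's contribution concerns how such long-run behavior is perturbed by parameter uncertainty and slow drift, which this sketch does not model.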
Original language | English (US)
---|---
Title of host publication | 2013 American Control Conference, ACC 2013
Pages | 6145-6150
Number of pages | 6
State | Published - 2013
Externally published | Yes
Event | 2013 1st American Control Conference, ACC 2013 - Washington, DC, United States
Duration | Jun 17 2013 → Jun 19 2013
Other
Other | 2013 1st American Control Conference, ACC 2013
---|---
Country | United States
City | Washington, DC
Period | 06/17/13 → 06/19/13
ASJC Scopus subject areas
- Electrical and Electronic Engineering