Affiliations: Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milano, Italy
Corresponding author: Andrea Celli, Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci 32, Milano, Italy. E-mail: [email protected].
Abstract: In imperfect-information games, a common assumption is that players can perfectly model the strategic interaction and always retain control over their decision points. We relax this assumption by introducing the notion of limited-control repeated games. In this setting, two players repeatedly play a zero-sum extensive-form game and, at each iteration, a player may lose control over portions of her game tree. Intuitively, this can be seen as the chance player hijacking the interaction and taking control of certain decision points. What happens thereafter is no longer controllable, or even known, by the original players. We introduce pruned fictitious play, a variant of fictitious play that players can employ to reach an equilibrium in limited-control repeated games. We motivate this technique through the notion of limited best response, which is the key step of the learning rule we employ. We provide a general result on the probabilistic guarantees of a limited best response with respect to the original game model. Finally, we experimentally evaluate our technique and show that pruned fictitious play has good convergence properties.