UBC Theses and Dissertations

Convergence to Nash in the potential Linear Quadratic games and accelerated learning in games
Alian Porzani, Alireza

Abstract

Game theory and online optimization are closely related, and online optimization methods have been used to solve game-theoretic problems. Many accelerated algorithms have been proposed for offline optimization problems; however, to the best of our knowledge, little work has been done on accelerating zero-order online optimization. Our goal is to propose a Nesterov-accelerated online algorithm with the aim of converging to the Nash equilibrium at a fast rate in Cournot games and quadratic games. We also want this online algorithm to minimize the regret over a sequence of functions under both zero-order and first-order feedback. In potential Linear Quadratic (LQ) games, we also study the convergence of policy gradient algorithms, a class of conventional reinforcement learning methods. LQ games have applications in engineering. It has been shown that when agents use policy gradient algorithms, convergence to a Nash equilibrium is not guaranteed in general LQ games; however, in the LQR problem, which is essentially a one-player LQ game, policy gradient does converge to the optimum. In this work, we show that policy gradient algorithms converge to a Nash equilibrium in potential LQ games. Additionally, we characterize potential games in both open-loop and closed-loop settings. We demonstrate that the class of closed-loop potential games is generally trivial, and that restricting the players' actions yields non-trivial potential games as well.
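
As a rough, first-order illustration of the accelerated gradient-play idea mentioned above (a hypothetical sketch, not the thesis' algorithm or game model), the following Python snippet has each player of a small two-player quadratic game run Nesterov-accelerated gradient descent on its own cost. The coefficients a, b, c, the step size eta, and the momentum beta are all assumed for the example.

    import numpy as np

    # Illustrative two-player quadratic game (assumed coefficients):
    # player i chooses x_i in R and has cost
    #   J_i(x_i, x_{-i}) = 0.5 * a_i * x_i^2 + b_i * x_i * x_{-i} + c_i * x_i.
    a = np.array([2.0, 3.0])   # assumed curvature of each player's own cost
    b = np.array([0.5, 0.4])   # assumed coupling between the players
    c = np.array([1.0, -2.0])  # assumed linear cost terms

    def partial_grad(i, x):
        # Gradient of player i's cost with respect to its own action.
        return a[i] * x[i] + b[i] * x[1 - i] + c[i]

    def nash_equilibrium():
        # The Nash equilibrium solves the stacked first-order conditions.
        A = np.array([[a[0], b[0]], [b[1], a[1]]])
        return np.linalg.solve(A, -c)

    def accelerated_gradient_play(steps=200, eta=0.1, beta=0.9):
        # Each player runs Nesterov-accelerated gradient descent on its own
        # cost, using first-order feedback (its own partial gradient) only.
        x = np.zeros(2)
        v = np.zeros(2)  # per-player momentum term
        for _ in range(steps):
            look_ahead = x + beta * v  # Nesterov look-ahead point
            g = np.array([partial_grad(i, look_ahead) for i in range(2)])
            v = beta * v - eta * g
            x = x + v
        return x

    print("Nash equilibrium:         ", nash_equilibrium())
    print("Accelerated gradient play:", accelerated_gradient_play())

In a zero-order (bandit) version of this sketch, each partial gradient would be replaced by a one- or two-point gradient estimate built from observed cost values rather than computed analytically.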

Rights

Attribution-NonCommercial-NoDerivatives 4.0 International