Theoretical Foundations for Multi-Agent Learning

Noah Golowich

As learning algorithms become increasingly capable of acting autonomously, it is important to better understand the behavior that results from their interactions. For example, a pervasive challenge in multi-agent learning, spanning both theory and practice and dating back decades, is the failure of iterative algorithms such as gradient descent to converge. Accordingly, a longstanding central question with broad relevance is: how quickly can we compute solution concepts, i.e., equilibria, in multi-agent settings?
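
As a toy illustration of this non-convergence phenomenon (a sketch assuming the bilinear game f(x, y) = x * y; it is not an example taken from the talk), simultaneous gradient descent/ascent spirals away from the equilibrium at (0, 0) rather than converging to it:

# Toy sketch, assuming the bilinear game f(x, y) = x * y (not from the talk):
# player 1 runs gradient descent on f while player 2 runs gradient ascent,
# simultaneously. The unique equilibrium is (0, 0), but the iterates spiral
# outward and never converge.
x, y = 1.0, 1.0
eta = 0.1  # step size
for t in range(101):
    gx, gy = y, x                      # partial derivatives of f(x, y) = x * y
    x, y = x - eta * gx, y + eta * gy  # simultaneous descent / ascent step
    if t % 25 == 0:
        print(f"t={t:3d}  x={x:+.3f}  y={y:+.3f}  radius={(x*x + y*y) ** 0.5:.3f}")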

I will discuss results which address this question at several levels, starting from foundational settings involving normal-form games and building up to more complex problems, such as multi-agent reinforcement learning, which more closely model realistic situations. First, I will present a result establishing a near-optimal convergence rate for a simple online learning algorithm in normal-form games, resolving a decade-long line of work that had given only suboptimal bounds. I will then discuss a new algorithm that minimizes swap regret exponentially faster than previous approaches. This algorithm allows us to answer several open questions, including establishing the first PTAS for computing correlated equilibria in extensive-form games.
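
For readers unfamiliar with online learning dynamics in normal-form games, the following minimal sketch (illustrative only; it is not the near-optimal algorithm from the talk) runs the classic multiplicative-weights no-regret update in self-play on rock-paper-scissors. The players' time-averaged strategies approach the uniform equilibrium, which is the kind of convergence-to-equilibrium behavior the talk quantifies:

import numpy as np

# Payoff matrix for the row player in rock-paper-scissors (zero-sum game).
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

eta = 0.1            # learning rate (assumed constant, for simplicity)
T = 5000             # number of rounds of self-play
x = np.ones(3) / 3   # row player's mixed strategy
y = np.ones(3) / 3   # column player's mixed strategy
x_sum = np.zeros(3)
y_sum = np.zeros(3)

for _ in range(T):
    u_row = A @ y     # row player's expected payoff for each action
    u_col = -A.T @ x  # column player's expected payoff for each action
    x = x * np.exp(eta * u_row); x /= x.sum()  # multiplicative-weights update
    y = y * np.exp(eta * u_col); y /= y.sum()
    x_sum += x
    y_sum += y

# Time-averaged strategies approach the uniform Nash equilibrium (1/3, 1/3, 1/3).
print("avg row strategy:", np.round(x_sum / T, 3))
print("avg col strategy:", np.round(y_sum / T, 3))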

Beyond contending with agents' differing incentives, the increasing use of machine learning algorithms presents other challenges, such as the proliferation of AI-generated content. In the latter part of the talk, I will discuss an approach to detecting such content via watermarking. Our watermarking scheme is the first to embed a watermark in a language model's output in a way that causes only negligible changes to the output distribution while remaining robust to adversarial edits.

Bio: Noah Golowich is a PhD student at the Massachusetts Institute of Technology, advised by Constantinos Daskalakis and Ankur Moitra. His research interests lie in theoretical machine learning, with a particular focus on the connections between multi-agent learning, game theory, online learning, and theoretical reinforcement learning. He has also worked on CS theory more broadly, including on problems in differential privacy, learning theory, and watermarking. He was supported by a Fannie & John Hertz Foundation Fellowship and an NSF Graduate Fellowship.


To request accommodation for a disability, please contact Emily Lawrence, emilyl@cs.princeton.edu, at least one week prior to the event.

Date and Time
Monday March 3, 2025 12:30pm - 1:30pm
Location
Computer Science Small Auditorium (Room 105)
Host
Elad Hazan

Contributions to and/or sponsorship of any event does not constitute departmental or institutional endorsement of the specific program, speakers or views presented.
