Large AI Model Lecture Series: Just a Parrot? How Language Models Learn, Reason, and Self-Improve

Sanjeev Arora

Large Language Models (LLMs) were initially dismissed as “stochastic parrots,” mere mimics of human text. This non-technical talk draws on current research to challenge that simplistic view, revealing how today’s LLMs learn, reason, and even engage in self-improvement. We discuss the mechanisms that enable these surprising capabilities, moving LLMs beyond the “next-word predictor” stereotype. Understanding this rapid pace of advances, as well as its implications for learning and work, is crucial for students, educators, and researchers today.

Bio: Sanjeev Arora is the Charles C. Fitzmorris Professor in Computer Science. He is the founding director of Princeton Language and Intelligence. He joined Princeton in 1994 after earning his Ph.D. from UC Berkeley. Professor Arora won the Fulkerson Prize in Discrete Mathematics in 2012, the ACM Prize in Computing in 2011, the EATCS-SIGACT Gödel Prize (co-winner) twice, in 2001 and 2010, the Packard Fellowship (1997), and the ACM Doctoral Dissertation Award (1995). He is a member of the National Academy of Sciences, a member of the American Academy of Arts and Sciences, and a Fellow of the ACM. He was a plenary lecturer at the International Congress of Mathematicians in 2018. He was appointed a Simons Foundation Investigator in 2012, and won best paper awards at IEEE FOCS 2010 and ACM STOC 2004. Professor Arora was the founding director and lead PI of the NSF-funded Center for Computational Intractability from 2008 to 2013.

Date and Time
Tuesday, February 18, 2025, 4:30pm - 5:30pm
Location
Friend Center 101
Speaker
Sanjeev Arora, Princeton University
Host
PLI

Contributions to and/or sponsorship of any event do not constitute departmental or institutional endorsement of the specific program, speakers, or views presented.
