Fast learning algorithms for discovering the hidden structure in data
Date and Time
Thursday, March 28, 2013 - 4:30pm to 5:30pm
Location
Computer Science Small Auditorium (Room 105)
Type
CS Department Colloquium Series
Speaker
Daniel Hsu, Microsoft Research New England
Host
Robert Schapire
A major challenge in machine learning is to reliably and automatically
discover hidden structure in data with minimal human intervention. For
instance, one may be interested in understanding the stratification of a
population into subgroups, the thematic make-up of a collection of
documents, or the dynamical process governing a complex time series. Many
of the core statistical estimation problems for these applications are, in
general, provably intractable for both computational and statistical
reasons, so progress is made by shifting the focus to realistic
instances that rule out the intractable cases. In this talk, I'll describe
a general computational approach for correctly estimating a wide class of
statistical models, including Gaussian mixture models, hidden Markov
models, latent Dirichlet allocation, probabilistic context-free grammars,
and several more. The key idea is to exploit the structure of low-order
correlations that is present in high-dimensional data. The scope of the
new approach extends beyond the purview of previous algorithms, and it
leads both to new theoretical guarantees for unsupervised machine learning
and to fast, practical algorithms for large-scale data analysis.
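As a rough illustration of the moment-based idea (a toy sketch, not the speaker's algorithm), consider a mixture of spherical Gaussians: the second-order moment matrix E[xx^T] equals sum_k w_k mu_k mu_k^T + sigma^2 I, so its top-k eigenvectors approximately span the subspace of the hidden component means. The Python/NumPy sketch below estimates that subspace from samples and checks how much of each true mean it captures; all variable names and parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Toy problem: mixture of k spherical Gaussians in d dimensions (illustrative values).
d, k, sigma, n = 50, 3, 1.0, 100_000
weights = np.array([0.5, 0.3, 0.2])
means = 3.0 * rng.normal(size=(k, d))      # hidden structure to recover

# Draw samples from the mixture.
labels = rng.choice(k, size=n, p=weights)
X = means[labels] + sigma * rng.normal(size=(n, d))

# Low-order correlations: E[x x^T] = sum_k w_k mu_k mu_k^T + sigma^2 I,
# so the eigenvectors with eigenvalues above sigma^2 span the means' subspace.
M2 = X.T @ X / n
eigvals, eigvecs = np.linalg.eigh(M2)      # eigenvalues in ascending order
U = eigvecs[:, -k:]                        # top-k eigenvectors: estimated subspace

# Check recovery: fraction of each true mean's norm captured by the subspace.
for j, mu in enumerate(means):
    captured = np.linalg.norm(U.T @ mu) / np.linalg.norm(mu)
    print(f"component {j}: fraction of mean captured = {captured:.3f}")

The sketch only shows why low-order correlations expose hidden structure; recovering the individual model parameters, rather than just the subspace they span, is where the approach described in the abstract goes further.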
Daniel Hsu is a postdoc at Microsoft Research New England. From 2010 to 2011, he was a postdoc in the statistics departments at Rutgers University and the University of Pennsylvania, supervised by Tong Zhang and Sham M. Kakade. He received his Ph.D. in Computer Science from UC San Diego in 2010, where he was advised by Sanjoy Dasgupta, and his B.S. in Computer Science and Engineering from UC Berkeley in 2004. His research interests are in algorithmic statistics and machine learning.