Hidden Grammar: Advances in Data-Driven Models of Language
In this talk we adopt the premise that, in the long run, unsupervised learning will be the most practical route to building computational models of language cheaply. We focus on dependency syntax learning without annotated trees, beginning with the classic EM algorithm and presenting several ways to alter EM for drastically improved performance using crudely represented "knowledge" of linguistic universals. We then present more recent work in the empirical Bayesian paradigm, where we encode our background knowledge as a prior over grammars and apply inference to recover the hidden structure. Of course, "background knowledge" is still human intuition. We argue, however, that by representing this knowledge compactly in a prior distribution (far more compactly than the many decisions made in building treebanks), we can experimentally explore the connection between proposed linguistic universals and unsupervised learning.
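For readers unfamiliar with EM-based grammar induction, the following is a deliberately simplified sketch, not the model from the talk: it runs EM on a toy head-selection model over part-of-speech tags, treating each token's head as an independent latent choice and ignoring the tree constraint that a real dependency model (e.g., the DMV) enforces. The corpus and parameter names are invented for illustration.

    # Toy EM for unsupervised head selection over POS tags.
    # Simplified for illustration: each token picks a head independently
    # (ROOT or another token in the sentence), with no tree constraint.
    import math
    from collections import defaultdict

    ROOT = "ROOT"
    # Invented toy corpus: sentences as sequences of POS tags.
    corpus = [
        ["DT", "NN", "VB"],
        ["NN", "VB", "DT", "NN"],
        ["DT", "NN", "VB", "RB"],
    ]

    tags = {ROOT} | {t for sent in corpus for t in sent}
    # Initialize P(dependent tag | head tag) uniformly.
    theta = {h: {d: 1.0 / len(tags) for d in tags} for h in tags}

    for iteration in range(20):
        counts = defaultdict(lambda: defaultdict(float))
        log_like = 0.0
        for sent in corpus:
            for j, dep in enumerate(sent):
                # Candidate heads: ROOT plus every other token in the sentence.
                heads = [ROOT] + [sent[i] for i in range(len(sent)) if i != j]
                scores = [theta[h][dep] for h in heads]
                z = sum(scores)
                log_like += math.log(z / len(heads))
                # E-step: posterior over the hidden head choice for this token.
                for h, s in zip(heads, scores):
                    counts[h][dep] += s / z
        # M-step: re-estimate P(dependent tag | head tag) from expected counts.
        for h in tags:
            total = sum(counts[h].values())
            if total > 0:
                for d in tags:
                    theta[h][d] = counts[h][d] / total
        print(f"iter {iteration}: log-likelihood {log_like:.3f}")

The log-likelihood is guaranteed not to decrease across iterations; the "ways to alter EM" discussed in the talk go beyond this plain procedure.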
This talk includes discussion of joint work with Shay Cohen, Dipanjan Das, Jason Eisner, Kevin Gimpel, Andre Martins, and Eric Xing.