Princeton University | Computer Science 522: Computational Complexity
News:
· FINAL is now available.
· All students taking this course should join the mailing list. It's also recommended for anyone planning to attend some of the lectures. (I'll announce the topics of the more advanced lectures in advance on the mailing list.)
· I will assume all students have basic familiarity with NP-completeness, space complexity, diagonalization, and basic notions of discrete probability and linear algebra. See chapters 1-6 and the appendix of the textbook. Email me for any clarifications.
Professor: Boaz Barak - 405 CS Building. Email: Phone: 609-981-4982 (I prefer email)
Graduate Coordinator: Melissa Lawson - 310 CS Building - 258-5387 mml@cs.princeton.edu
Requirements and grading: Submitting and grading weekly homework assignments (50% of the grade), and a take-home final exam (50% of the grade).
Prerequisites: There are no formal prerequisites, but I will assume some degree of mathematical maturity and familiarity with basic notions such as functions, sets, graphs, O notation, and probability over finite sample spaces. See the appendix of the book to brush up on this material.
This is a graduate course in computational complexity, including both "classical" results from the last few decades, and very recent results from the last few years.
Complexity theory deals with the power of efficient computation. While in the last century logicians and computer scientists developed a pretty good understanding of the power of finite-time algorithms (where "finite" can mean an algorithm that, on a 1000-bit input, would take longer to run than the lifetime of the sun), our understanding of efficient algorithms is quite poor. Thus, complexity theory contains more questions, and relationships between questions, than actual answers. Nevertheless, we will learn about some fascinating insights, connections, and even a few answers, that have emerged from complexity theory research.
Among the questions we will tackle (for various types of computational problems) are:
I also plan to include some more advanced topics, including some results from the last couple of years. These include derandomization, expanders and extractors, Reingold's deterministic O(log n)-space algorithm for undirected s-t connectivity, and the PCP theorem. A recurring theme in this course will be the notion of obtaining combinatorial objects with random and pseudorandom properties.
Perhaps the question that will occur to you after attending this course is "How is it that all these seemingly intelligent people have been working on this for several decades and have not managed to prove even some ridiculously obvious conjectures?". The answer is that we need your help to solve some of these problems, and get rid of this embarrassing situation.
Our main textbook will be the upcoming book Computational Complexity: A Modern Approach by Sanjeev Arora and me. Drafts of the book will be available from Pequod Copy. Whenever presenting material that is not in this book, I will provide references to the relevant research papers or other lecture notes.
Another upcoming book you might want to look at is Computational Complexity: A Conceptual Perspective by Oded Goldreich.
Some lecture notes from similar courses: COS-522 Spring 07, COS-522 Spring 06, Sanjeev Arora, Rudich and Blum, Madhu Sudan, Luca Trevisan, Russell Impagliazzo (2), Chris Umans, Oded Goldreich (see also his texts on computational complexity), Feige & Raz, Moni Naor, Valentine Kabanets, Muli Safra (2)
Pseudorandomness courses: Salil Vadhan, Luca Trevisan, David Zuckerman, Oded Goldreich, Venkat Guruswami
PCP and hardness of approximation: Uri Feige, Guruswami and O'Donnell
Other courses: Sanjeev Arora: theorist's toolkit, Madhu Sudan: essential coding theory, Linial and Wigderson: expander graphs
Homework | Due | Checker(s)
HW1 (LaTeX source) | Feb 11 | Suchant Sachdeva
HW2 (LaTeX source) | Feb 18 | Anuradha Venugopalan
HW3 (LaTeX source) | Feb 25 | Luke Friedman
HW4 (LaTeX source) | Mar 4 | Srdjan Krstic
HW5 (LaTeX source) | Mar 11 | Aaron Potechin
HW6 (LaTeX source) | Mar 25 | Aditya Bhaskara
HW7 (LaTeX source) | Apr 1 | Rong Ge
HW8 (LaTeX source) | Apr 8 | Yury Pritykin
HW9 (LaTeX source) | Apr 15 | Aravindan Vijayaraghavan
HW10 (LaTeX source) | Apr 22 | Shi Li
HW11 (LaTeX source) | Apr 29 | ---
Readings on probabilistic algorithms: Goldreich's text on randomized complexity classes. The following books are recommended for a more in-depth look at discrete probability and randomized computation:
General reading on complexity, P vs NP: New Yorker article on Alan Turing.
Oded Goldreich's text on computational tasks and models.
The following surveys are highly recommended:
Expander graphs: See the following excellent book by Hoory, Linial and Wigderson on expander graphs.
Equivalence of algebraic and combinatorial expansion: In a sequence of three blog posts, Luca Trevisan discusses the equivalence of the two definitions of expansion (also known as "Cheeger's Inequality"; as discussed, it is a discrete version of a continuous theorem of Cheeger, obtained by Alon-Milman, Alon, and Dodziuk) and proves both the easy part and the hard part of this inequality. He also discusses the relation between the maximum cut and the smallest eigenvalue. James Lee also had a blog post on the proof of the hard part of Cheeger's Inequality.
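As a quick reference, here is one standard formulation, stated with the usual normalization for d-regular graphs (see the Hoory-Linial-Wigderson book or Trevisan's posts for the precise versions): if G is a d-regular graph with adjacency-matrix eigenvalues d = \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n and edge expansion h(G) = \min_{|S| \le n/2} |E(S,\bar{S})|/|S|, then

  \frac{d - \lambda_2}{2} \;\le\; h(G) \;\le\; \sqrt{2d(d - \lambda_2)}.

The left-hand inequality is the "easy" direction and the right-hand inequality is the "hard" direction discussed in these posts.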
Additional reading: You can find the story of IP=PSPACE in the following entertaining survey: Email and the Unexpected Power of Interaction by Laci Babai. See Goldreich's text on the proof that IP[k] is in AM[k+3]. Goldreich's text on IP=PSPACE. Trevisan's lecture notes on IP=PSPACE.
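As a rough reminder of the main technical ingredient (sketched here in generic notation, not following any one of these sources exactly): at the heart of these proofs is the sumcheck protocol, in which the prover convinces the verifier that a low-degree polynomial p over a finite field satisfies

  \sum_{b_1,\ldots,b_n \in \{0,1\}} p(b_1,\ldots,b_n) = K.

In round i the prover sends the univariate polynomial g_i(X) = \sum_{b_{i+1},\ldots,b_n \in \{0,1\}} p(r_1,\ldots,r_{i-1},X,b_{i+1},\ldots,b_n); the verifier checks that g_i(0) + g_i(1) equals the previously claimed value (K in the first round, g_{i-1}(r_{i-1}) afterwards) and sends a fresh random field element r_i. After n rounds the verifier evaluates p(r_1,\ldots,r_n) on its own and compares it to g_n(r_n).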
Additional reading: Ryan O'Donnell wrote up a nice survey on the history of the PCP Theorem in the lecture notes for his course with Guruswami on PCP and hardness of approximation.
We follow the proof of the PCP theorem in this paper by Irit Dinur. A variant of this proof is also described in these lecture notes by Guruswami and O'Donnell (the two sources follow slightly different approaches, and we'll also do some things a bit differently from both).
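For concreteness, one standard way of stating the theorem (paraphrased here):

  \mathsf{NP} = \mathsf{PCP}(O(\log n),\, O(1)),

that is, every language in NP has a verifier that, given an input of length n and oracle access to a claimed proof, tosses O(log n) random coins, queries O(1) bits of the proof, accepts a valid proof of a true statement with probability 1, and accepts any claimed proof of a false statement with probability at most 1/2.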
Additional reading: See the following sources for more advanced related material:
Additional reading on Hastad's PCP, Fourier analysis: Paper by Kushilevitz and Mansour. See also Trevisan's lecture on the Goldreich-Levin algorithm. In the context of the PCP itself, particularly worthwhile topics to look at are Hastad's 3-query PCP (chapter 19 in the textbook), parallel repetition lemmas (Guruswami-O'Donnell), free-bit complexity (Khot lecture 7, Hastad-Wigderson and refs there), and the Unique Games Conjecture (Khot tutorial).
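As a reminder, the basic fact underlying this material is that every function f : \{-1,1\}^n \to \mathbb{R} has a unique Fourier expansion

  f(x) = \sum_{S \subseteq [n]} \hat{f}(S)\, \chi_S(x), \qquad \chi_S(x) = \prod_{i \in S} x_i, \qquad \hat{f}(S) = \mathbb{E}_x[f(x)\chi_S(x)],

with Parseval's identity \sum_S \hat{f}(S)^2 = \mathbb{E}_x[f(x)^2]. Roughly speaking, the Kushilevitz-Mansour and Goldreich-Levin algorithms show how to find all sets S with large |\hat{f}(S)| using only query access to f.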
Additional reading: Raz's paper, Holenstein's paper.
Additional reading: Lectures 11 and 12 from Trevisan's pseudorandomness course. Goldreich's text on pseudorandom generators (the relevant material is up to page 22).
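For reference, here is a standard formulation of the definition (conventions differ slightly between sources): a function G : \{0,1\}^{\ell} \to \{0,1\}^{m} with m > \ell is an (s,\epsilon)-pseudorandom generator if for every circuit C of size at most s,

  \left| \Pr[C(G(U_{\ell})) = 1] - \Pr[C(U_{m}) = 1] \right| \le \epsilon,

where U_{\ell} and U_{m} denote the uniform distributions on \ell and m bits. For derandomizing BPP along the lines discussed in class, it is enough for the generator to run in time exponential in its seed length \ell, as long as it fools circuits of size polynomial in m.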
Additional reading: The hardcore lemma is from the paper "Hard-core distributions for somewhat hard problems" by Russell Impagliazzo (see also his Wikipedia entry). The XOR lemma has several different proofs with varying parameters; see this survey by Goldreich, Nisan and Wigderson.
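Roughly speaking (the exact parameters depend on which proof one uses), the XOR lemma says: if every circuit of size s computes f : \{0,1\}^n \to \{0,1\} correctly on at most a 1-\delta fraction of inputs, then every circuit of some smaller size s' computes

  f^{\oplus k}(x_1,\ldots,x_k) = f(x_1) \oplus \cdots \oplus f(x_k)

correctly on at most a 1/2 + (1-\delta)^k + \epsilon fraction of inputs, where the loss from s to s' depends on \epsilon and \delta.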
A derandomized version of the XOR lemma, which starts from a function on n bits and only needs to move to a function on O(n+k) bits to obtain hardness similar to what the original version gives with k repetitions (and hence nk bits), was given in this paper by Impagliazzo and Wigderson. In particular, using what we've seen, this paper shows how to get BPP=P from functions with 1-1/n hardness for exponential-size circuits. (We'll show next time how to get such functions from functions that are worst-case hard.)
I highly recommend this survey by Valentine Kabanets on derandomization. It contains brief descriptions of, and pointers to, many of the latest results and exciting research directions in this field.
Getting to BPP=P: The "XOR Lemma free" approach to getting BPP=P was given in this paper by Sudan, Trevisan and Vadhan (STV). As mentioned before, there's an earlier alternative approach by Impagliazzo and Wigderson using an "XOR Lemma on steroids". There's even a third "NW generator free" approach by Shaltiel and Umans (see below).
Hardness vs. randomness tradeoff: The results shown in class generalize to a tradeoff between the assumed circuit size required to compute functions in E and the resulting time to derandomize BPP. However, the currently known approach to getting an optimal tradeoff (optimal with respect to "black-box" proofs) is somewhat different; in particular it uses error correcting codes but not the NW generator. This is obtained in the following two papers, by Shaltiel and Umans and by Umans. You can also see a PowerPoint presentation by Ronen Shaltiel on this topic.
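Two standard points on this tradeoff curve, stated informally with E = DTIME(2^{O(n)}) (see the papers above and the textbook for the precise quantitative versions):

  \exists f \in \mathsf{E} \text{ requiring circuits of size } 2^{\Omega(n)} \;\Rightarrow\; \mathsf{BPP} = \mathsf{P}
  \exists f \in \mathsf{E} \text{ requiring circuits of super-polynomial size} \;\Rightarrow\; \mathsf{BPP} \subseteq \bigcap_{\epsilon > 0} \mathsf{DTIME}(2^{n^{\epsilon}})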
Uniform derandomization: The result that either BPP=EXP or there's a non-trivial subexponential derandomization of BPP is from this Impagliazzo-Wigderson 98 paper, but a more general and perhaps better proof can be found in this paper by Vadhan and Trevisan. The results that even uniform derandomization requires circuit lower bounds come from these two papers by Impagliazzo-Kabanets-Wigderson and Impagliazzo-Kabanets.
Randomness extractors: Another topic we did not touch is randomness extractors, which are used not to derandomize BPP but rather to execute probabilistic algorithms without access to truly independent and uniform coin tosses. The following survey by Ronen Shaltiel is a good starting point for information on this topic. See also the following presentation by Salil Vadhan.
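As a quick reminder of the central definition (paraphrased from the standard one used, e.g., in Shaltiel's survey): a function Ext : \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m is a (k,\epsilon)-extractor if for every random variable X over \{0,1\}^n with min-entropy at least k (i.e., \Pr[X = x] \le 2^{-k} for every x),

  \mathrm{Ext}(X, U_d) \text{ is } \epsilon\text{-close in statistical distance to } U_m,

where U_d and U_m are uniform on d and m bits; the goal is to keep the seed length d small while making the output length m as close to k as possible.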
More resources on derandomization and pseudorandomness: As you can see, one could make a whole course out of the topics on pseudorandomness we did not cover, and indeed several such courses with excellent lecture notes have been given. Some recommended links are: Shaltiel, Trevisan, Zuckerman, Goldreich, Vadhan.