Princeton University
Computer Science 402
Numbers in brackets under "readings" refer to chapters or sections of Russell & Norvig.
# | Date | Topic | Readings (required) | Other (optional) readings and links |
1 | Th 9/11 | General introduction to AI | [1]; AI Growing Up by James Allen (but skip or skim page 19 to the end) | AAAI website with lots of readings on AI in general, AI in the news, etc. RoboCup website; the simulation-league movies can be found here (click on "results", then on the "F" next to any match), and four-legged robot league movies can be found here. |
2 | Tu 9/16 | Uninformed (blind) search | [3.1-3.5] | |
3 | Th 9/18 | Informed (heuristic) search | [4.1-4.2] | |
4 | Tu 9/23 | Local search; searching in games | [4.3], [6] (but okay to skip [6.5]) | play checkers with Chinook |
5 | Th 9/25 | Propositional logic | [7.1-7.4] | |
6 | Fr 9/26 OR Tu 9/30 | Theorem proving and the resolution algorithm | [7.5] | |
7 | Th 10/2 | Practical methods of solving CNF sentences | [7.6] | |
8 | Tu 10/7 | Applications of solving CNF sentences, including planning; cursory look at first-order logic; uncertainty and basics of probability | [11.1, 11.5]; [8.1-8.3] (okay to skim these); [13.1-13.4] | |
9 | Th 10/9 | Guest lecture: Gilbert Harman, Professor of Philosophy, on "AI and Philosophy" | [26]; lecture notes | |
10 | Tu 10/14 | Independence and Bayes rule | [13.5-13.6] | "What is the chance of an earthquake?" (article on interpreting probability, by Freedman & Stark) |
11 | Th 10/16 | Bayesian networks: semantics and exact inference | [14.1-14.4] | brief tutorial on Bayes nets (and HMM's), with links for further reading |
12 | Tu 10/21 | Approximate inference with Bayesian networks | [14.5] | |
13, 14 | Th 10/23 and Tu 11/4 | Uncertainty over time (temporal models; HMM's); Kalman filters | [15.1-15.3]; formal derivations (optional); [15.4] | |
15 | Th 11/6 | DBN's; particle filters; speech recognition | [15.5-15.6] | The particle filtering demo came from here, on Sebastian Thrun's website. The sample speech signal came from here. "I'm sorry Dave, I'm afraid I can't do that" (article on natural language processing by L. Lee) |
16 | Tu 11/11 | Finish speech recognition; decision theory; begin Markov decision processes | [16.1-16.3]; [17.1] | |
17, 18 | Th 11/13 and Tu 11/18 | Markov decision processes: Bellman equations, value iteration, policy iteration | [17.2-17.4] | Sutton & Barto's excellent book on reinforcement learning and MDP's; see also the short value-iteration sketch below the schedule |
19 | Th 11/20 | Machine learning; decision trees | [18.1-18.2]; [18.3] | |
20 | Tu 11/25 | Computational learning theory | [18.5]; generalization error theorem proved in class | original "Occam's Razor" paper |
21 | Tu 12/2 | Boosting | [18.4]; boosting slides; face slide; training error proof | boosting overview paper |
22 | Th 12/4 | Support-vector machines | [20.6] | tutorial on SVM's |
23 | Tu 12/9 | Neural networks; learning Bayes net and HMM parameters | [20.5]; [20.1-20.3] | A demo of LeNet, a neural network for optical-character recognition, is available here. Click the links on the left to see how it does on various inputs. The figure shows the activations of the network's layers, where layer-1 is the deepest. (For more detail, see the papers on the LeNet website, such as this one.) |
24 | Th 12/11 | Reinforcement learning in MDP's | [21.1-21.4] | Sutton & Barto's excellent book on reinforcement learning and MDP's. Learning to play keepaway in RoboCup soccer using reinforcement learning (scroll down to find flash demos). |
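A note on lectures 17-18: below is a minimal, illustrative sketch of value iteration on a made-up two-state MDP. It is not taken from the course slides or from Russell & Norvig; the states, actions, transition probabilities, and rewards are invented purely to show the shape of the Bellman backup covered in [17.2-17.4].

```python
# Illustrative value iteration for a tiny, made-up two-state MDP.
# Bellman optimality: V(s) = max_a sum_s' P(s'|s,a) * (R(s,a,s') + gamma * V(s')).

# transitions[s][a] = list of (next_state, probability, reward) -- hypothetical numbers
transitions = {
    "s0": {"stay": [("s0", 1.0, 0.0)],
           "go":   [("s1", 0.9, 5.0), ("s0", 0.1, 0.0)]},
    "s1": {"stay": [("s1", 1.0, 1.0)],
           "go":   [("s0", 1.0, 0.0)]},
}
gamma = 0.9   # discount factor
theta = 1e-6  # convergence threshold

V = {s: 0.0 for s in transitions}  # initialize all state values to zero
while True:
    delta = 0.0
    for s, actions in transitions.items():
        q_values = [
            sum(p * (r + gamma * V[s2]) for s2, p, r in outcomes)
            for outcomes in actions.values()
        ]
        new_v = max(q_values)          # Bellman backup
        delta = max(delta, abs(new_v - V[s]))
        V[s] = new_v
    if delta < theta:                  # stop once values have converged
        break

# Greedy policy extracted from the converged values
policy = {
    s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2])
                                      for s2, p, r in actions[a]))
    for s, actions in transitions.items()
}
print(V, policy)
```

Policy iteration, also in [17.3], alternates a similar evaluation step with greedy policy improvement instead of sweeping the values to convergence first.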