Princeton University
Computer Science 402
Numbers in brackets under "readings" refer to chapters or sections of Russell & Norvig.
# | Date | Topic | Readings (required) | Other (optional) readings and links |
1 | Tu 9/18 | General introduction to AI | [1]; AI Growing Up by James Allen (but skip or skim page 19 to end) | AAAI website, with lots of readings on AI in general, AI in the news, etc.; Robocup website (for simulation-league movies, click "results", then the "F" next to any match; four-legged robot league movies are also linked) |
2 | Th 9/20 | Uninformed (blind) search | [3.1-3.5] | |
3 | Tu 9/25 | Informed (heuristic) search | [4.1-4.2] | |
4 | Th 9/27 | Local search; searching in games | [4.3], [6] (but okay to skip [6.5]) | play checkers with Chinook |
5 | Tu 10/2 | Propositional logic | [7.1-7.4] | |
6 | Th 10/4 | Theorem proving and the resolution algorithm | [7.5] | |
7 | Tu 10/9 | Other methods of solving CNF sentences | [7.6] | |
8 | Th 10/11 | Applications of solving CNF sentences, including planning; cursory look at first-order logic; uncertainty and basics of probability | [11.1, 11.5]; [8.1-8.3] (okay to skim these); [13.1-13.4] | |
9 | Tu 10/16 | Independence and Bayes rule | [13.5-13.6] | |
10 | Th 10/18 | Bayesian networks: semantics and exact inference | [14.1-14.4] | brief tutorial on Bayes nets (and HMM's), with links for further reading |
11 | Tu 10/23 | Approximate inference with Bayesian networks | [14.5] | |
12 | Th 10/25 | Uncertainty over time (temporal models; HMM's) | [15.1-15.3] | |
13 | Tu 11/6 | Kalman filters | [15.4]; formal derivations (optional) | |
14 | Th 11/8 | DBN's; particle filters; speech recognition | [15.5-15.6] | The particle filtering demo came from Sebastian Thrun's website; "I'm sorry Dave, I'm afraid I can't do that" (article on natural language processing by L. Lee) |
15 | Tu 11/13 | Decision theory; begin Markov decision processes | [16.1-16.3]; [17.1] | |
16, 17 | Th 11/15, Tu 11/20 | Markov decision processes: Bellman equations, value iteration, policy iteration | [17.2-17.4] | Sutton & Barto's excellent book on reinforcement learning and MDP's |
18 | Tu 11/27 | Machine learning; decision trees | [18.1-18.2]; [18.3] | |
19 | Th 11/29 | Computational learning theory | [18.5]; generalization error theorem proved in class | original "Occam's Razor" paper |
20 | Tu 12/4 | Guest lecture: Gilbert Harman, Professor of Philosophy, on "Philosophy of Artificial Intelligence" | [26]; lecture slides | |
21 | Th 12/6 | Boosting | [18.4]; boosting slides; face slide; training error proof | boosting overview paper |
22 | Tu 12/11 | Support-vector machines | [20.6] | tutorial on SVM's |
23 | Th 12/13 | Neural networks; learning Bayes net and HMM parameters | [20.5]; [20.1-20.3] | A demo of LeNet, a neural network for optical-character recognition, is available on the LeNet website; click the links on the left to see how it does on various inputs. The figure shows the activations of various layers of the network, where layer-1 is the deepest. (For more detail, see the papers on the LeNet website.) |
24 | Mo 12/17 | Reinforcement learning in MDP's | [21.1-21.4] | Sutton & Barto's excellent book on reinforcement learning and MDP's; Learning to play keepaway in Robocup soccer using reinforcement learning (scroll down on that page to find flash demos) |
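Several lectures above (MDPs, Bellman equations, value iteration, reinforcement learning) center on the value-iteration algorithm from [17.2-17.3]. As a minimal illustrative sketch, here is value iteration on a toy two-state MDP; the states, actions, rewards, and discount factor are invented for this example and are not course material:

```python
# Value iteration on a toy 2-state MDP (states/rewards invented for illustration).
# Bellman optimality update: V(s) <- max_a sum_{s'} P(s'|s,a) [R(s,a,s') + gamma V(s')]

GAMMA = 0.9
STATES = ["A", "B"]
ACTIONS = ["stay", "move"]

# P[(s, a)] = list of (next_state, probability, reward) triples
P = {
    ("A", "stay"): [("A", 1.0, 0.0)],
    ("A", "move"): [("B", 1.0, 1.0)],
    ("B", "stay"): [("B", 1.0, 2.0)],
    ("B", "move"): [("A", 1.0, 0.0)],
}

def value_iteration(eps=1e-6):
    """Iterate the Bellman optimality update until the largest change is below eps."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            # Q-value of each action: expected reward plus discounted next-state value
            q = [sum(p * (r + GAMMA * V[s2]) for s2, p, r in P[(s, a)])
                 for a in ACTIONS]
            new_v = max(q)
            delta = max(delta, abs(new_v - V[s]))
            V[s] = new_v
        if delta < eps:
            return V

V = value_iteration()
# Optimal policy: from B, "stay" forever, so V(B) = 2/(1-0.9) = 20,
# and V(A) = 1 + 0.9 * V(B) = 19.
```

The closed-form values follow from the geometric series for the discounted reward stream, which is the same calculation the Bellman equations encode.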