Final Exam Information
Fall 2004
The final will be held at 1:30pm on Thursday, January 13, in Computer Science
102. It is designed as a three-hour exam, but you will be given 3.5 hours to
complete it.
If you do better on the exam than on the homeworks, the final exam will be
worth 35% of your final grade. Otherwise, if you do better on the homeworks
than on the exam, it will be worth only 25% of your grade.
What to bring
The exam will be closed book. However, you may bring a one-page
"cheat sheet": a single, ordinary 8.5"x11" sheet of paper with whatever
notes you wish written on it. You may write on both the front and the back,
but the notes must be in your own handwriting (not computer generated or
photocopied).
Also, be sure to bring a calculator.
You may not use the textbook, your notes, a computer or any other materials
during the exam.
Last year's exam
Here is last year's final exam. This year's exam will follow largely the
same format.
What will be covered
In principle, anything covered in lecture or in the assigned readings is
"fair game". Realistically, however, you can expect the emphasis to be
placed on the same topics that were emphasized in lecture.
Below is a list of topics, concepts and algorithms that you should be
familiar with. I have attempted to make the list exhaustive, although
I cannot guarantee that I did not miss an item or two. See also last
year's exam for a sense of what will be covered. A few short, illustrative
sketches of some of these algorithms follow the list.
- search and problem solving
  - properties of search algorithms (completeness, optimality, time and space
    efficiency)
  - uninformed (blind) search
    - BFS
    - uniform-cost search
    - DFS
    - depth-limited search
    - IDS
    - bidirectional search
  - informed (heuristic) search
    - best-first search
    - A*
    - heuristic functions (consistent, admissible)
  - local search
    - objective function
    - hill climbing
    - simulated annealing
    - genetic algorithms
- adversarial search
  - minimax algorithm
  - alpha-beta pruning
  - evaluation functions
  - tricks for speeding up game-playing programs
- logic
  - elements of a logic (semantics, syntax, models, etc.)
  - entailment
  - propositional logic (symbols, literals, etc.)
  - Horn clauses/sentences
  - inference algorithms
    - soundness and completeness
    - model checking
    - inference rules
    - resolution
    - DPLL
    - WalkSAT
  - formulating problems as satisfiability instances
  - first-order logic
- probability
  - events and atomic events
  - random variables
  - distribution
  - joint distribution
  - conditional probability
  - marginal probability
  - independence
  - conditional independence
  - Bayes' rule
  - expected value
  - conditional expected value
  - Naive Bayes algorithm
- Bayesian networks
  - meaning and interpretation
  - Markov blanket
  - inference
    - variable elimination
    - direct sampling, rejection sampling, likelihood weighting
    - MCMC
- Markov chains
- temporal models
  - states, observations, evidence, etc.
  - HMM's
    - belief state
    - filtering
    - prediction
    - smoothing (forward-backward algorithm)
    - Viterbi
  - Kalman filters
  - DBN's
  - particle filters
- speech recognition
  - phones, phonemes, frames, etc.
  - triphone model
  - three-state phone model
  - acoustic model
  - language model
    - bigram/trigram models
  - pronunciation model
- utility
- MDP's
  - states, rewards, actions, etc.
  - policy
  - optimal policy
  - utility
  - discounted reward
  - Bellman equations
  - value iteration
  - policy evaluation
  - policy iteration
    - convergence properties
    - policy improvement theorem
  - POMDP's
- learning
  - types of learning problems (supervised, regression, classification, etc.)
  - Occam's razor
  - conditions for effective learning
  - features (a.k.a. attributes or dimensions)
  - class (a.k.a. label or output)
  - instances and examples
  - training error, test error, generalization error
  - hypotheses
  - overfitting
  - theory: PAC bounds
  - learning algorithms
    - decision trees
      - how to grow them (impurity measures)
      - pruning
    - AdaBoost
      - weak hypotheses and weak learning
    - SVM's
    - neural nets
      - gradient descent and backprop
    - learning parameters of a Bayes net
      - principles:
        - maximum likelihood
        - MAP
        - full Bayesian
      - EM
  - learning in MDP's (reinforcement learning)
    - model-based approach (adaptive dynamic programming)
    - exploration versus exploitation
    - model-free approach
- philosophy / future of AI
  - Turing test
  - Chinese room experiment
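Some illustrative sketches
The sketches below are review aids, not the exact formulations from lecture:
in each one, the example problems, graphs, numbers and variable names are
invented for illustration, and the code is written in Python for concreteness.
First, a minimal sketch of A* search on a small made-up graph. The edge costs
and heuristic values are invented; this particular heuristic is admissible
(and consistent), so the first path to reach the goal is optimal.

    import heapq

    GRAPH = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)],
             "C": [("D", 1)], "D": []}
    H = {"A": 3, "B": 2, "C": 1, "D": 0}      # admissible heuristic h(n)

    def astar(start, goal):
        # frontier is a priority queue ordered by f(n) = g(n) + h(n)
        frontier = [(H[start], 0, start, [start])]
        best_g = {start: 0}
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, g
            for succ, cost in GRAPH[node]:
                g2 = g + cost
                if g2 < best_g.get(succ, float("inf")):  # cheaper path found
                    best_g[succ] = g2
                    heapq.heappush(frontier,
                                   (g2 + H[succ], g2, succ, path + [succ]))
        return None

    print(astar("A", "D"))    # (['A', 'B', 'C', 'D'], 4)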
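Next, a sketch of simulated annealing minimizing a bumpy one-dimensional
objective. The objective function, neighbor move and cooling schedule are all
invented; the key idea is that downhill moves are always accepted, while
uphill moves are accepted with probability e^(-delta/T), which shrinks as the
temperature T cools.

    import math, random

    def objective(x):
        return x * x + 3 * math.sin(5 * x)    # many local minima

    x = random.uniform(-10, 10)
    temp = 10.0
    while temp > 1e-3:
        x_new = x + random.gauss(0, 0.5)      # pick a random neighbor
        delta = objective(x_new) - objective(x)
        # accept improvements always, bad moves with prob. e^(-delta/T)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = x_new
        temp *= 0.995                         # cool down gradually
    print(x, objective(x))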
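For adversarial search, a sketch of minimax with alpha-beta pruning over a
tiny explicit game tree. The tree and its leaf evaluations are made up; on
this example the cutoff actually fires, so the leaf "g" is never evaluated.

    import math

    CHILDREN = {"a": ["b", "c"], "b": ["d", "e"], "c": ["f", "g"]}
    LEAF_VALUES = {"d": 3, "e": 5, "f": 2, "g": 9}

    def alphabeta(node, alpha, beta, maximizing):
        if node in LEAF_VALUES:               # terminal: apply evaluation
            return LEAF_VALUES[node]
        if maximizing:
            value = -math.inf
            for child in CHILDREN[node]:
                value = max(value, alphabeta(child, alpha, beta, False))
                alpha = max(alpha, value)
                if alpha >= beta:             # beta cutoff
                    break
            return value
        else:
            value = math.inf
            for child in CHILDREN[node]:
                value = min(value, alphabeta(child, alpha, beta, True))
                beta = min(beta, value)
                if alpha >= beta:             # alpha cutoff
                    break
            return value

    print(alphabeta("a", -math.inf, math.inf, True))   # minimax value: 3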
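For logic, a sketch of DPLL on CNF formulas, with clauses represented as sets
of integer literals (a negative integer is a negated variable). The example
formula is invented. The two ingredients are unit propagation and splitting
on a variable.

    def simplify(clauses, lit):
        """Assume lit is true: drop satisfied clauses, shrink the others.
        Return None if an empty clause (a contradiction) appears."""
        out = []
        for c in clauses:
            if lit in c:
                continue
            reduced = c - {-lit}
            if not reduced:
                return None
            out.append(reduced)
        return out

    def dpll(clauses, assignment=frozenset()):
        # unit propagation: repeatedly satisfy any one-literal clause
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        while unit is not None:
            clauses = simplify(clauses, unit)
            if clauses is None:
                return None
            assignment = assignment | {unit}
            unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if not clauses:
            return assignment                 # everything satisfied
        var = abs(next(iter(clauses[0])))     # split on some variable
        for lit in (var, -var):
            reduced = simplify(clauses, lit)
            if reduced is not None:
                result = dpll(reduced, assignment | {lit})
                if result is not None:
                    return result
        return None                           # unsatisfiable

    # (p or q) and (not p or q) and (not q or r), with p=1, q=2, r=3
    print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))   # e.g. frozenset({2, 3})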
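For probability, a worked instance of Bayes' rule,
P(H|e) = P(e|H)P(H)/P(e). The numbers (a 1% prior, a 90% true-positive rate,
a 5% false-positive rate) are invented; they illustrate why a positive test
for a rare condition can still leave the hypothesis unlikely.

    p_h = 0.01       # prior P(H)
    p_e_h = 0.90     # likelihood P(e | H)
    p_e_nh = 0.05    # false-positive rate P(e | not H)

    # marginal P(e), by the law of total probability
    p_e = p_e_h * p_h + p_e_nh * (1 - p_h)
    posterior = p_e_h * p_h / p_e
    print(posterior)   # about 0.154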
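For inference in Bayesian networks, a sketch of rejection sampling in a
two-node network Cloudy -> Rain, estimating P(Cloudy | Rain = true). The CPT
numbers are invented. Samples inconsistent with the evidence are simply
thrown away, which is why the method is wasteful when the evidence is
unlikely.

    import random

    def sample_net():
        cloudy = random.random() < 0.5        # prior P(cloudy) = 0.5
        p_rain = 0.8 if cloudy else 0.1       # CPT for P(rain | Cloudy)
        rain = random.random() < p_rain
        return cloudy, rain

    samples = [sample_net() for _ in range(100000)]
    kept = [c for c, r in samples if r]       # reject samples without rain
    print(sum(kept) / len(kept))              # about 0.89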
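For temporal models, a sketch of filtering in an HMM: maintain the belief
state P(X_t | e_1..t) by alternately pushing it through the transition model
and reweighting by the evidence. The two-state weather model below is
invented.

    T = {"rain": {"rain": 0.7, "sun": 0.3},       # P(X_t | X_{t-1})
         "sun":  {"rain": 0.3, "sun": 0.7}}
    O = {"rain": {"umbrella": 0.9, "none": 0.1},  # P(e_t | X_t)
         "sun":  {"umbrella": 0.2, "none": 0.8}}
    belief = {"rain": 0.5, "sun": 0.5}            # initial belief state

    for evidence in ["umbrella", "umbrella", "none"]:
        # predict: push the belief state through the transition model
        predicted = {s2: sum(belief[s1] * T[s1][s2] for s1 in T) for s2 in T}
        # update: weight by the observation likelihood, then renormalize
        unnorm = {s: O[s][evidence] * predicted[s] for s in predicted}
        z = sum(unnorm.values())
        belief = {s: p / z for s, p in unnorm.items()}
        print(evidence, belief)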
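For MDP's, a sketch of value iteration on a made-up two-state MDP:
repeatedly apply the Bellman update
U(s) <- R(s) + gamma * max_a sum_s' P(s'|s,a) U(s')
until the utilities converge. All states, actions, probabilities and rewards
are invented.

    GAMMA = 0.9
    STATES = ["s0", "s1"]
    ACTIONS = ["stay", "go"]
    # P[s][a] is a list of (probability, next_state) pairs
    P = {"s0": {"stay": [(1.0, "s0")], "go": [(0.8, "s1"), (0.2, "s0")]},
         "s1": {"stay": [(1.0, "s1")], "go": [(1.0, "s0")]}}
    R = {"s0": 0.0, "s1": 1.0}                # reward in each state

    U = {s: 0.0 for s in STATES}
    for _ in range(1000):                     # Bellman updates to convergence
        U = {s: R[s] + GAMMA * max(sum(p * U[s2] for p, s2 in P[s][a])
                                   for a in ACTIONS)
             for s in STATES}
    print(U)   # roughly {'s0': 8.78, 's1': 10.0}

The greedy policy with respect to the converged utilities is optimal: here it
chooses "go" in s0 and "stay" in s1.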
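Finally, for learning, a sketch of gradient descent on squared error for a
single linear unit, the one-unit special case of backprop. The three data
points and the learning rate are invented.

    data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]   # (x, y) pairs, y = 2x + 1
    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(2000):
        # gradient of 0.5 * sum (w*x + b - y)^2 with respect to w and b
        gw = sum((w * x + b - y) * x for x, y in data)
        gb = sum((w * x + b - y) for x, y in data)
        w, b = w - lr * gw, b - lr * gb           # step downhill
    print(w, b)   # approaches w = 2, b = 1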
Good luck!