COS 323 - Computing for the Physical and Social Sciences
Fall 2013
Exam 1 Study Guide
Thursday, Oct. 24
The exam will be held in class on the 24th. If you absolutely cannot make it,
please discuss the situation with your academic advisor or dean, and
contact Prof. Rusinkiewicz to arrange to take the make-up exam.
No books, notes, or electronic devices may be used during the exam.
Topics covered:
General
- The focus will be on the content of the lectures and assignments.
  The textbook contains more advanced material that, while important, will
  not be covered on the exam.
- Most questions will be short-answer.
Number representation, accuracy, precision
- Understand components of floating-point representation (mantissa, exponent)
- Understand relative vs. absolute accuracy
For each of bisection, secant, false position, and Newton-Raphson:
- When can you (attempt to) use it?
- When is it guaranteed to converge?
- How quickly does error decrease?
- Be able to trace through 1 or 2 iterations by hand (in 1D)
For each of Golden Section Search, Newton, Steepest Descent, Conjugate
Gradient, Nelder-Mead Simplex, and Simulated Annealing:
- When can you (attempt to) use it?
- When is it guaranteed to converge?
- Which methods are preferred under which circumstances?
- Be able to trace through 1 or 2 iterations of GSS and Newton by hand (in 1D)
For each of Gaussian Elimination, LU, Cholesky, forward/back substitution,
and tridiagonal solvers:
- What is the running time? (Big-O notation)
- When can you use it and when will it work?
- Which methods are preferred under which circumstances?
Describe partial and full pivoting, and why they are necessary
Sparse matrix representation (lecture 6)
- Be able to explain the compressed representation
- Be able to work through a matrix/vector multiply by hand
- Explain how to use conjugate gradients to solve sparse linear systems
Least squares
- Why is least-squares fitting used? Why might it be a bad idea to use it?
- Understand (and be able to derive) normal equations
- Describe benefits and drawbacks of Cholesky vs QR vs SVD for linear
least squares
- Describe the benefits and drawbacks of transforming nonlinear least-squares
problems to linear
Gauss-Newton and Levenberg-Marquardt:
- When can/should they be used?
- Describe why Levenberg-Marquardt might be more stable than Gauss-Newton
Robust regression:
- Explain why outliers are a problem for least squares
- Describe M-Estimation, Iteratively Reweighted Least Squares, RANSAC
SVD:
- Understand dimensions and roles of U, W (Σ in the book), and V in SVD
- Explain how to detect underconstrained least-squares problems with SVD
- Explain how to use SVD for dimensionality reduction