The objective of this course is to introduce students to the fundamentals of numerical computations. The course focuses on numerical methods for nonlinear equations, optimization, interpolation and approximation, differentiation and integration, ordinary differential equations, boundary-value problems, and Fourier transform.
Required Textbook
- Michael T. Heath, Scientific Computing: An Introductory Survey, Revised 2nd Edition, SIAM, 2018. Chapters 1, 5--12.
Supplementary Materials
- Gilbert Strang, Computational Science and Engineering, Wellesley-Cambridge Press, 2007. Chapters 3 & 4.
- A. Quarteroni, R. Sacco, F. Saleri, Numerical Mathematics, Texts in Applied Mathematics, Vol. 37, Springer, 2007.
- Randall J. LeVeque, Finite Difference Methods for Ordinary and Partial Differential Equations: Steady-State and Time-Dependent Problems, SIAM, 2007.
Prerequisite/Co-requisite
- Prior knowledge of linear algebra and calculus (at the level of AMS 510).
- Basic skills in UNIX systems and programming.
Learning Objectives
The objective of this course is to introduce the fundamentals of numerical computations. The course focuses on numerical methods for nonlinear equations, optimization, interpolation and approximation, differentiation and integration, ordinary differential equations, and boundary-value problems. Key learning outcomes include the following:
- Build understanding of the fundamentals of numerical approximation
- classification of sources of errors
- effect of floating-point arithmetic
- accuracy and stability
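A minimal sketch of the floating-point effects listed above (this example is illustrative and not part of the syllabus): computing machine epsilon for IEEE 754 double precision and observing cancellation error.

```python
# Machine epsilon: the smallest eps such that 1.0 + eps != 1.0
# in IEEE 754 double precision (eps = 2^-52).
eps = 1.0
while 1.0 + eps / 2.0 != 1.0:
    eps /= 2.0
print(eps)  # 2.220446049250313e-16

# Catastrophic cancellation: subtracting nearly equal numbers
# loses significant digits, so the relative error of the result
# is far larger than eps.
x = 1e-8
naive = (1.0 + x) - 1.0
print(abs(naive - x) / x)  # relative error many orders above eps
```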
- Master concepts and numerical methods for solving nonlinear equations
- methods for nonlinear equations in 1-D: interval bisection method, fixed-point iteration, Newton's method, secant method
- methods for nonlinear equations in n-D: Newton's method, Newton-like methods
- sensitivity, convergence rates, and stopping criteria
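Two of the 1-D methods above can be sketched as follows (the test problem f(x) = x^2 - 2 is an assumed example, not prescribed by the course):

```python
import math

def bisection(f, a, b, tol=1e-12):
    """Interval bisection: halve [a, b] while keeping a sign change."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m            # root is in [a, m]
        else:
            a, fa = m, f(m)  # root is in [m, b]
    return 0.5 * (a + b)

def newton(f, df, x0, tol=1e-12, maxit=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(maxit):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:  # stopping criterion on the step size
            break
    return x

f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
print(bisection(f, 1.0, 2.0))  # converges linearly to sqrt(2)
print(newton(f, df, 1.0))      # converges quadratically to sqrt(2)
```

Bisection gains one bit of accuracy per iteration, while Newton's method roughly doubles the number of correct digits per iteration near the root.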
- Build fundamental understanding of concepts and numerical methods for optimization
- unconstrained vs. constrained optimization, global vs. local minimum, convexity, optimality conditions
- algorithms for unconstrained optimization in 1-D and n-D: golden section search, Newton's method, Quasi-Newton methods, steepest descent, and conjugate gradient
- algorithms for constrained optimization: Lagrange multiplier
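Golden section search, the first of the unconstrained 1-D algorithms listed, can be sketched as follows (the quadratic test function is an assumed example):

```python
import math

def golden_section(f, a, b, tol=1e-8):
    """Minimize a unimodal f on [a, b], shrinking the bracket by
    the inverse golden ratio each iteration."""
    r = (math.sqrt(5.0) - 1.0) / 2.0   # ~0.618
    x1 = b - r * (b - a)
    x2 = a + r * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 > f2:          # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
        else:                # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
    return 0.5 * (a + b)

# f(x) = (x - 1)^2 + 3 is unimodal with minimum at x = 1.
print(golden_section(lambda x: (x - 1.0) ** 2 + 3.0, 0.0, 2.0))
```

Only one new function evaluation is needed per iteration, because the golden ratio lets one interior point be reused when the bracket shrinks.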
- Build fundamental understanding of interpolation and approximation
- interpolation versus approximation, basis functions, convergence, Taylor polynomial
- polynomial interpolation, piecewise polynomial interpolation, orthogonal polynomial interpolation, least squares approximations
- trigonometric interpolation
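Polynomial interpolation, the first topic above, can be sketched via the Lagrange basis (the cubic test function and nodes are assumed for illustration):

```python
def lagrange_eval(xs, ys, t):
    """Evaluate at t the unique polynomial of degree < len(xs)
    passing through the points (xs[i], ys[i]), in Lagrange form."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        li = 1.0                     # Lagrange basis polynomial l_i(t)
        for j in range(n):
            if j != i:
                li *= (t - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * li
    return total

# Interpolating f(x) = x^3 at 4 nodes reproduces the cubic exactly.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x ** 3 for x in xs]
print(lagrange_eval(xs, ys, 1.5))  # 3.375 = 1.5^3
```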
- Master concepts and numerical methods for numerical integration and differentiation
- Newton-Cotes rules, Gaussian quadrature rules, change of interval
- derivation via the method of undetermined coefficients and orthogonal polynomials
- finite difference approximation, forward difference, backward difference, and centered difference
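The simplest instances of the rules above can be sketched as follows (the composite trapezoid rule is the lowest-order closed Newton-Cotes rule; the test integrand sin(x) is an assumed example):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals of width h;
    global error is O(h^2)."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return h * s

def centered_diff(f, x, h=1e-5):
    """Centered difference: O(h^2) approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Integral of sin on [0, pi] is 2; derivative of sin at 0 is 1.
print(trapezoid(math.sin, 0.0, math.pi, 1000))  # close to 2
print(centered_diff(math.sin, 0.0))             # close to 1
```

Note the trade-off in the step size h for differentiation: truncation error shrinks as h decreases, but rounding error (cancellation in the numerator) grows.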
- Master basic numerical methods for initial-value and boundary-value problems
- stability of solutions of ODEs; global error vs. local error; stiffness; explicit vs. implicit methods; analysis of stability
- basic algorithms/schemes and their derivations: Euler's methods (forward and backward); trapezoid method; Heun's method; fourth-order Runge-Kutta method
- finite-difference methods and finite element methods
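Two of the schemes listed above, forward Euler and the classical fourth-order Runge-Kutta method, can be sketched on an assumed test problem y' = -y, y(0) = 1, whose exact solution is y(t) = exp(-t):

```python
import math

def euler(f, t0, y0, h, n):
    """Forward (explicit) Euler: y_{k+1} = y_k + h f(t_k, y_k).
    Global error is O(h)."""
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def rk4(f, t0, y0, h, n):
    """Classical fourth-order Runge-Kutta; global error is O(h^4)."""
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += h
    return y

f = lambda t, y: -y
exact = math.exp(-1.0)
# Integrate to t = 1 with step h = 0.01: Euler's global error is
# O(h), while RK4's O(h^4) error is many orders of magnitude smaller.
print(abs(euler(f, 0.0, 1.0, 0.01, 100) - exact))
print(abs(rk4(f, 0.0, 1.0, 0.01, 100) - exact))
```

Both methods here are explicit; for stiff problems the syllabus's implicit schemes (backward Euler, trapezoid method) are preferred for their superior stability.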
- Demonstrate programming skills for numerical methods