ENSAE Paris - Engineering school for economics, data science, finance, and actuarial science

Optimization

Teacher

DALALYAN Arnak

Department: Statistics

Objective

Course Description

Optimization is the mathematical discipline that studies how to choose the values of variables, possibly subject to constraints, so as to minimize or maximize a criterion function. This course presents the fundamental concepts, methods, and main algorithms of finite-dimensional optimization.

Optimization relies heavily on convex analysis. The course therefore also introduces the essential notions and proves important results from this branch of mathematics, which lies at the intersection of linear algebra and nonlinear analysis.

Learning Objectives

By the end of this course, students will be able to:

1. Calculate gradients and Hessians of functions of several variables
2. Determine the extremal points of a function of several variables and identify their nature using the Hessian
3. Establish and solve the Karush-Kuhn-Tucker (KKT) conditions for constrained optimization problems
4. Formulate the dual problem of a differentiable convex optimization problem and use it to solve the KKT conditions
5. Build a gradient descent algorithm to solve an optimization problem
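As an illustration of the last objective, here is a minimal gradient-descent sketch with a constant step size; this is an illustrative example, not course material, and the test function and step size are chosen for demonstration only.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=1000):
    """Minimize a differentiable function given its gradient.

    grad: callable returning the gradient at a point x
    x0: starting point
    lr: constant step size (must be small enough for convergence)
    Stops when the gradient norm falls below tol.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - lr * g  # step in the direction of steepest descent
    return x

# Example: minimize f(x) = (x1 - 1)^2 + 2*(x2 + 3)^2, whose minimizer is (1, -3)
grad_f = lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] + 3)])
x_star = gradient_descent(grad_f, x0=[0.0, 0.0])
```

For this strongly convex quadratic, any step size below 2/L (with L the largest eigenvalue of the Hessian, here 4) guarantees convergence, which is the kind of result established in lectures 6 and 7.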

Assessment

- 2/3 of the final grade will be based on the written exam at the end of the semester;
- 1/3 will come from continuous assessment, comprising a midterm exam grade (50%), an attendance grade (25%), and a participation grade (25%).

Planning

Course Structure

  1. Typology of optimization problems
  2. Gradient and Hessian. Convexity of differentiable functions
  3. First- and second-order necessary conditions
  4. First- and second-order sufficient conditions
  5. Golden section method and bisection algorithm
  6. Gradient descent: convergence for convex functions
  7. Gradient descent: convergence for strongly convex functions
  8. Newton's method
  9. Optimization under equality constraints: Lagrange multipliers
  10. Lagrange's Theorem. Applications.
  11. Proof of Lagrange's Theorem. Second-order conditions.
  12. Optimization under inequality constraints: Karush-Kuhn-Tucker Theorem
  13. General methodology for solving an optimization problem
  14. Duality
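To make topic 5 concrete, the following is a minimal sketch of golden-section search for a one-dimensional unimodal function on an interval [a, b]; the example function and tolerance are illustrative assumptions, not course material.

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """Locate the minimizer of a unimodal f on [a, b] by shrinking
    the bracketing interval with the golden ratio at each step."""
    invphi = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618
    c = b - invphi * (b - a)         # interior points c < d
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            # minimizer lies in [a, d]; reuse c as the new d
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            # minimizer lies in [c, b]; reuse d as the new c
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# Example: minimize (x - 2)^2 on [0, 5]
x_min = golden_section_min(lambda x: (x - 2) ** 2, 0.0, 5.0)
```

Each iteration shrinks the interval by the same factor 1/phi, so the method converges linearly while evaluating f only once per iteration.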

References

  • S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
  • A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization, SIAM, 2001.
  • E. K. P. Chong and S. H. Zak, An Introduction to Optimization, 4th edition, Wiley Series in Discrete Mathematics and Optimization, John Wiley & Sons, 2013.
  • J. Nocedal and S. J. Wright, Numerical Optimization, Springer, 2006.
  • F. S. Hillier and G. J. Lieberman, Introduction to Operations Research, 10th edition, McGraw-Hill Education, 2015.