Optimization and Systems Theory
SF2852 Optimal Control, 2018, 7.5hp.
Some material from the lectures can be found on this course home page.
Examiner and lecturer:
phone: 790 8440
phone: 790 6294.
Optimal control is the problem of
determining the control function for a dynamical system to minimize a
performance index. The subject has its roots in the calculus of
variations, but it evolved into an independent branch of applied
mathematics and engineering in the 1950s. The rapid development of the
subject during this period was due to two factors. The first was a pair of
key innovations: the maximum principle of L. S. Pontryagin and
the dynamic programming principle of R. Bellman. The second was the
space race and the introduction of the digital computer, which led to
the development of numerical algorithms for the solution of optimal
control problems. The field of optimal control is still very active
and it continues to find new applications in diverse areas such as
robotics, finance, economics, and biology.
The goal of the course is to provide an understanding of the main
results in optimal control and how they are used in various
applications in engineering, economics, logistics, and biology. After
the course you should be able to
- describe how the dynamic programming principle (DynP) works and apply it to discrete optimal control problems over finite and infinite time horizons,
- use continuous-time dynamic programming and the associated
Hamilton-Jacobi-Bellman equation to solve linear quadratic control
problems,
- use the Pontryagin Minimum Principle (PMP) to solve optimal
control problems with control and state constraints,
- use Model Predictive Control (MPC) to solve optimal control
problems with control and state constraints. You should also be able
to understand the difference between explicit and implicit MPC
and explain their respective advantages,
- formulate optimal control problems in standard form from
specifications on dynamics, constraints, and control objective. You
should also be able to explain how various control objectives affect
the optimal performance,
- explain the principles behind the most common algorithms for
numerical solution of optimal control problems and use Matlab to solve
fairly simple but realistic problems.
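To give a feel for the first learning objective, here is a minimal sketch of the backward dynamic-programming recursion J_k(x) = min_u [ g(x,u) + J_{k+1}(f(x,u)) ]. The course itself uses Matlab; this sketch is in Python, and the dynamics, grid, and costs below are invented purely for illustration.

```python
import numpy as np

# Hypothetical discrete optimal control problem on a small state grid:
# dynamics x_{k+1} = f(x_k, u_k), stage cost g, terminal cost phi.
N = 5                      # horizon length
states = np.arange(-3, 4)  # x in {-3, ..., 3}
controls = [-1, 0, 1]      # admissible controls

def f(x, u):               # dynamics, clipped to stay on the grid
    return int(np.clip(x + u, -3, 3))

def g(x, u):               # stage cost: penalize state and control effort
    return x**2 + u**2

def phi(x):                # terminal cost
    return 10 * x**2

# Backward recursion: J[k][x] is the cost-to-go, policy[k][x] the optimal u.
J = {N: {int(x): phi(x) for x in states}}
policy = {}
for k in range(N - 1, -1, -1):
    J[k], policy[k] = {}, {}
    for x in states:
        best_u = min(controls, key=lambda u: g(x, u) + J[k + 1][f(x, u)])
        policy[k][int(x)] = best_u
        J[k][int(x)] = g(x, best_u) + J[k + 1][f(x, best_u)]

print("J_0(3) =", J[0][3], " optimal first control from x=3:", policy[0][3])
```

Note that the recursion returns a feedback policy (an optimal control for every state at every stage), not just a single open-loop trajectory; this is exactly the feedback-versus-open-loop distinction discussed in the course.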
For the highest grade you should be able to integrate the tools you have learnt during the course and apply them to more complex problems. In particular you should be able to
- explain how PMP and DynP relate to each other and know their
respective advantages and disadvantages. In particular, you should be
able to describe the difference between feedback control and open-loop
control, and you should be able to compare PMP and DynP with
respect to computational complexity.
- explain the mathematical methods used to derive the results and
combine them to derive the solution to variations of the problems
studied in the course.
Dynamic Programming: Discrete dynamic programming, principle of optimality,
Hamilton-Jacobi-Bellman equation, verification theorem.
Pontryagin Minimum Principle: Several versions of the
Pontryagin Minimum Principle (PMP) will be discussed.
Infinite Horizon Optimal Control: Optimal control over an
infinite time horizon, stability, LQ optimal control.
Model Predictive Control: Explicit and implicit model predictive control.
Applications: Examples from economics, logistics,
aeronautics, and robotics will be discussed.
Computational Algorithms: The most common methods for numerical
solution of optimal control problems are presented.
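To illustrate the infinite-horizon LQ topic above, the following sketch computes the stationary cost-to-go matrix P (so that the optimal cost is x'Px) by iterating the discrete-time Riccati recursion to a fixed point. It is written in Python rather than the Matlab used in the course, and the double-integrator system below is an assumed example, not taken from the course material.

```python
import numpy as np

# Infinite-horizon discrete-time LQ: minimize sum_k (x_k'Qx_k + u_k'Ru_k)
# subject to x_{k+1} = A x_k + B u_k. Value iteration on the Riccati map:
#   P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])    # double integrator (assumed example)
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = Q.copy()
for _ in range(1000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
    P_new = Q + A.T @ P @ A - A.T @ P @ B @ K
    if np.allclose(P_new, P):
        break
    P = P_new

# Optimal feedback law u = -K x; the closed loop A - BK should be stable.
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print("gain K =", K)
print("closed-loop spectral radius =",
      max(abs(np.linalg.eigvals(A - B @ K))))
```

The fixed point P solves the discrete algebraic Riccati equation, and the resulting gain K gives a stabilizing stationary feedback law, which connects this topic to the stability discussion in the lectures.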
The required course material consists of the following lecture and
exercise notes on sale at Kårbokhandeln.
- Ulf Jönsson et al., Optimal Control, Lecture Notes, KTH.
- Peter Ögren et al., Exercise Notes on Optimal Control, KTH.
Supplementary material will
be handed out during the course.
The student is required to have passed the optimization course SF1841
or a course with similar content. The student should hence be familiar
with concepts and theory of optimization: linear, quadratic, and
nonlinear optimization; optimality conditions, Lagrangian relaxation,
and duality theory. Familiarity with systems theory and state-space
models is not required but is recommended.
The course requirements consist of an obligatory final written
examination. There are also three optional homework sets that we
strongly encourage you to do. The homework sets give you bonus credits
in the examination.
PhD course SF3852
It is possible to take this course as a PhD-level course. For this, an
extra project and at least grade B on the exam are required. If you are
interested in this option, email me that you would like to take the course and include a project that you are interested in working on
(at the latest April 17).
Homework set 0: This homework set provides some review of systems theory and optimization, as well as a Matlab exercise that uses the CVX toolbox. I recommend that everyone does problem 2. Homework set 0 does not give bonus points for the exam; however, you can get feedback on your solutions if you hand it in before the deadline.
Each of homework sets 1-3 consists of three to five problems. The first two or three problems are methodology problems where you practice the topics of the course and apply them to examples. Of the last two problems, one is of a more theoretical nature and helps you understand the mathematics behind the course. It can, for example, be to derive an extension of a result in the course or to provide an alternative proof of a result in the course. The other focuses on implementation, and the student is required to write a Matlab program that solves a problem numerically.
Each successfully completed homework set gives you at most 2 bonus points for the exam. The bonus is only valid during the year in which it is acquired. The exact requirements will be posted on each separate homework set. The homework sets will be posted on the homepage roughly two weeks before the deadline. You may email your solutions to the homework. If you choose to do so, the solutions should be submitted as one PDF prepared in LaTeX or comparable software.
- Homework 0: This homework set covers some basic systems theory and optimization. (Due September 4, at 10.14).
Here is the first homework set:
- Homework 1: This homework set covers problems on discrete dynamic programming and model predictive control. (Due on September 13, at 08.14).
Here are some Matlab routines that are used in the exercise notes. You may use them in the solution of your homework.
You may use the Beta Mathematics Handbook and the following formula sheet.
The exam will consist of five problems giving at most 50
points. These problems will be similar to those in the homework
assignments and the tutorial exercises. The preliminary grade levels
are distributed according to the following rule, where the total score
is the sum of your exam score and at most 6 bonus points from the
homework assignments (max credit is 56 points). These grade limits can
only be modified to your advantage.
The grade FX means that you are allowed to take a complementary examination, see below.
| Total credit (points) || Grade |
| 45-56 || A |
| 39-44 || B |
| 33-38 || C |
| 28-32 || D |
| 25-27 || E |
| 23-24 || FX |
You need to register for the exam.
Information on how to register for the exam can be found
If your total score (exam score + at most 6 bonus points from the
homework assignments and the computational exercises) is in the range 23-24
points, then you are allowed to take a complementary examination for
grade E. In the complementary examination you will be asked to solve
two problems on your own. The solutions should be handed in to the
examiner in written form and you must be able to defend your solutions
in an oral examination. Contact the examiner no later than three weeks
after the final exam if you want to do a complementary exam.
At the end of the course you will be asked to complete a course
evaluation form online.
Schedule for 2018
| Type || Date || Time || Room || Topic |
| L1 || 2018-08-27 || 13 || E51 || Introduction, Discrete dynamic programming |
| L2 || 2018-08-28 || 10 || E51 || Discrete dynamic programming |
| E1 || 2018-08-29 || 13 || V34 || Discrete dynamic programming |
| L3 || 2018-08-30 || 8 || E31 || Discrete dynamic programming, Infinite time horizon |
| L4 || 2018-09-03 || 13 || E51 || Model predictive control |
| E2 || 2018-09-04 || 10 || Q34 || Model predictive control |
| L5 || 2018-09-05 || 13 || E51 || Dynamic programming |
| E3 || 2018-09-06 || 8 || E51 || Dynamic programming |
| L6 || 2018-09-10 || 13 || Q33 || Dynamic programming |
| L7 || 2018-09-11 || 10 || Q36 || Mathematical preliminaries (ODE theory etc.) |
| L8 || 2018-09-12 || 13 || Q36 || Pontryagin's minimum principle (PMP) (using small variations) |
| E4 || 2018-09-13 || 8 || E51 || PMP I |
| L9 || 2018-09-17 || 13 || D34 || PMP (control constraints) |
| L10 || 2018-09-18 || 10 || Q36 || PMP (optimal control to a manifold) |
| E5 || 2018-09-19 || 13 || E51 || PMP II: Time optimal control |
| L11 || 2018-09-24 || 13 || E51 || PMP (generalizations) |
| E6 || 2018-09-25 || 10 || V33 || PMP III |
| L12 || 2018-09-27 || 8 || E51 || PMP applications |
| L13 || 2018-10-01 || 13 || E51 || Topics: Infinite time horizon |
| E7 || 2018-10-02 || 10 || Q33 || PMP IV |
| L14 || 2018-10-03 || 13 || E51 || Computational methods (seminar) |
| L15 || 2018-10-09 || 10 || E51 || Topics: Infinite time horizon |
| E8 || 2018-10-10 || 13 || V22 || Infinite time horizon optimal control and review: old exams |
| L16 || 2018-10-11 || 8 || Q21 || Review |
| Exam || 2018-10-26 || 8 || || Exam |
Last year's exams can be found here:
exam and solutions