## 5B1873 Optimal Control, 5 p, spring 2007

### Examiner and lecturer

Ulf Jönsson (ulfj@math.kth.se), room 3711, Lindstedtsv. 25, phone 790 84 50.

### Tutorial exercises

Enrico Avventi (avventi@kth.se), room 3726, Lindstedtsv. 25, phone 790 72 20.

### Introduction

Optimal control is the problem of determining the control function for a dynamical system that minimizes a performance index. The subject has its roots in the calculus of variations, but it evolved into an independent branch of applied mathematics and engineering in the 1950s. The rapid development of the subject during this period was due to two factors. The first was a pair of key innovations, namely the maximum principle of L. S. Pontryagin and the dynamic programming principle of R. Bellman. The second was the space race and the introduction of the digital computer, which led to the development of numerical algorithms for the solution of optimal control problems. The field of optimal control is still very active, and it continues to find new applications in diverse areas such as robotics, finance, economics, and biology.

### Course goal

The goal of the course is to provide an understanding of the main results in optimal control and how they are used in various applications in engineering, economics, logistics, and biology. After the course you should be able to
• describe how the dynamic programming principle (DynP) works and apply it to discrete optimal control problems over finite and infinite time horizons,
• use continuous-time dynamic programming and the associated Hamilton-Jacobi-Bellman equation to solve linear quadratic control problems,
• use the Pontryagin Minimum Principle (PMP) to solve optimal control problems with control and state constraints,
• use Model Predictive Control (MPC) to solve optimal control problems with control and state constraints, understand the difference between explicit and implicit MPC, and explain their respective advantages,
• formulate optimal control problems in standard form from specifications on dynamics, constraints, and control objective, and explain how various control objectives affect the optimal performance,
• explain the principles behind the standard algorithms for numerical solution of optimal control problems and use Matlab to solve fairly simple but realistic problems.
For the highest grade you should be able to integrate the tools you have learned during the course and apply them to more complex problems. In particular you should be able to
• explain how PMP and DynP relate to each other and know their respective advantages and disadvantages; in particular, you should be able to describe the difference between feedback control and open-loop control, and to compare PMP and DynP with respect to computational complexity,
• explain the mathematical methods used to derive the results and combine them to derive solutions to variations of the problems studied in the course.
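As a small taste of the first learning goal, the sketch below shows backward dynamic programming on a finite-horizon, finite-state problem. It is not course material: the dynamics, costs, and horizon are made up for illustration, and it is written in Python rather than the Matlab used in the course.

```python
# Backward dynamic programming (Bellman recursion) for a toy
# discrete-time problem: x_{k+1} = clip(x_k + u_k), stage cost
# x^2 + u^2, terminal cost x^2. All numbers are illustrative.
N = 3                      # horizon length
states = range(5)          # x in {0, ..., 4}
controls = [-1, 0, 1]      # admissible controls u

def f(x, u):               # dynamics, clipped to the state space
    return min(max(x + u, 0), 4)

def g(x, u):               # stage cost
    return x**2 + u**2

J = {x: x**2 for x in states}   # terminal cost-to-go J_N(x)
policy = []                      # optimal feedback law mu_k(x)
for k in reversed(range(N)):     # Bellman backward recursion
    Jk, muk = {}, {}
    for x in states:
        # minimize stage cost plus cost-to-go of the successor state
        Jk[x], muk[x] = min((g(x, u) + J[f(x, u)], u) for u in controls)
    J, policy = Jk, [muk] + policy

print(J[2], policy[0][2])   # optimal cost and first control from x0 = 2
```

The recursion produces both the optimal cost-to-go and a feedback policy, which is exactly the feedback-versus-open-loop distinction contrasted with PMP above.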

### Course contents

• **Dynamic programming:** discrete dynamic programming, the principle of optimality, the Hamilton-Jacobi-Bellman equation, the verification theorem.
• **Pontryagin minimum principle:** several versions of the Pontryagin Minimum Principle (PMP) will be discussed.
• **Infinite-horizon optimal control:** optimal control over an infinite time horizon, stability, LQ optimal control.
• **Model predictive control:** explicit and implicit model predictive control.
• **Applications:** examples from economics, logistics, aeronautics, and robotics will be discussed.
• **Computational algorithms:** the most common methods for numerical solution of optimal control problems are presented.
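For the LQ topic listed above, the optimal controller is linear state feedback obtained from the Riccati recursion. Here is a minimal scalar sketch (illustrative numbers, not from the course materials; Python rather than the course's Matlab), where running the finite-horizon recursion for a long horizon approximates the infinite-horizon stationary solution:

```python
# Scalar discrete-time LQ regulator via the backward Riccati
# recursion. Dynamics x_{k+1} = a*x_k + b*u_k, stage cost
# q*x^2 + r*u^2. The values below are chosen for illustration.
a, b = 1.0, 1.0
q, r = 1.0, 1.0
P = 0.0                    # terminal cost weight P_N
K = 0.0                    # feedback gain, u_k = -K*x_k
for _ in range(100):       # long horizon ~ infinite-horizon limit
    K = a * b * P / (r + b * b * P)
    P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)

# At stationarity P solves P = q + a^2*P - (a*b*P)^2/(r + b^2*P);
# for these numbers that is P^2 = P + 1, the golden ratio.
print(round(P, 4), round(K, 4))
```

The same recursion in matrix form (with numpy) handles multivariable systems; the structure of the computation is unchanged.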

### Examination

Written exam, where you may use Beta Mathematics Handbook and the following formula sheet (ps) (pdf).

The next exam will take place on June 7, 2007 at 14:00-19:00 in room V32.

There will be three optional homework sets during the course. Together they give at most 3 bonus credits on the exam (each homework gives at most one bonus credit). The exam will consist of five problems that give a maximum of 25 credits (not including your bonus credits from the homework sets). The grading rule for the exam is that 12 credits guarantee grade 3, 17 credits guarantee grade 4, and 22 credits guarantee grade 5.

• Homework set 1 is due at 17:00 on February 5, 2007. Here is the file [pdf]. You may want to use the following Matlab code [Struktur.m].
• Homework set 2 is due at 17:00 on February 19, 2007. Here is the file [pdf]. You may use the Matlab code below for problem 2.
• Homework set 3 is due at 17:00 on March 1, 2007. Here is the file [pdf].

**Exams and solutions 2007**
• Exam March 8, 2007 (pdf)
• Solutions to the exam on March 8, 2007 (pdf)
• Solutions to the exam in 5B1872 (pdf)

### Matlab code

Here are some Matlab routines that are used in the exercise notes. You may use them when solving your homework assignments.

### Appeal

If your total score (exam score plus at most 3 bonus points from the homework assignments and the computational exercises) is 11-12 points, then you are allowed to take a complementary examination for grade 3. In the complementary examination you will be asked to solve two problems on your own. The solutions should be handed in to the examiner in written form, and you must be able to defend your solutions in an oral examination. Contact the examiner no later than three weeks after the final exam if you want to take a complementary exam.

### Course evaluation

At the end of the course you will be asked to complete a course evaluation form. The form will also be posted on the course homepage, and it can be handed in anonymously in the mailbox opposite the entrance of "studentexpeditionen" at Lindstedtsv. 25. We appreciate your candid feedback on lectures, tutorials, course materials, homework, and computer exercises; it helps us continuously improve the course.

### Preliminary schedule

| L/E | Day | Date | Time | Room | Topic |
|-----|-----|------|------|------|-------|
| L1 | Wed | 17/1 | 8-10 | E33 | Introduction |
| L2 | Thu | 18/1 | 15-17 | E36 | Discrete dynamic programming |
| L3 | Mon | 22/1 | 10-12 | E34 | Discrete dynamic programming |
| E1 | Tue | 23/1 | 15-17 | E33 | Discrete dynamic programming |
| L4 | Wed | 24/1 | 8-10 | E33 | Discrete PMP and Model Predictive Control |
| L5 | Thu | 25/1 | 15-17 | E53 | Model Predictive Control |
| L6 | Tue | 30/1 | 15-17 | E33 | Optimal control problems |
| L7 | Wed | 31/1 | 10-12 | L21 | Dynamic programming |
| L8 | Fri | 2/2 | 13-15 | M22 | Dynamic programming |
| L9 | Mon | 5/2 | 13-15 | E53 | Mathematical preliminaries (ODE theory etc.) |
| L10 | Tue | 6/2 | 15-17 | E33 | Pontryagin's minimum principle (PMP) (using small variations) |
| E2 | Wed | 7/2 | 15-17 | L22 | Dynamic programming (II) |
| L11 | Thu | 8/2 | 15-17 | L22 | PMP (control constraints) |
| L12 | Mon | 12/2 | 8-10 | E32 | PMP (optimal control to a manifold) |
| L13 | Tue | 13/2 | 15-17 | E33 | PMP (generalizations) |
| E3 | Wed | 14/2 | 15-17 | D32 | PMP I |
| E4 | Thu | 15/2 | 15-17 | E53 | PMP II: time optimal control |
| L14 | Mon | 19/2 | 13-15 | E53 | PMP (applications) |
| E5 | Tue | 20/2 | 15-17 | E33 | PMP III |
| E6 | Thu | 22/2 | 15-17 | E36 | PMP IV |
| L15 | Mon | 26/2 | 13-15 | E53 | Infinite time horizon optimal control |
| L16 | Tue | 27/2 | 15-17 | E53 | Sufficient conditions and trajectory tracking |
| E7 | Wed | 28/2 | 15-17 | E33 | Infinite time horizon optimal control |
| L17 | Thu | 1/3 | 15-17 | D41 | Computational algorithms |

### Welcome!

Further information is available at http://www.math.kth.se/optsyst/studinfo/5B1873/.