The main part of the thesis revolves around minimax formulations of MPC for uncertain constrained linear discrete-time systems. A minimax strategy in MPC means that the worst-case performance with respect to the uncertainties is optimized. Unfortunately, many minimax MPC formulations yield intractable optimization problems with exponential complexity.
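As an illustration, a prototypical open-loop minimax MPC problem of this kind can be stated, for a quadratic finite-horizon cost and a bounded disturbance set (the exact cost and uncertainty description vary between the formulations treated in the thesis), as
\[
\min_{u_{k|k},\dots,u_{k+N-1|k}} \;\; \max_{w_{k},\dots,w_{k+N-1} \in \mathcal{W}} \;\; \sum_{j=0}^{N-1} \left( x_{k+j|k}^{T} Q x_{k+j|k} + u_{k+j|k}^{T} R u_{k+j|k} \right)
\]
subject to the predicted dynamics $x_{k+j+1|k} = A x_{k+j|k} + B u_{k+j|k} + G w_{k+j}$ and to state and control constraints that are required to hold for all admissible disturbance realizations.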
Minimax algorithms for a number of uncertainty models are derived in the thesis. These include systems with bounded external additive disturbances, systems with uncertain gain, and systems described by linear fractional transformations. The central theme in the different algorithms is semidefinite relaxation: the minimax problems are rewritten as uncertain semidefinite programs and then conservatively approximated using results from robust optimization theory. The result is an optimization problem with polynomial complexity.
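Schematically, and with $Y(u,w)$ used here only as a placeholder for the stacked and weighted predicted states and controls, the relaxation introduces an epigraph variable $t$ for the worst-case cost and applies a Schur complement to obtain a semidefinite constraint that is affine in the uncertainty,
\[
\min_{u,\,t} \; t \quad \text{subject to} \quad
\begin{bmatrix} t & Y(u,w)^{T} \\ Y(u,w) & I \end{bmatrix} \succeq 0 \quad \forall\, w \in \mathcal{W}.
\]
The robust counterpart of this uncertain semidefinite program, obtained with standard tools from robust optimization (for instance S-procedure type arguments, depending on the uncertainty model), is the conservative but polynomially solvable approximation referred to above.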
The use of semidefinite relaxations gives a framework that readily admits extensions of the basic algorithms, such as joint minimax control and estimation, and convex-programming-based approximation of closed-loop minimax MPC. Additional topics include the development of an efficient optimization algorithm to solve the resulting semidefinite programs and connections between deterministic minimax MPC and stochastic risk-sensitive control.
The remaining part of the thesis is devoted to stability issues in MPC for continuous-time nonlinear unconstrained systems. While stability of MPC for unconstrained linear systems is essentially solved by the linear quadratic controller, no such simple solution exists in the nonlinear case. It is shown how tools from modern nonlinear control theory can be used to synthesize finite-horizon MPC controllers with guaranteed stability, and, more importantly, how some of the technical assumptions in the literature can be dispensed with by using a slightly more complex controller.
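One standard construction of this type from the literature, stated here only as an illustration and not necessarily in the exact form used in the thesis, augments the finite-horizon cost with a terminal penalty $V$,
\[
\min_{u(\cdot)} \; \int_{t}^{t+T} \ell\bigl(x(\tau),u(\tau)\bigr)\, d\tau + V\bigl(x(t+T)\bigr),
\]
where $V$ is a control Lyapunov function satisfying
\[
\min_{u} \left( \frac{\partial V}{\partial x}\, f(x,u) + \ell(x,u) \right) \le 0.
\]
Under such a condition, the finite-horizon value function decreases along the closed-loop trajectories and can itself serve as a Lyapunov function, which is the mechanism behind stability guarantees of this kind.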