Optimal control is an alternative to PID controllers. Rather than computing the action as a function of the error, the action is obtained as the solution of an optimization problem. Optimal control consists of finding the **control actions that optimize a cost function**, often formulated as a performance measure, and relies on a model of the system.
For linear systems, there exists an analytical solution. For non-linear systems (e.g. when there is a constraint on the control action), the optimal solution can be approximated numerically. **Model-Predictive Control (MPC)** is a form of optimal control that allows a **trade-off between optimality and computation cost** by performing the optimization over a finite horizon.
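For the linear case, the analytical solution mentioned above is the classic Linear-Quadratic Regulator (LQR). As a minimal sketch (the double-integrator model and the weights `Q`, `R` below are illustrative assumptions, not taken from this text), the optimal gain can be computed by solving the algebraic Riccati equation:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical example: a double integrator (state = position, velocity).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([1.0, 0.1])   # state cost weights (assumed)
R = np.array([[0.01]])    # action cost weight (assumed)

# Solve the continuous-time algebraic Riccati equation.
P = solve_continuous_are(A, B, Q, R)

# Optimal state-feedback gain: u = -K x
K = np.linalg.inv(R) @ B.T @ P
print(K)
```

Unlike MPC, this gain is computed once offline and cannot account for constraints such as a bound on the action, which is exactly where the numerical approach below becomes useful.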
# Model-Predictive Control
## Principle
A Model-Predictive Controller consists of the following steps:
1. **Predict** the future evolution of the output, given the model and hypothetical control actions, over a finite horizon,
2. **Optimize** a cost function by selecting a sequence of actions,
3. **Apply the first action**, and
4. **Repeat** at each time step.
Applying only the first action and optimizing at each step has the benefits of a **feedback approach**: it can correct uncertainties in the predictions or the model, and compensate for some disturbances.
## Cost function
The usual formulation of the MPC cost function is designed to ensure reference tracking (the tracking error $y - y_{ref}$ should converge to zero) with reasonable effort (moderate actions $|u|$):