Stability Issues of Stochastic Optimal Control Strategies

Jan Rathouský and Jan Štecha

Keywords

Stochastic optimal control, Cautious control, Stability

Abstract

When dealing with uncertainty in a system description, the system parameters can be described as random variables. Controlling such systems then typically leads to stochastic optimal control strategies based on minimizing the mean value of a criterion. The cautious stochastic optimal control strategy is one such strategy, based on the assumption that the parameters are independent and identically distributed random variables. This assumption makes it easy to optimize the criterion via stochastic dynamic programming. Although the cautious strategy is not widely used by itself, several methods use it to design controllers with dual properties. This paper analyzes cautious control in terms of closed-loop stability and criterion convergence. Because the system parameters are random, stability can only be achieved with a certain probability. It is shown that cautious control is not suitable for nominally unstable systems, as it may fail to stabilize the nominal system, and the resulting stability probability is then lower than that of the certainty equivalent strategy. Qualitative results are derived for a first-order system, while higher-order systems are treated by Monte Carlo simulations.
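For intuition only (this is not the paper's code, and all numerical values are assumed), a minimal Monte Carlo sketch in Python of the stability probability of a scalar closed loop x_{k+1} = (a + b*K) x_k with independent Gaussian parameters a and b, comparing a certainty-equivalent gain with a one-step cautious gain:

# Illustrative sketch under assumed parameter distributions (not from the paper).
# Gains for the scalar system x_{k+1} = a*x_k + b*u_k with u_k = K*x_k:
#   certainty equivalent: K_ce = -mean_a / mean_b              (nominal pole at 0)
#   cautious (one step):  K_ca = -mean_a*mean_b / (mean_b**2 + var_b)
import numpy as np

rng = np.random.default_rng(0)

mean_a, var_a = 1.2, 0.1   # nominally unstable system (|mean_a| > 1), assumed values
mean_b, var_b = 1.0, 0.5   # large input-gain uncertainty, assumed values

K_ce = -mean_a / mean_b
K_ca = -mean_a * mean_b / (mean_b**2 + var_b)   # cautious gain is smaller in magnitude

n = 100_000
a = rng.normal(mean_a, np.sqrt(var_a), n)
b = rng.normal(mean_b, np.sqrt(var_b), n)

for name, K in [("certainty equivalent", K_ce), ("cautious", K_ca)]:
    # Closed loop is stable for a given parameter realization iff |a + b*K| < 1.
    p_stable = np.mean(np.abs(a + b * K) < 1.0)
    print(f"{name:21s} gain {K:+.3f}  P(stable) ~ {p_stable:.3f}")

With these assumed values the cautious gain does not stabilize the nominal system, so its estimated stability probability comes out lower than that of the certainty-equivalent gain, which is the qualitative effect the abstract describes.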
