Zhong-Ping Jiang and Yu Jiang
Adaptive dynamic programming, nonlinear systems, optimal control, input-to-state stability, small-gain theorem
This paper presents a new approach to the robust optimal control of nonlinear systems with parametric and dynamic uncertainties. The proposed method is novel and significant in several respects. First, by means of techniques from reinforcement learning and approximate/adaptive dynamic programming, we bypass the difficulty of exactly solving the Hamilton-Jacobi-Bellman (HJB) equation for nonlinear systems. Instead, a recursive learning scheme known as policy iteration is introduced and its convergence is examined in detail. Second, this paper proposes the first solution to computational optimal nonlinear control in the presence of parametric and dynamic uncertainties. Robustness to dynamic uncertainty is systematically studied using nonlinear small-gain theorems developed in earlier work of one of the authors. Finally, the main results are supported by rigorous stability analysis and validated by a practical application to a one-machine power system. It is important to note that the proposed methodology is general and has potential impact in other fields such as the smart electric grid and systems neuroscience.
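As a rough illustration of the policy-iteration step mentioned in the abstract, the sketch below (Python, with hypothetical matrices and parameter choices) applies Kleinman-style policy iteration to the linear-quadratic special case with known dynamics. This is not the authors' algorithm: the paper's adaptive dynamic programming scheme removes the need for exact model knowledge and addresses nonlinear systems with parametric and dynamic uncertainties, so the sketch is only meant to show the evaluate/improve structure of policy iteration.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def policy_iteration(A, B, Q, R, K0, iters=20):
    """Kleinman-style policy iteration for dx/dt = A x + B u with cost
    integral of (x'Q x + u'R u); K0 must be a stabilizing initial gain."""
    K = K0
    for _ in range(iters):
        # Policy evaluation: solve (A - B K)' P + P (A - B K) + Q + K' R K = 0.
        Acl = A - B @ K
        P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
        # Policy improvement: K <- R^{-1} B' P.
        K = np.linalg.solve(R, B.T @ P)
    return K, P

# Toy second-order example (hypothetical numbers; A is already Hurwitz,
# so the zero gain is a valid stabilizing initial policy).
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)
K, P = policy_iteration(A, B, Q, R, K0=np.zeros((1, 2)))
print("approximate optimal gain:", K)

Under these assumptions the iteration converges to the optimal LQR gain; the paper's contribution is, in part, to carry this kind of recursive improvement over to the uncertain nonlinear setting.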