The distributed optimal output containment control problem for multi-agent systems (MASs) is to coordinate a group of autonomous agents so that the outputs of all followers are driven into the convex hull spanned by the outputs of the leaders while system performance is optimized, a problem with numerous practical applications. In this paper, a fully distributed optimal containment tracking control protocol is established for heterogeneous MASs with unknown dynamics, active leaders, and external disturbances. First, a fully distributed observer is designed to ensure that each observer trajectory stays within the convex hull spanned by the active leaders, without requiring global network topology information. Subsequently, an augmented system is constructed from the dynamics of the followers and the observers to design an H∞ optimal containment control protocol. Then, a model-free recursive reinforcement learning (RRL) algorithm is devised to learn the optimal control protocol; it is proved that the weight iteration error converges asymptotically to zero and that the algorithm achieves a favorable convergence rate. Finally, the effectiveness of the proposed algorithm is validated on a heterogeneous nonlinear multi-agent model.
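To illustrate the containment idea behind the distributed observer, the following is a minimal sketch of a consensus-based containment observer, assuming static leaders and a simple continuous-time update integrated by Euler steps. The function name, the adjacency matrix `A`, and the leader pinning-gain matrix `G` are illustrative choices, not the paper's adaptive, fully distributed design.

```python
import numpy as np

def simulate_containment_observer(leaders, A, G, steps=2000, dt=0.01):
    """Drive each follower's observer state into the convex hull of the leaders.

    leaders: (m, d) array of (static) leader states.
    A:       (n, n) follower-follower adjacency matrix.
    G:       (n, m) leader pinning gains (which followers see which leaders).
    Returns the final (n, d) observer states.
    """
    n, d = A.shape[0], leaders.shape[1]
    zeta = np.random.default_rng(0).normal(size=(n, d)) * 5.0  # random start
    for _ in range(steps):
        dz = np.zeros_like(zeta)
        for i in range(n):
            dz[i] += A[i] @ (zeta - zeta[i])      # consensus with neighbor observers
            dz[i] += G[i] @ (leaders - zeta[i])   # attraction toward pinned leaders
        zeta += dt * dz                            # Euler integration step
    return zeta
```

With static leaders, the equilibrium of this update is a convex combination of the leader states for each follower, so the observer trajectories settle inside the leaders' convex hull; the paper's observer additionally handles active (moving) leaders and avoids global topology information.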