
Reinforcement Learning-Based Cooperative Optimal Output Regulation via Distributed Adaptive Internal Model



Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, 2022-10, Vol. 33 (10), p. 5229-5240
Main Authors: Gao, Weinan, Mynuddin, Mohammed, Wunsch, Donald C., Jiang, Zhong-Ping
Format: Article
Language: English
Summary: In this article, a data-driven distributed control method is proposed to solve the cooperative optimal output regulation problem of leader-follower multiagent systems. In contrast to traditional studies on cooperative output regulation, a distributed adaptive internal model is developed, which combines a distributed internal model with a distributed observer that estimates the leader's dynamics. Without relying on knowledge of the multiagent system dynamics, two reinforcement learning algorithms, policy iteration and value iteration, are proposed to learn the optimal controller from online input and state data together with the estimated values of the leader's state. By combining these methods, a general basis is established for connecting data-driven distributed control with adaptive dynamic programming, the theoretical foundations on which the proposed approach is built.
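
The summary above names policy iteration as one of the two reinforcement learning algorithms used to learn the optimal controller. As an illustration only, the sketch below shows classical model-based policy iteration (Kleinman's algorithm) for a single linear-quadratic regulator; the system matrices A, B and the weights Q, R are hypothetical and assumed known, whereas the paper's algorithms are data-driven, distributed, and built around a distributed adaptive internal model with estimates of the leader's state.

```python
# Minimal sketch of model-based policy iteration (Kleinman's algorithm) for a
# single-agent LQR problem. This is NOT the paper's data-driven distributed
# method; A, B, Q, R below are illustrative assumptions.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, -2.0]])   # open-loop stable, so K0 = 0 is stabilizing
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

K = np.zeros((1, 2))                        # initial stabilizing gain
for _ in range(20):
    Acl = A - B @ K                         # closed-loop matrix under the current policy
    # Policy evaluation: solve Acl^T P + P Acl + Q + K^T R K = 0 for P.
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B^T P.
    K_new = np.linalg.solve(R, B.T @ P)
    if np.linalg.norm(K_new - K) < 1e-10:
        K = K_new
        break
    K = K_new

# Cross-check against the algebraic Riccati equation solution.
P_star = solve_continuous_are(A, B, Q, R)
K_star = np.linalg.solve(R, B.T @ P_star)
print("PI gain:  ", K)
print("ARE gain: ", K_star)
```

Roughly speaking, each iteration evaluates the current gain by solving a Lyapunov equation and then improves it via K <- R^{-1} B^T P; data-driven variants of this scheme replace the model-based evaluation step with least-squares problems formed from online input and state data, which is the spirit of the algorithms described in the summary.
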
ISSN: 2162-237X
2162-2388
DOI: 10.1109/TNNLS.2021.3069728