Abstract:
To improve the control performance of wireless networked control systems (WNCSs) for fully cooperative multi-agent tasks in environments with limited radio resources, a Deep Reinforcement Learning (DRL)-based estimation-control-scheduling co-design method is proposed. The method tightly integrates state estimation, control, and radio resource scheduling to jointly optimize multi-agent decision making and resource allocation. It adopts a DRL policy with recurrent neural networks to capture the temporal dependencies between observations and states in the WNCS, enhancing adaptability in complex industrial environments while reducing reliance on accurate models of the system dynamics. Experimental results from multi-agent cooperative transportation tasks on the CoppeliaSim simulation platform demonstrate that, compared with existing decoupled design methods, the proposed approach improves the task success rate by 3.8% and reduces the task completion time by 7.6%.
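To make the recurrent co-design idea concrete, the sketch below shows one possible form of such a policy network: a GRU encodes the history of delayed or lossy observations, and two output heads jointly produce a control command and a radio-scheduling decision. This is only an illustrative assumption about the architecture; the class name RecurrentCoDesignActor, the GRU choice, and all dimensions are hypothetical and are not taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): a recurrent actor that maps
# a history of local observations to a joint control action and scheduling decision,
# assumed to be trained with a standard actor-critic DRL algorithm.
import torch
import torch.nn as nn


class RecurrentCoDesignActor(nn.Module):
    def __init__(self, obs_dim: int, ctrl_dim: int, n_links: int, hidden_dim: int = 128):
        super().__init__()
        # GRU captures temporal dependencies between observations and the plant state.
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        # Continuous control head (e.g., velocity commands of a transport robot).
        self.ctrl_head = nn.Linear(hidden_dim, ctrl_dim)
        # Discrete scheduling head: logits over which agent/link gets the radio resource.
        self.sched_head = nn.Linear(hidden_dim, n_links)

    def forward(self, obs_seq, h0=None):
        # obs_seq: (batch, time, obs_dim) window of past observations.
        out, h_n = self.gru(obs_seq, h0)
        last = out[:, -1]                         # hidden state after the latest observation
        ctrl = torch.tanh(self.ctrl_head(last))   # bounded control action
        sched_logits = self.sched_head(last)      # scheduling decision distribution
        return ctrl, sched_logits, h_n


# Usage: one forward pass over a 10-step observation window for 4 agents.
actor = RecurrentCoDesignActor(obs_dim=16, ctrl_dim=2, n_links=4)
ctrl, sched_logits, h = actor(torch.randn(4, 10, 16))
sched = torch.distributions.Categorical(logits=sched_logits).sample()
```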