HAN Jingtong, YU Qian, LIU Yuan. Application of Deep Reinforcement Learning-based Maritime Search and Rescue Coverage Path Planning Algorithm[J]. INFORMATION AND CONTROL, 2025, 54(4): 545-555. DOI: 10.13976/j.cnki.xk.2024.2122

Application of Deep Reinforcement Learning-based Maritime Search and Rescue Coverage Path Planning Algorithm

Abstract: Given that current maritime search and rescue (SAR) decision support systems still rely on traditional fixed search patterns, which are inefficient and lack adaptability, we propose a maritime SAR coverage path planning model based on deep reinforcement learning. First, we formulate the maritime SAR coverage path planning problem as a Markov decision process. Then, by integrating a double deep Q-network (DDQN), prioritized DDQN, distributional DQN, and noisy DQN, we design a coverage path planning algorithm tailored for a single rescue vessel. Finally, we validate the feasibility and effectiveness of the proposed algorithm through simulation experiments. Comparison results demonstrate that the proposed algorithm substantially outperforms existing methods in path planning quality and search efficiency.
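As a rough illustration of the first step named in the abstract, the sketch below casts grid-based coverage search as a Markov decision process: the state pairs the vessel's grid cell with a visit mask, the actions are eight headings, and the reward is the probability-of-containment mass newly swept minus a small step cost. This is a minimal sketch, not the authors' implementation; the class name, grid size, uniform containment map, and reward weights are placeholder assumptions.

```python
import numpy as np

class SARCoverageEnv:
    # Eight compass headings on the search grid (row offset, column offset).
    ACTIONS = [(-1, -1), (-1, 0), (-1, 1),
               ( 0, -1),          ( 0, 1),
               ( 1, -1), ( 1, 0), ( 1, 1)]

    def __init__(self, size=10, step_cost=0.01, max_steps=200, seed=0):
        self.size = size
        self.step_cost = step_cost
        self.max_steps = max_steps
        self.rng = np.random.default_rng(seed)
        # Assumed uniform probability-of-containment map over the search area.
        self.poc = np.full((size, size), 1.0 / size**2)
        self.reset()

    def reset(self):
        self.pos = (self.size // 2, self.size // 2)   # start at the grid centre
        self.covered = np.zeros((self.size, self.size), dtype=bool)
        self.covered[self.pos] = True
        self.steps = 0
        return self._obs()

    def _obs(self):
        # State: coverage mask plus a one-hot plane marking the vessel position.
        vessel = np.zeros((self.size, self.size), dtype=np.float32)
        vessel[self.pos] = 1.0
        return np.stack([self.covered.astype(np.float32), vessel])

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        r = min(max(self.pos[0] + dr, 0), self.size - 1)
        c = min(max(self.pos[1] + dc, 0), self.size - 1)
        self.pos = (r, c)
        reward = -self.step_cost
        if not self.covered[r, c]:
            reward += self.poc[r, c]                  # reward newly swept probability mass
            self.covered[r, c] = True
        self.steps += 1
        done = self.covered.all() or self.steps >= self.max_steps
        return self._obs(), reward, done

# Example rollout with a random policy; a DQN-family agent would replace this loop.
env = SARCoverageEnv()
obs, done = env.reset(), False
while not done:
    obs, reward, done = env.step(int(env.rng.integers(len(env.ACTIONS))))
```

In this formulation, the value-based agent described in the abstract (DDQN with prioritized replay, distributional, and noisy extensions) would learn a Q-function over the stacked coverage-and-position observation.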
