Abstract:
Large-scale partially observable Markov decision processes (POMDPs) suffer from the exponential growth of the policy tree and the difficulty of finding witness points (WPs). Based on the piecewise linearity and convexity of the value function, a belief point-based algorithm is proposed for incremental policy tree pruning and value iteration. While policy trees are being generated, the algorithm uses boundary belief points for non-destructive pruning and intermediate belief points for destructive pruning. It also uses real-time belief states to compute an approximately optimal solution. Comparative experimental results show that the proposed algorithm converges quickly and achieves high reward in less time.
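The belief point-based pruning idea underlying the abstract can be illustrated with a short sketch. Because the value function is piecewise linear and convex, each policy tree corresponds to an alpha-vector, and a vector can be discarded if it is not maximal at any of the chosen belief points. The function name, the example vectors, and the belief set below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def prune_alpha_vectors(alphas, beliefs):
    """Keep only the alpha-vectors that attain the maximum value
    at at least one of the given belief points.

    alphas  : list of 1-D arrays, one alpha-vector per policy tree
    beliefs : list of 1-D arrays, each a probability distribution
              over states (boundary or intermediate belief points)
    """
    A = np.array(alphas)        # shape (n_vectors, n_states)
    keep = set()
    for b in beliefs:
        values = A @ b          # value of every vector at belief b
        keep.add(int(np.argmax(values)))
    return [alphas[i] for i in sorted(keep)]

# Two-state example: the vector (0.2, 0.2) is dominated everywhere
# on the belief simplex and should be pruned.
alphas = [np.array([1.0, 0.0]),
          np.array([0.0, 1.0]),
          np.array([0.2, 0.2])]
beliefs = [np.array([1.0, 0.0]),   # boundary (corner) belief
           np.array([0.0, 1.0]),   # boundary (corner) belief
           np.array([0.5, 0.5])]   # intermediate belief
pruned = prune_alpha_vectors(alphas, beliefs)
```

Pruning against a finite belief set in this way avoids the linear programs used in exact witness-point search, at the cost of only approximating the dominance test.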