To enable mobile robots to avoid obstacles efficiently in crowded, complex environments, an obstacle avoidance algorithm based on deep reinforcement learning for human-robot shared environments is proposed. First, to address the limited learning capability of the value network in deep reinforcement learning algorithms, the value function network is improved by modeling crowd interaction: crowd information is exchanged through an angle-based pedestrian grid. The temporal features of each pedestrian are then extracted by an attention mechanism, which learns the relative importance of historical trajectory states and their joint influence on the robot's obstacle avoidance policy, providing the input for a subsequent multilayer perceptron. Next, a reward function grounded in human spatial behavior is designed for reinforcement learning: states in which the robot's heading changes sharply are penalized, so that obstacle avoidance remains comfortable for nearby pedestrians. Finally, simulation experiments verify the feasibility and effectiveness of the proposed algorithm in crowded and complex environments.
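The comfort-oriented reward shaping mentioned above, which penalizes sharp heading changes, can be illustrated with a minimal sketch. All names, weights, and thresholds below are illustrative assumptions, not the paper's actual reward terms:

```python
import math

# Hypothetical sketch of a reward that penalizes sharp heading changes
# for pedestrian comfort. Weights and the angle threshold are assumed
# values for illustration only.

def comfort_reward(prev_heading, heading, reached_goal=False, collided=False,
                   angle_threshold=math.pi / 6, turn_penalty=-0.1,
                   goal_reward=1.0, collision_penalty=-0.25):
    """Return a shaped reward for one transition.

    prev_heading, heading -- robot heading in radians before/after the step.
    """
    if collided:
        return collision_penalty
    if reached_goal:
        return goal_reward
    # Wrap the heading change into [-pi, pi] before comparing to the threshold.
    delta = math.atan2(math.sin(heading - prev_heading),
                       math.cos(heading - prev_heading))
    # Penalize turns sharper than the comfort threshold; otherwise no shaping.
    return turn_penalty if abs(delta) > angle_threshold else 0.0
```

In a training loop, this term would be added at each step alongside the usual goal-progress and collision rewards, discouraging policies whose trajectories swing abruptly around pedestrians.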