Abstract:
Accurate recognition of the lower limb movement intentions of amputees is key to improving the human-machine interaction performance of lower limb prostheses and reducing the energy consumption of prosthetic users. We use a self-designed signal acquisition system to collect lower limb kinematic signals from transfemoral amputees and healthy subjects in seven different gait modes, comprising force myography (FMG) signals from the thigh residuum and signals such as leg angle and acceleration from a six-degree-of-freedom inertial measurement unit (IMU). The time-domain features of the two signal types are then combined using a feature-level fusion method, and machine learning methods are used to evaluate the accuracy of lower limb gait pattern classification under different fusion schemes. The results show that, across the three classification algorithms, the average classification accuracy of the fused FMG-IMU signals for amputees and healthy subjects increases by 4.7% and 9.5%, respectively, compared with models trained on FMG signals or kinematic signals alone, reaching a maximum average classification accuracy of 99.6%. These results indicate that a lower limb motion intention recognition scheme based on the fusion of FMG and IMU signals performs well and is expected to further expand research on FMG signals in the field of lower limb motion intention recognition. Furthermore, these findings provide theoretical support and practical guidance for the further improvement and optimization of lower limb prostheses.
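The feature-level fusion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the channel counts (4 FMG, 6 IMU) and the specific time-domain features (mean absolute value, root mean square, waveform length, zero crossings) are assumptions chosen because they are common in myography-based gait classification; the paper's actual feature set may differ.

```python
import numpy as np

def td_features(window):
    """Time-domain features of one channel window (assumed feature set):
    mean absolute value, root mean square, waveform length, zero crossings."""
    mav = np.mean(np.abs(window))
    rms = np.sqrt(np.mean(window ** 2))
    wl = np.sum(np.abs(np.diff(window)))           # total variation of the signal
    zc = int(np.sum(np.diff(np.sign(window)) != 0))  # sign-change count
    return np.array([mav, rms, wl, zc])

def fuse_fmg_imu(fmg_window, imu_window):
    """Feature-level fusion: extract per-channel time-domain features from the
    FMG and IMU windows, then concatenate them into one feature vector that a
    classifier (e.g. SVM, k-NN, random forest) can be trained on."""
    feats = [td_features(ch) for ch in fmg_window]
    feats += [td_features(ch) for ch in imu_window]
    return np.concatenate(feats)

# Example: one 200-sample window with hypothetical 4 FMG and 6 IMU channels.
rng = np.random.default_rng(0)
fused = fuse_fmg_imu(rng.standard_normal((4, 200)),
                     rng.standard_normal((6, 200)))
print(fused.shape)  # (40,) = (4 + 6) channels x 4 features each
```

The fused vectors for all labeled gait windows would then form the training set for each of the three classifiers compared in the study.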