Deep Learning Optimization Algorithm Based on High-Order Exponential Smoothing and Dynamic Bound Constraints

Abstract: To address the non-convergence problem of adaptive algorithms, we propose an improved Adam algorithm. By introducing additional hyperparameters and applying exponential smoothing multiple times, it compensates for the shortcomings of single exponential smoothing. In addition, we modify the computation of the second-order moment to prevent undesirable fluctuations in its estimates, thereby smoothing out unexpectedly large learning rates. Comparative experiments were conducted on the CIFAR-10 and CIFAR-100 datasets using the ResNet and DenseNet models. The experiments show that the proposed algorithm works across different model architectures and datasets; compared with Adam, its accuracy improves by 1.3% on average, and it provides an effective solution to the convergence problem.
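The two ideas in the abstract can be sketched as an optimizer update. This is a minimal illustrative sketch, not the paper's actual algorithm: the function name `smoothed_adam_step`, the extra smoothing coefficient `beta3`, and the use of a running maximum of the second moment (an AMSGrad-style bound, assumed here as one way to "prevent undesirable fluctuations") are all assumptions for illustration.

```python
import numpy as np

def smoothed_adam_step(p, g, state, lr=1e-3,
                       beta1=0.9, beta2=0.999, beta3=0.9, eps=1e-8):
    """One hypothetical update step for parameters p given gradient g.

    state = (m, s, v, v_max, t); the arrays start at zero and t at 0.
    """
    m, s, v, v_max, t = state
    t += 1
    m = beta1 * m + (1 - beta1) * g       # first exponential smoothing (Adam momentum)
    s = beta3 * s + (1 - beta3) * m       # second smoothing pass applied on top of m
    v = beta2 * v + (1 - beta2) * g * g   # second-order moment estimate
    # Bound v from below by its running maximum: v_max never decreases,
    # so the effective step lr / sqrt(v_max) cannot suddenly spike.
    v_max = np.maximum(v_max, v)
    p = p - lr * s / (np.sqrt(v_max) + eps)
    return p, (m, s, v, v_max, t)
```

Under this sketch, the double smoothing damps noisy gradient directions further than Adam's single pass, while the monotone `v_max` rules out the shrinking second-moment estimates that can cause unexpectedly large learning rates.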

     
