LIN Zejian, WANG Junkui, XIE Junming, LI Zhenni, XIE Shengli. Unbiased Sparse Regularization for Compression of Dual-strategy Structured Neural Networks[J]. INFORMATION AND CONTROL, 2023, 52(3): 313-325. DOI: 10.13976/j.cnki.xk.2023.2211


Unbiased Sparse Regularization for Compression of Dual-strategy Structured Neural Networks


Abstract: Although deep neural network models achieve outstanding performance, current networks suffer from enormous scale and highly redundant weights, and the regularizers commonly used for weight pruning introduce large estimation bias. We therefore propose a dual-strategy structured neural network compression method based on unbiased sparse regularization. First, treating the weights connected to each neuron as a group, we use a nonlinear Laplace function with small estimation bias to construct an inter-group unbiased structured sparse regularizer and an intra-group unbiased structured sparse regularizer, which sparsely constrain redundant neurons and the redundant weights of the remaining neurons, respectively, yielding a dual-strategy structured compression model with unbiased sparse regularization. Second, to solve the resulting network compression optimization problem, we use the proximal operator technique to obtain a closed-form solution for the unbiased sparse regularizer and design a back-propagation algorithm based on proximal gradient descent, achieving accurate structured compression of the network. Finally, experiments on the MNIST, FashionMNIST, and CIFAR-10 datasets show that the proposed method converges faster than current mainstream regularizers; at the same compression rate, it improves recognition accuracy by an average of 2.3% over existing methods, and at essentially the same recognition accuracy, it improves the compression rate by an average of 11.5%.
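The abstract does not give the exact form of the Laplace-based regularizer or its closed-form proximal operator, but a rough sketch of the dual-strategy idea can be written down. The Python snippet below is an illustrative assumption, not the paper's algorithm: it treats each row of a fully connected layer's weight matrix as one output-neuron group, applies an inter-group step that shrinks whole rows and an intra-group step that thresholds individual weights inside the surviving rows, and uses a Laplace-type weighting exp(-|·|/γ) so that large weights are barely penalized (the "unbiased" property). The names group_prox_step, lam_group, lam_elem, and gamma are hypothetical.

```python
import numpy as np

def group_prox_step(W, lam_group=1e-2, lam_elem=1e-3, lr=0.1, gamma=0.5):
    """One illustrative dual-strategy proximal step on a layer weight matrix W
    (rows = output neurons). The update forms below are assumptions, not the
    paper's exact closed-form proximal solution."""
    W = W.copy()
    # Inter-group strategy: shrink whole output-neuron groups (rows).
    # Laplace-type reweighting: near-zero groups are shrunk almost like L1,
    # while large groups receive nearly no shrinkage (small estimation bias).
    row_norms = np.linalg.norm(W, axis=1)
    shrink = lr * lam_group * np.exp(-row_norms / gamma)
    scale = np.clip(1.0 - shrink / np.maximum(row_norms, 1e-12), 0.0, None)
    W *= scale[:, None]
    # Intra-group strategy: threshold individual weights inside surviving rows.
    keep = scale > 0.0
    thresh = lr * lam_elem * np.exp(-np.abs(W[keep]) / gamma)
    W[keep] = np.sign(W[keep]) * np.maximum(np.abs(W[keep]) - thresh, 0.0)
    return W

# Usage: interleave such proximal steps with ordinary gradient updates during
# back-propagation, so training and structured pruning proceed together.
W = group_prox_step(np.random.randn(64, 128))
print("zeroed rows:", int(np.all(W == 0, axis=1).sum()),
      "| zero weights: %.2f%%" % (100 * (W == 0).mean()))
```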

     
