Abstract:
Although deep neural network models achieve outstanding performance, they suffer from enormous scale and substantial weight redundancy. In addition, existing regularizers used for network weight pruning introduce large estimation bias. We therefore propose a dual-strategy structured neural network compression method based on unbiased sparse regularization. To this end, we first treat the connection weights of the neural network as groups. Then, using a nonlinear Laplace function whose estimates exhibit small bias, we construct an inter-group and an intra-group unbiased structured sparse regularizer. These regularizers impose sparsity constraints on the redundant weights of neurons and on the remaining output neurons, yielding a dual-strategy neural network compression model with unbiased sparse regularization. Second, for the resulting unbiased sparse regularized compression optimization problem, we derive a closed-form solution of the unbiased sparse regularizer via its proximal operator, and then design a back-propagation algorithm based on proximal gradient descent to achieve accurate structured compression of neural networks. Finally, experiments on the MNIST, FashionMNIST, and CIFAR-10 datasets show that the proposed dual-strategy compression with the unbiased sparse regularizer converges faster than current mainstream regularizers. Moreover, at the same compression rate, recognition accuracy improves by an average of 2.3% over existing methods, and at the same recognition accuracy, the compression rate improves by an average of 11.5%.
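As a rough illustration of the kind of update such a method implies (a minimal sketch, not the paper's exact formulation), the code below performs a gradient step on the data loss followed by a group-wise proximal step. The Laplace-type penalty lam * (1 - exp(-||w_g|| / gamma)), the one-step approximation of its proximal operator, and the parameter names lam, gamma, and groups are illustrative assumptions; the paper derives an exact closed-form proximal solution for its unbiased regularizer.

```python
import numpy as np

def laplace_penalty(group_norm, lam, gamma):
    # Hypothetical Laplace-type group penalty: lam * (1 - exp(-||w_g|| / gamma)).
    # It flattens out for large weights, which is what limits estimation bias.
    return lam * (1.0 - np.exp(-group_norm / gamma))

def prox_group_laplace(w_group, step, lam, gamma):
    # One-step approximation of the proximal operator for the group penalty:
    # shrink the group norm by the local derivative of the penalty at the
    # current norm (an assumption; the exact closed form is derived in the paper).
    norm = np.linalg.norm(w_group)
    if norm == 0.0:
        return w_group
    shrink = step * (lam / gamma) * np.exp(-norm / gamma)
    new_norm = max(norm - shrink, 0.0)   # groups shrunk to zero are pruned
    return w_group * (new_norm / norm)

def proximal_gradient_step(weights, grads, groups, step=0.1, lam=1e-3, gamma=0.5):
    # Gradient descent on the data loss, then a group-wise proximal step
    # on the (assumed) Laplace-type structured sparse regularizer.
    w = weights - step * grads
    for idx in groups:                   # idx: indices of one weight group
        w[idx] = prox_group_laplace(w[idx], step, lam, gamma)
    return w
```

In a training loop, the proximal step would follow each ordinary back-propagation gradient computation, so groups whose norms fall below the shrinkage threshold are driven exactly to zero and can be removed as structured units.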