Abstract:
To address two shortcomings of the ReLU activation function in neural networks, its non-negative output and the tendency of some neurons to stop activating, we propose a new ReLU-like activation function called LeafSpring. LeafSpring inherits the advantages of ReLU while also returning negative values, which makes it more versatile. We discuss the derivation and properties of the LeafSpring activation function and introduce a stiffness coefficient k, which is learned during training and adjusts the weight coefficients between two adjacent layers. To reduce computation, LeafSpring can be simplified to ReLU or Softplus in some cases. We demonstrate the performance of the LeafSpring activation function by comparing its fitting accuracy with that of ReLU, Softplus, and Sigmoid on four different types of datasets. The comparison shows that LeafSpring balances fitting accuracy and convergence speed across the datasets, and that it fits complex nonlinear functions faster and more accurately at a small grid scale.
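The abstract does not give LeafSpring's closed form, so the sketch below is only an assumed illustration of the general idea of an activation with a trainable stiffness coefficient k: a sharpened Softplus (written in PyTorch) that equals Softplus at k = 1 and approaches ReLU as k grows. The class name StiffnessActivation and the parameter k_init are hypothetical, and unlike LeafSpring this form never outputs negative values.

import torch
import torch.nn as nn

class StiffnessActivation(nn.Module):
    # Illustrative only: not the paper's LeafSpring formula, which the abstract
    # does not state. A sharpened Softplus with a trainable stiffness k that
    # equals Softplus at k = 1 and tends to ReLU as k -> infinity.
    def __init__(self, k_init: float = 1.0):
        super().__init__()
        self.k = nn.Parameter(torch.tensor(k_init))  # trainable stiffness coefficient

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k = self.k.clamp(min=1e-3)                   # keep the stiffness positive
        # (1/k) * log(1 + exp(k * x)), computed stably via logaddexp
        return torch.logaddexp(torch.zeros_like(x), k * x) / k

act = StiffnessActivation(k_init=1.0)
y = act(torch.linspace(-3.0, 3.0, steps=7))          # example forward pass

Because k is an nn.Parameter, it is updated by the optimizer together with the surrounding layer weights, which is one plausible reading of a stiffness coefficient "learned during training".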