SONG Shaojian, XIANG Weikang, LIN Xiaofeng. Improvement Algorithm of an Incremental Extreme Learning Machine[J]. INFORMATION AND CONTROL, 2016, 45(6): 735-741, 758. DOI: 10.13976/j.cnki.xk.2016.0735

Improvement Algorithm of an Incremental Extreme Learning Machine


    Abstract: In an incremental extreme learning machine (I-ELM), the input weights and the thresholds of the hidden-layer neurons are generated randomly during training, so the output weights of some hidden-layer neurons may be too small for them to contribute effectively to the network output. Such neurons become invalid: they not only make the network more complicated but also reduce its stability. To address this issue, we propose an improved method (II-ELM) that adds an offset to the hidden-layer output of I-ELM, and we analyze and prove the existence of such an offset. Finally, the validity of II-ELM is verified by simulation comparisons with I-ELM on classification and regression problems.
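The incremental training procedure the abstract refers to can be sketched as follows. This is a minimal illustration of the standard I-ELM growth step (random hidden node, analytically computed output weight), not the paper's II-ELM modification; the proposed improvement would add an offset to the hidden-layer output `h` before the output weight is computed. All function names and settings here are illustrative.

```python
import numpy as np

def ielm_train(X, y, max_nodes=100, seed=0):
    """Sketch of I-ELM: grow the network one random hidden node at a time.

    Each step draws random input weights and a random threshold, then sets
    the node's output weight to the least-squares value that best reduces
    the current residual error. Nodes whose residual projection is tiny get
    a near-zero output weight -- the "invalid neuron" problem the abstract
    describes.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    e = y.astype(float).copy()                    # current residual error
    nodes = []                                    # (w, b, beta) per hidden node
    for _ in range(max_nodes):
        w = rng.uniform(-1.0, 1.0, d)             # random input weights
        b = rng.uniform(-1.0, 1.0)                # random hidden-node threshold
        h = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid hidden-node output
        beta = (e @ h) / (h @ h)                  # analytic output weight
        e -= beta * h                             # update residual
        nodes.append((w, b, beta))
    return nodes

def ielm_predict(nodes, X):
    out = np.zeros(X.shape[0])
    for w, b, beta in nodes:
        out += beta / (1.0 + np.exp(-(X @ w + b)))
    return out
```

Because each output weight is the projection of the residual onto the new node's output, the training error is non-increasing as nodes are added, which is the convergence property I-ELM relies on.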

     
