Improvement of Echo State Network Generalization by Selective Ensemble Learning Based on BPSO
Automation, Control and Intelligent Systems
Volume 4, Issue 6, December 2016, Pages: 84-88
Received: Nov. 29, 2016;
Published: Dec. 1, 2016
Xiaodong Zhang, Key Laboratory of Advanced Control and Optimization for Chemical Processes of Ministry of Education, East China University of Science and Technology, Shanghai, P. R. China
Xuefeng Yan, Key Laboratory of Advanced Control and Optimization for Chemical Processes of Ministry of Education, East China University of Science and Technology, Shanghai, P. R. China
The echo state network (ESN) is a special type of recurrent neural network that has become increasingly popular in machine-learning domains such as time-series forecasting, data clustering, and nonlinear system identification. An ESN is characterized by a large, randomly constructed recurrent network called the "reservoir", in which the neurons are sparsely connected and the internal weights remain fixed during training, so that only the output layer needs to be trained. However, the reservoir is criticized for its randomness and instability, which stem from the random initialization of its connectivity and weights. In this article, we introduce selective ensemble learning based on binary particle swarm optimization (BPSO) to improve the generalization performance of the ESN. Two widely studied tasks are used to demonstrate the feasibility and superiority of the selective ESN ensemble based on BPSO (SESNE-BPSO). The results indicate that the SESNE-BPSO model outperforms the plain ESN ensemble, the single standard ESN, and several other improved ESN models.
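The reservoir mechanism described in the abstract, where a large sparse random recurrent matrix is left untrained and only a linear readout is fitted, can be sketched as follows. This is an illustrative minimal implementation, not the authors' code; the function names, reservoir size, connection density, spectral radius, and ridge penalty are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_res=100, density=0.1, spectral_radius=0.9):
    # Sparse random reservoir matrix, rescaled so its largest eigenvalue
    # magnitude equals the target spectral radius (a common ESN heuristic).
    W = rng.standard_normal((n_res, n_res))
    W *= rng.random((n_res, n_res)) < density          # sparse connectivity
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W

def run_esn(u, W, W_in):
    # Drive the reservoir with a 1-D input sequence u and collect states.
    # The recurrent weights W and input weights W_in are never trained.
    x = np.zeros(W.shape[0])
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

def train_readout(X, y, ridge=1e-6):
    # Only the output layer is trained: ridge regression from states to targets.
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
```

A typical use is one-step-ahead prediction: run a sequence through the reservoir, then regress the next-step targets on the collected states.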
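The selective-ensemble step, in which BPSO searches over binary masks that include or exclude individual ensemble members, might look like the sketch below. The particle-update rule follows Kennedy and Eberhart's discrete (binary) PSO, where a sigmoid of the velocity gives the probability that a bit is set; the fitness function (validation MSE of the simple-average ensemble), swarm size, and coefficients are assumptions for illustration, not the paper's reported settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def bpso_select(preds, y, n_particles=20, n_iter=50, w=0.7, c1=1.5, c2=1.5):
    """Binary PSO over inclusion masks for ensemble members.

    preds: (n_members, n_samples) validation predictions of each ESN.
    y:     (n_samples,) validation targets.
    Returns the best binary mask found (1 = member kept in the ensemble).
    """
    n_members = preds.shape[0]

    def fitness(mask):
        if mask.sum() == 0:                          # empty ensemble is invalid
            return np.inf
        ens = preds[mask.astype(bool)].mean(axis=0)  # simple-average combination
        return np.mean((ens - y) ** 2)               # validation MSE

    pos = (rng.random((n_particles, n_members)) < 0.5).astype(float)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    g = pbest[np.argmin(pbest_fit)].copy()           # global best mask

    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))            # P(bit = 1) = sigmoid(velocity)
        pos = (rng.random(pos.shape) < prob).astype(float)
        fits = np.array([fitness(p) for p in pos])
        improved = fits < pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
        g = pbest[np.argmin(pbest_fit)].copy()

    return g.astype(int)
```

The design choice worth noting is the empty-mask guard: without it, BPSO can converge to the degenerate "select nothing" solution, so the fitness assigns it infinite error.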