Asymptotic Behaviour of Gradient Learning Algorithms in Neural Network Models for the Identification of Nonlinear Systems
American Journal of Neural Networks and Applications
Volume 1, Issue 1, August 2015, Pages: 1-10
Received: Jun. 14, 2015; Accepted: Jul. 28, 2015; Published: Jul. 29, 2015
Authors
Valerii N. Azarskov, Faculty of Computer Science, National Aviation University, Kiev, Ukraine
Dmytro P. Kucherov, Faculty of Computer Science, National Aviation University, Kiev, Ukraine
Sergii A. Nikolaienko, Cybernetics Centre, Dept. of Automated Data Processing Systems, Kiev, Ukraine
Leonid S. Zhiteckii, Cybernetics Centre, Dept. of Automated Data Processing Systems, Kiev, Ukraine
Abstract
This paper studies the asymptotic properties of multilayer neural network models used for the adaptive identification of a wide class of nonlinearly parameterized systems in a stochastic environment. To adjust the neural network's weights, standard online gradient-type learning algorithms are employed. The learning set is assumed to be infinite but bounded. A Lyapunov-like approach is utilized to analyze the ultimate behaviour of the learning processes in the presence of stochastic input variables. New sufficient conditions guaranteeing the global convergence of these algorithms in the stochastic framework are derived. Their main feature is that no penalty term is needed to ensure the boundedness of the weight sequence. To demonstrate the asymptotic behaviour of the learning algorithms and support the theoretical studies, some simulation examples are also given.
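To make the setting concrete, the following Python sketch illustrates the kind of online gradient-type update the abstract refers to: a one-hidden-layer network model whose weights are adjusted from a stream of bounded stochastic inputs so as to identify an unknown nonlinear map. It is a minimal illustration only; the network structure, the symbols (W, b, c, eta), and the example plant f_true are assumptions for exposition, not the authors' notation or their exact algorithm.

```python
import numpy as np

# Minimal sketch of online gradient-type learning for identification:
# model y_hat = c^T * tanh(W x + b), unknown plant y = f_true(x) + noise.
# All names and constants below are illustrative assumptions.

rng = np.random.default_rng(0)

def f_true(x):
    """Unknown nonlinear plant to be identified (example only)."""
    return np.sin(x[0]) + 0.5 * x[1] ** 2

n_in, n_hid = 2, 8
W = 0.1 * rng.standard_normal((n_hid, n_in))   # input-to-hidden weights
b = np.zeros(n_hid)                            # hidden biases
c = 0.1 * rng.standard_normal(n_hid)           # hidden-to-output weights
eta = 0.05                                     # constant learning rate

for t in range(20000):
    x = rng.uniform(-1.0, 1.0, size=n_in)         # bounded stochastic input
    y = f_true(x) + 0.01 * rng.standard_normal()  # noisy observation

    h = np.tanh(W @ x + b)                     # hidden-layer output
    y_hat = c @ h                              # network prediction
    e = y_hat - y                              # identification error

    # One online gradient step on the instantaneous cost e^2 / 2
    grad_c = e * h
    grad_b = e * c * (1.0 - h ** 2)            # backprop through tanh
    grad_W = np.outer(grad_b, x)
    c -= eta * grad_c
    b -= eta * grad_b
    W -= eta * grad_W
```

Under the conditions studied in the paper, the interest is in whether such a weight sequence remains bounded and converges globally without adding a penalty (regularization) term to the cost.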
Keywords
Neural Network, Nonlinear Model, Gradient Learning Algorithm, Stochastic Environment, Convergence
To cite this article
Valerii N. Azarskov, Dmytro P. Kucherov, Sergii A. Nikolaienko, Leonid S. Zhiteckii, Asymptotic Behaviour of Gradient Learning Algorithms in Neural Network Models for the Identification of Nonlinear Systems, American Journal of Neural Networks and Applications. Vol. 1, No. 1, 2015, pp. 1-10. doi: 10.11648/j.ajnna.20150101.11