Convergence of Online Gradient Method for Pi-sigma Neural Networks with Inner-penalty Terms
American Journal of Neural Networks and Applications
Volume 2, Issue 1, February 2016, Pages: 1-5
Received: Mar. 14, 2016; Accepted: Mar. 30, 2016; Published: May 10, 2016
Authors
Kh. Sh. Mohamed, Mathematical Department, College of Science, Dalanj University, Dalanj, Sudan; School of Mathematical Sciences, Dalian University of Technology, Dalian, China
Xiong Yan, School of Science, Liaoning University of Science & Technology, Anshan, China
Y. Sh. Mohammed, Physics Department, College of Education, Dalanj University, Dalanj, Sudan; Department of Physics, College of Science & Art, Qassim University, Oklat Al-Skoor, Saudi Arabia
Abd-Elmoniem A. Elzain, Department of Physics, College of Science & Art, Qassim University, Oklat Al-Skoor, Saudi Arabia; Department of Physics, University of Kassala, Kassala, Sudan
Habtamu Z. A., School of Mathematical Sciences, Dalian University of Technology, Dalian, China
Abdrhaman M. Adam, School of Mathematical Sciences, Dalian University of Technology, Dalian, China
Abstract
This paper investigates an online gradient method with an inner-penalty term for the pi-sigma network, a feedforward network that uses product cells as output units to indirectly incorporate the capabilities of higher-order networks while requiring fewer weights and processing units. Penalty-term methods are widely used to improve the generalization performance of feedforward neural networks and to control the magnitude of the network weights. We prove the monotonicity of the error function, the boundedness of the weights under the inner-penalty term, and both weak and strong convergence of the training iteration.
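To illustrate the setting, the sketch below implements a minimal pi-sigma network (summing units whose outputs are multiplied together, followed by a sigmoid) and one online gradient step on a squared error with an L2 penalty on the weights. This is an illustrative assumption of the architecture described in the abstract, not the paper's exact formulation; the function names, the sigmoid activation, and the plain L2 penalty are choices made here for concreteness.

```python
# Illustrative sketch (not the paper's exact algorithm): a pi-sigma network
# with N summing units whose linear outputs are multiplied, then passed
# through a sigmoid; trained by online gradient descent on the penalized
# error E = 0.5*(y - t)^2 + lam*||W||^2.
import numpy as np

def forward(W, x):
    """W: (N, d) weights of the summing layer; x: (d,) input.
    Returns (network output, summing-unit outputs)."""
    sums = W @ x                       # outputs of the summing units
    p = np.prod(sums)                  # product unit
    return 1.0 / (1.0 + np.exp(-p)), sums

def online_step(W, x, t, eta=0.1, lam=1e-3):
    """One online gradient update on the single sample (x, t)."""
    y, sums = forward(W, x)
    dy = y * (1.0 - y)                 # sigmoid derivative
    for k in range(W.shape[0]):
        # product rule: d(prod)/d(sums[k]) = product of the other sums
        others = np.prod(np.delete(sums, k))
        grad_k = (y - t) * dy * others * x
        W[k] -= eta * (grad_k + 2.0 * lam * W[k])  # penalty gradient
    return W
```

The penalty gradient `2*lam*W[k]` is what keeps the weight sequence bounded during training, which is the property the convergence analysis relies on.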
Keywords
Convergence, Pi-sigma Network, Online Gradient Method, Inner-penalty, Boundedness
To cite this article
Kh. Sh. Mohamed, Xiong Yan, Y. Sh. Mohammed, Abd-Elmoniem A. Elzain, Habtamu Z. A., Abdrhaman M. Adam, Convergence of Online Gradient Method for Pi-sigma Neural Networks with Inner-penalty Terms, American Journal of Neural Networks and Applications. Vol. 2, No. 1, 2016, pp. 1-5. doi: 10.11648/j.ajnna.20160201.11
Copyright
Copyright © 2016 Authors retain the copyright of this article.
This article is an open access article distributed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/) which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
References
[1]
A J Hussain and P Liatsis, Recurrent pi-sigma networks for DPCM image coding, Neurocomputing 55 (2002) 363-382.
[2]
Y Shin and J Ghosh, The pi-sigma network: An efficient higher-order neural network for pattern classification and function approximation. International Joint Conference on Neural Networks, 1(1991) 13–18.
[3]
P L Bartlett, For valid generalization, the size of the weights is more important than the size of the network, Advances in Neural Information Processing Systems 9 (1997) 134–140.
[4]
L J Jiang, F Xu and S R Piao, Application of pi-sigma neural network to real-time classification of seafloor sediments. Applied Acoustics, 24(2005) 346–350.
[5]
R Reed, Pruning algorithms-a survey. IEEE Transactions on Neural Networks 8 (1997) 185–204.
[6]
G Hinton, Connectionist learning procedures, Artificial Intelligence 40(1989)185-243.
[7]
S Geman, E Bienenstock, R Doursat, Neural networks and the bias/variance dilemma, Neural Computation 4 (1992) 1–58.
[8]
S Loone and G Irwin, Improving neural network training solutions using regularisation, Neurocomputing 37 (2001) 71-90.
[9]
A S Weigend, D E Rumelhart and B A Huberman, Generalization by weight-elimination applied to currency exchange rate prediction, Proc. Intl Joint Conf. on Neural Networks 1 (Seattle, 1991) 837-841.
[10]
Y Shin and J Ghosh, Approximation of multivariate functions using ridge polynomial networks, International Joint Conference on Neural Networks 2 (1992) 380-385.
[11]
M Sinha, K Kumar and P K Kalra, Some new neural network architectures with improved learning schemes. Soft Computing, 4 (2000) 214-223.
[12]
R Setiono, A penalty-function approach for pruning feedforward neural networks, Neural Computation 9 (1997) 185-204.
[13]
W Wu and Y S Xu, Deterministic convergence of an online gradient method for neural networks, Journal of Computational and Applied Mathematics 144 (1-2) (2002) 335-347.
[14]
H S Zhang and W Wu, Boundedness and convergence of online gradient method with penalty for linear output feed forward neural networks, Neural Process Letters 29 (2009) 205–212.
[15]
H F Lu, W Wu, C Zhang and X Yan, Convergence of gradient descent algorithm for Pi-Sigma neural networks, Journal of Information and Computational Science 3 (3) (2006) 503-509.
[16]
Y X Yuan and W Y Sun, Optimization Theory and Methods, Science Press, Beijing, 2001.
[17]
J Kong and W Wu, Online gradient methods with a punishing term for neural networks, Northeast Math. J. 17 (3) (2001) 371-378.
[18]
W Wu, G R Feng and X Z Li, Training multiple perceptrons via minimization of sum of ridge functions, Advances in Computational Mathematics 17 (2002) 331-347.