Optimization Algorithms Incorporated Fuzzy Q-Learning for Solving Mobile Robot Control Problems
American Journal of Software Engineering and Applications
Volume 5, Issue 3-1, May 2016, Pages: 25-29
Received: Sep. 14, 2016; Accepted: Sep. 23, 2016; Published: Aug. 21, 2017
Authors
Sima Saeed, Department of Computer Engineering, Faculty of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
Aliakbar Niknafs, Department of Computer Engineering, Faculty of Engineering, Shahid Bahonar University of Kerman, Kerman, Iran
Abstract
Designing fuzzy controllers by means of evolutionary algorithms and reinforcement learning is an important subject in robot control. In the present article, several methods for solving reinforcement fuzzy control problems are studied. All of these methods were established by combining Fuzzy Q-Learning with an optimization algorithm: the Ant Colony, Bee Colony, and Artificial Bee Colony optimization algorithms. Comparing these algorithms on the Truck Backer-Upper problem, a reinforcement fuzzy control problem, shows that the Artificial Bee Colony optimization algorithm is the most efficient in combination with Fuzzy Q-Learning.
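The common core of the three hybrid methods is the Fuzzy Q-Learning (FQL) update of refs. [2]-[4]: each fuzzy rule keeps q-values for a set of candidate consequent actions, the global control output is the firing-strength-weighted combination of the actions selected in the individual rules, and the temporal-difference error is distributed back to the rules in proportion to their firing strengths. The Python sketch below illustrates only this FQL core under assumed dimensions and parameters (triangular membership functions over a one-dimensional state, 9 rules, 5 candidate actions, learning rate 0.1); it is not the authors' controller, and the optimization algorithm that tunes the rule base in each hybrid method is omitted.

import numpy as np

# Minimal Fuzzy Q-Learning sketch (after refs. [2]-[4]); all sizes and
# parameter values are illustrative assumptions, not the article's setup.
N_RULES = 9
N_ACTIONS = 5
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

centers = np.linspace(-1.0, 1.0, N_RULES)    # triangular membership centers
width = centers[1] - centers[0]
actions = np.linspace(-1.0, 1.0, N_ACTIONS)  # candidate consequent actions
q = np.zeros((N_RULES, N_ACTIONS))           # q-value of each (rule, action) pair

def firing_strengths(x):
    # Normalized triangular membership degrees of state x for every rule.
    mu = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
    return mu / (mu.sum() + 1e-12)

def select_action(x):
    # Epsilon-greedy choice of a candidate action inside each rule,
    # then defuzzification by firing-strength-weighted averaging.
    phi = firing_strengths(x)
    greedy = q.argmax(axis=1)
    random_choice = np.random.randint(N_ACTIONS, size=N_RULES)
    chosen = np.where(np.random.rand(N_RULES) < EPS, random_choice, greedy)
    u = float(phi @ actions[chosen])                  # global control output
    q_sa = float(phi @ q[np.arange(N_RULES), chosen]) # Q-value of the composed action
    return u, chosen, phi, q_sa

def fql_update(phi, chosen, q_sa, reward, x_next):
    # Temporal-difference update, distributed to the rules by firing strength.
    phi_next = firing_strengths(x_next)
    v_next = float(phi_next @ q.max(axis=1))          # greedy value of the next state
    td_error = reward + GAMMA * v_next - q_sa
    q[np.arange(N_RULES), chosen] += ALPHA * td_error * phi

How the ant colony, bee colony, and artificial bee colony algorithms are coupled to this update (for example, which part of the rule base they search) is described in refs. [5] and [10].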
Keywords
Mobile Robot, Fuzzy Q-Learning, Ant Colony Optimization-Fuzzy Q-Learning, Bee Colony Optimization-Fuzzy Q-Learning, Artificial Bee Colony-Fuzzy Q-Learning
To cite this article
Sima Saeed, Aliakbar Niknafs, Optimization Algorithms Incorporated Fuzzy Q-Learning for Solving Mobile Robot Control Problems, American Journal of Software Engineering and Applications. Special Issue: Advances in Computer Science and Information Technology in Developing Countries. Vol. 5, No. 3-1, 2016, pp. 25-29. doi: 10.11648/j.ajsea.s.2016050301.16
Copyright
Copyright © 2016. Authors retain the copyright of this article.
This article is an open access article distributed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/) which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
References
[1] H. R. Berenji, "Fuzzy Q-learning for generalization of reinforcement learning," IEEE Int. Conf. Fuzzy Systems, 1996.
[2] P. Y. Glorennec, "Fuzzy Q-learning and dynamic fuzzy Q-learning," IEEE Int. Conf. Fuzzy Systems, Orlando, 1994.
[3] P. Y. Glorennec and L. Jouffe, "Fuzzy Q-learning," IEEE Int. Conf. Fuzzy Systems, 1997.
[4] L. Jouffe, "Fuzzy inference system learning by reinforcement methods," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 28, no. 3, pp. 338-355, 1998.
[5] C. F. Juang, "Ant Colony Optimization Incorporated With Fuzzy Q-Learning for Reinforcement Fuzzy Control," IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 39, May 2009.
[6] L. P. Wong, M. Y. H. Low, and C. S. Chong, "A Bee Colony Optimization Algorithm for Traveling Salesman Problem," Second Asia International Conference on Modelling & Simulation, IEEE, 2008.
[7] L. P. Wong, M. Y. H. Low, and C. S. Chong, "Bee Colony Optimization with Local Search for Traveling Salesman Problem," 2008.
[8] M. S. Kiran, H. Iscan, and M. Gunduz, "The analysis of discrete artificial bee colony algorithm with neighborhood operator on traveling salesman problem," Neural Computing and Applications, 2013.
[9] W. L. Xiang and M. Q. An, "An efficient and robust artificial bee colony algorithm for numerical optimization," Computers and Operations Research, pp. 1256-1265, 2013.
[10] S. Saeed and A. Niknafs, "Artificial Bee Colony-Fuzzy Q Learning for Reinforcement Fuzzy Control (Truck Backer-Upper Control Problem)," International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 24, no. 1, pp. 123-136, 2016.