An Empirical Study on the Effectiveness of Automated Test Case Generation Techniques
American Journal of Software Engineering and Applications
Volume 3, Issue 6, December 2014, Pages: 95-101
Received: Nov. 18, 2014; Accepted: Dec. 3, 2014; Published: Dec. 23, 2014
Authors
Bolanle F. Oladejo, Department of Computer Science, University of Ibadan, Ibadan, Nigeria
Dimple T. Ogunbiyi, Department of Computer Science, University of Ibadan, Ibadan, Nigeria
Abstract
Automated test case generation reduces the laborious task of writing test cases by hand and is a prominent topic in software testing research; as a result, several techniques have been developed to generate test cases automatically. However, some major techniques in current use have not been empirically evaluated, and many assumptions about their performance rest on theoretical deductions. In this paper, we conduct an experiment on two major automated test case generation techniques, concolic test case generation and combinatorial test case generation, and evaluate them on selected metrics: the number of test cases generated, the complexities of the selected programs, the percentage of test coverage, and a performance score. The results show that the combinatorial technique outperformed the concolic technique; on the selected metrics, combinatorial test case generation was therefore found to be the more effective of the two.
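To illustrate the combinatorial technique the abstract refers to, the sketch below generates the exhaustive Cartesian product of parameter values and then greedily reduces it to a pairwise (2-way) covering set. This is a minimal illustration only: the parameter names and domains are hypothetical, not the programs evaluated in the study, and production tools build covering arrays with more sophisticated algorithms.

```python
# A minimal sketch of combinatorial (pairwise) test-case generation.
# The parameter names and domains below are hypothetical illustrations,
# not the programs used in the study.
from itertools import product

# Hypothetical input domains for a system under test.
browsers = ["chrome", "firefox"]
systems = ["linux", "windows"]
locales = ["en", "fr"]

# Exhaustive combinatorial generation: the full Cartesian product.
test_cases = list(product(browsers, systems, locales))  # 2 * 2 * 2 = 8 cases

def pairs_of(case):
    """All (parameter-index, value) pairs covered by one test case."""
    return {((i, case[i]), (j, case[j]))
            for i in range(len(case)) for j in range(i + 1, len(case))}

def pairwise_reduce(cases):
    """Greedy 2-way reduction: repeatedly keep the case that covers the
    most not-yet-covered value pairs until every pair is covered."""
    all_pairs = set().union(*(pairs_of(c) for c in cases))
    covered, kept, remaining = set(), [], list(cases)
    while covered != all_pairs:
        best = max(remaining, key=lambda c: len(pairs_of(c) - covered))
        kept.append(best)
        covered |= pairs_of(best)
        remaining.remove(best)
    return kept

reduced = pairwise_reduce(test_cases)
print(f"{len(test_cases)} exhaustive cases -> {len(reduced)} pairwise cases")
```

The payoff of the technique is the reduction step: every pair of parameter values still appears in at least one kept case, but far fewer cases need to run, and the gap widens quickly as domains grow.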
Keywords
Automated Test Case Generation Technique, Combinatorial, Concolic, Empirical Study, Software Testing, Software Metrics
To cite this article
Bolanle F. Oladejo, Dimple T. Ogunbiyi, An Empirical Study on the Effectiveness of Automated Test Case Generation Techniques, American Journal of Software Engineering and Applications, Vol. 3, No. 6, 2014, pp. 95-101. doi: 10.11648/j.ajsea.20140306.15