IJAIS publications are indexed with Google Scholar, NASA ADS, Informatics, and others.


A Productive Method for Improving Test Effectiveness

Saran Prasad, Mona Jain, Shradha Singh, C. Patvardhan

Published in International Journal of Applied Information Systems
Year of Publication: 2012
© 2010 by IJAIS Journal
DOI: 10.5120/ijais12-450263
  1. Saran Prasad, Mona Jain, Shradha Singh and C. Patvardhan. Article: A Productive Method for Improving Test Effectiveness. International Journal of Applied Information Systems 2(2):9-17, May 2012. BibTeX

    @article{key:article,
    	author = "Saran Prasad and Mona Jain and Shradha Singh and C. Patvardhan",
    	title = "Article: A Productive Method for Improving Test Effectiveness",
    	journal = "International Journal of Applied Information Systems",
    	year = 2012,
    	volume = 2,
    	number = 2,
    	pages = "9-17",
    	month = "May",
    	note = "Published by Foundation of Computer Science, New York, USA"
    }
    

Abstract

Automated testing of software products has expanded greatly over the past few years. Ever-growing test suites have been developed, along with the computing infrastructure to support them. While the capacity for testing has grown, the environment is not infinitely scalable: eventually capital spending is capped. Methodologies need to be explored that improve the overall effectiveness of the test cases that are run. Furthermore, these methodologies need to be as independent from the test suites as possible: the size of the test suites renders solutions that are tightly bound to them ineffective for widespread use. The problem with such huge numbers of test cases is that whenever the code is changed, the entire suite of test cases needs to be run. One idea is to run fewer tests on an ongoing basis, reserving full regression test runs for key milestones in the development lifecycle. This is workable if the limited tests produce a similar result in the short term. In this paper, we present a new approach to test suite selection that focuses on improving test effectiveness. The methodology described produces a pruned list of test cases required to test an application. The method has three components: a predictive component that makes use of statistical data, a coverage-based method that digs the delta from the code to produce a pruned list of test cases, and a decision-based technique that prioritizes important test cases. Our experiments show that our approach results in better utilization of compute resources and also shortens the validation cycle, thus reducing time to market.
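The three-component scheme outlined in the abstract could be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the data structures, the scoring weight, and all names (`select_tests`, `test_coverage`, `failure_history`, `budget`) are assumptions made for the example.

```python
# Hypothetical sketch of a three-component test-selection scheme:
# coverage-based filtering on the code delta, a predictive score from
# historical (statistical) failure data, and a decision step that
# prioritizes and prunes to a budget. All details are illustrative.

def select_tests(test_coverage, changed_files, failure_history, budget):
    """Return a pruned, prioritized list of test names.

    test_coverage:   {test_name: set of source files it exercises}
    changed_files:   set of files modified since the last run (the "delta")
    failure_history: {test_name: historical failure rate in [0, 1]}
    budget:          maximum number of tests to run
    """
    # Coverage-based component: keep only tests touching changed code.
    candidates = [t for t, files in test_coverage.items()
                  if files & changed_files]

    # Predictive component: score by historical failure rate, with a small
    # bonus (0.1 per file, an assumed weight) for breadth of delta covered.
    def score(t):
        covered = len(test_coverage[t] & changed_files)
        return failure_history.get(t, 0.0) + 0.1 * covered

    # Decision-based component: prioritize by score, prune to the budget.
    candidates.sort(key=score, reverse=True)
    return candidates[:budget]


coverage = {
    "test_login":  {"auth.c", "session.c"},
    "test_report": {"report.c"},
    "test_cache":  {"cache.c", "session.c"},
}
changed = {"session.c"}
history = {"test_login": 0.30, "test_cache": 0.05}

# test_report does not touch the delta, so it is filtered out;
# test_login outranks test_cache on historical failures.
print(select_tests(coverage, changed, history, budget=2))
# → ['test_login', 'test_cache']
```

A real system would derive `test_coverage` from instrumentation and `changed_files` from the version-control diff; the point here is only how the three components compose into one pruned, ordered list.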

References

  1. Praveen Ranjan Srivastava, "Test Case Prioritization", Computer Science and Information System Group, BITS Pilani, India-333031. Journal of Theoretical and Applied Information Technology (JATIT), www.jatit.org.
  2. S. Elbaum, A. Malishevsky, and G. Rothermel, "Test case prioritization: A family of empirical studies", IEEE Transactions on Software Engineering, February 2002.
  3. G. Rothermel, R. H. Untch, C. Chu, and M. J. Harrold, "Prioritizing Test Cases for Regression Testing", IEEE Trans. Software Eng., vol. 27, no. 10, pp. 929-948, Oct. 2001.
  4. Aditya P. Mathur, Foundations of Software Testing, Pearson Education, 1st edition.
  5. Maruan Khoury, "Cost-Effective Regression Testing", 2006.
  6. Emanuela G. Cartaxo, Francisco G. O. Neto, and Patrícia D. L. Machado, "Automated Test Case Selection Based on a Similarity Function", {emanuela,netojin,patricia}@dsc.ufcg.edu.br.
  7. C. Jard and T. Jéron, "TGV: theory, principles and algorithms. A tool for the automatic synthesis of conformance test cases for non-deterministic reactive systems", Software Tools for Technology Transfer (STTT), 6, 2004.
  8. F. Basanieri, A. Bertolino, and E. Marchetti, "The Cow Suite Approach to Planning and Deriving Test Suites in UML Projects", in UML 2002 - The Unified Modeling Language. Model Engineering, Languages, Concepts, and Tools, LNCS, Springer, 2002.
  9. D. L. Barbosa, H. S. Lima, P. D. L. Machado, J. C. A. Figueiredo, M. A. Juca, and W. L. Andrade, "Automating Functional Testing of Components from UML Specifications", Int. Journal of Software Eng. and Knowledge Engineering, 2007.
  10. Siripong Roongruangsuwan and Jirapun Daengdej, "Test Case Prioritization Techniques", Journal of Theoretical and Applied Information Technology (JATIT), www.jatit.org.
  11. B. Korel and J. Laski, "Algorithmic software fault localization", Annual Hawaii International Conference on System Sciences, pp. 246-252, 1991.
  12. Cem Kaner, "Exploratory Testing", Florida Institute of Technology, Quality Assurance Institute Worldwide Annual Software Testing Conference, Orlando, FL, 2006.
  13. Dennis Jeffrey and Neelam Gupta, "Test Case Prioritization Using Relevant Slices", in Proceedings of the 30th Annual International Computer Software and Applications Conference, vol. 1, pp. 411-420, 2006.
  14. A. Srivastava, A. Edwards, and H. Vo, "Vulcan: Binary Transformation in a Distributed Environment", Microsoft Research Technical Report MSR-TR-2001-50, 2001.
  15. Alexey G. Malishevsky, Gregg Rothermel, and Sebastian Elbaum, "Modeling the Cost-Benefits Tradeoffs for Regression Testing Techniques", Proceedings of the International Conference on Software Maintenance (ICSM'02), 2002.
  16. James A. Jones and Mary Jean Harrold, "Test-Suite Reduction and Prioritization for Modified Condition/Decision Coverage", in Proceedings of the International Conference on Software Maintenance, 2001.

Keywords

Software Testing, Regression Testing, Test Case Prioritization, Test-case Selection