Branch coverage prediction in automated testing. Issue 9 (8th March 2019)
- Record Type:
- Journal Article
- Title:
- Branch coverage prediction in automated testing. Issue 9 (8th March 2019)
- Main Title:
- Branch coverage prediction in automated testing
- Authors:
- Grano, Giovanni
Titov, Timofey V.
Panichella, Sebastiano
Gall, Harald C.
- Other Names:
- Ampatzoglou, Apostolos (guest editor)
Fontana, Francesca Arcelli (guest editor)
Palomba, Fabio (guest editor)
Walter, Bartosz (guest editor)
- Abstract:
- Abstract: Software testing is crucial in continuous integration (CI). Ideally, at every commit, all the test cases should be executed and, moreover, new test cases should be generated for the new source code. This is especially true in a Continuous Test Generation (CTG) environment, where the automatic generation of test cases is integrated into the continuous integration pipeline. In this context, developers want to achieve a certain minimum level of coverage for every software build. However, executing all the test cases and, moreover, generating new ones for all the classes at every commit is not feasible. As a consequence, developers have to select which subset of classes has to be tested and/or targeted by test-case generation. We argue that knowing a priori the branch coverage that can be achieved with test-data generation tools can help developers make informed decisions about those issues. In this paper, we investigate the possibility of using source-code metrics to predict the coverage achieved by test-data generation tools. We use four different categories of source-code features and assess the prediction on a large data set involving more than 3,000 Java classes. We compare different machine learning algorithms and conduct a fine-grained feature analysis aimed at investigating the factors that most impact the prediction accuracy. Moreover, we extend our investigation to four different search budgets. Our evaluation shows that the best model achieves an average 0.15 and 0.21 MAE on nested cross-validation over the different budgets on EVOSUITE and RANDOOP, respectively. Finally, the discussion of the results demonstrates the relevance of coupling-related features for the prediction accuracy.
In this paper, we predict the coverage achieved by test-data generation tools using source-code metrics. We build a Random Forest Regressor model with an average MAE of 0.2. These results substantially improve on the performance of the state-of-the-art predictor.
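The evaluation protocol the abstract describes (predicting per-class branch coverage from source-code metrics and scoring the prediction with mean absolute error under cross-validation) can be sketched as follows. This is an illustrative, dependency-free sketch: the feature names and data are hypothetical, and a simple mean predictor stands in for the paper's Random Forest Regressor.

```python
# Sketch of the evaluation protocol from the abstract: predict per-class
# branch coverage from source-code metrics and score with mean absolute
# error (MAE) under k-fold cross-validation. Data and features are
# hypothetical; a mean predictor stands in for the Random Forest Regressor.

from statistics import mean

# Hypothetical samples: ([coupling, complexity] metrics, achieved coverage)
data = [
    ([2, 5], 0.9), ([8, 12], 0.4), ([1, 3], 0.95), ([6, 9], 0.5),
    ([3, 4], 0.8), ([9, 15], 0.3), ([2, 2], 0.85), ([7, 11], 0.45),
]

def mae(y_true, y_pred):
    """Mean absolute error, the accuracy metric reported in the paper."""
    return mean(abs(t - p) for t, p in zip(y_true, y_pred))

def cross_validated_mae(samples, k=4):
    """k-fold CV: fit on k-1 folds, score MAE on the held-out fold."""
    folds = [samples[i::k] for i in range(k)]
    scores = []
    for i, test_fold in enumerate(folds):
        train = [s for j, f in enumerate(folds) if j != i for s in f]
        # Stand-in for model.fit()/model.predict(): predict the train mean.
        prediction = mean(y for _, y in train)
        scores.append(mae([y for _, y in test_fold],
                          [prediction] * len(test_fold)))
    return mean(scores)

print(round(cross_validated_mae(data), 3))
```

In the paper's actual pipeline, the stand-in predictor would be replaced by a trained regressor (e.g. a Random Forest) fitted on the metric vectors, and the MAE would be averaged over nested cross-validation folds and search budgets.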
- Is Part Of:
- Journal of software. Volume 31:Issue 9(2019)
- Journal:
- Journal of software
- Issue:
- Volume 31:Issue 9(2019)
- Issue Display:
- Volume 31, Issue 9 (2019)
- Year:
- 2019
- Volume:
- 31
- Issue:
- 9
- Issue Sort Value:
- 2019-0031-0009-0000
- Page Start:
- n/a
- Page End:
- n/a
- Publication Date:
- 2019-03-08
- Subjects:
- automated software testing -- coverage prediction -- machine learning -- software testing
Software engineering -- Periodicals
Computer software -- Development -- Periodicals
Software maintenance -- Periodicals
005.1
- Journal URLs:
- http://onlinelibrary.wiley.com/journal/10.1002/(ISSN)2047-7481
http://onlinelibrary.wiley.com/
- DOI:
- 10.1002/smr.2158
- Languages:
- English
- ISSNs:
- 2047-7473
- Deposit Type:
- Legal deposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - BLDSS-3PM
British Library HMNTS - ELD Digital store
- Ingest File:
- 11872.xml