Coincidentally correct test cases are those that execute faulty statements but do not cause failures. Such test cases reduce the effectiveness of spectrum-based fault localization techniques, such as Ochiai, because they weaken the correlation between failure and the execution of a faulty statement. Thus, coincidentally correct test cases need to be predicted and removed from the test suite used for fault localization. Techniques for predicting coincidentally correct test cases can produce false positives, for example when a technique predicts a fixed percentage that exceeds the actual percentage of coincidentally correct test cases. False positives may cause non-faulty statements to be assigned higher suspiciousness scores than the faulty statements. We propose an approach that iteratively predicts and removes coincidentally correct test cases. In each iteration, we present the tester with the set of statements that share the highest Ochiai suspiciousness score. If the tester reports that these statements are not faulty, we use that feedback to determine a number that is guaranteed to be less than or equal to the actual number of coincidentally correct test cases. We predict and remove that number of coincidentally correct test cases, recalculate the suspiciousness scores of the remaining statements, and repeat the process. We evaluated our approach with the Siemens benchmark suite and the Unix utilities grep and gzip. Our approach outperformed an existing approach that predicts a fixed percentage of test cases as coincidentally correct. The results relative to Ochiai were mixed: in some cases, our approach outperformed Ochiai by up to 67%; in others, Ochiai was more effective.
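The Ochiai suspiciousness metric referenced above scores each statement as n_cf / sqrt(n_f * (n_cf + n_cs)), where n_cf is the number of failing tests covering the statement, n_cs the number of passing tests covering it, and n_f the total number of failing tests. The sketch below is illustrative only (the function and variable names are not from the paper); it shows how coincidentally correct passing tests that cover the faulty statement inflate n_cs and depress its score:

```python
import math

def ochiai(n_cf, n_cs, n_f):
    """Ochiai suspiciousness: n_cf / sqrt(n_f * (n_cf + n_cs)).

    n_cf: failing tests that cover the statement
    n_cs: passing tests that cover the statement
    n_f:  total failing tests in the suite
    """
    denom = math.sqrt(n_f * (n_cf + n_cs))
    return n_cf / denom if denom else 0.0

def rank_statements(coverage, failing):
    """coverage: statement -> set of test ids covering it;
    failing: set of failing test ids.
    Returns statements sorted by descending suspiciousness."""
    scores = {}
    for stmt, covering in coverage.items():
        n_cf = len(covering & failing)
        n_cs = len(covering - failing)
        scores[stmt] = ochiai(n_cf, n_cs, len(failing))
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

For instance, with 4 failing tests in total, a statement covered by all 4 failing tests and no passing tests scores 1.0, but if 4 coincidentally correct passing tests also cover it, its score drops to about 0.71, which is why removing such tests can restore the faulty statement's rank.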