Revisiting the conclusion instability issue in software effort estimation


Abstract

Conclusion instability is the failure to observe the same effect under varying experimental conditions. Deep Neural Network (DNN) and ElasticNet software effort estimation (SEE) models were applied to two SEE datasets with the aim of resolving the conclusion instability issue and of assessing the suitability of ElasticNet as a viable SEE benchmark model. Results were mixed: both model types attained conclusion stability on the Kitchenham dataset, whereas conclusion instability persisted on the Desharnais dataset. ElasticNet was outperformed by DNN, so it is not recommended as a SEE benchmark model.
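To make the ElasticNet benchmark concrete: ElasticNet is linear regression with a mixed L1/L2 penalty, commonly fitted by coordinate descent. The sketch below is a minimal, self-contained illustration on synthetic data — it is not the authors' experimental setup, and the dataset, hyperparameters, and iteration count are illustrative assumptions only.

```python
import numpy as np

def soft_threshold(x, t):
    # Soft-thresholding operator induced by the L1 part of the penalty.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def elastic_net(X, y, alpha=0.1, l1_ratio=0.5, n_iter=200):
    """Coordinate descent for
    (1/2n)||y - Xw||^2 + alpha*l1_ratio*||w||_1
                       + (alpha*(1-l1_ratio)/2)*||w||^2."""
    n, p = X.shape
    w = np.zeros(p)
    l1 = alpha * l1_ratio
    l2 = alpha * (1.0 - l1_ratio)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual: remove feature j's current contribution.
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r / n
            z = (X[:, j] @ X[:, j]) / n + l2
            w[j] = soft_threshold(rho, l1) / z
    return w

# Synthetic stand-in for project features and effort (purely illustrative).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([3.0, 0.0, -2.0, 0.0, 1.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)
w = elastic_net(X, y)
```

The L1 term drives irrelevant coefficients toward zero while the L2 term stabilizes correlated features, which is why ElasticNet is often proposed as a simple, interpretable baseline against more opaque models such as DNNs.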

Citation (APA)

Bosu, M. F., Mensah, S., Bennin, K., & Abuaiadah, D. (2018). Revisiting the conclusion instability issue in software effort estimation. In Proceedings of the International Conference on Software Engineering and Knowledge Engineering, SEKE (Vol. 2018-July, pp. 368–371). Knowledge Systems Institute Graduate School. https://doi.org/10.18293/SEKE2018-126
