Conclusion instability is the failure to observe the same effect under varying experimental conditions. Deep neural network (DNN) and ElasticNet software effort estimation (SEE) models were applied to two SEE datasets to investigate the conclusion instability issue and to assess the suitability of ElasticNet as a SEE benchmark model. Results were mixed: both model types attained conclusion stability on the Kitchenham dataset, whereas conclusion instability persisted on the Desharnais dataset. ElasticNet was outperformed by DNN and is therefore not recommended as a SEE benchmark model.
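As an illustrative sketch only (the paper's preprocessing, features, and hyperparameters are not given here), an ElasticNet effort-estimation model of the kind evaluated in the study could be fit as follows, using scikit-learn and synthetic project data loosely shaped like Desharnais-style attributes; the feature names and hyperparameter values below are assumptions, not the authors' setup:

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical project features: team experience, size (e.g. adjusted
# function points), and duration -- 81 projects, mirroring the size of
# the Desharnais dataset. Values are synthetic, not the real data.
X = rng.uniform(0.0, 1.0, size=(81, 3))
effort = 2000.0 * X[:, 1] + 500.0 * X[:, 0] + rng.normal(0.0, 100.0, 81)

X_tr, X_te, y_tr, y_te = train_test_split(X, effort, random_state=0)

# alpha / l1_ratio are illustrative; a real study would tune them.
model = ElasticNet(alpha=0.1, l1_ratio=0.5)
model.fit(X_tr, y_tr)

mae = mean_absolute_error(y_te, model.predict(X_te))
print(f"MAE on held-out projects: {mae:.1f}")
```

A DNN baseline would be trained and scored on the same splits so that accuracy comparisons across datasets expose any conclusion instability.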
CITATION STYLE
Bosu, M. F., Mensah, S., Bennin, K., & Abuaiadah, D. (2018). Revisiting the conclusion instability issue in software effort estimation. In Proceedings of the International Conference on Software Engineering and Knowledge Engineering, SEKE (Vol. 2018-July, pp. 368–371). Knowledge Systems Institute Graduate School. https://doi.org/10.18293/SEKE2018-126