Machine Learning as a Service platforms are a sensible choice for practitioners who want to incorporate machine learning into their products while reducing time and cost. However, to benefit from their advantages, a method for assessing their performance when applied to a target application is needed. In this work, we present a robust uncertainty-based method for evaluating the performance of both probabilistic and categorical classification black-box models, in particular APIs, that enriches the predictions obtained with an uncertainty score. This uncertainty score enables the detection of inputs with highly confident but erroneous predictions and protects against out-of-distribution data points when deploying the model in a production setting. We validate the proposal in different natural language processing and computer vision scenarios. Moreover, taking advantage of the computed uncertainty score, we show that the robustness and performance of the resulting classification system can be significantly increased by rejecting uncertain predictions.
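The following is a minimal sketch of the rejection idea described above: given class probabilities returned by a black-box classifier, attach an uncertainty score to each prediction and reject those above a threshold. The paper derives its score from its own wrapper model; here, predictive entropy is used only as a hypothetical stand-in, and the function names, the example probabilities, and the threshold value are illustrative assumptions, not part of the original work.

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of a predicted class distribution.
    One possible uncertainty score; the paper computes its score
    from a wrapper model rather than from the raw probabilities."""
    probs = np.clip(probs, 1e-12, 1.0)
    return -np.sum(probs * np.log(probs), axis=-1)

def reject_uncertain(probs, threshold):
    """Split black-box predictions into accepted and rejected sets.

    probs: (n_samples, n_classes) class probabilities returned by the
           black-box classifier (e.g. an MLaaS API).
    threshold: predictions whose uncertainty exceeds this value are
               rejected and can be deferred to a human or a fallback model.
    """
    uncertainty = predictive_entropy(probs)
    accepted = uncertainty <= threshold
    preds = probs.argmax(axis=-1)
    return preds[accepted], np.where(accepted)[0], np.where(~accepted)[0]

# Example: three API responses; the last one is close to uniform and is rejected.
probs = np.array([[0.95, 0.03, 0.02],
                  [0.10, 0.85, 0.05],
                  [0.40, 0.35, 0.25]])
preds, accepted_idx, rejected_idx = reject_uncertain(probs, threshold=0.8)
print(preds, accepted_idx, rejected_idx)
```

Under this sketch, accuracy is reported only on the accepted subset, which is how rejecting uncertain predictions can increase the effective performance of the deployed system.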
Mena, J., Pujol, O., & Vitria, J. (2020). Uncertainty-Based Rejection Wrappers for Black-Box Classifiers. IEEE Access, 8, 101721–101746. https://doi.org/10.1109/ACCESS.2020.2996495