In AI, the algorithm is not hand-coded but produced by a combination of training data, labelling (concepts) and the neural network: this is the essence of machine learning. The resulting algorithm is not directly inspectable and cannot be bug-fixed directly; it is "black box" development. AI systems are used in contexts with diverse data and usage. Choices in training data and labels bring risks of bias and lack of transparency, with potentially high impact on real people. Testing AI focuses on these risks. An AI tester needs moral, social and worldly intelligence and awareness to identify the users and their expectations, and to translate these into test cases that can be run repeatedly and automated. AI testing also includes setting up metrics that translate test results into a meaningful and quantifiable evaluation of the system, so that developers can optimize it.
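The idea of turning test results into a quantifiable metric can be sketched as follows. This is a minimal, hypothetical example (not taken from the chapter): a bias metric computed over model predictions split by a sensitive attribute, wrapped in a repeatable automated check. All names and data are illustrative assumptions.

```python
# Hypothetical sketch: an automated, repeatable test that turns model
# outputs into a quantifiable bias metric (demographic parity gap).
def selection_rate(predictions):
    """Fraction of positive (1) predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in selection rate between any two groups."""
    rates = [selection_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Made-up model predictions, split by a sensitive attribute.
preds = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],   # selection rate 0.625
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],   # selection rate 0.25
}
gap = demographic_parity_gap(preds)
print(f"demographic parity gap: {gap:.3f}")  # 0.375

# A repeatable check: fail the test run if the gap exceeds a threshold.
assert gap < 0.5, "bias metric exceeds acceptable threshold"
```

Because the metric is a single number with a threshold, the same check can be re-run automatically after every retraining, which is the kind of repetitive, quantified evaluation the abstract describes.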
CITATION STYLE
Numan, G. (2019). Testing Artificial Intelligence. In The Future of Software Quality Assurance (pp. 123–136). Springer International Publishing. https://doi.org/10.1007/978-3-030-29509-7_10