Artificial intelligence cannot be validated with conventional methods
The quality of artificial intelligence (AI) depends to a large extent on the quality of its training data: the better the training data, the better the AI. Yet even an extensive set of training data cannot be assumed to cover every critical situation.
Furthermore, minor changes in the environment, such as a dirty sensor or unfavorable weather conditions, can heavily and unpredictably influence AI-based decision-making. In practice, the neural network therefore faces an effectively unbounded space of situation-dependent input values. Because even minor variations in these input values can lead to different classifications, predicting the network's behavior becomes practically impossible. To date, the only way to determine exactly what a neural network has learned has been complex, time-consuming observation.
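A minimal sketch of this sensitivity, using a tiny hand-coded network in Python: the two-layer architecture, the weights, and the example inputs are all invented for illustration and do not come from any real system. It shows how a small perturbation to a single input value, of the kind a dirty sensor might introduce, can flip the predicted class.

```python
import numpy as np

# Hypothetical two-layer classifier with fixed, hand-picked weights.
# The network and its parameters are invented purely for illustration.
W1 = np.array([[ 2.0, -1.0],
               [-1.5,  2.5]])
b1 = np.array([0.1, -0.2])
W2 = np.array([[ 1.0, -1.0],
               [-1.0,  1.0]])

def classify(x):
    """Forward pass: ReLU hidden layer, argmax over two output classes."""
    h = np.maximum(0.0, W1 @ x + b1)
    logits = W2 @ h
    return int(np.argmax(logits)), logits

# A clean input that happens to sit close to the decision boundary,
# and the same input with a tiny perturbation in one feature
# (e.g. noise from a dirty sensor).
x_clean = np.array([0.50, 0.58])
x_noisy = x_clean + np.array([0.00, 0.02])

print(classify(x_clean))  # -> class 0, logits roughly [ 0.02, -0.02]
print(classify(x_noisy))  # -> class 1, logits roughly [-0.05,  0.05]
```

For the clean input the two output logits differ by only a few hundredths, so a change of 0.02 in a single feature is enough to push the input across the decision boundary and change the classification, even though the input is, for practical purposes, the same situation.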
As a result, the decision quality of AI technology cannot be verified with formal methods. Currently available approaches for an AI system's self-estimation of its own dependability are not yet technically mature. Furthermore, artificial intelligence cannot be validated using conventional methods.