Artificial intelligence must be validated
AI-based autonomous systems are susceptible to sporadic errors. Such errors, triggered for example by inclement weather or by situations the system has never learned, can have serious consequences. With this in mind, the Fraunhofer Institute for Cognitive Systems IKS researches and develops methods for the automated validation of artificial intelligence (AI) technologies and autonomous systems. The goal is for the system to be able to switch to a safe state at any time when expected or unexpected events occur, without being interrupted or deactivated.
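To make the idea of switching to a safe state without deactivating the system more concrete, the following is a minimal, illustrative sketch of a runtime monitor. It is not Fraunhofer IKS's actual method; the class names, the confidence threshold, and the `Perception` fields are all assumptions made for this example. The monitor degrades to a safe operating mode when the AI component reports low confidence or leaves its assumed operating conditions, and returns to nominal operation once conditions recover.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Mode(Enum):
    NOMINAL = auto()      # full autonomous operation
    SAFE_STATE = auto()   # degraded but safe operation (e.g., reduced speed)


@dataclass
class Perception:
    """Hypothetical output of an AI perception component (illustrative only)."""
    confidence: float     # self-reported confidence in [0, 1]
    weather_ok: bool      # True if conditions are within the validated envelope


class RuntimeMonitor:
    """Illustrative runtime monitor: switches to a safe state when the AI
    component leaves its validated operating conditions, instead of
    shutting the whole system down."""

    def __init__(self, min_confidence: float = 0.8):
        self.min_confidence = min_confidence  # assumed threshold for this sketch
        self.mode = Mode.NOMINAL

    def update(self, perception: Perception) -> Mode:
        # Degrade rather than deactivate when outside the validated conditions.
        if perception.confidence < self.min_confidence or not perception.weather_ok:
            self.mode = Mode.SAFE_STATE
        else:
            self.mode = Mode.NOMINAL
        return self.mode


if __name__ == "__main__":
    monitor = RuntimeMonitor()
    print(monitor.update(Perception(confidence=0.95, weather_ok=True)))   # Mode.NOMINAL
    print(monitor.update(Perception(confidence=0.55, weather_ok=False)))  # Mode.SAFE_STATE
```

In this sketch the system keeps running in a reduced, safe mode rather than being switched off, which mirrors the stated goal of handling expected and unexpected events without interrupting operation.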