The example of autonomous driving illustrates the three criteria:
A self-driving automobile has to operate safely in traffic without causing accidents. At the same time, it has to be dependable from an availability standpoint, meaning it has to remain usable. An autonomous vehicle that shuts down in every uncertain situation and instead sits on the side of the road may be safe, but it is not dependable.
This raises the additional question of cost. Autonomous vehicles have to undergo millions of kilometers of monitored test driving. This approach is especially popular in the US, where it is used to acquire as much experience as possible with the operation of the vehicle. The sheer number of test kilometers makes these tests extremely complicated, however, and requires considerable time and money. Every possible scenario has to be tested, such as different lighting conditions, weather conditions, and situations in the vehicle's surroundings. Furthermore, every single software change, such as an update or a new version, has to be re-verified with extensive road tests. This method alone will not lead to the success of autonomous driving.
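To see why exhaustive road testing does not scale, a rough back-of-the-envelope sketch helps. The scenario dimensions and counts below are illustrative assumptions, not figures from the text; they only show how quickly the combinations multiply:

```python
# Illustrative estimate of the scenario space. All dimension names and
# counts are hypothetical assumptions chosen only to demonstrate the
# combinatorial explosion, not real test parameters.
from math import prod

scenario_dimensions = {
    "lighting": 5,             # e.g. day, dusk, night, glare, tunnel
    "weather": 6,              # e.g. clear, rain, snow, fog, hail, ice
    "road_type": 4,            # e.g. highway, urban, rural, parking
    "traffic_density": 4,      # e.g. empty, light, dense, congested
    "pedestrian_behavior": 5,  # e.g. none, crossing, jaywalking, ...
}

combinations = prod(scenario_dimensions.values())
print(f"distinct scenario combinations: {combinations}")  # 2400

# Assuming (hypothetically) 100 km of driving to encounter one specific
# combination, covering each combination just once takes 240,000 km --
# and every software update would require repeating the whole campaign.
print(f"km for a single pass at 100 km/scenario: {combinations * 100:,}")
```

Each additional scenario dimension multiplies the total, which is why the required test mileage grows far faster than any fleet can realistically drive.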
With this in mind, AI-based methods and cognitive systems have to be validated within a corresponding system and software architecture, one that forms a protective framework in which wrong decisions by the AI technology cannot cause any damage.
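One common way to realize such a protective framework is a runtime safety monitor, sometimes described as a "safety cage" or doer/checker pattern: a simple, conventionally verifiable component checks every decision of the AI against deterministic safety constraints and overrides it with a safe fallback if a constraint would be violated. The following is a minimal sketch under that assumption; all class names, limits, and the fallback action are hypothetical and chosen for illustration only:

```python
from dataclasses import dataclass

# Minimal doer/checker sketch of a protective framework around an AI
# planner. Names, limits, and the fallback are hypothetical assumptions.

@dataclass
class Action:
    acceleration: float    # m/s^2, negative values mean braking
    steering_angle: float  # degrees

MAX_ACCEL = 3.0        # assumed deterministic safety limits
MAX_BRAKE = -8.0
MAX_STEERING = 30.0

SAFE_FALLBACK = Action(acceleration=-2.0, steering_angle=0.0)  # gentle stop

def within_safety_envelope(action: Action) -> bool:
    """Deterministic, easily verifiable checks -- no AI involved."""
    return (MAX_BRAKE <= action.acceleration <= MAX_ACCEL
            and abs(action.steering_angle) <= MAX_STEERING)

def safe_execute(ai_proposed: Action) -> Action:
    """The checker: a wrong AI decision is caught before causing damage."""
    if within_safety_envelope(ai_proposed):
        return ai_proposed
    return SAFE_FALLBACK  # override the AI with a minimal-risk action

# Example: the AI (the "doer") proposes an unsafe swerve; the framework
# replaces it with the safe fallback.
risky = Action(acceleration=2.0, steering_angle=55.0)
print(safe_execute(risky))  # Action(acceleration=-2.0, steering_angle=0.0)
```

The point of this design is that the checker stays deliberately simple, so it can be verified with conventional methods, while the AI component behind it can continue to be validated statistically.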