A safety assistant based on generative AI
Artificial intelligence (AI) is enabling exponential growth in many industries.
Companies want to exploit the economic potential of this technology, but in safety-critical systems in particular, AI-based and AI-generated functions raise significant safety concerns. Automation in safety-critical environments, above all in manufacturing, faces the challenge of reconciling technological innovation with stringent safety requirements.
While automation technologies are making rapid progress through the use of AI, in particular large language models (LLMs) and conversational AI, processes are becoming increasingly complex and safety standards are lagging behind. Although AI functions are very powerful, they can hardly provide any safety guarantees.
AI assistants and large language models are becoming increasingly important.
Historically, safety innovation has always been one step behind technological advances. This gap was manageable in the past, but with the accelerating pace of innovation, the distance between technological progress and safety assurance is growing. The work of safety engineers is becoming increasingly difficult, and companies are more often faced with the choice of either slowing down innovation or accepting risk, neither of which is a viable solution. This is where a safety assistant based on generative AI could provide a solution.
Fraunhofer IKS is pursuing a new approach to closing the gap between innovation and safety:
We want to support the work of safety engineers by using generative AI (GenAI). Generative AI can propose creative solutions and automate large parts of the safety engineering workflow, such as hazard and risk analysis (HARA) and the derivation of safety requirements. This enables engineers to build robust safety verification even for complex applications, for example collaborative manufacturing robots.
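To make this concrete, the following is a minimal, hypothetical sketch of how a generative model could be asked to propose candidate hazards for a HARA of such a use case. The prompt wording, the query_llm helper and the output schema are illustrative assumptions rather than Fraunhofer IKS tooling; the helper returns a canned answer here so the sketch runs without any LLM backend.

```python
import json
from dataclasses import dataclass


@dataclass
class HazardCandidate:
    """One proposed HARA entry, to be reviewed by a human safety engineer."""
    category: str       # e.g. mechanical, electrical, ergonomic
    description: str    # what could go wrong
    cause: str          # plausible trigger
    severity_hint: str  # rough qualitative estimate, not a final rating


SYSTEM_DESCRIPTION = (
    "A collaborative robot arm assembles parts together with a human worker "
    "in a shared workspace."
)

PROMPT = (
    "You are assisting a safety engineer with a hazard and risk analysis (HARA).\n"
    f"System under analysis: {SYSTEM_DESCRIPTION}\n"
    "List candidate hazards as a JSON array of objects with the keys "
    "'category', 'description', 'cause' and 'severity_hint'."
)


def query_llm(prompt: str) -> str:
    """Placeholder for a call to any chat-completion API; returns canned output."""
    return json.dumps([{
        "category": "mechanical",
        "description": "Robot arm moves while the human reaches into the workspace",
        "cause": "Workspace monitoring fails to detect the human in time",
        "severity_hint": "high",
    }])


def propose_hazards() -> list[HazardCandidate]:
    """Ask the model for hazard candidates and parse them into structured entries."""
    return [HazardCandidate(**entry) for entry in json.loads(query_llm(PROMPT))]


if __name__ == "__main__":
    for hazard in propose_hazards():
        print(f"[{hazard.category}] {hazard.description} (cause: {hazard.cause})")
```

Every entry produced this way is only a draft: a human safety engineer reviews, corrects and completes it before it enters the analysis.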
At Fraunhofer IKS, we are evaluating this methodology by applying it to a collaborative assembly process.
In this scenario, a robotic arm works together with a human to complete assembly tasks, supported by an autonomous mobile robot that delivers components. A camera monitors the work area to safeguard it. A signal light indicates the safety status, and the robot automatically pauses while the human is working. Once the human task is completed, the robot safely resumes the assembly.
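In highly simplified form, the safety logic of this cell can be pictured as a monitor that couples the camera's presence detection to the robot and the signal light. The sketch below is purely illustrative: the class and method names are assumptions, and a real installation would rely on a certified safety controller rather than application code like this.

```python
from enum import Enum


class LightState(Enum):
    GREEN = "green"    # workspace clear, robot may move
    YELLOW = "yellow"  # human detected, robot paused


class Robot:
    """Stand-in for the robot arm's motion interface."""
    def pause(self) -> None:
        print("robot: motion paused")

    def resume(self) -> None:
        print("robot: motion resumed")


class SignalLight:
    """Stand-in for the signal light in the assembly cell."""
    def set(self, state: LightState) -> None:
        print(f"signal light: {state.value}")


class SafetyMonitor:
    """Simplified monitor: pauses the robot while the human is working."""
    def __init__(self, robot: Robot, light: SignalLight) -> None:
        self.robot = robot
        self.light = light

    def on_camera_frame(self, human_in_workspace: bool) -> None:
        # A detected human always takes priority over pending robot motion.
        if human_in_workspace:
            self.robot.pause()
            self.light.set(LightState.YELLOW)
        else:
            self.light.set(LightState.GREEN)
            self.robot.resume()


if __name__ == "__main__":
    monitor = SafetyMonitor(Robot(), SignalLight())
    monitor.on_camera_frame(human_in_workspace=True)   # human steps in
    monitor.on_camera_frame(human_in_workspace=False)  # human task completed
```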
Our preliminary tests show that generative AI can significantly improve the creation of safety documentation by automating the organization and structuring of these documents, increasing clarity and compliance. In addition, generative methods can increase the efficiency and thoroughness of a HARA by identifying potential hazards across different categories. These methods can also suggest and compare potential safety measures, derive safety requirements, and identify gaps in existing documentation.
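One of these checks, finding gaps in existing documentation, can be pictured as a simple traceability cross-check: every identified hazard should be covered by at least one safety requirement. The toy data below (hazard IDs, requirement texts and numerical values) is invented purely for illustration.

```python
# Toy traceability check: which hazards are not yet covered by any requirement?
hazards = {
    "HZ-01": "Robot arm moves while the human reaches into the workspace",
    "HZ-02": "Mobile robot collides with the human during component delivery",
    "HZ-03": "Camera fails to detect a human in the workspace",
}

# Each requirement lists the hazard IDs it mitigates (illustrative convention).
requirements = {
    "REQ-01": {"text": "Robot shall stop within 200 ms of human detection",
               "mitigates": ["HZ-01"]},
    "REQ-02": {"text": "Mobile robot shall keep at least 0.5 m distance from humans",
               "mitigates": ["HZ-02"]},
}

covered = {hz for req in requirements.values() for hz in req["mitigates"]}

for hz_id, text in hazards.items():
    if hz_id not in covered:
        print(f"GAP: {hz_id} has no mitigating safety requirement ({text})")
```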
While the current generation of generative AI is not yet reliable enough to perform these tasks independently, it can help safety experts by reducing the burden of repetitive tasks.
To ensure system safety, it is crucial to validate the planned safety measures and the correct implementation of the safety requirements. In our use case, we are evaluating how generative AI can support the planning of these activities. For example, it can help specify simulation tests that examine the system's response to various safety-related scenarios identified in the HARA, including human presence and component failures.
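As a concrete picture of what such a test specification might look like, the sketch below encodes two HARA-derived scenarios, human presence and a component failure, together with the system response expected in each case, reusing the toy hazard IDs from the earlier sketch. Scenario names, fields and expected behaviours are illustrative assumptions, not the project's actual test catalogue.

```python
from dataclasses import dataclass


@dataclass
class SimulationTest:
    """One simulation test case derived from a HARA scenario."""
    scenario: str           # situation to inject into the simulation
    trigger: str            # event that starts the scenario
    expected_response: str  # behaviour the system must show
    linked_hazard: str      # traceability back to the HARA entry


TEST_SUITE = [
    SimulationTest(
        scenario="Human enters the shared workspace during assembly",
        trigger="camera reports human presence",
        expected_response="robot pauses and signal light switches to yellow",
        linked_hazard="HZ-01",
    ),
    SimulationTest(
        scenario="Camera feed drops out mid-operation",
        trigger="no camera frame received within the expected interval",
        expected_response="robot stops and the cell enters a safe state",
        linked_hazard="HZ-03",
    ),
]

for test in TEST_SUITE:
    print(f"[{test.linked_hazard}] {test.scenario} -> expect: {test.expected_response}")
```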
Generative AI represents a paradigm shift in safety engineering. It can close the gap between innovation and safety assurance in an economically viable way and enable AI-powered automation solutions in safety-critical applications. Our approach shows that AI safety assistants, used under the supervision of human safety experts, can adapt safety protocols more quickly to technological advances and thus reduce delays.
In the future, we want to further develop and expand this methodology and transfer it to other safety-critical systems. In addition, we want to support engineers in the subsequent creation of robot behavior based on the generated safety requirements. This would make it possible to use AI to directly create control programs that meet all safety requirements and ensure seamless and safe integration of automation solutions.