AI Act workshops
A proactive conformity assessment of AI systems provides guidelines for the design, development and use of an AI system, mitigates the risks of AI failures and can prevent reputational and financial damage by avoiding liability issues. Our AI Act workshops offer systematic support for the compliance of your AI system and cover the following topics:
- Establish a continuous risk management system by identifying, analyzing and estimating known and foreseeable risks and providing mitigation measures
- Ensure high-quality training, validation and testing data, minimizing discrimination and bias in the data sets
- Provide methods for design and evaluation of ML models tailored to your application requirements (e.g., accuracy, robustness, prediction certainty, transparency and explainability)
- Integrate monitoring and logging capabilities and provide automated technical documentation (see the logging sketch after this list)
- Ensure post-market monitoring and verification of safety and performance properties throughout the entire lifecycle
- Support registration in the EU’s future database of high-risk AI systems
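To give a concrete flavour of the logging capability mentioned above, the sketch below wraps a classifier so that every prediction is recorded with a timestamp, model version, input hash and confidence score. It is a minimal illustration only, assuming a scikit-learn-style model that exposes predict_proba(); the class name, log file and record fields are hypothetical and are not prescribed by the AI Act itself.

```python
# Minimal prediction-logging sketch, assuming a scikit-learn-style classifier
# with predict_proba(). All names (LoggedModel, predictions.log, record fields)
# are illustrative, not requirements of the AI Act.
import json
import hashlib
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="predictions.log", level=logging.INFO,
                    format="%(message)s")

class LoggedModel:
    """Wraps a classifier and records every prediction for later audit."""

    def __init__(self, model, model_version):
        self.model = model
        self.model_version = model_version

    def predict(self, features):
        # Highest class probability serves as a simple confidence score.
        probabilities = self.model.predict_proba([features])[0]
        label = int(probabilities.argmax())
        confidence = float(probabilities.max())

        # Hash the input so the record is traceable without storing raw data.
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": self.model_version,
            "input_sha256": hashlib.sha256(repr(features).encode()).hexdigest(),
            "prediction": label,
            "confidence": round(confidence, 4),
        }
        logging.info(json.dumps(record))
        return label, confidence
```

Structured records of this kind can later be aggregated as one building block of automated technical documentation and post-market monitoring.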
Training course "Machine Learning for Safety Experts"
Our training course "Machine Learning for Safety Experts" covers the fundamental principles of arguing the safety of automotive functions that make use of machine learning technologies. In particular, the course focuses on the impact of machine learning on the “safety of the intended functionality” as described by the standard ISO 21448. The course includes the following topics and can be adapted to your use case and requirements.
- Introduction to machine learning on the basis of an example function and publicly available data
- Safety challenges of machine learning
- Short introduction to relevant safety standards and their impact on machine learning
- Safety lifecycle for machine learning functions
- Derivation of safety requirements for ML functions with particular focus on the definition of safety-related properties such as accuracy, robustness, prediction certainty, transparency and explainability
- The impact of training and test data on safety
- Methods for evaluating the performance of the ML function against its safety requirements (illustrated by the sketch after this list)
- Safety analysis applied to machine learning
- Architectural measures to improve the safety of ML functions
- Design-time and operation-time methods for ensuring the safety of ML functions
- Assurance arguments for machine learning
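As a simple illustration of evaluating an ML function against its safety requirements, the sketch below compares measured test-set metrics with explicit acceptance criteria. All metric names, thresholds and data are hypothetical; in a real project the criteria would be derived from the safety analysis of the concrete function.

```python
# Sketch of checking an ML function against quantitative safety requirements.
# Metric names, thresholds and the dummy test results are purely illustrative.
from dataclasses import dataclass

@dataclass
class SafetyRequirement:
    name: str
    threshold: float

# Illustrative acceptance criteria; real values come from the safety analysis.
REQUIREMENTS = [
    SafetyRequirement("accuracy", 0.95),
    SafetyRequirement("high_confidence_coverage", 0.90),
]

def evaluate(labels, predictions, confidences, confidence_cutoff=0.8):
    """Compute the two example metrics from test-set results."""
    total = len(labels)
    correct = sum(1 for y, p in zip(labels, predictions) if y == p)
    confident = sum(1 for c in confidences if c >= confidence_cutoff)
    return {
        "accuracy": correct / total,
        "high_confidence_coverage": confident / total,
    }

def check_requirements(metrics):
    """Compare measured metrics against the acceptance criteria."""
    for req in REQUIREMENTS:
        status = "PASS" if metrics[req.name] >= req.threshold else "FAIL"
        print(f"{req.name}: {metrics[req.name]:.3f} "
              f"(required >= {req.threshold}) -> {status}")

# Example with dummy test results.
metrics = evaluate(labels=[0, 1, 1, 0], predictions=[0, 1, 0, 0],
                   confidences=[0.97, 0.91, 0.55, 0.88])
check_requirements(metrics)
```

The same pattern extends to further safety-related properties such as robustness or prediction certainty, each with its own acceptance criterion.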
Are you interested in a workshop or training? Contact us directly to discuss your needs: bd@iks.fraunhofer.de