Technical Report

SA TR ISO/IEC 5469:2024

Status: Current

Artificial intelligence - Functional safety and AI systems

SA TR ISO/IEC 5469:2024 identically adopts ISO/IEC TR 5469:2024, which describes the properties, related risk factors, and available methods and processes for: the use of AI inside a safety-related function to realize the functionality; the use of non-AI safety-related functions to ensure the safety of AI-controlled equipment; and the use of AI systems to design and develop safety-related functions.
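As a loose illustration of the second of these patterns (a non-AI safety-related function ensuring the safety of AI-controlled equipment), the sketch below shows a deterministic supervision function that passes an AI controller's command through only while it stays inside a verifiable safe envelope, and otherwise commands a predefined safe state. This is a minimal sketch of the general idea only, not material from the TR; the names, limits, and fallback behaviour are all hypothetical.

    # Hypothetical sketch: a conventional (non-AI) supervision function
    # that bounds an AI controller's output. Names and limits are
    # illustrative assumptions, not taken from ISO/IEC TR 5469:2024.
    from dataclasses import dataclass

    @dataclass
    class SafeEnvelope:
        """Deterministic, verifiable bounds enforced outside the AI model."""
        min_output: float
        max_output: float
        fallback: float  # safe state commanded when the AI output is invalid

    def supervised_command(ai_output: float, envelope: SafeEnvelope) -> float:
        """Pass the AI command through only if it lies inside the envelope;
        otherwise drive the system to the predefined safe state."""
        if envelope.min_output <= ai_output <= envelope.max_output:
            return ai_output
        return envelope.fallback

    if __name__ == "__main__":
        envelope = SafeEnvelope(min_output=0.0, max_output=50.0, fallback=0.0)
        print(supervised_command(42.0, envelope))   # within limits -> 42.0
        print(supervised_command(120.0, envelope))  # out of limits -> 0.0 (safe state)

Because the envelope check itself contains no AI technology, it can be specified, verified, and validated with conventional functional safety techniques; that separation is the point of the pattern.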
Published: 03/05/2024
Pages: 78
Table of contents
Header
About this publication
Preface
Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 Abbreviated terms
5 Overview of functional safety
5.1 General
5.2 Functional safety
6 Use of AI technology in E/E/PE safety-related systems
6.1 Problem description
6.2 AI technology in E/E/PE safety-related systems
7 AI technology elements and the three-stage realization principle
7.1 Technology elements for AI model creation and execution
7.2 The three-stage realization principle of an AI system
7.3 Deriving acceptance criteria for the three stages of the realization principle
8 Properties and related risk factors of AI systems
8.1 Overview
8.1.1 General
8.1.2 Algorithms and models
8.2 Level of automation and control
8.3 Degree of transparency and explainability
8.4 Issues related to environments
8.4.1 Complexity of the environment and vague specifications
8.4.2 Issues related to environmental changes
8.4.2.1 Data drift
8.4.2.2 Concept drift
8.4.3 Issues related to learning from environment
8.4.3.1 Reward hacking algorithms
8.4.3.2 Safe exploration
8.5 Resilience to adversarial and intentional malicious inputs
8.5.1 Overview
8.5.2 General mitigations
8.5.3 AI model attacks: adversarial machine learning
8.6 AI hardware issues
8.7 Maturity of the technology
9 Verification and validation techniques
9.1 Overview
9.2 Problems related to verification and validation
9.2.1 Non-existence of an a priori specification
9.2.2 Non-separability of particular system behaviour
9.2.3 Limitation of test coverage
9.2.4 Non-predictable nature
9.2.5 Drifts and long-term risk mitigations
9.3 Possible solutions
9.3.1 General
9.3.1.1 Directions for risk mitigation
9.3.1.2 AI metrics and safety verification and validation
9.3.2 Relationship between data distributions and HARA
9.3.3 Data preparation and model-level validation and verification
9.3.4 Choice of AI metrics
9.3.5 System-level testing
9.3.6 Mitigating techniques for data-size limitation
9.3.7 Notes and additional resources
9.4 Virtual and physical testing
9.4.1 General
9.4.2 Considerations on virtual testing
9.4.3 Considerations on physical testing
9.4.4 Evaluation of vulnerability to hardware random failures
9.5 Monitoring and incident feedback
9.6 A note on explainable AI
10 Control and mitigation measures
10.1 Overview
10.2 AI subsystem architectural considerations
10.2.1 Overview
10.2.2 Detection mechanisms for switching
10.2.3 Use of a supervision function with constraints to control the behaviour of a system to within safe limits
10.2.4 Redundancy, ensemble concepts and diversity
10.2.5 AI system design with statistical evaluation
10.3 Increase the reliability of components containing AI technology
10.3.1 Overview of AI component methods
10.3.2 Use of robust learning
10.3.3 Optimization and compression technologies
10.3.4 Attention mechanisms
10.3.5 Protection of the data and parameters
11 Processes and methodologies
11.1 General
11.2 Relationship between AI life cycle and functional safety life cycle
11.3 AI phases
11.4 Documentation and functional safety artefacts
11.5 Methodologies
11.5.1 Overview
11.5.2 Fault models
11.5.3 PFMEA for offline training of AI technology
Annex A
A.1 Overview
A.2 Analysis of applicability of techniques and measures in IEC 61508-3:2010 Annexes A and B to AI technology elements
Annex B
B.1 Overview
B.2 Example for an automotive use case
B.3 Example for a robotics use case
Annex C
C.1 General
C.2 Data distribution and HARA
C.3 Coverage of data for identified risks
C.4 Data diversity for identified risks
C.5 Reliability and robustness
Annex D
Bibliography