Navigating the Challenges and Opportunities of AI in Safety-Critical Systems

by | Jul 4, 2024 | Cyber and Digital

The prevalence of artificial intelligence (AI) in today’s climate cannot be ignored, and whilst the opportunities provided by this technology are vast, the challenges are equally significant, particularly when it comes to safety-critical systems.

As AI technologies advance rapidly and integrate more deeply into various sectors, the complexity and non-deterministic behaviour of these systems present significant safety challenges. Ensuring that AI systems operate reliably and safely is paramount, especially in applications where failures can have severe consequences.


Where Are We on the Journey?

AI’s rapid evolution presents numerous challenges in ensuring its safety. Compliance with regulations remains essential, and although significant progress has been made in developing safety standards and assurance frameworks, continuous investment in research and adaptation is necessary to keep pace with technological advancements. It is crucial for all stakeholders to stay informed and engaged, develop effective safeguarding measures, and enhance the interpretability and transparency of AI systems.

Whilst safety standards and practices are emerging that discuss the issues around assuring AI in safety-related applications, the jury is still out on whether there is a viable route to assuring AI, at least for the most critical applications.

The current focus is on improving the robustness of AI systems, ensuring adherence to evolving regulations, and fostering collaboration among stakeholders to comprehensively address safety concerns.


Research in AI for Functional Safety

The body of knowledge surrounding AI is vast and continuously expanding, so we must address the key questions about integrating AI into safety-critical systems.

These questions include:

  • How can the rapid pace of change in AI be accommodated while maintaining a cautious approach to safety?
  • How can AI be leveraged to add value, and where is it actually needed?
  • What is the roadmap for releasing AI-based products, especially to the public?
  • How can traditional safety design practices be adapted to account for AI’s unique challenges?
  • What are the assurance considerations for systems using AI in non-safety functions?
  • What defines an AI-knowledgeable team?

And ultimately, is there a viable route to assuring AI for the most critical applications?


Key Concerns in AI Safety

There is a pressing need for guidance to assure AI for safety, address its non-deterministic behaviour, and manage the potential for increased system complexity and undesired behaviours, such as hallucination. Additionally, defining AI’s role in design and development, improving current system assurances, and considering AI alongside human factors are critical. This includes training users to handle errors, dealing with unusual behaviour, and preventing user overload.

Public perception of AI varies, and there are notable concerns about its safety and ethical implications. The civil aviation industry offers a valuable maxim: only introduce a change if it either improves safety or provides an operational benefit without compromising safety. This principle should guide AI integration across various sectors.


AI Use Considerations

AI’s role in safety-related solutions encompasses implementation of safety functions, real-time monitoring and decision support, as well as assistance with operational procedures. The technology can also be utilised in the development of new safety systems, such as design optimisation, verification, validation, and support for safety assessments. Whilst this can save time and simplify some tasks, it is crucial to consider:

  • The regulation of AI
  • Training and competency for applying AI in safety applications
  • Integration with existing safety frameworks

It is vital to avoid over-dependence on AI and to mitigate the associated risks.
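One practical way to avoid over-dependence, consistent with the point above, is to accept AI-assisted results only when they agree with an independently derived conventional calculation. The sketch below is purely illustrative; the `crosscheck` function, values, and tolerance are hypothetical and not drawn from any specific standard.

```python
def crosscheck(ai_estimate: float, reference_estimate: float, tolerance: float) -> bool:
    """Accept an AI-derived value only if it agrees with an independently
    computed reference value to within a set tolerance."""
    return abs(ai_estimate - reference_estimate) <= tolerance

# Hypothetical example: an AI-assisted stress estimate is accepted only
# if it agrees with a conventional hand calculation to within 5%.
ai_value, hand_value = 102.0, 100.0
accepted = crosscheck(ai_value, hand_value, tolerance=0.05 * hand_value)
```

The key design point is that the reference channel is produced without AI, so a single hidden mistake in the AI-assisted channel cannot silently propagate into the design.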


Risks and Opportunities

AI presents many opportunities, including enhanced design solutions, advanced system features, and improved performance and safety. However, these benefits come with significant risks, such as hidden mistakes, increased design complexity, unintended behaviours, and potential catastrophic failures.

As our understanding of AI capabilities and challenges grows, we will need a step-by-step approach to assure AI in safety applications. Whilst much progress has been made in applications such as autonomous driving, we are still a long way off assuring AI for safety-critical applications without the need for significant (non-AI based) safeguarding.
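The "(non-AI based) safeguarding" mentioned above is often realised as a deterministic runtime monitor that bounds what the AI is allowed to command. The sketch below shows one minimal form of this pattern; the class name, limits, and speed-command scenario are all assumptions for illustration, not a prescribed architecture.

```python
from dataclasses import dataclass


@dataclass
class SafetyEnvelope:
    """Deterministic (non-AI) bounds that an AI-issued command must satisfy."""
    min_speed: float
    max_speed: float
    fallback: float  # conservative command used when the AI output is rejected

    def guard(self, ai_command: float) -> float:
        """Pass the AI command through only if it lies inside the envelope;
        otherwise substitute the deterministic fallback."""
        if self.min_speed <= ai_command <= self.max_speed:
            return ai_command
        return self.fallback


# Hypothetical vehicle speed controller: commands outside 0-30 m/s are
# replaced by a safe stop command regardless of what the AI proposes.
envelope = SafetyEnvelope(min_speed=0.0, max_speed=30.0, fallback=0.0)
```

Because the envelope logic is simple and deterministic, it can be assured with conventional techniques even when the AI component inside it cannot.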


What Are the Known Issues with AI?

For starters, much of the current AI research remains theoretical and we need to see more practical case studies to bridge the gap between theory and real-world applications. Not only this, but there are still many gaps in the theoretical research, such as mapping algorithmic intentions to safety requirements, understanding AI’s behavioural characteristics, and tailoring assurance processes for different Safety Integrity Levels.

Issues such as the accuracy of large language models (LLMs) in specific domains, AI’s poor adaptation to real-world scenarios, and susceptibility to adversarial attacks highlight the need for ongoing research and improvement. The unpredictable nature of AI, coupled with its potential for bias, discrimination, and security vulnerabilities, underscores the importance of robust regulatory frameworks and proactive safety measures.

In conclusion, the journey of AI integration, especially in safety-critical systems, is marked by both remarkable advancements and significant challenges. The importance of continuous research, adaptive safety standards, and robust regulatory frameworks cannot be overstated. As AI technologies become more complex and widespread, ensuring their reliable and safe operation is paramount. Stakeholders must remain vigilant, informed, and collaborative, focusing on developing effective safeguarding measures and enhancing system transparency and interpretability. By addressing these challenges head-on and leveraging AI’s vast potential responsibly, society can harness the benefits of AI while mitigating its inherent risks, paving the way for a safer, more innovative future.
