DNV has published a suite of recommended practices (RPs) that will enable companies operating critical devices, assets, and infrastructure to safely apply artificial intelligence (AI).
High-quality AI systems depend on strong building blocks such as data, sensors, algorithms, and digital twins. Each of these digital building blocks is covered by the nine new or updated RPs.
The introduction of AI necessitates a new approach to risk. Unlike traditional mechanical or electrical systems, which deteriorate gradually over time, AI-enabled systems can change in milliseconds.
As a result, a standard DNV certificate with a three-to-five-year validity period might be rendered invalid by each new data point the system gathers.
This calls for a new assurance approach and a thorough understanding of the complex interplay between the system and its AI, enabling accurate evaluation of failure modes as well as opportunities for real-world performance optimisation.
Remi Eriksen, DNV Group President and CEO, said: “Many of our customers are investing significant amounts in AI and AI readiness, but often struggle to demonstrate trustworthiness of the emerging solutions to key stakeholders.
“This is the trust gap that DNV seeks to close with the launch of these recommended practices and which we are publishing ahead of the imminent European Union Artificial Intelligence Act.”
The European Union Artificial Intelligence Act will be the world’s first AI law. The Act defines AI very broadly, covering essentially any data-driven system deployed in the EU, irrespective of where it is developed or where it sources its data.