Standardising AI Webinar

28 January 2025

In 2024, the DRCF published a research paper on the third-party AI assurance market, interviewing providers and experts in this sector about how they approach and deliver their work. A key insight was that assurance providers felt that, to develop sustainable and responsible AI, the ecosystem needs a more standardised approach to the assessment and assurance of AI systems.

Given the clear benefits of a more standardised approach in sharing knowledge and driving best practice in AI development and deployment, the DRCF hosted a webinar in December 2024 to bring experts together to discuss the different mechanisms that support AI assurance. The webinar focused on AI standards and frameworks, covering both formal standards, such as ISO/IEC 42001 for AI management systems, and non-formal frameworks, such as NIST’s AI Risk Management Framework. The sessions looked at the types of standards and frameworks, how they are used, how they might be implemented in line with existing regulation, and accreditation. If you would like to view the recording of the webinar, please email drcf@ofcom.org.uk to request a copy.


The key takeaways are summarised below. Overarching themes included the need for more diversity in the standards development process and the importance of ensuring that standards are not so burdensome for businesses that they deter take-up.

At this exciting time for AI deployment, the DRCF member regulators are keen to continue leveraging our position to bring experts, industry and regulators together, driving forward the conversation and collaborating for a better and safer digital future.

Sessions and panellists


Session 1: Types of AI standards and frameworks and how they are developed

·       Ali Hessami – Process Architect and Vice Chair of Ethics Certification Program for Autonomous and Intelligent Systems, IEEE

·       Christopher Thomas – Research Associate on the AI Standards Hub, Alan Turing Institute

·       Gavin Jones – Lead Standards Development Manager, AI and Quantum, British Standards Institution

Session 2: AI standards and frameworks in practice

·       Alistair Hunter – Digital Accreditation Specialist, United Kingdom Accreditation Service

·       Pauline Norstrom – CEO, Anekanta AI

·       Rafah Knight – CEO, Secure AI


Key takeaways


·       The UK’s role in standards development: The UK has an established domestic standards development ecosystem, and UK experts are available to engage with every aspect of the process. This is imperative to ensure the UK is closely involved in developing international AI standards. An example is ISO/IEC 42001 on AI management systems, where approximately 130 UK experts reviewed the drafts ahead of agreement at the international level.

·       How standards and regulation interact: Technical standards have a long-standing relationship with regulation and, in the UK, designated formal standards can be referenced as requirements for regulatory compliance. The EU has taken the same approach in drafting and implementing the EU AI Act. However, prescribing specific standards in legislation can be challenging, as there must be room for progress and iteration to keep pace with technological change. This is why it is important to understand the breadth of systems and contexts, so that the right standards and frameworks are applied to differing requirements.

·       Challenges when developing AI standards: Developing standards is complex; it involves evaluating factors that are fundamentally philosophical and societal, and creating metrics for them that are as objective as possible. While standards have often operated as rigid frameworks, AI demands a more process-based approach so that best practice can be applied across different contexts.

·       Increasing participation in the development process: As AI deployment and scaling pick up pace, it is an exciting time for standards and framework development. Many people are already engaging with AI across different areas, and it is extremely important that a range of voices is represented in the development process. Including diverse voices ensures that standards and frameworks better reflect the needs and perspectives of all people in the UK.

·       AI is impacting every sector: Standardisation and sharing best practice will play an important role as businesses deploy AI across a broad range of sectors and for many different use cases. Many firms are using AI to improve efficiency or to support customer interactions. Manufacturing businesses have been using AI to enhance their planning and forecasting, for example by adjusting schedules based on predicted weather conditions, and the healthcare industry is exploring innovative ways to adopt AI safely whilst upholding its duty to protect patients and their sensitive information.

·       Upskilling technical and non-technical staff on best practice: The most important step in deploying safe and reliable AI systems is to make sure that everyone who interacts with them knows how to do so properly. Standards and frameworks can support staff in managing AI systems safely and improve their knowledge of those systems. Staff may not always recognise how AI systems could expose their organisations to risk (for example, through information sharing or inaccurate outputs), so firms must ensure they provide the right information on how to use their AI systems properly.

·       Benefits of eventual AI assurance accreditation: Accreditation in the UK is used by government to assess organisations that provide services including certification, testing, inspection, calibration, validation and verification. As these are key parts of standardising the AI ecosystem, accreditation can bring certainty and reliability to supply chains and help ensure quality. Although accreditation for AI is still in its early stages, there is ongoing work to grow and expand it in the UK. Accreditation can help show consumers or clients that a firm has the technical competence and impartiality processes in place to consistently provide a technically valid product. Organisations can also certify the quality of their products through the global accreditation network, reducing technical barriers to trade.

·       Putting people at the heart of AI: Much of the narrative centres on AI technologies themselves, but attention should shift to how the people interacting with these systems can gain the benefits of AI with as few risks as possible. Instead of viewing AI as inherently good or bad, we should concentrate on the human aspect of AI development, ensuring that people develop and use AI safely so that it leads to positive outcomes for society. To support this, developers and deployers must take appropriate steps to manage and mitigate any misuse of their AI systems.

Next steps


The DRCF regulators would like to see all firms follow best practice for AI development and deployment, utilising available standards and frameworks to ensure safe AI and to meet any regulatory requirements. We want to continue raising awareness of the various ways businesses and consumers can benefit from AI, including through assurance, standards and accreditation. AI is changing the way we live and work, and ultimately we want to ensure that it is deployed safely, to maximise the benefits and limit potential harms.
