AI Assurance: Highlighting opportunities across DRCF regulatory regimes

28 January 2025

The DRCF has a longstanding interest in AI assurance. Building on our 2022 report on auditing algorithms, the DRCF expanded its research in 2024 by publishing findings on third-party firms that provide AI assurance services, such as testing AI against specific benchmarks. This research enhanced our understanding of the emerging market for AI assurance and provided insights into the nature of the services on offer, including why and how clients are using them.

There is a variety of AI assurance mechanisms available, such as assessing the potential risks and societal impacts of systems, testing how well a system performs against a specific technical metric, or testing conformity with a particular legal requirement. While progress has been made in meaningfully evaluating the impact of AI systems in different contexts, we acknowledge that challenges remain for those seeking to assure the functionality, safety and legality of the AI systems they are developing or using.

While the terms “assurance” and “audit” are at times used interchangeably to describe forms of AI assessment, these processes have distinct purposes. The Department for Science, Innovation and Technology (DSIT) describes assurance as “the process of measuring, evaluating and communicating something about a system or process, documentation, a product or an organisation”, and states that AI assurance “measures, evaluates and communicates the trustworthiness of AI systems”. The report discusses a spectrum of AI assurance mechanisms, which includes some forms of audit, such as bias audits and compliance audits.

In the AI context, therefore, assurance helps measure, evaluate and communicate the trustworthiness, accuracy and impact of AI systems, while an audit is an assessment made to determine whether a firm is compliant with laws, industry standards or internal policies.

Ensuring the reliability of AI assurance is crucial to enabling consumers and businesses across the UK to safely benefit from AI innovation. It is therefore essential for AI assurance to evolve in alignment with current and potential future regulatory expectations. In this article, we aim to highlight the opportunities for firms to undertake AI assurance relevant to the remits of the DRCF regulators.

AI assurance opportunities across DRCF regimes

DRCF regulators’ remits cover numerous applications of AI systems. Each of the DRCF member regulators has powers to investigate and hold firms accountable for how they deploy or develop their AI systems, to ensure compliance with existing legal provisions. To assure and demonstrate compliance, firms may wish to undertake or commission AI assurance assessments, reports and analysis. Below, each of our member regulators discusses its expectations for AI assurance relevant to its regulatory remit.

Ofcom: AI assurance opportunities within Online Safety

A key objective of the Online Safety Act 2023 is that online services are designed and operated in a way that protects users from harm, including harm caused or amplified by AI systems. Online service providers may use AI systems for a range of safety-enhancing purposes, for example to power content moderation algorithms and age estimation technologies. The Online Safety Act also requires online services to assess the risk of illegal content present on their service and to consider how the AI systems (or simpler algorithmic systems) they deploy might spread or amplify illegal content or content which is harmful to children.

Ofcom has published Risk Assessment Guidance which sets out how services can comply with these duties. One of its recommendations is that online service providers consider the outcomes of external assessment or other risk assurance processes; under the Risk Assessment Guidance, these are referred to as “enhanced inputs”. This presents an opportunity for online service providers to utilise third-party AI assurance providers, which can enhance confidence that their risk management processes relating to AI systems are robust. These independent third-party assessments can provide insight and analysis which online service providers might be unable to produce themselves. Benefits may include greater independence and granularity of detail, which could enhance the quality and accuracy of the risk assessment. These services may also offer testing for providers that lack the in-house capacity to carry out risk assessment processes on their own.

Ofcom: AI assurance opportunities for Telecoms and Network Security

Ofcom has set ‘general conditions’ for telecoms providers as part of its duties under the Communications Act 2003. AI can enhance the sophistication of scam calls and messages, and Ofcom has the power under its general conditions to instruct providers to block access to numbers or services used for fraud or misuse. Under the Telecommunications (Security) Act 2021, Ofcom has a duty to ensure that telecoms providers take appropriate and proportionate measures to identify, reduce and prepare for security risks. Providers can use AI to monitor for abnormal activity so that issues can be detected and addressed more efficiently, to identify potential vulnerabilities, and to conduct predictive risk analysis.
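
As a simple illustration of what monitoring for abnormal activity can involve, the sketch below flags an observation that deviates sharply from a recent baseline. Real AI-based monitoring would be far more sophisticated than this statistical check; the traffic figures and the three-standard-deviation threshold are hypothetical assumptions for illustration only.

```python
# Illustrative sketch only: flag abnormal activity against a simple baseline.
# The figures and threshold are hypothetical assumptions.
from statistics import mean, stdev

# Hypothetical hourly connection counts recently observed on a network element.
baseline = [120, 118, 130, 125, 122, 119, 127, 124]
latest_observation = 310

# Treat anything more than three standard deviations above the baseline mean
# as abnormal and worth investigating.
threshold = mean(baseline) + 3 * stdev(baseline)
if latest_observation > threshold:
    print(f"Abnormal activity: {latest_observation} connections exceeds {threshold:.0f}")
```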

Under the Network and Information Systems (NIS) Regulations, Operators of Essential Services (OES) in the digital infrastructure sector are also expected to manage cyber security risks. The NCSC’s Cyber Assessment Framework (CAF) is referred to in the Government’s NIS national strategy, which applies to the NIS Regulations. Ofcom’s Revised NIS Guidance (2023) therefore recommends that OES have regard to that strategy (including the CAF) when planning, implementing, monitoring or updating any technical and organisational measures they take to comply with their security duties under the NIS Regulations. In this context, one of the CAF’s Indicators of Good Practice (B4.b) specifically references the need to understand automated decision-making technologies where they are used, and to be able to replicate the technology’s decisions. AI assurance services could potentially help in this regard. Note that the expectation under the Telecommunications (Security) Act and the NIS Regulations for cyber security risk governance and management applies to all technologies, regardless of whether they are AI-powered.
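
To illustrate the point in CAF indicator B4.b about being able to replicate an automated decision, the sketch below shows one possible way to record the inputs, logic version and output of each decision so that it can be re-run and checked later. This is a hypothetical example rather than anything prescribed by the CAF or Ofcom’s guidance; the decision rule, field names and hashing step are assumptions.

```python
# Illustrative sketch only: record enough context to replicate an automated
# decision later. The decision function and field names are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

def score_traffic(features: dict) -> str:
    """Hypothetical automated decision: flag a connection as 'block' or 'allow'."""
    return "block" if features.get("failed_logins", 0) > 5 else "allow"

def record_decision(features: dict, model_version: str, log: list) -> str:
    """Apply the decision logic and append a replayable record of the decision."""
    decision = score_traffic(features)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": features,
        "decision": decision,
    }
    # Hash the record contents so later checks can show the log entry was not altered.
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return decision

# Replication check: re-running the logged inputs through the same versioned
# logic should reproduce the recorded decision.
decision_log: list = []
record_decision({"failed_logins": 7, "source": "198.51.100.1"}, "v1.2.0", decision_log)
replayed = score_traffic(decision_log[0]["inputs"])
assert replayed == decision_log[0]["decision"], "Decision could not be replicated"
```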

ICO: Evaluating data controllers and processors’ compliance with data protection law

The ICO is a cross-economy, horizontal regulator; the ways in which the firms and public sector organisations it regulates use AI systems are therefore wide-ranging. The ICO regulates AI across its supply chain, as data protection applies whenever organisations use personal data to train, fine-tune or deploy AI. The ICO’s expectations are set out in its guidance on AI and data protection and have been incorporated into its AI toolkit and data protection audit framework.

In the context of the ICO’s remit, third-party assurance involves evaluating data controllers’ and processors’ compliance with data protection law. The use of third-party audits could build and demonstrate trust, form a key part of privacy management, and benchmark existing practices against the ICO’s expectations.

Areas where third-party AI assurance would be useful for the technical assessment and testing of AI models and source code include the following (a simple illustrative sketch appears after this list):

  • Auditing the statistical accuracy of the model
  • Analysing source code and AI system compliance with the exercise of people’s information rights
  • Reviewing and testing for system bias, including discrimination analysis
  • Analysing data flows and deletion
  • Testing the accuracy of data labelling, the consideration of edge cases and the use of special category data
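
To make the first and third of these areas more concrete, below is a minimal, purely illustrative Python sketch of the kind of statistical accuracy and bias checks an AI audit might include. The data, column names and the 0.2 disparity threshold are hypothetical assumptions for illustration, not figures drawn from ICO guidance.

```python
# Illustrative sketch only: basic accuracy and group-disparity checks on a
# hypothetical audit sample. Data, column names and threshold are assumptions.
import pandas as pd

audit_sample = pd.DataFrame({
    "prediction": [1, 0, 1, 1, 0, 0, 1, 0],   # model outputs
    "actual":     [1, 0, 0, 1, 0, 1, 1, 0],   # ground-truth outcomes
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],  # protected characteristic
})

# Statistical accuracy of the model on the audit sample.
audit_sample["correct"] = audit_sample["prediction"] == audit_sample["actual"]
overall_accuracy = audit_sample["correct"].mean()

# Per-group selection rate and accuracy, as a simple discrimination analysis.
by_group = audit_sample.groupby("group").agg(
    selection_rate=("prediction", "mean"),
    accuracy=("correct", "mean"),
)

print(f"Overall accuracy: {overall_accuracy:.2f}")
print(by_group)

# Flag a finding an auditor might investigate further (threshold is illustrative).
gap = by_group["selection_rate"].max() - by_group["selection_rate"].min()
if gap > 0.2:
    print(f"Selection rates differ by {gap:.2f} between groups - flag for review.")
```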

The ICO has recently conducted a programme of voluntary audits of organisations that develop or provide AI systems used in recruitment. These audits found considerable areas for improvement in data protection compliance and the management of AI-derived risks.

The ICO looks for evidence of external audit or accreditation as part of its audits, to provide independent assurance of the effectiveness of the control environment. It has, in the past, requested that developers or their clients and customers commission an independent external audit of systems as part of its investigations. It has also seen instances of AI developers undertaking ISO 27001/2 certification of their systems; ISO/IEC 27001 is a standard for establishing, implementing, maintaining and continually improving information security management systems.

Conformity with ISO/IEC 27001 means that an organisation has put in place a system to manage risks related to the security of the data it owns or handles, and that this system respects all the best practices and principles enshrined in the standard.

CMA: AI assurance in the context of competition and consumer protection   

The CMA has been building significant AI-related expertise to evaluate, investigate and (if necessary) take enforcement action in relation to AI systems. Its specialist Data, Technology and Analytics (DaTA) unit works across CMA cases to support teams technically, including on understanding AI and its implications for consumers and competition in real-world settings. Using its information gathering powers in the exercise of its various statutory functions, the CMA can request information, including the data and code behind a firm’s technical systems, to understand how those systems operate and assess them for relevant harms.

The CMA continued its work on AI through its foundation models programme, including the publication of an initial report in September 2023 and an update paper and technical report in April 2024. The CMA has outlined the following principles to guide the market to positive outcomes: access, diversity, choice, fair dealing, transparency and accountability. When firms across the economy are undertaking assurance, alignment with these principles may mitigate risks to competition and consumer protection arising from foundation models. However, firms should consider the principles alongside their obligations under existing consumer and competition law.

AI is also relevant to the new digital markets competition regime set out in the Digital Markets, Competition and Consumers Act 2024. These new powers, which came into force in January 2025, are intended to enable the CMA to respond quickly and flexibly to rapid developments in digital markets.

The Act also includes new powers for the CMA to enforce consumer protection law directly against infringing firms and envisages that significant penalties, of up to 10% of a firm’s worldwide turnover, may be imposed for non-compliance.

Anyone designing or implementing AI assurance must take account of consumer law. Consumer law is relevant throughout the AI supply chain, both for businesses developing AI systems that other businesses use in their engagement with consumers, and for the consumer-facing business itself. For example, a firm is likely to engage in an unfair commercial practice if it develops or deploys an AI tool which ultimately provides false information to consumers, or in any way deceives or is likely to deceive the average consumer, and this causes or is likely to cause the average consumer to take a different decision about a product. An example of this could be firms using AI systems that provide customers with inaccurate price information. The law applies whether the deployer has built the AI system themselves or is using a system developed by a third party. Consumer law also places a general duty on firms to exercise ‘professional diligence’ towards consumers: the standard of skill and care which a trader may reasonably be expected to exercise towards consumers, commensurate with honest market practice and good faith. This is likely to require firms which develop, deploy or otherwise use AI tools to take such appropriate steps as are necessary in the circumstances to prevent harm to consumers’ economic interests.

FCA: Ensuring fair consumer outcomes for financial services

The FCA has various regulatory requirements that are relevant to the use of AI. It regulates financial services firms and financial markets in the UK, and many of these firms are using, or considering how to use, AI safely and responsibly. For example, some firms make decisions on whether to offer or decline a consumer’s application for credit with the support of AI systems, or use AI for algorithmic trading. Firms may wish to use assurance techniques to help support regulatory compliance, although the onus is ultimately on the firm to ensure compliance.

The FCA AI Update sets out the FCA’s regulatory approach to AI. It highlights some of the key elements of its regulatory framework in relation to the former government’s AI principles. The FCA’s rules, regulations and core principles do not usually mandate or prohibit specific technologies. Rather, its regulatory approach is principles-based and outcomes-focused. This means that the FCA holds firms accountable for fair consumer outcomes irrespective of the technologies and processes they deploy. The FCA does so in line with its remit and statutory objectives: to protect consumers, to protect the integrity of the UK financial system and to promote effective competition in the interests of consumers.

To do this, the FCA works to identify and mitigate risks to those objectives, including from regulated firms’ reliance on different technologies, and the harms these could potentially create for consumers and financial markets.

The FCA’s approach to consumer protection is particularly relevant to the use of AI systems by firms. It is based on a combination of the FCA’s Principles for Businesses, other guidance and rules (both high-level and more detailed), and the Consumer Duty, which requires firms to play a more proactive role in delivering good outcomes for retail customers. These expectations, and others, could be relevant to anyone designing and implementing AI assurance in these areas.

Firms are also subject to requirements relating to risk management, systems and controls (see the “Safety, Security and Robustness” section of the AI Update for further details). These comprise a range of high-level principles alongside more specific requirements under the Senior Management Arrangements, Systems and Controls (SYSC) sourcebook, such as the general organisational requirements under SYSC 4. Firms may wish to use assurance techniques to support effective risk management, systems and controls, taking account of these requirements.

Next steps

The DRCF will continue to publish our learnings from assessments of AI systems, and we remain committed to fostering relationships with, and learning from, other organisations interested in AI assurance, including DSIT, other regulators and civil society groups.

The DRCF has also been considering the role of technical standards and frameworks in AI assurance and hosted a webinar in December 2024 to raise awareness of the broad range of tools that support the overall assurance ecosystem. This session covered the current landscape of AI standards and regulatory perspectives on the topic. Please see this blog covering the key takeaways. If you would like to view the recording of the webinar, please email drcf@ofcom.org.uk to request a copy. 
