Ensuring Trustworthy AI: the Emerging AI Assurance Market

16 July 2024

Since its inception, the DRCF AI project has been considering the impact of AI systems on society and the wider economy. DRCF member regulators have powers to investigate and hold firms accountable for the way their AI systems operate, and they see this emerging market as an important element of the overall AI assurance landscape.

An increasing number of UK companies are looking for ways to review the AI systems they use. This demand is being driven by the increased use of AI systems across all sectors of the economy and the need for these systems to meet good governance standards. In response, a growing number of providers and services specialising in AI assurance (also known as AI auditing and assessment) have emerged. AI assurance is a process that helps build confidence in AI systems by measuring and evaluating reliable, standardised, and accessible evidence about the capabilities of these systems. It is an increasingly important part of broader AI governance and a key pillar of support for organisations seeking to operationalise AI principles and policy.

As part of our 2023/24 Workplan, DRCF member regulators undertook a landscape review and interviewed providers of AI assurance services, aiming to enhance our understanding of the assurance sector in the UK and what further value we could provide. This included understanding which companies are currently providing these services, the nature of services being provided, and why and how clients are using them. We were also interested in finding out how these firms are marketing their services, and how firms are evidencing the effectiveness, trustworthiness, and credibility of the services they are offering.

Below we provide an overview of the assurance market and the key takeaways from our interviews regarding the development of this sector and its relationship with our regulatory remits.

NB. This is a landscape review presenting findings from assurance provider interviews; it is not DRCF member regulator policy.

Overview of the Assurance Market

The assurance market is a dynamic and growing sector, with an increasing number of providers offering a range of services. A variety of AI assurance services are available, with approaches based on governance, empirical, or technical assessments. These services focus on issues such as system accuracy, understanding of risks, and ethical implications such as bias and fairness.[i]

Assurance outputs range from light-touch advisory support to a full AI lifecycle check. The scope can depend on the use case, but most providers aim to offer an end-to-end evaluation service, known as a ‘lifecycle approach’ to an audit. The purpose of this approach is to scrutinise and challenge assumptions and decision-making at every stage of the AI model development process, uncovering potential risk factors throughout the AI supply chain.

The level of detail of a provider’s service can depend on its size and resourcing, with larger, well-resourced providers more likely to offer a full technical audit with bespoke recommendations, while smaller providers with less experience often provide a more hands-off governance audit rather than a lifecycle approach. Most providers focus on the EU and the UK, with only a few larger providers having a global reach. Large providers also generally take a broad, horizontal approach that spans sectors.

Where weaknesses or gaps are identified during assurance assessments, many providers offer mitigation solutions (which the organisation operating the model is ultimately responsible for implementing). The level of involvement depends on the level of risk identified, with larger providers generally taking a more hands-on role in addressing the risks and gaps identified during their audits.

Assurance providers told us:

  1. Well-defined, universally accepted standards are important

    Most providers highlighted the importance of well-defined, universally accepted technical standards and frameworks, as well as the development of accreditations for assurance providers against these standards.[ii] These were noted as particularly appropriate for assuring AI systems for accuracy, risk management, data quality, and fairness of AI-based decision making.

    Some providers suggested that use-case-specific standards would be more important than broad AI frameworks, due to unique risk environments. We found that the EU AI Act and the NIST AI Risk Management Framework have been used by many assurance providers; however, sector-specific standards are only emerging in high-risk use cases such as biometrics and healthcare. Generally, providers use these existing standards and regulations to inform their approach and contextualise these frameworks to suit the service they are providing.

    Many providers highlighted the interplay between standards, regulation, and enforcement. For example, standards without enforceable regulations can result in assurance becoming tokenistic (a checkbox exercise), while primary legislation without standardisation would present enforcement challenges, as there would be no agreed baseline against which to enforce. A common theme throughout our research was the importance of clear, strong regulation and universally accepted standards.

    Providers also noted that accreditation and standards could support the professionalisation of the sector. While standards could set expectations, accreditations could support the growth of the wider ecosystem through education, qualifications, and training programmes to address any skills gaps (further detail in finding 5).

    Some experts also highlighted the importance of building upon existing accreditation institutions and investing in capacity building, rather than creating new institutions specifically for AI assurance.

  2. Assurance taxonomies and definitions need to be clarified

    Providers noted that there is no universally agreed definition of what services are considered AI assurance. The nature of the assurance services varies significantly between providers, with a combination of governance, technical and regulatory compliance assurance methods being offered. They highlighted that this could create downstream challenges when establishing a broadly agreed taxonomy across the ecosystem for assurance, accreditation, and standards. Providers also noted challenges in distinguishing broader AI assurance practices from other forms of auditing which are generally tied to a specific compliance or regulatory requirement.

    As such, providers said that the existing definitions and assurance taxonomies could be reviewed to assess whether more distinct and nuanced definitions should be developed. They noted that this could help provide more clarity on the services provided for both assurance providers and the wider ecosystem.

  3. Regulation has an important role in driving demand for assurance services

    There was a widespread view among providers that regulators could play a helpful role in driving demand for assurance services, including by developing standards within their respective regulatory domains and by raising awareness of the importance of assurance more broadly. They noted that many firms might not be aware of the importance of effective external validation and assurance for their models, and that regulators could help make clear that firms across all sectors should take assurance (internal and external) seriously and go beyond minimum requirements for AI systems.

    While many providers cite reputational capital as the number one reason that their customers seek assurance services, they said there is a noticeable shift toward international regulatory drivers (such as the EU AI Act and the New York State AI bias law) playing a more significant role in driving demand for assurance services. Providers specifically noted that they were prioritising assurance against international regulations, such as the EU AI Act, and that further work was required before they would be in a position to offer assurance services in relation to domestic UK regulations, such as Ofcom’s Online Safety regime or the CMA’s Digital Markets regime.

  4. Gaining access to clients’ systems and data can be challenging

    Providers emphasised the challenges in accessing clients’ systems and data, especially when dealing with third-party developers that provide AI systems to their clients. They noted that this could be because customers or third-party providers might not be aware of model risks (either their existence or their severity), and that some might prioritise commercial delivery over system review (i.e. assurance might delay bringing products to market).

    Providers have adopted various approaches to managing this challenge. These include secondments directly into clients’ teams, techniques for remote access to systems, and ensuring that providers can access clients’ systems by embedding access requirements in procurement contracts with third-party developers.

  5. There are challenges in finding staff with the range of skills needed 

    Many providers mentioned challenges in finding staff with the full range of skills needed to assure AI systems across areas such as cyber security, technical performance, machine learning, and other subject matter expertise. They noted that this gap could be addressed by future initiatives to develop a training ecosystem for the sector.

    Some providers also noted that there might be a link between providers’ size and the identified skills gap. For example, while smaller providers were perceived to be more agile, they might lack the necessary human resources and skills to provide bespoke, quality services. By contrast, it was noted that larger providers might lack expertise in highly specialised niches such as facial recognition technologies or algorithms designed to automate recruitment processes.

    Another recurrent theme was the lack of AI and data literacy among industry actors and, consequently, a lack of awareness of the value that external assurance could provide. In response, we found that some larger providers support their clients in developing their AI and data fluency, increasing their awareness of AI risk and allowing them to make informed judgements about their AI models.

Next steps

As set out in our 2024/25 Workplan, the DRCF will continue to undertake research into the third-party AI assurance market, with the aim of informing industry on how it can make the best use of the services offered by this growing sector. DRCF member regulators will also continue to share technical knowledge and develop capabilities to be able to effectively assess AI systems.

We will continue engagement with the standards development community, including the British Standards Institution (BSI) and the AI Standards Hub. We will also continue to engage with the UK Government's Responsible Technology Adoption Unit as it undertakes further research into the sector and the potential economic benefits it can deliver.

Overall, DRCF member regulators recognise the important role they play in shaping the development of the AI assurance market and in raising awareness of the sector-specific legislation relevant to future assurance services.


[i] This has been a key interest for the DRCF following our 2021 report on ‘Auditing algorithms: the existing landscape, role of regulators and future outlook’ and our 2022-23 workshops on how each regulator assesses AI systems within their remits.

[ii] Some well-known AI technical standards include ISO/IEC 42001 on Artificial Intelligence Management Systems and the IEEE’s Autonomous Intelligent Systems Standards; well-known frameworks include the US National Institute of Standards and Technology’s AI Risk Management Framework.
