AI Transparency: DRCF Perspectives

21 October 2024

In August 2024, the DRCF held a workshop on AI transparency to compare how transparency applies within each regulator’s remit and to explore good practice in how regulated organisations implement it. Transparency has been one of the DRCF’s areas of focus since 2022.[1] The workshop included DRCF member regulators as well as Ofgem, the Bank of England and the Equality and Human Rights Commission.

This article summarises the findings from the workshop. It discusses why AI transparency is important and the key considerations for the participating regulators, and compiles information from each regulator on existing guidance and next steps related to AI transparency.

Why is AI transparency important?

By “AI transparency” we mean informing people who use a service (users), and others affected by it, whether AI is being used in that service and, if so, how. Transparency is crucial to ensuring that users and others can trust AI systems. With more information and awareness, users and those whose data has been collected are empowered to make informed decisions and to better understand the consequences of AI systems. Increased transparency should also strengthen market competition, by equipping users with the information they need to assess which product or service is right for them.


As well as engendering trust and enabling informed decisions, transparency in AI allows users to protect themselves from harm and to exercise their rights. If an organisation does not disclose its use of AI, those affected may never be able to assess whether they have been treated fairly or subjected to harm, and will be poorly equipped to seek redress or to contest AI decisions.

The DRCF workshop highlighted the importance of transparency in enabling other AI principles, including fairness and accountability.[2] In particular, AI transparency facilitates accountability by enabling organisations to trace AI decisions and outcomes back to specific data inputs. Similarly, the CMA’s AI Foundation Models (FM) programme states that the principles of transparency and accountability are linked: the CMA noted that, implemented correctly, the two principles can reinforce each other. For example, sufficient transparency can help ensure that FM developers are held to account for the outputs provided to users.

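To make the traceability point concrete, below is a minimal sketch of what logging an AI decision against its inputs could look like. This is a hypothetical illustration only, not a format required by any DRCF regulator; the names (DecisionRecord, log_decision) and fields are invented for the example.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable record linking an AI output back to its inputs."""
    model_id: str           # which model (and version) produced the output
    input_fingerprint: str  # hash of the inputs, so they can be verified later
    output: str             # the decision or output given to the user
    timestamp: str          # when the decision was made (UTC, ISO 8601)

def log_decision(model_id: str, inputs: dict, output: str) -> DecisionRecord:
    """Build an audit record; a real system would write it to durable storage."""
    fingerprint = hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return DecisionRecord(
        model_id=model_id,
        input_fingerprint=fingerprint,
        output=output,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

# Example: record a screening decision so it can be traced back later.
record = log_decision(
    model_id="screening-model-v2",
    inputs={"applicant_id": "A123", "features": [0.2, 0.7]},
    output="shortlisted",
)
print(asdict(record))
```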
Overall, AI transparency helps users and others trust that the systems and processes they are using are safe, and it helps regulators ensure that people are protected while business growth is supported.

What are the key considerations for regulators?

Transparency in AI is relevant to the remits of all DRCF member regulators.

For the ICO, transparency is a legal requirement[3] that applies across the AI lifecycle whenever personal data is processed, from data collection through to the development and deployment of AI systems. Transparent processing entails organisations being clear, open and honest with users from the start about how and why they are processing people’s personal data. In the context of AI, organisations must be transparent about how personal data is processed in a model. The ICO has produced extensive guidance on AI explainability.[4]

At the workshop, the ICO shared transparency case studies from its upcoming audit report on AI tools used in recruitment, including machine learning used for sourcing, screening and selection. The ICO detailed specific recommendations for each case study to improve transparency practices. These include suggesting that providers of AI tools test user understanding of the privacy information they receive, and that they do more to ensure the information is clear and comprehensible, avoiding overly complex explanations or technical legal language. One suggestion is to supplement text-based privacy information with pop-up messages or bite-sized information. The report will be published in early November.

Transparency and consumer protection are intrinsically linked, and consumer law as enforced by the CMA has the fundamentals of transparency baked in. For instance, the Consumer Protection from Unfair Trading Regulations 2008 include a general prohibition against unfair commercial practices and specific prohibitions against misleading actions and misleading omissions. From the CMA’s perspective, consumers should understand the consequences of the choices they are making. As such, organisations should consider whether they are providing the right information to consumers, which may include whether they are interacting with an AI-powered service and, if so, what limitations the service may have.

For the FCA, there are high-level AI transparency requirements in its expectations of consumer protection. These include a cross-cutting obligation under the Consumer Duty to act in good faith, characterised by honesty and fair and open dealing with consumers, as well as related rules on meeting the information needs of retail customers, equipping them to make decisions that are effective, timely and properly informed.

Ofcom has powers to issue transparency notices to certain providers of online services regulated under the Online Safety Act. These notices set out information, determined by Ofcom, which the provider is required to publish in a transparency report. This could cover information about the design and operation of the algorithms on which some AI systems are built, where Ofcom deems it relevant and appropriate to require such information and requests it accordingly, in line with the approach set out in Ofcom’s transparency reporting guidance. Ofcom has consulted on draft statutory transparency reporting guidance, including on the process by which it will decide what information online service providers must include in their transparency reports.

In addition to AI transparency for users, the Online Safety Act 2023 gives Ofcom a range of powers to require and obtain the information it needs for the purposes of exercising, or deciding whether to exercise, its online safety duties and functions. This includes information-gathering powers, which enable it to require information about services to be shared directly with the regulator. Ofcom has consulted on draft guidance to help services, and other stakeholders, understand when and how it might use these powers.

The CMA has established a transparency principle in its Foundation Models (FM) programme. The principle states that consumers and businesses must have the right information on the risks and limitations of FMs, and it stresses the importance of transparency throughout the FM AI lifecycle. It also states that deployers should clearly communicate the necessary information to users of FM-based services to allow them to make informed choices, as well as making it clear when an FM-based service is being used. Further, the principle holds that developers of FMs have a responsibility to provide the right information to deployers, so that deployers can more accurately determine the risks of the model and manage their responsibility to consumers.

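As a sketch of what this developer-to-deployer information flow might contain, the snippet below adapts the widely used “model card” idea. The structure and field names (ModelInformationSheet and its fields) are hypothetical, not a format specified by the CMA’s principle.

```python
from dataclasses import dataclass

@dataclass
class ModelInformationSheet:
    """Hypothetical summary a developer might hand to a deployer of its FM."""
    model_name: str
    version: str
    intended_uses: list[str]      # deployments the developer has designed and tested for
    known_limitations: list[str]  # failure modes the deployer should warn users about
    evaluation_summary: str       # headline results from the developer's own testing
    training_data_notes: str      # description of data sources, at an appropriate level of detail

# A deployer could use this information to judge whether the model suits its
# service and to decide what to disclose to end users.
sheet = ModelInformationSheet(
    model_name="example-fm",
    version="1.0",
    intended_uses=["drafting customer-support replies"],
    known_limitations=["may produce inaccurate answers", "not evaluated for legal advice"],
    evaluation_summary="See the developer's published evaluation report.",
    training_data_notes="Publicly available web text collected up to 2023.",
)
print(sheet.known_limitations)
```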
Overall, there are multiple requirements for AI transparency across both DRCF and other regulators’ remits, and all the member regulators are enforcing the principle through their existing regulatory regimes.

The DRCF will continue to share knowledge and experience on upholding the transparency principle and may consider further work on transparency as part of our 2025/26 workplan. Our Call for Input on the next workplan closes on 8 November.


Upcoming work


Below is further information regarding AI transparency and upcoming work from DRCF member regulators and Ofgem:

Ofcom

·       Recently consulted on draft information guidance and transparency reporting guidance, covering the process Ofcom will adopt for deciding what providers must include in their transparency reports, and how information from those reports will be used to inform Ofcom’s own transparency report.  


ICO

·       Recently consulted on how aspects of data protection law should apply to the development and use of generative AI models.

·       The ICO has guidance on AI and data protection, which includes how to ensure transparency in AI.

·       Published guidance with the Alan Turing Institute in 2022 on explaining decisions made with AI.

·       Upcoming audit report on AI in recruitment. To be published in November 2024.


CMA

·       The CMA began its Initial Foundation Model Review in May 2023 and published an Initial Report in September 2023, followed by a Summary Update Report and a Technical Update Report in April 2024.

FCA

·       Has set up an AI Programme Board to drive the safe use of AI internally and externally. As part of this, the FCA will develop a third edition of the AI survey (jointly with the Bank of England) and undertake internal diagnostic work.

Ofgem

·       Currently developing AI guidance, which will include transparency alongside the other AI White Paper principles. Ofgem intends to consult on this guidance in December 2024.

Joint DRCF work

·       A joint statement between the ICO and the CMA on foundation models will be published by the end of 2024.

·       An overview of last year’s AI fairness workshop is available on the DRCF website.

·       DRCF report: Transparency in the procurement of algorithmic systems: Findings from our workshops.
