Generative AI broadly refers to machine learning models that can create new content in response to a user prompt. These tools – which include the likes of ChatGPT and Midjourney – are typically trained on large volumes of data, and can be used to produce text, images, audio, video and code.
Understanding the implications of Generative AI for citizens and consumers is a priority for the DRCF this year.
In June, we held a workshop to identify common risks, discuss promising interventions, and consider opportunities for joint research and cross-regulator initiatives. The workshop brought together colleagues from across the four member regulators, including representatives from our policy, technical and economic teams.
Key workshop themes
Generative AI will have implications for all the sectors and remits we regulate
Many of the risks associated with Generative AI – as defined above – are similar to those posed by longstanding and less sophisticated forms of AI. However, the greater power of Generative AI tools may amplify these risks and make them more severe.
Each of the four digital regulators has reason to be concerned about the misuse of this technology. As the incoming online safety regulator, Ofcom is closely monitoring the potential for these tools to be used to generate illegal and harmful content, such as synthetic child sexual exploitation and abuse (CSEA) material and terrorist content. Ofcom is also mindful of how Generative AI could affect the quality of news and broadcast content, as well as the risks it poses to telecoms and network security.
Similar concerns are shared by the other digital regulators. The ICO has set out a series of data protection questions for developers to consider as they build and deploy these tools. The CMA, meanwhile, has launched a review to examine whether the market for Generative AI and other foundation models is sufficiently competitive, and the extent to which it faces high barriers to entry and expansion, such as the need for computing power and data.
The FCA, likewise, is taking a holistic view of the risks that Generative AI, and AI more broadly, pose to the financial services industry, including risks to consumer protection, competition, market integrity, governance and operational resilience. Building on the AI Discussion Paper it published last year, the FCA is analysing the responses it received, alongside recent developments in AI, as it develops its next steps. The FCA’s CEO, Nikhil Rathi, recently delivered a speech on the FCA’s emerging regulatory approach to big tech and artificial intelligence.
Many of the risks posed by Generative AI are already captured by existing UK laws and regulations
A common misconception is that uses of Generative AI are unregulated in the UK. This is not the case: each DRCF regulator is already empowered to address many of the risks this technology poses, as are many other regulators outside the DRCF.
The data protection rules enforced by the ICO apply just as much to the development and deployment of Generative AI models as they do to conventional AI systems, to the extent those involve the processing of personal data. Similarly, the consumer protection regulation enforced by the CMA, Ofcom and the FCA bars firms from presenting false or misleading information to consumers, regardless of whether that information is produced by Generative AI, human authors or another source.
Likewise, banks and other regulated firms in financial services are expected to meet operational resilience requirements irrespective of the technology they use.
It is important not to lose sight of the benefits offered by Generative AI – including for regulators
Notwithstanding the risks laid out above, it is also clear that Generative AI could create tremendous value for our economy and society. At a time when household incomes are strained by a cost-of-living crisis, and public services are still rebounding from a once-in-a-generation pandemic, every regulator needs to make a concerted effort to support the responsible adoption of this technology.
Indeed, we are already starting to see the benefits of Generative AI for citizens and consumers – from improving drug development to making education more engaging. In our own sectors and remits, too, there are opportunities to be had. In the telecoms industry, which Ofcom regulates, Generative AI is being used to manage power distribution, spot network outages, and both detect and defend against security anomalies and fraudulent behaviour. In financial services, Generative AI could be used to create synthetic training datasets to enhance the accuracy of models that identify financial crime.
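To make the last of these ideas concrete, the sketch below shows, in Python, how synthetic examples might be folded into the training data for a simple financial-crime classifier. It is purely illustrative: the feature names and values are invented, and the synthetic rows are stood in for by random draws where, in practice, they would come from a generative model trained on real transaction records.

```python
# Minimal sketch: augmenting a fraud-detection training set with synthetic
# examples. All data here is illustrative; in practice the synthetic rows
# would come from a generative model trained on real transaction records.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy "real" transactions: amount, hour of day, transactions per week.
X_real = rng.normal(loc=[50.0, 14.0, 5.0], scale=[30.0, 4.0, 2.0], size=(1000, 3))
y_real = (X_real[:, 0] > 130).astype(int)  # crude, rare "fraud" label

# Synthetic fraud-like examples stand in for generator output here.
X_synth = rng.normal(loc=[160.0, 3.0, 20.0], scale=[20.0, 2.0, 5.0], size=(200, 3))
y_synth = np.ones(200, dtype=int)

# Train on the augmented set; evaluate on held-out real data only.
X_train, X_test, y_train, y_test = train_test_split(
    X_real, y_real, test_size=0.3, random_state=0
)
clf = RandomForestClassifier(random_state=0)
clf.fit(np.vstack([X_train, X_synth]), np.concatenate([y_train, y_synth]))
print(classification_report(y_test, clf.predict(X_test)))
```

The design point the sketch captures is that synthetic data augments only the training set, while evaluation remains on held-out real data, so any accuracy gain reflects genuine improvement rather than leakage from the generator.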
Regulators could themselves make use of Generative AI capabilities to enhance our productivity and reduce costs for the taxpayer. Deployed responsibly, Generative AI tools could, for example, be used to analyse and summarise large sets of documents, to signpost members of the public or regulated firms to the right guidance, or to answer specific customer queries more promptly.
Each of these options requires careful consideration and would likely require us to run and host our own models privately. But it is important that regulators are alive to the possibilities of innovating with Generative AI.
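As a purely illustrative example, the sketch below shows what a privately hosted document summarisation step could look like using an open-source model run locally via the Hugging Face transformers library. The model choice and input file name are assumptions for illustration; a production deployment would need document chunking, evaluation and information-security review.

```python
# Minimal sketch, assuming a privately hosted open-source summarisation
# model accessed through the Hugging Face `transformers` library.
# The model name and input file below are illustrative only.
from transformers import pipeline

summariser = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

with open("consultation_response.txt", encoding="utf-8") as f:  # hypothetical input
    document = f.read()

# Crude truncation to stay within the model's context window; a real
# deployment would chunk the document and summarise each part.
summary = summariser(
    document[:3000], max_length=130, min_length=30, do_sample=False
)
print(summary[0]["summary_text"])
```

Because the model weights are downloaded and run on local infrastructure, no document content leaves the regulator's own systems, which is the main attraction of hosting models privately.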
More voices need to be heard in the debates on Generative AI
In recent months, the attention of the media, policymakers and the public has focused on the views of those who have created and launched Generative AI tools, including large US-based technology firms. This is understandable, given their insider perspective on the power and potential of this technology.
Yet it is critical that we now begin to hear from other stakeholders. These include the citizens and consumers who are most likely to be affected by these new tools, civil society organisations that represent the interests of particularly vulnerable people, academics and researchers who can provide independent analysis of the technology’s effects, and the regulated industries that will be using the technology.
As some of the largest digital regulators, we have a responsibility to seek out these views – and indeed we have already begun to do so. Ofcom, for example, recently ran a short survey to understand UK internet users’ usage and perceptions of Generative AI, finding that 58 percent agreed with the statement “I am worried about the future impact of Generative AI on society”, versus 9 percent who disagreed. Each DRCF regulator is also directly engaging with its regulated industries to hear how they are making use of this technology.
Where next for the DRCF’s work on Generative AI?
Generative AI will continue to evolve over the coming months and years, becoming more powerful and enabling new types of products and services that we have yet to encounter. It is important that regulators can respond to these developments, protecting citizens and consumers while also creating the space for responsible innovation.
The DRCF can help us to step up to that challenge, facilitating cross-regulator collaboration that allows us to achieve more together than we could in isolation.
The following questions are just some of those that could benefit from joint analysis:
- How is this technology likely to change in the coming months? What new capabilities can we expect, and where are the breakthroughs likely to land?
- How are consumers and citizens engaging with Generative AI tools? How do they make use of them in their daily lives, and how does that vary across demographic groups?
- How are our regulated services making use of Generative AI? How do they access those capabilities (e.g., by using open-source models or proprietary systems)?
- To what extent are these regulated services aware of how existing regulation applies to Generative AI? Where could this be clearer?
- How do we work with the government to address any potential gaps in the regulation of Generative AI, in line with the proposals set out in the government’s AI White Paper?
- How could the way we regulate Generative AI impact competition, innovation, consumer wellbeing and people’s rights and freedoms?
- How can regulators – and other public sector organisations – integrate Generative AI within our own operations to improve consumer and industry experiences?
What do you think?
Do you have views on where the DRCF should be focusing its efforts on Generative AI? Get in touch at drcf@ofcom.org.uk to share your opinions.
About the DRCF
The DRCF is a collaboration between the UK’s four digital regulators (ICO, CMA, Ofcom and FCA), which seeks to promote coherence on digital regulation for the benefit of people and businesses online.
Within the DRCF, project teams explore key areas of common interest across the regulators, from Algorithmic Processing to Enabling Innovation and Horizon Scanning on Emerging Technologies.
In April, the DRCF published its workplan for 2023/24, setting out priority areas of interest. Projects include piloting a multi-agency advice service to help innovators in digital markets develop services responsibly; conducting research into digital identity and business models in virtualised environments, such as the metaverse; and engaging on the implementation of the UK Government’s proposed AI framework.