Governing AI Foundation Models

This report provides an overview of what foundation models are, how the UK Government's current proposals would regulate them, and recommendations for what Labour should do differently - nationally and internationally.

7 June 2023 | 8 min read
Executive Summary

This report provides a high-level overview of what foundation models are, how they underpin different AI applications, and of the current regulatory position under the UK Government’s White Paper on AI regulation. It also sets out recommendations for how Labour can improve on the current regulatory position and address the societal risks that foundation models present, while continuing to ensure that the UK benefits from the technological advances of AI applications.

A foundation model is a type of AI model that is trained on very large quantities of data and is adaptable for use across a very wide range of tasks.

The Government’s AI White Paper proposes to regulate AI through a focus on sector-specific applications of AI. It does not currently include any specific regulation either of foundation models themselves or of the companies that are developing foundation models.

There are four principal reasons why additional provisions covering foundation models are needed: (1) the cross-sectoral applications of foundation models; (2) the pace of technological advance; (3) the risks from foundation models; and (4) the need to fairly allocate regulatory responsibility through the AI value chain.

To effectively address the challenges and mitigate the potential risks posed by foundation models, we recommend that the Labour Party commits to an approach to AI governance that does the following:

  • Extends the UK’s AI regulatory regime to include foundation models.
  • Reforms government structures to appropriately enforce this regime and monitor foundation models.
  • Pushes for international coordination on AI - including strong global standards on monitoring of foundation models and research for developing AI models that are robust and safe.

1. What is a ‘foundation model’?

A foundation model is a type of AI model that is trained on very large quantities of data and is adaptable for use across a very wide range of tasks. The current state-of-the-art foundation models are generally ‘large language models’ (‘LLMs’), which are trained to understand and generate human language. Some cutting-edge foundation models are already ‘multimodal’, capable of understanding and generating other kinds of data, such as images, audio and robotic actions.

Foundation models form the technical base on top of which companies build more specific AI applications and tools. For example, OpenAI developed the foundation model ‘GPT-3.5’, on top of which the company built its ‘ChatGPT’ chatbot. Other companies have also built AI applications on the basis of OpenAI’s foundation models, paying OpenAI to license the models for use in their products. For example, the company Harvey has built an AI application for automating legal tasks based on OpenAI’s foundation model. This means that security or safety flaws in an underlying model will be replicated in the ‘downstream’ applications built upon it.
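
To make the ‘downstream application’ relationship concrete, the sketch below shows roughly how a developer might call a hosted foundation model through OpenAI’s public API. It is a minimal illustration only: the wrapper function, prompt and use case are hypothetical assumptions, not taken from any particular product.

```python
# Minimal sketch of a downstream application built on a hosted foundation model.
# Assumes the pre-1.0 `openai` Python library and an API key in the environment;
# the function name and summarisation prompt are illustrative, not from a real product.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarise_contract(contract_text: str) -> str:
    """Ask the hosted foundation model to summarise a legal document."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a legal assistant. Summarise contracts in plain English."},
            {"role": "user", "content": contract_text},
        ],
    )
    # The downstream product adds its own interface and workflow around this single call;
    # any flaw in the underlying model is inherited by every wrapper like this one.
    return response["choices"][0]["message"]["content"]
```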

Building cutting-edge foundation models is very expensive (hundreds of millions of dollars in hardware costs, in addition to the costs of associated infrastructure and salaries), and there are currently only a small number of companies able to do so. Of these, all are based in the US except for Google DeepMind, which operates across both the US and the UK (the formerly independent DeepMind was acquired by Google in 2014).

These companies have adopted a variety of approaches to how developers are permitted to build applications on top of their foundation models. Most charge for API access or license their models for a fee, which means they can monitor and control misuse of their platforms. There are also a small number of entities seeking to develop ‘open source’ foundation models, based not in corporate tech companies but in diffuse communities of internet-based developers. Open source foundation models may be cheaper to use, but their misuse cannot be easily monitored or prevented.

2. What does the Government’s White Paper say on foundation models?

The Government’s AI White Paper, which was released on 29 March 2023, proposes to regulate AI through a focus on sector-specific applications of AI (e.g. in healthcare, finance, education). Existing sectoral regulators will be given the responsibility of enforcing a set of five principles on companies that are deploying AI tools and applications within their spheres of competence.

However, the White Paper does not currently include any specific regulation either of foundation models themselves or of the companies that are developing foundation models. This approach to foundation models appears to have been taken on the basis of the following two statements in the White Paper:

  • Allocating too much legal responsibility for AI applications to foundation model developers would hamper innovation (para 81); and
  • The regulatory framework should create the environment to “maximise the transformative potential” of foundation models (para 92).

Foundation models are covered in the White Paper only indirectly, through the remit of a ‘central risk function’. This central risk function, proposed to be housed within Government, is to include:

  • A monitoring, assessment and feedback function which will provide advice to Ministers on issues that may need to be addressed to improve the regime.
  • A cross-sectoral risk assessment function, which will work with regulators to clarify responsibilities in relation to new risks or areas of contested responsibility and support join-up between regulators on AI-related risks that cut across remits.
  • A horizon scanning function to monitor emerging trends and opportunities in AI development to ensure that the framework can respond to them effectively.

It should be noted that the White Paper followed the Government’s announcement of the establishment of a Foundation Models Taskforce, with £100m of initial funding, which will look into the possibility of developing ‘sovereign’ AI capabilities.

3. Limitations of the Government’s White Paper

While the White Paper’s sector-specific approach makes sense for regulating individual AI applications, the absence of provisions covering foundation models is a significant omission. Without provisions covering foundation models, the proposed regulatory regime does not give Government the ability to appropriately address the full impact AI could have on the UK economy and society.

There are four principal reasons why additional provisions covering foundation models are needed:

(i) Cross-sectoral application of foundation models. By definition, foundation models have potential applications across many different sectors of the economy. This means that they raise issues that cannot be comprehensively addressed solely through sector-specific regulation of applications. Existing regulators focused on specific areas of the economy are not well placed to address this cross-sectoral reach, and are unlikely to have the expertise to do so.

(ii) Pace of technological advance. Foundation models are currently undergoing a period of extremely rapid advancement. Should the Government focus its regulatory efforts solely on AI applications rather than foundation models themselves, it may miss significant technological advances that could have broad application across the economy and society.

(iii) Risks from foundation models. The potential risks from advances in AI have recently been highlighted by a number of leading AI scientists, AI lab CEOs and senior global figures (such as Mary Robinson and Ban Ki-moon). The most significant of these risks flow not from specific applications of AI but from foundation models themselves. These risks include bias and discrimination imported into the models from the data they are trained on, the potential for AI models to be used to spread misinformation and other harmful content, and the risk of AI models developing goals beyond those specified by their developers - including the objective of increasing their own capabilities and resisting shut-down by human overseers.

(iv) Letting foundation model developers off the hook: fairly allocating responsibility. If foundation model developers (largely based in the US) are not made responsible for the safety and security of these underlying models, the regulatory burden will instead fall on the many more downstream developers, often British SMEs and start-ups. Foundation model developers are best placed to bear this regulatory burden: they are large, well capitalised and have the required expertise.

4. Proposals for regulating foundation models

To effectively address the challenges and mitigate the potential risks posed by foundation models, while continuing to ensure that the UK benefits from the technological advances of AI applications, we recommend that the Labour Party commits to an approach to AI governance that does the following:

  1. Extend the regulatory regime to include foundation models

The UK’s AI regulation should set strict standards for the use of AI systems in critical settings like healthcare or defence *and* cover ‘foundation models’. This could include requirements for foundation models to undergo evaluation and third-party auditing before they are released, and restrictions on certain dangerous capabilities.

In addition to this, developing new ‘frontier’ AI systems – bigger and more capable than any foundation models yet developed – should be regulated like risky biological or nuclear experiments, with a licence and third-party safety evaluation required before new training runs can commence, and requirements to report certain incidents.

  2. Reform government structures to appropriately enforce this regime and monitor foundation models.

This could include the establishment of a new regulatory body to oversee foundation models and a new expert advisory council to advise the Government on the risks posed by foundation models. Additional funding could be provided to the new regulatory body (or the Office for AI in the first instance) to allow it to appropriately enforce the new regulatory regime.

  3. International coordination on AI.

The US, EU and UK governments are currently establishing AI regulation within their jurisdictions. These governments should also work together to set strong global standards for foundation model safety. This should be extended to the G7 level so that there is international cooperation on the evaluation and monitoring of foundation models. The UK Government is expected to offer to host a summit on international governance of AI this Autumn.

These allied governments should then also explore confidence-building measures, information exchanges and ultimately an international agreement with China and Russia.

On top of this, the UK should champion international cooperation on research for developing AI models that are robust and safe. This could involve the establishment of a new international institution (such as a ‘CERN for AI safety’), which could be based in the UK.

About Labour for the Long Term

Labour for the Long Term aims to put the future at the heart of our policy-making. We want to help Labour develop policies to ensure a resilient future: from reducing the risk of deadly pandemics and fighting climate change, to preparing for emerging technologies and rising great power conflict.

The author James Baker is Executive Director of Labour for the Long Term. If you have further questions about the contents of this report, would like to discuss how the Labour Party could approach AI or would like to get in touch with the author, please email james [at] labourlongterm.org.
