This post sets out Labour for the Long Term's submission to the Labour Party 2023 Policy Forum, responding to question four within the theme 'A Green and Digital Future: Delivering Growth': "What policies can help contribute to the four missions outlined in Labour's industrial strategy?"
This submission primarily outlines policies that can contribute to Mission Two of the Labour Industrial Strategy: Harnessing Data for Public Good.
The industrial strategy highlights that the UK is a world leader in AI ethics and safety research and that investment in this research is vital for nurturing a healthy AI sector that serves the public interest. It also recognises the importance of levelling the playing field for smaller firms, creating more competitive markets and enabling new services. However, cutting-edge AI models currently cost tens of millions of dollars to train and require significant infrastructure to deploy, putting them out of reach of most academics, civil society organisations and SMEs.
Government could redress the balance between big tech companies and the rest of society, and maintain the UK's leading position in AI ethics and safety research, by creating a 'compute fund' to provide free or subsidised computational resources to researchers and civil society organisations working on socially beneficial AI applications or on AI auditing, safety and security.
This compute fund would help rebalance power between private tech companies and workers by allowing unions, civil society organisations and academics, who might otherwise lack the resources, to scrutinise, audit and hold accountable commercial AI systems. It would also provide an infrastructure for SMEs, cooperatives and unions to build competing AI tools that provide increased productivity and better services while retaining autonomy and dignity at work.
Safe and Responsible AI
The industrial strategy also aims to ensure the UK is the best place in the world for safe and responsible AI, by building the world's most competent regulatory environment for AI and supporting a thriving and effective AI assurance ecosystem.
To support an effective AI assurance ecosystem, there first need to be incentives to assess the societal risks from AI systems. Labour could legislate to require impact assessments, as is already happening with fundamental rights impact assessments in drafts of the EU's AI Act and with Canada's mandatory Algorithmic Impact Assessments for public sector agencies. A Labour government could institute assurance requirements as part of any process to share public data, or as part of public sector procurement requirements. It could also provide regulatory advice to private companies on best practice in AI and on what to look for when procuring AI systems. Finally, it could even sponsor prizes or challenges around risk assessment methods or trials.
There will also need to be regulatory capacity to support risk assessment, and to deliver the monitoring and investigation functions that help ensure risks are mitigated over time. In the UK, some regulators, such as the CMA, have longer-established capacity for this, and others, such as Ofcom, have recently been expanding to take on these responsibilities. However, some regulators whose expertise is well suited to considering societal risks, such as the Equality and Human Rights Commission (EHRC), are poorly resourced to tackle questions of AI. The EHRC was set up by the last Labour Government to tackle discrimination, promote equality and protect human rights, and Labour should empower it and provide it with greater resources to investigate risks and harms from AI systems.
The UK is a major services exporter, with a strong audit sector and existing expertise in AI ethics. Because of this, AI assurance can become a major growth sector for the UK, provided there are clear standards, stable regulation and public support for existing assurance and safety initiatives.