Better Jobs and Better Work

How can we help all workers and businesses gain from new technologies and new ways of working? How can we ensure that technologies don’t become more intrusive and undermine workers’ rights?

7 Jul 2022 | 9 min read

This briefing was initially prepared as a submission to Labour’s National Policy Forum.

Section 1. Risks to workers from technology

This submission focuses on mitigating the risks to workers from artificial intelligence (AI), although many of the risks and mitigations apply to digital technologies more broadly.

We are already seeing algorithmic systems being used to monitor and manage workers. The gig economy and the logistics industry in particular have become testing grounds for algorithmic management, from facial recognition used to verify Uber drivers to Amazon's delivery route algorithm directing drivers to meet often punishing quotas.

Labour governments have always fought for workers’ rights, and algorithmic technology now risks undermining these hard-won gains. There is a particular risk that companies will slip these practices under the radar, or justify them as necessary cost-cutting or efficiency measures after the pandemic and during the cost of living crisis. Over the longer term, AI is likely to be a transformational, general-purpose technology, automating many tasks across a diverse range of industries, from self-driving delivery vehicles to AI-powered image generation. We can understand the risks AI poses to workers more systematically by separating them into three categories:

  1. Misuse risks result from the unethical use of AI. For example, automated facial recognition and object identification systems being used to invasively monitor workers.
  2. Accident risks result from unexpected behaviour or faults in an AI system, for example, a picking robot malfunctioning and trapping a warehouse worker. The more AI is integrated into safety-critical systems such as vehicles and energy systems, the higher the stakes are for these accident risks.
  3. Structural risks result from the way increasing use of AI, especially over the longer term, changes political, social and economic structures and incentives. Widespread use of AI systems could exacerbate existing inequalities at work by locking in patterns of historical discrimination, rapidly displacing workers (leading to frictional and possibly structural unemployment), and dramatically concentrating economic power in the hands of a few companies.

The Labour Party was founded in response to the social problems caused by previous waves of technological transformation. We know that these transitions can create the conditions for a better society for all, but this won’t happen by accident: it requires Labour governments to ensure these technologies serve the interests of society and to protect workers through transitions that can bring significant transitional and structural unemployment. In light of the risks outlined above, we suggest extra policies are needed to (1) empower society to scrutinise tech companies and (2) proactively manage AI risks. These are expanded on below.

Empowering society to scrutinise claims of tech companies

To address the risks to workers from the development and deployment of AI systems, the next Labour government needs a better understanding of what those risks are, and should empower unions, civil society and academia to scrutinise the claims of big tech companies.

To do this a Labour government should: 

  • Invest in greater internal government capacity to assess progress in AI, its applications and its impacts on society, either through a new body or by substantially increasing the scope and funding of existing initiatives, such as the Centre for Data Ethics and Innovation’s existing AI-monitoring capacity.
  • Create a ‘compute fund’ to provide free or subsidised compute resources to researchers and civil society organisations working on socially beneficial AI applications or on AI auditing, safety and security.
  • Establish the Digital Markets Unit on a statutory footing and fund it adequately, to ensure fair competition and open markets.

Greater government capacity to assess AI progress would reduce the information asymmetries between the government and the private sector, allowing the next Labour government to pre-empt the deployment of AI systems that harm workers rather than waiting for those risks to become reality, and to avoid the hurried, imprecise and uninformed policymaking of the current government.

Government should develop this capacity itself, rather than outsourcing it to consultants or relying on the private sector to provide the information. Otherwise, private sector interests will exploit the lack of measurement and monitoring infrastructure to deploy AI technology with negative externalities, or will fund entities to create measurement and monitoring schemes that align with their narrow commercial interests rather than broad, civic interests.

The compute fund would help rebalance power between workers and the private tech companies that currently develop AI systems, by allowing unions, civil society organisations and academics, who might otherwise lack the resources, to scrutinise, audit and hold accountable commercial AI systems. Leading AI models now cost tens of millions of dollars to train (see, e.g., here), putting them out of reach of most academic, civil society and SME groups. The fund would also provide an infrastructure for SMEs, cooperatives and unions to build alternative tools that allow workers to reap the benefits of automation while retaining their autonomy and dignity at work.

Finally, the current government is equivocating on empowering the Digital Markets Unit. The next Labour Government should establish the Digital Markets Unit on a statutory footing and fund it adequately, to ensure fair competition and open markets.

Proactively mitigating risks of AI before and during deployment

A Labour government should introduce proactive duties to mitigate the risks to workers before and during deployment, rather than leaving it to workers to fight for their rights after they have already been harmed by AI systems.

Examples of what can happen when software is assumed to be infallible and risks aren’t properly investigated abound, including the Dutch benefits scandal, the A-level results scandal and the Horizon scandal. The Horizon scandal, one of the most widespread miscarriages of justice in UK history, led to 736 Post Office branch managers receiving wrongful criminal convictions because the Horizon accounting software falsely reported that money was missing from their branches. It is a warning for the future: postmasters were disbelieved when they reported bugs and were forced to fight their cases in the courts, rather than the developers of the Horizon system being held accountable.

The Institute for the Future of Work’s proposal for an Accountability for Algorithms Act sets out a number of promising interventions to give workers more control over the development of AI used at work, including:

  • A right for workers to be consulted and involved in the development and application of algorithmic systems involving AI used at work. This can build on the existing requirements in Article 35 (9) of the UK GDPR to seek the views of data subjects (e.g. workers) or their representatives (e.g. unions) on intended data processing during a data protection impact assessment.
  • A duty on actors who are developing or deploying algorithmic systems, as well as other key actors across the design cycle and supply chain, to undertake an algorithmic impact assessment considering the misuse, accident and structural risks of the algorithmic systems they develop, and what steps can be taken to mitigate those risks.
  • Additionally, in the public sector, the Government should employ dedicated ‘white hat’ hackers who stress-test government systems by attempting to compromise them and find faults in government software and hardware.

Correcting Conservative neglect

Finally, the current Tory government is failing to stand up for workers, consumers and citizens in a number of areas surrounding AI. The following recommendations are therefore also important: 

  • The current government’s yet-to-be-published White Paper on AI regulation looks likely to be much weaker than the EU AI Act, putting our people at greater risk from high-risk AI systems. It should be strengthened by requiring more rigorous testing to show that systems have been developed safely, securely and ethically.
  • Over the coming years, AI standards will be set by the EU’s AI Act (and the CEN-CENELEC standardisation process that follows it) and by the US’s NIST, coordinated through the EU-US Trade and Technology Council. The UK should seek to join this Council to shape these vitally important standards.

Section 2. Investment in applied biosecurity R&D

As part of Labour’s commitment to increase R&D spending to 3% of GDP and development of a broader industrial strategy, Labour should invest further in applied biosecurity R&D. 

Using R&D to investigate future existential threats is vital to ensuring that innovation contributes towards good work. The COVID pandemic has shown the disruptive impact biological threats can have on the economy, and COVID could be merely a dress rehearsal for far more lethal or more contagious diseases in the future.

The pandemic has also shown that the UK rivals the United States in terms of its bioscience capability, but we currently do not make the most of the expertise we have. Further investment in applied biosecurity R&D would ensure innovation contributes to good work in three main ways: 

  1. Support and create high-quality and meaningful manufacturing jobs that can be spread across the country.
  2. Reduce the risks from infectious diseases to workers, especially those in high-contact roles. This will keep workers healthier and better protect those in frontline services, who have been essential workers during the COVID pandemic.
  3. Reduce economic risks to workers from business closures and other economic disruption caused by infectious diseases. More targeted responses to outbreaks mean less disruption to businesses and work, so fewer workers will lose their jobs or be left vulnerable to further economic disruption.

Significant investment in biosecurity R&D would shore up Britain’s status as a world leader in biosecurity, as showcased by the Oxford/AstraZeneca vaccine. This investment could include:

  • Launching a multi-million-pound prize to incentivise the development of clinical metagenomics, which has the potential to identify new, unexpected pathogens in the first few infected patients rather than months later.
  • Developing and manufacturing next-generation PPE, which could be used by healthcare staff and the immunocompromised today, and frontline staff across the economy in future outbreaks of highly infectious diseases.
  • Funding trials of novel sterilisation technologies which can make workplaces and the public realm safer for workers and consumers alike, e.g. building on the London Underground’s installation of UV light devices to disinfect escalator handrails.

Investment of this kind would play a key part in ensuring long-term health security, economic prosperity and good work across the United Kingdom.
