In 1930, one of Britain’s most influential thinkers made a prediction that might make you laugh, or cry.
John Maynard Keynes forecast that within about 100 years the biggest challenge facing people would be how to use their “freedom from pressing economic cares, and how to occupy the leisure” time brought about by massive leaps in technological progress.
Keynes’ prediction is pretty far off for the 2,000 workers who file into the giant Amazon warehouse in Avonmouth every day.
AI-powered algorithms set targets designed to squeeze every possible ounce of productivity out of them. The catch is, you never really know what the target is, or how it is calculated. Your performance is also ranked against that of your colleagues.
For fear of being disciplined, each worker works harder. Like greyhounds chasing a plastic rabbit on the track, the harder they work the further the target moves away. This might be doable if you are young and fit. But if you are pregnant, have a disability or are older you are less likely to be able to keep up. An automated process will flag you up for ‘performance management’.
The setting and enforcing of targets is nothing new. But ‘algorithmic management’ further intensifies the control over workers through a lack of transparency, AI-driven micromanagement and automated dismissal processes where finding a human to complain to can be very difficult. This is occurring not just in places like Amazon, but in office jobs, recruitment processes and increasingly throughout the economy.
“The worry is not that we’re going to be taken over by robots, but that we are treated like robots,” Hetan Shah, the CEO of the research organisation the British Academy, said at a talk at King’s College London’s Institute for AI.
New technologies, age-old questions
Yet it is the fear of being taken over by robots – or at least of them stealing our jobs – that has captured our attention and generated breathless headlines. So-called generative AI technology, like ChatGPT, has growing capabilities that can deskill workers, or ‘take’ jobs as we know them.
With just a few seconds of someone’s (or a blend of anyone’s) voice, generative AI can create a whole audiobook. It is being used for basic legal tasks. It can cheaply and rapidly spit out computer code, marketing copy or customer service support.
But just because these technologies can do this, doesn’t mean they inevitably will. AI can also support the quality of working life, by reducing boring and routine tasks and freeing up time and headspace for more rewarding work.
These are new technologies, but age-old questions persist. Who owns them, who benefits, who is affected and what are the rules of their use?
At a recent parliamentary event organised by campaign group Connected by Data and the TUC, a Royal Mail worker and CWU union member said we need to ensure technology can be used “for the benefit of everyone, not just [as] a whip for those who wield it”.
Unions can provide the strong collective voice needed to counterbalance a policy agenda dominated by a powerful set of companies. The movement is starting to gain traction, with excellent work by the TUC and others.
This ranges from ensuring people are legally and technically empowered to negotiate with employers, to insisting that workers are reskilled for the digital age. It also embraces the historical role of unions in ensuring that the rewards of technological development are shared. First we won the weekend; now how about the four-day week?
An urgent moment
But there is an urgency to this moment, as well as opportunity.
Under the cover of being ‘pro-innovation’, the government is dismantling key aspects of the law that provide already-insufficient protections from AI-related harms. A new bill making its way through Parliament will reduce workers’ rights to know what tech they are subject to, weaken their ability to shape it and limit human oversight of automated decisions such as hiring, firing or performance management. The government’s recent AI white paper set out a more or less ‘let it rip and see what happens’ approach to regulation.
Meanwhile, in a bid for international standing, the prime minister, Rishi Sunak, is lining up an autumn ‘global summit on AI safety’ that threatens to exclude voices from civil society. The summit risks an excessive focus on the speculative threat of superintelligent machines escaping human control, while issues such as workers’ rights go overlooked. But a movement to counter this is growing.
The campaign for the next general election will start shortly after. Locally, Bristol North West’s MP Darren Jones is a key figure in the debate, arguing for the need to embrace AI while putting guardrails around it to protect citizens. The exact way this should happen is still to be worked out.
What we do know is that, as with previous industrial revolutions, AI’s benefits will only be shared if organised workers gain a seat at the table at every stage – from the highest levels of regulation through to each workplace. That means helping to ensure the power of AI builds a just society, not just a society building AI.
Adam Cantwell-Corn, who co-founded the Cable, is a senior campaigns and policy officer at Connected by Data, a campaign to ensure communities have a powerful say in how data and AI changes our world.