There are plenty of things in today’s world that divide opinion - veganism, social media, Marmite - but Artificial Intelligence (AI) may divide it more sharply than any of them. While Silicon Valley and co. are ecstatic that AI is finally becoming more than clunky guesswork, there are a lot of people who have been paying attention to cinema over the last few decades, and are understandably terrified of the robot apocalypse.
This fear is nothing new. When the spinning jenny made yarn production easier and faster, when conveyor belts made factories vastly more productive, and when automated machines took over the most intricate processes, the spinners, workers, and fine motor experts were outraged that their jobs would be replaced by a machine.
But what happened to these people? The spinners became mill workers, factory workers became mechanics or overseers, and the fine motor specialists turned their hands to maintaining the machines. In the end, the work ‘taken’ by machines was replaced by a new and different job entirely, and everyone forgot they were ever worried in the first place.
This is why the constant race to innovate and improve the human experience can never be a bad thing - because we always build on what we already know, and we use it to create new things and new knowledge. In order to survive in the age of machine learning, we need to emphasise that which makes us essentially human: the ability to generalise, to find common ground between extremes, and to apply what we know to entirely new situations.
Let’s take education as a starting point. Because AI is designed as a tool to make intensive, drawn-out tasks easier and more efficient for humans, education must move away from creating specialists who are great at handling specific processes.
While producing ‘Jacks of all trades’ is not necessarily the answer, the human ability to extrapolate and generalise should be channelled into teaching students how to manage projects. By doing so, students will be trained to bring together diverse knowledge and skills using their emotional intelligence (something which AI cannot replicate) to help orchestrate the process-driven work that computers will handle.
Rather than replacing the workforce, AI will supplement it. This is partly because our robot co-workers will still rely heavily on back-propagation, the training technique that adjusts a neural network’s internal weights by feeding its prediction errors backwards through the network. A system trained this way can become superb at the exact task it has practised - but it effectively has to start learning from scratch when asked to pick up a bottle instead of a cup.
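To make the bottle-versus-cup point concrete, here is a minimal, illustrative sketch of back-propagation: a single artificial neuron trained by nudging its weights against its errors. The tasks (logical AND and OR, standing in for ‘cup’ and ‘bottle’) and all names are invented for illustration; real systems use far larger networks, but the limitation is the same - mastering one task tells the model nothing about the next until it is retrained.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, epochs=5000, lr=0.5):
    """Train a single sigmoid neuron (two inputs plus a bias) by
    back-propagation: the output error is propagated back to nudge
    each weight a little on every example."""
    w = [0.0, 0.0, 0.0]  # two input weights and a bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
            # gradient of the squared error w.r.t. the pre-activation
            delta = (out - target) * out * (1 - out)
            w[0] -= lr * delta * x1
            w[1] -= lr * delta * x2
            w[2] -= lr * delta
    return w

def predict(w, x1, x2):
    return sigmoid(w[0] * x1 + w[1] * x2 + w[2])

# Teach the neuron logical AND (the 'cup' task)...
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w_and = train(and_data)
print(round(predict(w_and, 1, 1)))  # 1
print(round(predict(w_and, 0, 1)))  # 0

# ...but it knows nothing about OR (the 'bottle' task)
# until it is retrained from scratch on that data.
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w_or = train(or_data)
print(round(predict(w_or, 0, 1)))  # 1
```

The weights learned for AND are useless for OR - the second call to `train` starts over from zero, which is exactly the kind of narrow competence the column describes.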
This aspect of AI makes it ideally suited to those ‘middle-ground’ jobs between manual labour and specialist ‘thought’ jobs. Construction, data processing, and even case law can all be handled fairly well by AI machines already, whereas architecture, real-life data application (in social services, for example) and courtroom negotiations are still a long way off.
Far from a metallic James Cameron hellscape, the future of humanity will be intertwined with AI, just as our lives are now intrinsically linked to the internet. The current shift towards hiring people for their attitude and aptitude, rather than a fixed set of skills, is laying the groundwork for this exact situation, whereby ‘human work’ will require adaptability, management and orchestration of complex processes, and fine-tuning what an AI programme has almost got right.
Just like the industrial revolutions of yore that forced us to build different, more refined skills, this technological revolution will push us into entirely new jobs and a different working life, one that is more abstract, more strategic, and more inherently human - leaving AI to pick up the slack.
Charles Towers-Clark is the CEO of Pod Group, a provider of IoT connectivity & billing software. His upcoming book ‘Fire the CEO’ will discuss the impact of AI on the future of work and the need to change our perspective to cope with this changing world.
Image credit: Rosenfeld Media/Flickr