Will AI make us redundant?

LONG READ: As machines become increasingly capable of outperforming humans, should we fear the future? MT investigates both the threat and promise of artificial intelligence

by Adam Gale
Last Updated: 10 May 2017

A monolith modelled on the Tower of Babel casts a long shadow over the dystopian city in Fritz Lang's Metropolis. Like much of the film, it stands as a warning against hubris: it is possible to be too clever for your own good.

In 1927, the big idea for the future was robots causing havoc among the toiling classes. Since then, the prophecies have darkened further. In the 'fourth industrial revolution', they say, automation will eliminate the need for human workers altogether, triggering an employment Armageddon. It's tempting to imagine this will involve hordes of terrifying, chrome-finished androids with Austrian accents, but the future's coming out of Silicon Valley, not Hollywood. Most likely, the machine that will take your job won't be a Terminator, but an algorithm.

Artificial intelligence has been the subject of much hype since the 1950s, when a group of eminent scientists met at Dartmouth College to figure out how to perform tasks traditionally associated with human intelligence, in this case using abstractions and natural language. They thought it would take two months. Since then, the prognosticators have learned their lesson, usually saying that machine minds will outstrip us 15 or 20 years in the future, just far enough to be plausible, but close enough to tantalise.

Despite the clear history of missed expectations, now is a uniquely good time to be a true believer. The hype's as strong as ever, except today there's money behind it. Last year, global funding for AI start-ups was $5bn according to CB Insights, nearly nine times higher than in 2012, and businesses of all shapes and sizes are starting to pay attention. 'We've never seen the introduction of a technology that goes from curiosity to application in months instead of years. For how many years have we been talking about the cloud or big data? Even then, lots of clients are close to the cloud but very few have moved critical applications. In AI we're going from "tell me about what it is" to "let's do this big project" in six months,' says Nicola Morini, global head of Accenture's AI practice.

The consultancy set it up in late 2016, after years of doing AI work 'on the side', and has found demand to be ravenous. 'There are organisations now thinking of moving into a completely different business segment as a result of AI. I've been at Accenture for 20 years, and I don't remember a CEO wanting to know in detail about a technology on the same level. Last week, we had two clients back to back with their whole board of directors asking what they can do with it. AI has escaped the tech box and is now part of the business conversation,' says Morini.

For an idea of why everyone's so excited by AI, go back to 9 March 2016. That's when AlphaGo, the creation of Google-owned DeepMind in London, first defeated Lee Sedol, a world champion at the ancient Chinese game of Go. Big deal, you may say, remembering IBM's Deep Blue defeating Garry Kasparov at chess in 1997, only there's a fundamental difference. Chess is a game that suits machines. With enough processing power, they can search far enough ahead through the possible moves to triumph by brute force. Go is different. Its permutations are nearly infinite. Players must win by experience, judgement and intuition - and that's exactly what AlphaGo did. It learned the game by playing against itself, thousands and thousands of times. It developed what might be called 'gut instinct'.

The new face of AI

Unlike Maria in Metropolis, above, the AI that will change the world is effectively invisible. Image credit: Friedrich-Wilhelm-Murnau-Stiftung, Wiesbaden, Germany

AlphaGo is an example of a new, powerful form of AI called deep learning, which uses many-layered neural networks modelled on a human brain. Deep learning is itself a subset of machine learning, which fundamentally differs from classical, rules-based AI such as a simple search engine algorithm. 'Machine learning is about creating systems that learn from data, so you don't have to code it entirely,' explains Jerome Pesenti, CEO of London start-up BenevolentTech and former team leader for IBM's Watson, the clever machine famous for beating all-comers at the US game show Jeopardy!. 'You don't have to tell it what action it has to take at every turn, it will induce that from a set of data.'
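Pesenti's distinction - inducing behaviour from data rather than coding every rule - can be made concrete with a toy sketch. The following is an illustration only, not how any of the systems mentioned actually work: a nearest-centroid classifier that learns its decision rule from a handful of labelled examples instead of having it written out explicitly.

```python
# Toy illustration of "learning from data" versus hand-coded rules:
# a nearest-centroid classifier induces its decision rule from
# labelled examples rather than from explicitly programmed logic.

def train(examples):
    """examples: list of (features, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest to the features."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

# Invented labelled data: (greenness, blueness) of an image -> scene type.
data = [((0.9, 0.2), "forest"), ((0.8, 0.3), "forest"),
        ((0.2, 0.9), "lake"), ((0.3, 0.8), "lake")]
model = train(data)
print(predict(model, (0.85, 0.25)))  # a green-ish image
```

Nobody told the program what makes a forest a forest; it inferred a boundary from the examples, which is the essence of machine learning. Deep learning replaces the centroids with many layers of learned features, but the principle is the same.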

The implications of learning machines are profound, and not just for Go players. As these systems try their virtual hands at more and more skilled tasks, they threaten to profoundly alter the nature of our work and society. But it's easy to get carried away. For a more grounded idea of how AI will shape the future, you need to look at what it's actually doing today.

Its most obvious arena is, of course, the internet. If you've noticed that Google's image search is coming up with better suggestions these days, it's because the tech giant reportedly spent a cool $1bn 'training' its machine learning system to recognise images. 'If you have good data, lots of images out there that are labelled - this is a forest, this is a lake and so on - then you can create systems that make pretty good decisions,' explains Pesenti.

That's handy, but it's hardly going to change the world. AI's got a better shot with sounds. Machine learning has been deployed in two interrelated areas: voice recognition, where the best systems now have the same accuracy rate as humans, and natural language processing, the ability to understand language as we use it.

This has implications beyond asking virtual assistants Siri or Alexa to book your haircut for you. It means the next time you call up a business, you might not need to speak to a person at all. Call centres and customer service are ripe for automation, and already telcos and others are putting AI on the front line.

Go champion Lee Sedol takes on the mechanical mastermind, AlphaGo. Image credit: Getty Images

Machine-learning AI promises to automate many mundane, repetitive tasks. It won't necessarily do them better, but it will do them faster and eventually much, much cheaper.

You may think this sounds ideal - a model employee, tireless, never rude, always shows up on time - but before you replace your receptionist with someone more efficient and with a little more deterrence value (a Dalek, say), bear in mind there are downsides too. If more and more low-skilled jobs are replaced by machines, what do the people who used to work there now do, especially as every other similar job is also being digitised? And where does that leave the middle manager? It doesn't require an MBA to press an 'on' switch, after all.

This seems to point to a future of mass low-skilled unemployment, huge inequality and very probably some form of universal basic income, yet it's not clear that automation of customer service and the like is categorically different from any other labour-saving technological advance, to which we have always adapted.

Indeed, it's not certain that it will result in mass layoffs at all. At TalkTalk, for instance, there is a hybrid AI call centre system, with humans behind the scenes in case you prefer not to speak to an algorithm, or have an unusual request. 'There are clearly pockets where cost savings are the driver,' admits Nils Lenke, senior director of corporate research at Nuance, which provides TalkTalk's call centre technology, as well as AI in areas such as biometrics and medicine. 'But what we've seen is that the total workforce in customer service has not gone down. When you automate the more mundane tasks, you can free up resources at a level that wasn't possible before.'

The same could be said for healthcare, where AI is being developed that can digest thousands of scientific papers for researchers in an instant, automate medical record systems or even point out possible misdiagnoses. The idea is that AI augments humans, making us both more efficient and more effective. 'You don't want doctors to be spending six hours a day on the computer, you want them to be spending time with patients,' explains Lenke.

This offers a more positive outcome. Imagine a world where the boring and dangerous tasks were done for you, leaving you to do meaningful, unhurried work that interests you. In such a vision, AI is essentially a tool, like a hammer. Humans became better and much more productive builders once we invented the hammer, but that didn't reduce the number of builders, it just meant we built more stuff. In the same way, we'll expect better healthcare, not fewer doctors.

The end of the professions?

But what if you look a little longer term? What if the hammer just doesn't need you anymore? Already, there are signs that the digital is beginning to outperform rather than just augment the analogue in areas that require skill.

'Increasingly capable machines and systems are taking on more and more of the tasks that we used to think were the exclusive territory of human professionals - for example, medical diagnosis, legal document drafting, tax planning, designing buildings, writing earnings reports and auditing company accounts,' says Richard Susskind, IT adviser to the Lord Chief Justice of England and Wales, and co-author of The Future of the Professions. 'The pace of change is accelerating.'

This is normally when the middle classes start spluttering about the indispensability of their professional judgement and creativity. Surely these things will still stand between us and the very long line at the soup kitchen, administered by the towering AIs that have taken all our other jobs?

'It is still early days, but we are already seeing evidence in medicine and law, for example, of machines that can solve problems for which human creativity or judgment might be thought by most people to be essential. The big point here is that "judgment" and "creativity" are the tools that humans use to solve difficult problems. Machines will solve these same problems using different techniques - brute force processing, huge amounts of data, remarkable algorithms. We should not assume that the challenge here is to get machines to replicate the way that humans work,' says Susskind.

Take Chinese lending app Yongqianbao, which figures out how much of a credit risk you are by looking at such obscure things as how often you charge your phone. People would never think to do that, but AI can find 'hidden structures' in the data.
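To see how such a system might weigh unconventional signals, here is a hypothetical sketch - the feature names, weights and scoring method are all invented for illustration, not drawn from Yongqianbao - of scoring a loan applicant with a simple logistic model over behavioural data.

```python
import math

# Hypothetical sketch: the features and weights below are invented.
# A real system would learn them from repayment data; this just shows
# how behavioural signals can be combined into a risk estimate.
WEIGHTS = {
    "charges_per_day": -0.4,      # regular charging -> lower assumed risk
    "late_night_usage": 0.8,
    "app_installs_per_week": 0.3,
}
BIAS = -0.5

def default_probability(applicant):
    """Weighted sum of signals, squashed into a [0, 1] probability."""
    score = BIAS + sum(WEIGHTS[k] * applicant.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-score))  # logistic function

p = default_probability({"charges_per_day": 2.0,
                         "late_night_usage": 0.1,
                         "app_installs_per_week": 1.0})
print(round(p, 3))
```

The point of the article's example survives the simplification: a human underwriter would never think to ask how often you charge your phone, but a model trained on enough repayment histories can discover that such a signal carries predictive weight.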

Susskind's prognosis looks bright for consumers and businesses, but decidedly bleak for employees: 'The 2020s will be the decade not of unemployment but of redeployment - professionals will need to retool and retrain to build the systems that will in due course replace them. In the 2030s and 2040s, it is hard to imagine that we will need or want professionals and the professions in their current form.'

Why professions must act to remain relevant

However, even where machines do tasks better than us, it won't necessarily mean we're headed for the scrapheap. There are some areas where AI has advanced beyond our capabilities without resulting in mass job losses. Look at retail, where AI firms such as Blue Yonder anticipate sales, figure out dynamic pricing and accurately predict what clothes people are going to buy next season. Recently, Morrisons announced it had been trialling Blue Yonder's predictive sales technology for over a year, using it to stock its stores in all but fresh food, resulting in 30% fewer gaps on the shelf. Effectively, it sold more with less effort, and it did so by replacing human judgement.

'Altogether this is something like 20 million decisions per day that are completely automated across 500 stores. We take into account historical sales, whether there's a promotion or a holiday, which day of the week, the weather, whether it's school holidays, competitor prices. There's a long list of variables from which the AI algorithms can calculate probability distributions,' says Professor Michael Feindt, founder of Blue Yonder and former particle physicist at CERN. 'A human can have some experience, maybe uses three or four effects, but it's completely impossible for a human to have a hold on the non-linear effects and the interplay between these factors. The AI algorithms are simply way better.'
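The mechanics Feindt describes can be caricatured in a few lines. This is an illustrative sketch only - the demand multipliers are invented, and Blue Yonder's actual models are far richer - showing how several effects combine into an expected demand, how that becomes a probability distribution (here Poisson), and how an order quantity falls out of a chosen service level.

```python
import math

# Illustrative sketch only: the multipliers below are invented.
# Combine several effects into an expected daily demand, model daily
# sales as Poisson-distributed, and order up to a service-level quantile.

def expected_demand(base, promo=False, holiday=False, rain=False):
    rate = base
    rate *= 1.5 if promo else 1.0     # promotions lift demand
    rate *= 0.7 if holiday else 1.0   # quieter on holidays
    rate *= 0.9 if rain else 1.0      # fewer shoppers in the rain
    return rate

def poisson_pmf(k, rate):
    return math.exp(-rate) * rate ** k / math.factorial(k)

def order_quantity(rate, service_level=0.95):
    """Smallest stock level that covers demand with the given probability."""
    cumulative, k = 0.0, 0
    while cumulative < service_level:
        cumulative += poisson_pmf(k, rate)
        k += 1
    return k - 1

rate = expected_demand(base=20, promo=True)  # 30 units expected
print(order_quantity(rate))                  # stock covering ~95% of days
```

Even this toy version captures Feindt's point: the interplay of factors, and the shape of the resulting distribution rather than a single guess, is exactly what a store manager eyeballing last week's sales cannot hold in their head - and a real system repeats this calculation millions of times a day across every product and store.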

Bye bye procurement manager then? Not quite. 'We have not seen that our customers throw out lots of people. The main thing is that the work changes. The store manager often had to do one to two hours each day ordering so many of this and so many of that. They've said they're so happy they don't have to guess any more, because it was hard,' says Feindt.

So what role will humans now play in ordering stock? Feeding the algorithm accurate data, and entering their own estimates into the system, to improve its predictions. They can also 'care more about the customer and making sure the location looks good,' Feindt adds.

In this final vision of the future, it is humans who augment the AI. On the one hand, this seems like the best of both worlds - we still have jobs and businesses are able to perform better. Yet, deskilled, reduced to polishing the machines that took our place, with nothing to challenge us, we'd lose our purpose. Yes, we may be able to adapt, constantly retraining to match the dwindling demand for skilled human labour, but it's in the nature of self-learning machines to change faster than people can catch up. And if there aren't any junior roles because AI has automated them, how will we acquire skills in the first place?

But what if it's all hype?

Mass unemployment, hollowed out professions, machines augmenting people or people supporting machines? Only time will tell what the future of AI will bring. How we see it is largely a question of temperament. To the optimist, it means abundance, essential services that function properly, time for the finer things in life. To the pessimist, it's purposelessness, vicious inequality and living off handouts.

In any case, most people see it as a question of when, not if. But what if they're wrong? Consider the background of AI's disciples. They're normally experts in a narrow field (a category that's notoriously poor at predicting the future), they often have something to sell (so they would say it's the next big thing), and they almost always have an unshakeable faith in the transformative power of technology. Not everyone, however, is convinced that faith is built on solid foundations.

'A lot of the marketing coming out of Silicon Valley is based on the premise that if we just gather enough data and write good enough algorithms and have enough computing power, then we can be replaced by machines. I'm sorry but that's not going to happen,' says Christian Madsbjerg, author of Sensemaking: The Power of the Humanities in the Age of the Algorithm. 'There's a philosophical flaw at the heart of it. It started with Francis Bacon, who believed that if you just gather enough data, truth will fall out. That was wrong 400 years ago and it's wrong today.'

It is true that so far all the instances where machine learning has far outstripped people have been in repetitive tasks involving access to massive amounts of machine-readable data. But some things don't happen often enough to get the quantity of data needed (AI will find it much easier to calculate prices for bread than fine art), while other things just aren't quantifiable.

'One of the big problems with deep and machine learning is flexibility. If the data isn't good and the system produces mistakes, how do you fix that problem? Usually the answer is more training data, but if you try more data and it doesn't fix it, you're pretty screwed,' says Tim Furche, Oxford academic and co-founder of data extraction start-up Wrapidity. Think Microsoft's foul-mouthed, racist chatbot Tay, which the tech giant had to pull after it picked up its bad habits from Twitter users. Lifelike? Unfortunately yes. A job rival? Not by a long shot.


When people talk about AI, they usually mean software that can 'think' about a broad range of issues - artificial general intelligence. They might even talk of 'the singularity', the moment when a machine mind becomes more intelligent than the human mind, conscious even, before ascending to Godlike intellect.

Do our experts see it happening?

Pesenti: 'There are good and bad questions about ethics. A bad question is: when are machines going to take over the world? That's not going to happen, it's a complete distraction. I do believe that the capabilities of machines will keep improving until at some point it will match humans. But I also believe that by the time we get there our society will be completely different. It will evolve, but in a way humans want it to evolve.'

Furche: 'There's not any immediate danger of any AI we are currently developing becoming sentient or anything like that. It's still very, very far off.'

Lenke: 'There are isolated tasks where they are already better than we are, but once a neural network is trained it can do exactly that one task. We use the same brain to do all these tasks, seeing how these problems are related. I don't really see it happening that machines will get more intelligent than we are, quickly.'

Rather than letting the machine-learning systems loose without supervision, Furche prefers an approach that combines traditional, rule-based AI - and therefore more human input - with machine learning, but getting the right human element is also a challenge. 'What takes a long time is that you need people who can figure out the most promising ways to approach a particular problem with the toolset of machine learning. It's really hard to get it quickly because you have to see how it performs in reality.'

As firms get more experience, as big data proliferates with the Internet of Things and as machine learning researchers figure out ways of reducing the amount of training data required in the first place, these barriers will subside.

The unmistakable silhouette of Silicon Valley

Of course, the people who have the most experience also have the best technology, the most data and the most spare cash. The US tech giants - Facebook, Google, Amazon, Apple, Microsoft, IBM, Intel - are in an AI arms race of colossal proportions. Indeed, they're so far advanced, it's becoming increasingly hard to see how smaller companies can compete. 'At least 80% of the start-ups I've seen are focused on acquisitions. They're not actually building viable companies, they're meant to be acquired by the big guys - and the big guys are buying AI companies like crazy now,' says Furche, who recently sold Wrapidity to US software-as-a-service company Meltwater, where he's now Senior Product Owner.

Since 2012, there have been 200 acquisitions of private companies using AI across different business verticals, according to CB Insights, with over 30 in the first quarter of 2017 alone. Google nabbed 11 big ones, including DeepMind in 2014, with Apple on seven and Facebook on five. Less obvious players include the automotive giants, attempting to compete in one big area of AI research, driverless cars - a technology that promises to ease traffic, reduce road deaths and cost thousands of professional drivers their jobs - with Ford forking out $1bn on a joint venture with Argo AI in February.

The tech giants' hunger for talent is taking them into unusual hunting grounds: universities, where they're trying to hire the world's best minds. 'I've never seen something as strong as this in my life. I have a book from 2012 about reinforcement learning with about 20 articles inside from academics at different universities. They all work at Google now,' says Feindt. 'The big American companies are really in the lead. The other parts of the world should take care that they don't completely fall behind.'

In many respects, it's not the machines taking our jobs or taking over the world that we should be worried about, but the data scientists. The people behind AI are using ingenious methods to approach tasks that the rest of us have spent decades or centuries pondering and, we thought, perfecting. They may well do these things better, but are they doing the right thing?

To hear the disciples of Silicon Valley talk, technology will take us to the Promised Land, a world of abundance, connection and progress that we've never seen before. But right now they are taking everyone there, whether they want to go or not. It may be time to sit down - businesses, politicians and anyone who has an interest in society - and think long and hard about the direction we're going in, and the kind of society we want. That's one decision that no algorithm can make for us.

Want to know more about our digital future? There's still time to get tickets to MT's Future of Work event, June 15th in London, where you can hear from the likes of John Lewis chairman Sir Charlie Mayfield, CIPD boss Peter Cheese and Bruce Daisley, VP Europe at Twitter.

Main image credit: Alamy

