The growth of AI (artificial intelligence) is an exciting yet terrifying prospect. While there may be dramatic productivity gains as it transforms the workplace, there's always the worry of what AI could do in the wrong hands.
The Malicious Use of Artificial Intelligence report examines the potential security threats from AI gone rogue. Authored by 26 contributors from a range of universities and think tanks, the report looks at the ways in which AI is being used maliciously now, or is likely to be within the next five years.
It identifies three main contexts - digital, political and physical - in which AI is likely to be used maliciously. The obvious dangers revolve around drone warfare, data extraction and hacking, but the report highlights how the increased capabilities of the technology could allow for an array of new threats on a much larger scale.
MT scanned the 101-page document and picked out some of the juicier perils.
That’s not Hayley you’re speaking to: The experts warn of the threat from increasingly advanced chatbots that use machine learning to impersonate your contacts via email or messenger apps to extract information. They also warn of the possibility of these bots being able to ‘masquerade visually as another person in these chats’. Possibly racist, too...
Computer says no: Imitating human behaviour (e.g. web navigation), hundreds of thousands of bots flood an online service - in what the report terms a ‘humanlike denial of service’ - blocking legitimate users and, in the process, making the service less secure. This type of cyberattack could be used against companies offering digital services or those in possession of online databases.
Fake news: Something you’re familiar with no doubt, but the experts warn that it could become more advanced. They foresee the advent of highly realistic fabricated videos of leaders, CEOs or just about anyone you can think of, making defamatory statements (or worse). Where this technology exists already, the experts warn it will only become more accessible and widespread.
Special delivery: Companies are increasingly adopting pre-programmed bots to perform a number of manual tasks, whether for deliveries, factory work or simply cleaning. The experts warn that these are under greater threat of being hijacked and used maliciously, to carry explosives for example.
Manipulation of information availability: Increasingly complex algorithms could be used to affect the dissemination of certain information. For example, a company's existing clients could be driven towards negative news reports or fake posts portraying the firm's product or conduct in a negative light.
It all sounds rather scary, but it may be a tad premature to head to the hills with a stockpile of guns and baked beans just yet. This isn't exactly The Terminator. Besides, the report is more due diligence than definite threat, and the experts deliver recommendations on how to mitigate the risks.
But while it doesn't require paranoia, AI does require vigilance. Like any powerful tool, it will be as dangerous as those who wield it.
Image credit: Shutterstock/Willyam Bradberry