Why does everyone have such a downer on killer robots? There seem to be quite a few upsides. Replacing squaddies with platoons of plasma rifle-toting terminators could reduce civilian casualties, provide a nice boost for our aerospace and munitions industries and even come in handy in the event of a more realistic doomsday scenario, like a zombie apocalypse.
Not everyone subscribes to this view, of course. A dream team of academics and tech business bigwigs has written an open letter arguing that bringing AI into warfare would be a ‘bad idea’ and should be prevented by ‘a ban on offensive autonomous weapons beyond meaningful human control’.
Tech legends Elon Musk and Steve Wozniak joined Stephen Hawking, Noam Chomsky, Demis Hassabis, boss of Google’s AI unit DeepMind, and thousands of eminent computer scientists as signatories to the letter, which was drafted by the rather grandly named Future of Life Institute and delivered at an AI conference in Argentina.
A new global arms race, they argue, would be ‘virtually inevitable’ once one power started developing lethal autonomous weapons systems (LAWS, the less snazzy name for killer robots). ‘The endpoint of this technological trajectory is obvious,’ the letter reads. ‘Autonomous weapons will become the Kalashnikovs of tomorrow.’
Aside from decreasing the political cost of going to war, the deadly machines will fall into the hands of terrorists, dictators and warlords through the black market, the letter warns. It doesn’t mention the possibility that a Skynet-type AI could use the killer robots to wipe us mere mortals out for its own nefarious purposes, though both Hawking and Musk have spoken of AI itself as an ‘existential threat’ to humanity.
If all this sounds a bit too much like science fiction for your taste, bear in mind that the letter’s writers believe the development of this technology is ‘feasible within years, not decades’. And those guys should know.
The UK and US have rejected calls to ban AI in warfare. ‘At present, we do not see the need for a prohibition on LAWS, as international humanitarian law already provides sufficient regulation for this area,’ the Foreign Office told the Guardian in April, adding that Britain was not developing such systems.
A report last month by the US Army Research Laboratory on warfare in 2050, however, indicates that it may only be a matter of time. It predicts that teams of super-humans (equipped with force fields and lasers, incidentally) and autonomous robot swarms will form the armies of the future.
‘The developments are likely to occur because they are critically needed, because humans will simply be unable to keep up with the information flows and the pace of the battle, as they do not have sufficient information-processing capabilities or cognitive bandwidth.’ That is quite creepy, actually...
Unfortunately for those who wish to stop the inexorable rise of the machines and prevent human beings from being grown in pink gelatinous pods for their bioelectrical energy a la The Matrix, efforts to prevent new military technologies from being used have had mixed results so far. The stockpiling of biological and chemical weapons was eventually banned; nuclear weapons never were.
The letter’s writers hope to persuade the world to nip it in the bud, but they may find their pleas fall on deaf ears. Perhaps it’s time to invest in time travel. That worked for John Connor, right?