A former Google engineer has warned that a new generation of autonomous weapons, or “killer robots”, could accidentally start a war or cause mass atrocities. Laura Nolan, formerly one of Google's top software engineers, resigned last year in protest at being sent to work on a project to dramatically enhance US military drone technology, and has called for a ban on all AI killing machines not operated by humans.
Nolan said killer robots not guided by human remote control should be outlawed by the same type of international treaty that bans chemical weapons. Unlike drones, which are controlled by military teams often thousands of miles away from where the flying weapon is being deployed, Nolan said killer robots have the potential to do “calamitous things that they were not originally programmed for”. There is no suggestion that Google is involved in the development of autonomous weapons systems.
Last month a UN panel of government experts debated autonomous weapons and found Google to be eschewing AI for use in weapons systems and engaging in best practice. Nolan, who has joined the Campaign to Stop Killer Robots and has briefed UN diplomats in New York and Geneva over the dangers posed by autonomous weapons, said: “The likelihood of a disaster is in proportion to how many of these machines will be in a particular area at once.
What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed. There could be large-scale accidents because these things will start to behave in unexpected ways, which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous.” Google recruited Nolan, a computer science graduate from Trinity College Dublin, to work on Project Maven in 2017 after she had been employed by the tech giant for four years, becoming one of its top software engineers in Ireland.