Yuliia Danylenko, editor-in-chief of SPEKA
8 July 2022 · 14 minute read

«Orwell got one thing wrong». Australian professor on Russian AI landmines, surveillance state and killer robots

Toby Walsh is a professor of artificial intelligence at the University of New South Wales, Australia, the author of three books on the future of AI and one of the most acclaimed experts in the field. He was also recently banned from ever entering Russia for dismantling its claim that Russian AI landmines can distinguish between soldiers and civilians.

Toby and I first spoke back in 2016 – he wrote a piece for the media outlet I worked at at the time about robots becoming so smart they could turn deadly. In 2022 Mr. Walsh was banned from Russia because he pointed out that its mines are not as smart as the Kremlin tries to argue.

And that's exactly what makes them deadly, not to mention both immoral and illegal. 

– First of all, let me congratulate you on getting on Russia's ban list.

That's a club I'm proud to be a part of. There were 120 Australians banned (the list had 121 names, but one person got banned twice – ed.) – that's quite a lot of people.

Interestingly, it shows outside opinion still matters for Russia, otherwise why would they bother? 

– The thing that got you banned was your critique of Russian AI-operated POM-3 mines, which Russia claims can differentiate between civilians and military personnel.

These claims are bullshit. These mines have been taken apart, and we know they have only one sensor – a seismic one. Basically, this sensor measures vibrations in the ground so the mine can go off before people actually step on it. It also has a huge destructive range.

The idea that you can distinguish between enemy combatants, civilians and friendly combatants with seismic information is ridiculous. And it's an example of trying to justify the use of a barbaric mine, a mine that caused so much fear and suffering during WWII and the Vietnam War, and which is actually banned by the Ottawa Treaty.

This treaty was signed by more than 160 countries, including Ukraine, because these mines are designed to harm individuals in a very nasty way.

Though Russia, of course, never signed it. 

– This case raises an even bigger concern – the use of AI weapons and their regulation: which ones should be banned, and how and when AI can and should be used in military tech, especially when it comes to destructive weapons.

I have spoken to the United Nations on this topic about half a dozen times.  Human Rights Watch and Stop Killer Robots have invited me to various meetings, the CCW convention in particular, where these matters were discussed. Just like the majority of my colleagues, AI researchers, I am very concerned about the misuse of AI in warfare. 

There are actually good things you can use AI for. In Australia we're building a mine-clearing robot. That's a perfect example of a task a robot can do. It gets blown up, you just go and buy yourself another robot. No one ever has to lose their life or a limb again to clear a mine.

But the idea that we would hand over the decision of killing to machines – in particular, identification, tracking and destruction of humans – takes us morally, legally and technically to a very dark place. 

I have consistently campaigned against it for the last six years. 

– The plot of The Terminator starts in a similar way… 

Comparing it to The Terminator gives people the idea that these machines are much more advanced than they are. It's actually quite simple. We already see drones becoming more and more autonomous. Drones in Libya and Syria used facial recognition technology. Turkey is a major military power in large part because they've been pioneering the use of rather low-cost and increasingly sophisticated drones that constantly become more autonomous. It's started to transform the way we fight wars. 

Just look at how the Russian cruiser Moskva got sunk – apparently there was a drone distracting the ship, allowing the missile to reach it. And we're seeing how drones are used to direct artillery and to gather information. They are starting to play a major role in wars, changing the balance of power.

– Can you name other examples of the benign use of AI, besides mine-clearing robots?

Another example – we'll never have to lose another life delivering supplies to contested territories. We can load them onto a drone or autonomous truck and those will do the job. 

Logistics is not as sexy as killing people, but it's one of the best examples: getting people food, equipment and bullets to the right place at the right time. When Operations Desert Storm and Desert Shield happened some 30 years ago, it was the use of computers and AI that allowed the planning and shipping of soldiers, hundreds of aircraft, dozens of ships, all the ammunition and everything else.

The fleet oiler USNS Andrew J. Higgins conducts an underway replenishment during Operation Desert Shield. Photo: NARA

So there are good examples, as well as troubling ones. There are a lot of questions from a technical perspective about whether these drones will abide by international humanitarian law, then there are legal questions, and then there are moral questions. Surveys from all around the world show that many people find such weapons repugnant. 

The UN Secretary-General has actually called this out. He said, let's call it what it is – these weapons are morally repugnant and should be banned. And there are various weapons we have decided to ban in the past because we found them repugnant. We found chemical weapons repugnant, we found bio-weapons repugnant, and we're now beginning to find nuclear weapons quite repugnant, too. We regulate those.

And it's quite obvious the public will find these weapons repugnant, too. The idea that a machine could decide who lives and who dies is one we find morally challenging.

– If we decide to ban these weapons, how do we enforce it? 

A good question. The answer is to look at how we enforced the bans of other morally repugnant weapons. Let's take chemical weapons, for example. 

Chemical weapons do not require high technical sophistication, and of course we cannot uninvent the chemistry that goes into them. You can go to the swimming pool section of your local hardware store and start building a very rudimentary chemical weapon.

But since these weapons are banned, arms dealers can't sell them, at least not openly. So they are not widely available. This has not eliminated chemical weapons completely – there are still cases, like in Syria, where they have been used, often against the civilian population. But when that happens, the world condemns it – there are articles in the NYT, UN resolutions, and economic and other sanctions introduced against those who violated the rules. And that has been relatively effective in limiting the use of chemical weapons. We can hope for the same outcome with AI weapons.

We won't be able to uninvent the technology – it's already there. But we can limit its use. 

What troubles me, though, is that with most types of these weapons we are usually only able to ban them after they have been used. It was only after the terrible use of chemical weapons in World War I that they were banned. So what keeps me awake at night is the thought that we won't have the conviction to ban AI weapons before they are actually used.

– Were there any cases of weapons being banned preemptively? 

There was, in fact, one technology that we had the foresight to ban preemptively. And that was, ironically, the blinding laser.

Two companies, one Chinese and another from the US, had announced that they were going to develop blinding lasers. And thanks to the ban they were not able to finish and start selling them. Neither in Syria nor in Ukraine are there people blinded by lasers.

There are already tons of horrific ways to fight wars, we don't need to invent new ones, especially the ones involving blinding people on the battlefield. 

– Where do you draw the line with the AI weapons? 

You have to work with what you can realistically get the people in the UN to agree on. There are already some AI-coordinated weapons, like the aforementioned mines. But they are not actually smart: they detect footsteps and blow up if a person gets within a certain radius.

Sophisticated machines might be allowed to recognize the profile of a tank and, hopefully, distinguish it from an ambulance. But machines being able to recognize individual humans – and some Turkish drones are supposedly already able to use facial recognition to identify particular people – is where we can hope to find consensus.

A ban on AI identifying, tracking and destroying human targets is what we can get the United Nations to agree on.

– The idea of a drone flying and killing a certain person in the middle of the street in broad daylight is chilling. 

It would take us to a very dangerous world. There has already been an attempt to kill the president of Venezuela with a drone – though not an autonomous one; we believe it was flown by a human. Nevertheless, it's only a matter of time.

That is a very destabilizing weapon that can be used against high-profile targets – the President of Ukraine, of Russia. Although the latter is more worried about being poisoned by his own side. 

– And all the technology needed is already there: autonomous drones exist, and so does facial recognition technology. Add the black market into the mix, and you've got yourself quite a poisonous mixture.

That's the same face recognition technology your smartphone uses. Put it on a drone, and it will be able to track a specific person if you program it that way. We're basically one software tweak away from an app that keeps track of who unfollowed you becoming a drone programmed to fly into the people who did.

There's an interesting video my colleague Stuart Russell made, called Slaughterbots. It paints a scenario of the world we might very soon find ourselves in. The video is quite terrifying, and when we premiered it at the UN, they said we were being melodramatic. Soon after that there was a drone assassination attempt in Venezuela.

– The use of facial recognition technology is probably one of the most controversial topics right now.

It's true, and I'm staunchly against many of its uses. You can see certain authoritarian states misusing the technology already. China does it, and Russia does it for tracking people, too. We're getting closer to the world George Orwell told us about.

– His 24/7 surveillance idea always seemed exaggerated to me in the past, but now I'm not so sure.

Orwell got one thing wrong. It's not Big Brother, a human, watching people through the television. It's computers that are able to watch people at scale.

East Germany was a perfect example of what happens when you get people to watch people: you need at least one third of your population constantly watching the other two thirds.

But it doesn't scale very well, and of course you cannot trust these people to constantly tell you the truth. Computers scale perfectly. 

That's the terrifying thing about computers: you can surveil the whole population, and in China we can see it starting to happen. They have an algorithm ominously named Skynet that can scan 1 billion faces in a minute. 

Big Skynet is watching you. At least in China.

– Another worrying use of AI is cyber warfare. 

That's a very natural playground where AI is going to be increasingly used – not against the military, but against civilian infrastructure. I've always said we'll know when World War III starts, because the Internet will stop, and all the hospitals, power stations, electrical grids and banks are going to stop working.

Ukraine is defending itself in cyberspace against Russian hackers all the time. Unfortunately, those attacks are going to become more and more sophisticated and fast, because it won't be humans behind them but AI algorithms instead.

And the only way to defend against these lightning-speed attacks will be to have a smarter, faster AI yourself.

– How will AI shape the world in the coming years – will it be the force for good or quite the opposite? And how can we ensure it does not become our undoing? 

Well, I encourage people to read my new book, "Machines Behaving Badly", which covers these topics. But the message of that book, and the message I would share with your readers, is that it's entirely up to us.

Technology is not destiny; technology is what we choose it to be. And we can choose it to be a force for good or a force for oppression. It's about where we let technology into our lives – and where we don't.

There are both good and bad things you can do with exactly the same algorithms. It's like fire: we can't uninvent it, and there are good things that come from fire – like cooking food and heating – and bad ones, like wildfires and bombs. 

What separates one from the other is a choice. 
