Background
Killed by software
Is it dangerous if your fridge is sentient? American expert Jay Tuck thinks it is. “Devices are not angry or sad: they do their job. But that means they might collide with humans.”
Thursday 9 November 2017

Artificial intelligence is changing our world, and not only for good, argues American journalist, writer and film producer Jay Tuck (1945). In fact, sentient and self-teaching machines can get us into trouble. “If we’re not careful, artificial intelligence will kill us.”

Last week, Tuck gave a talk at the Brave New World technology conference. “Artificial intelligence is software that writes itself and keeps updating. After five or six adjustments, you need a specialist to find out what exactly has changed. We’re losing control. Soon, the machines won’t do what we tell them.”

Tuck experienced the First and Second Gulf Wars at first hand, as a correspondent. He saw the battlefield change: from cruise missiles to smart bombs and – they’ve been around for some time already – killer drones. Those self-guided devices will soon be able to make life-or-death decisions.

Timely tackle 

“That can go very wrong and it’s already nearly happened once. The TALON SWORDS is an American drone with caterpillar tracks, equipped with a cannon and missiles. They lost control of the machine at a demonstration for generals and other high-ranking officials. The drone’s weapons were aimed at the audience. Luckily, a soldier managed to tackle the thing to the ground like in American football.”

But it’s not just military software that’s dangerous, Tuck warns. “The devices we surround ourselves with are growing smarter and smarter. Stock exchange dealers are not the decisive players in the financial world any more – they’re merely extras in a film. Computer networks conduct the major transactions, so there’s that loss of control again.”

Often, when you discuss disaster scenarios involving artificial intelligence, people think of The Terminator, the film where computer network Skynet becomes self-aware and attempts to destroy mankind. “That’s Hollywood, it’s emotional. Devices with artificial intelligence aren’t angry or sad; they want to follow their orders and adapt to that as well as possible. That’s how they could collide with humans.”

Environment

“If we were to use artificial intelligence to deal with our environmental problems, it might decide that you, in your dirty car, should not be allowed on the road,” Tuck says.

“And this technology is everywhere. That’s the risk of the internet of things. Your fridge and your toaster might start to think for themselves, with terrible consequences. Pulling the plug won’t help. Computers don’t talk primarily to us any more, but to each other. All the information is shared and ends up all over the world.”

What can we do? “That’s a tough one to answer. You could add an ethical component to the self-teaching programs: make them consider the moral implications. But that’s difficult and there’s a risk to that too. The companies and specialists involved in artificial intelligence haven’t come up with the solution yet.”

By Vincent Bongers