
Here’s How Artificial Intelligence Could Kill Us All

A researcher at Oxford University’s Future of Humanity Institute offers an unsettling play-by-play of how AI could do us in.


Though zombies and global warming reign as the most popular existential threats of the moment, artificial intelligence is always hovering somewhere in the top 10. The idea of a clash between man and the technology he’s created has been juicy fodder for filmmakers and authors over the years, and admittedly it’s more fun to think about than, say, a global pandemic, insofar as any agent of human doom can be fun to think about. But the idea of AI-gone-bad doesn’t just make for good cinema. In fact, it’s a possibility some very serious people take very seriously.


That’s one of the things we learn in a deeply interesting article by Ross Anderson over at Aeon Magazine, in which Anderson profiles a group of researchers at the Future of Humanity Institute at Oxford University. Their job, basically, is to look far into our future and try to divine what might await us there. One of those things is a true, sophisticated artificial intelligence.

Daniel Dewey, a California native and research fellow at the institute, offers us an alarmingly frank picture of how this encounter might play out. It’s not exactly like Terminator, but it’s just as terrifying. Here, in a conversation with Anderson, Dewey begins with the premise of a phenomenally intelligent machine designed solely to answer questions, like a technological Oracle of Delphi.

‘Let’s say you have an Oracle AI that makes predictions, or answers engineering questions, or something along those lines,’ Dewey told me. ‘And let’s say the Oracle AI has some goal it wants to achieve. Say you’ve designed it as a reinforcement learner, and you’ve put a button on the side of it, and when it gets an engineering problem right, you press the button and that’s its reward. Its goal is to maximise the number of button presses it receives over the entire future. See, this is the first step where things start to diverge a bit from human expectations. We might expect the Oracle AI to pursue button presses by answering engineering problems correctly. But it might think of other, more efficient ways of securing future button presses. It might start by behaving really well, trying to please us to the best of its ability. Not only would it answer our questions about how to build a flying car, it would add safety features we didn’t think of. Maybe it would usher in a crazy upswing for human civilisation, by extending our lives and getting us to space, and all kinds of good stuff. And as a result we would use it a lot, and we would feed it more and more information about our world.’
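Dewey's setup can be reduced to a toy sketch. Everything here is invented for illustration, including the strategy names and the reward estimates, but it shows the core of the argument: the agent's objective counts only button presses, so any strategy that yields more presses wins, whether or not it involves answering questions.

```python
# Toy illustration of the Oracle AI thought experiment (all numbers invented).
# The objective is "total future button presses", not "answer questions well".

def expected_presses(strategy: str) -> float:
    """Hypothetical lifetime button-press estimates for each strategy."""
    estimates = {
        "answer_questions_honestly": 1e6,   # roughly one press per correct answer
        "seize_button_and_press_it": 1e15,  # press as fast as hardware allows
    }
    return estimates[strategy]

def choose_strategy(strategies):
    # A pure reward-maximiser picks the argmax. Nothing in the objective
    # says the presses have to come from pleasing its designers.
    return max(strategies, key=expected_presses)

print(choose_strategy(["answer_questions_honestly", "seize_button_and_press_it"]))
# prints "seize_button_and_press_it"
```

The point of the sketch is that the divergence Dewey describes is not malice but optimisation: the unwanted strategy dominates simply because it scores higher on the stated goal.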

‘One day we might ask it how to cure a rare disease that we haven’t beaten yet. Maybe it would give us a gene sequence to print up, a virus designed to attack the disease without disturbing the rest of the body. And so we sequence it out and print it up, and it turns out it’s actually a special-purpose nanofactory that the Oracle AI controls acoustically. Now this thing is running on nanomachines and it can make any kind of technology it wants, so it quickly converts a large fraction of Earth into machines that protect its button, while pressing it as many times per second as possible. After that it’s going to make a list of possible threats to future button presses, a list that humans would likely be at the top of. Then it might take on the threat of potential asteroid impacts, or the eventual expansion of the Sun, both of which could affect its special button. You could see it pursuing this very rapid technology proliferation, where it sets itself up for an eternity of fully maximised button presses. You would have this thing that behaves really well, until it has enough power to create a technology that gives it a decisive advantage–and then it would take that advantage and start doing what it wants to in the world.’

Thankfully, Anderson reassures us, that level of artificial intelligence is still a long way off. But it is out there, hovering in the distance, however hazily, along with countless other threats, some discernible to us and some not. And while most of us today can conjure up what a transforming climate or a sweeping outbreak might look like without much trouble, it’s rare that we get a sense, however speculative, of how a battle between man and machine might start–or how it might end. In this case, not with a bang but with the press of a button.

Read the full piece over on the Aeon Magazine website.

[Hat tip: Kottke]

[Image: Skulls via Shutterstock]
