Why Elon Musk Fears A.I.
Elon Musk is usually far from a technological pessimist. From electric cars to Mars colonies, he’s made his name by insisting that the future can get here faster.
But when it comes to artificial intelligence, he sounds very different. Speaking at MIT in 2014, he called AI humanity’s “biggest existential threat” and compared it to “summoning the demon.”
He reiterated those fears in an interview with Recode’s Kara Swisher, though with a little less apocalyptic rhetoric. “As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger,” Musk told Swisher. “I do think we need to be very careful about the advancement of AI.”
To many people — even many machine learning researchers — an AI that surpasses humans by as much as we surpass cats sounds like a distant dream. We’re still struggling to solve even simple-seeming problems with machine learning. Self-driving cars have an extremely hard time under unusual conditions because many things that come instinctively to humans — anticipating the movements of a biker, identifying a plastic bag flapping in the wind on the road — are very difficult to teach a computer. Greater-than-human capabilities seem a long way away.
Musk is hardly alone in sounding the alarm, though. AI scientists at Oxford and at UC Berkeley, luminaries like Stephen Hawking, and many of the researchers publishing groundbreaking results agree with Musk that AI could be very dangerous. They are concerned that we’re eagerly working toward deploying powerful AI systems, and that we might do so under conditions that are ripe for dangerous mistakes.
If we take these concerns seriously, what should we be doing? People concerned with AI risk vary enormously in the details of their approaches, but agree on one thing: We should be doing more research.
Musk wants the US government to spend a year or two understanding the problem before it considers how to solve it. He expanded on this idea in the interview with Swisher:
Musk: My recommendation for the longest time has been consistent. I think we ought to have a government committee that starts off with insight, gaining insight. Spends a year or two gaining insight about AI or other technologies that are maybe dangerous, but especially AI. And then, based on that insight, comes up with rules in consultation with industry that give the highest probability for a safe advent of AI.
Swisher: You think that — do you see that happening?
Musk: I do not.
Swisher: You do not. And do you then continue to think that Google —
Musk: No, to the best of my knowledge, this is not occurring.
Swisher: Do you think that Google and Facebook continue to have too much power in this? That’s why you started OpenAI and other things.
Musk: Yeah, OpenAI was about the democratization of AI power. So that’s why OpenAI was created as a nonprofit foundation, to ensure that AI power ... or to reduce the probability that AI power would be monopolized.
Swisher: Which it’s being?
Musk: There is a very strong concentration of AI power, and especially at Google/DeepMind. And I have very high regard for Larry Page and Demis Hassabis, but I do think that there’s value to some independent oversight.
From Musk’s perspective, here’s what is going on: Researchers — especially at Alphabet’s DeepMind, the AI lab that developed AlphaGo and AlphaZero — are eagerly working toward complex and powerful AI systems. And because many people aren’t convinced that AI is dangerous, the organizations building it aren’t being held to high enough standards of accountability and caution.
“We don’t want to learn from our mistakes” with AI
Max Tegmark, a physics professor at MIT, expressed many of the same sentiments in a conversation last year with journalist Maureen Dowd for Vanity Fair: “When we got fire and messed up with it, we invented the fire extinguisher. When we got cars and messed up, we invented the seat belt, airbag, and traffic light. But with nuclear weapons and A.I., we don’t want to learn from our mistakes. We want to plan ahead.”
In fact, if AI is powerful enough, we may have no choice but to plan ahead. Nick Bostrom, a philosopher at Oxford, made the case in his 2014 book Superintelligence that a badly designed AI system would be impossible to correct once deployed: “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”
In that respect, AI deployment is like a rocket launch: Everything has to be done exactly right before we hit “go,” because we can’t count on making even tiny corrections later. Bostrom also argues in Superintelligence that AI systems could rapidly develop unexpected capabilities — for example, an AI system that is as good as a human at inventing new machine-learning algorithms could automate that work and quickly become much better than any human at it.
That has many people in the AI field thinking that the stakes could be enormous. In a conversation with Musk and Dowd for Vanity Fair, Y Combinator’s Sam Altman said, “In the next few decades we are either going to head toward self-destruction or toward human descendants eventually colonizing the universe.”
“Right,” Musk concurred.
In context, then, Musk’s AI concerns are not an out-of-character streak of technological pessimism. They stem from optimism — a belief in the exceptional transformative potential of AI. It’s precisely the people who expect AI to make the biggest splash who’ve concluded that working to get ahead of it should be one of our urgent priorities.