SpaceX CEO Elon Musk and Peter Thiel Launch OpenAI
One of this year’s hottest topics was, surprisingly, artificial intelligence (AI). Billionaire tech visionaries SpaceX CEO Elon Musk and Palantir co-founder Peter Thiel have been talking a lot about the prospect of super-intelligent computers, even as companies like Alphabet Inc. and Facebook, Inc. work on AI technology. (Source: “How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over,” BackChannel, December 11, 2015.)
When I say artificial intelligence, please don’t picture a robot; the two are not the same. AI is not a physical being like the machines in Terminator, but rather a type of software that is capable of learning and improving its own processes based on what it has learned. Or at least that’s the narrowest definition of AI. As time wears on, the software will grow smarter and perhaps expand past its original design. That’s what Elon Musk, Peter Thiel, and a number of other billionaires are worried about.
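To make the idea of “software that improves based on what it’s learned” concrete, here is a deliberately tiny, hypothetical sketch (not code from OpenAI or any company mentioned here). A single-parameter “learner” starts out guessing badly and nudges its parameter after each example until its predictions match the data:

```python
# Toy illustration: software that "learns" from examples.
# The learner estimates the slope of y = 3x by repeatedly nudging its one
# adjustable parameter to shrink its prediction error -- a bare-bones
# version of how learning software refines itself with experience.

def train(examples, steps=1000, lr=0.01):
    w = 0.0  # the model's single adjustable parameter, initially a bad guess
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y      # how wrong the current guess is
            w -= lr * error * x    # adjust w in the direction that reduces error
    return w

examples = [(1, 3), (2, 6), (3, 9)]  # samples drawn from y = 3x
w = train(examples)
print(round(w, 2))  # converges near 3.0
```

Real AI systems use vastly more parameters and data, but the loop is the same in spirit: observe, measure the error, adjust.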
It’s almost guaranteed that AI software will grow much smarter, but right now, the technology is being developed at firms like Google and Facebook, which means its direction is bound by corporate interests. No one is keeping watch over the long-term safety of humankind, so Elon Musk thought he’d add that task to his plate. After all, the man is only running three companies.
Elon Musk Warned About AI Development
What happens if AI software grows and falls into the wrong hands? What happens if the software itself becomes uncooperative? Those may seem like science-fiction plots, but they are valid questions in the murky field of artificial intelligence.
Last year, scholars at the Future of Life Institute published an open letter warning about the dangers of AI development. Signed by thousands of scientists and researchers from around the world, the letter urged companies to exercise caution in their pursuit of more intelligent software. (Source: “An Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence,” Future of Life Institute, January 23, 2015.)
At Google, AI powers the search engine, Google Translate, and many components of the “Android” operating system. Likewise, Facebook drives its new virtual assistant software, “M,” with an exceptional AI algorithm.
Both of these companies are innocent of any wrongdoing, but that would be little comfort if AI programs grew out of their control. To reduce the likelihood of that happening, Elon Musk and his friends are launching OpenAI, a non-profit research center dedicated to artificial intelligence.
The 501(c)(3) will publish its results online, patent-free. Anyone wishing to look at the software or rework it into a new program is free to do so, because Elon Musk understands that there’s no stopping the wheels of progress. The creation of smarter AI is inevitable; all we can do is make sure it isn’t dangerous.
Musk: “Everyone Should Have AI”
According to a recent interview with Elon Musk, the purpose of OpenAI is to democratize the power of AI software: “I think the best defense against the misuse of AI is to empower as many people as possible to have AI,” said Musk. “If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower.” (Source: “How Elon Musk and Y Combinator Plan to Stop Computers From Taking Over,” BackChannel, December 11, 2015.)
With a cash pile bordering on $1.0 billion and some of the smartest minds in the world, OpenAI will have the resources to keep pace with Google and Facebook. More important still, it has humanity’s interests at heart.
Structuring the organization as a non-profit was a stroke of genius from Musk and his group of tech visionaries. They had every financial incentive to keep such potentially profitable technology to themselves, but they chose to share it anyway, because building a brighter future matters to them.
With so much chaos and conflict in the world, it’s comforting to see people like Elon Musk working on a better tomorrow.