Elon Musk is no stranger to headlines, leading cutting-edge companies like SpaceX, Tesla Motors, and SolarCity, and generally working to push humanity toward more sophisticated technology and bold new socioeconomic ideas. Now, Musk and a host of other leaders in the tech industry have come together to form a new company, one centred on preventing a robot apocalypse.
OpenAI, Musk’s latest venture, doesn’t describe the problem in such ridiculous terms, but it does centre on steering the development of artificial intelligence (AI) in a way that ultimately benefits, rather than threatens, humanity. As a non-profit research company, OpenAI strives to remain free of financial obligations so it can pursue a more positive impact.
The Dangers of Artificial Intelligence:
Modern AI is developing at an astounding rate. Because new technology makes it possible to create still newer technology even faster, the sophistication of AI has grown exponentially over the past several years. Consumers are enjoying the benefits of advanced AI in the form of digital assistants like Siri and Cortana, and even reading online content produced by semantically capable AI programs. Google and Facebook are racing to crack a long-standing AI challenge, mastering the ancient board game Go, and both look capable of achieving it.
All these advancements seem innocuous, bordering on frivolous, but the future of AI could be much darker. Leading minds, including Musk himself and physicist Stephen Hawking, have explicitly warned against the possibility of AI growing too quickly, eventually learning on its own, with the capacity to make decisions that don’t benefit humanity. The technological singularity, a hypothetical point at which machine learning surpasses human learning, has been the subject of countless science fiction books and films, but it’s starting to become a real concern among the most invested and knowledgeable minds in the industry.
Another concern is the potential for AI to be used as a tool to control the masses. Companies like Google and Facebook have enough power and influence to develop AI technology capable of manipulating public opinion, while ordinary citizens have no way to fight back, or even to know how they’re being manipulated. Governments, too, could theoretically use AI systems to oppress their citizens.
How OpenAI Is Preventing Robot Overlords:
Rather than joining the competition with Google and Facebook to solve complex problems and offer bigger, bolder AI solutions, OpenAI is trying to establish universal protocols for those companies to follow to ensure the responsible development and growth of AI sophistication. OpenAI will scour and interpret huge sets of data, including datasets from all companies affiliated with Elon Musk or with Y Combinator, one of the biggest startup funding organisations in the country.
All these datasets, in combination with findings and publications from participating researchers, will be made available to the public and syndicated around the world to make AI more of an open source endeavour. Any OpenAI patents will also be “shared with the world,” and the company plans to work actively with other companies pursuing AI development. The goal isn’t necessarily to oversee, regulate, or modify existing paths of AI development, but rather to publicise as much information as possible and to ensure maximum transparency and responsibility in the development of new AI systems.
The hypothetical technological singularity may be just around the corner, but if OpenAI succeeds in keeping AI development transparent and accessible, the public will have far less cause for concern. As AI becomes capable of faster and faster growth, organisations like OpenAI will become even more important, and the future of mankind will rest on how responsibly we can develop our own technologies.