Elon Musk and dozens of other technology leaders have called on AI labs to pause the development of systems that can compete with human-level intelligence.
The warning came in an open letter from the Future of Life Institute, signed by Musk, Apple co-founder Steve Wozniak and dozens of other technology leaders and researchers.
“Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the letter read.
“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
The letter added, “Such decisions must not be delegated to unelected tech leaders.”
The Future of Life Institute is a nonprofit organization based in Cambridge, Massachusetts, that campaigns for the responsible and ethical development of artificial intelligence. Its founders include MIT cosmologist Max Tegmark and Skype co-founder Jaan Tallinn.
The organization has previously gotten the likes of Musk and Google-owned AI lab DeepMind to promise never to develop lethal autonomous weapons systems.
The institute said it was calling on all AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
GPT-4, which was released earlier this month, is thought to be far more advanced than its predecessor GPT-3.
“If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” it added.
ChatGPT, the viral AI chatbot, has stunned researchers with its ability to produce humanlike responses to user prompts. By January, ChatGPT had amassed 100 million monthly active users only two months into its launch, making it the fastest-growing consumer application in history.
The technology is trained on huge amounts of data from the internet, and has been used for everything from writing poetry in the style of William Shakespeare to drafting legal opinions on court cases.
But AI ethicists have also raised concerns about potential abuses of the technology, such as plagiarism and misinformation.
In the Future of Life Institute letter, technology leaders and academics said AI systems with human-competitive intelligence pose “profound risks to society and humanity.”
“AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal,” they said.
OpenAI was not immediately available for comment when contacted by CNBC.
OpenAI, which is backed by Microsoft, reportedly received a $10 billion investment from the Redmond, Washington-based technology giant. Microsoft has also integrated the company’s GPT natural language processing technology into its Bing search engine to make it more conversational.
Google subsequently announced its own competing conversational AI product for consumers, called Google Bard.
Musk has previously said he thinks AI represents one of the “biggest risks” to civilization.
The Tesla CEO left OpenAI’s board in 2018 and no longer holds a stake in the company.
He has criticized the organization a number of times recently, saying he believes it is diverging from its original purpose.
Regulators are also racing to get a handle on AI tools as the technology is advancing at a rapid pace. On Wednesday, the U.K. government published a white paper on AI, deferring to different regulators to supervise the use of AI tools in their respective sectors by applying existing laws.