OpenAI’s Latest Goal: Prevent ‘Superintelligent’ AI From Killing Us All

It’s unclear if AI will one day match and surpass human intelligence, but ChatGPT creator OpenAI is already starting to research ways to prevent a so-called “superintelligent AI” from going rogue and potentially killing us all. 

The threat may sound like a science-fiction movie plot. But OpenAI today warned about the risks of superintelligent AI systems—or computers that will be significantly smarter than humans. 

“Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems,” the company wrote in a blog post. “But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”

OpenAI predicts that superintelligent AI systems will emerge within “this decade.” It’s a rather bold prediction since the company’s ChatGPT app isn’t considered to be an artificial general intelligence (AGI) capable of human-level thought. 

Instead, ChatGPT functions as a large language model, akin to a sophisticated autocomplete, that generates human-like responses without fully understanding the meaning behind the words. Hence, it can sometimes make up answers that are obviously wrong or fail to comprehend basic logic. So it may seem strange that OpenAI is already predicting the arrival of superintelligent AI. Nevertheless, the San Francisco lab wants to prepare.

“Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue,” the company said. “Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us.”

Indeed, a superintelligent AI could react and issue commands far faster than any human, making it perhaps impossible to control in real time. That means OpenAI needs “new scientific and technical breakthroughs” to rein in the hypothetical AI system. To do so, the company is creating a new team focused on “superalignment” research and dedicating 20% of OpenAI’s computing resources to the effort.

The team’s current approach will focus on creating “a roughly human-level automated alignment researcher,” essentially an AI program designed, ironically, to oversee the future superintelligent computer. “We can then use vast amounts of compute to scale our efforts, and iteratively align superintelligence,” the company said.

OpenAI is aiming to “solve the core technical challenges of superintelligence alignment” in the next four years. “While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem,” the company said.

OpenAI plans to share the results of this research publicly so that other AI companies can potentially use it.
