Angel or Demon? What Musk, Gates, and 5 Other Technologists Fear About AI

Sam Altman, the CEO of ChatGPT’s parent company OpenAI, admitted last week he is “a little bit scared” of the potential for AI to fundamentally change society. “We’ve got to be careful here,” Altman told ABC, citing the need for regulation and engagement from the public.

It’s a familiar tune. Prominent programmers, researchers, and tech executives have sounded the alarm on artificial intelligence for decades. Yet the technology has found its way into almost every corner of our digital lives, from search engines to self-driving cars and Netflix recommendations.

Here are seven influential figures who have warned us about AI and, in some cases, warmed up to its potential to change our lives.


1. Joseph Weizenbaum (1923-2008)

Joseph Weizenbaum
(Credit: ullstein bild / Contributor / Getty Images)

Joseph Weizenbaum, who would have celebrated his 100th birthday this year, built one of the first successful chatbots at MIT in 1964. Called Eliza, it gained a reputation around campus as a shockingly humanlike conversationalist, especially in one version of the program called Doctor.

“I was surprised how quickly and how very deeply people conversing with Doctor became emotionally involved with the computer and how unequivocally they anthropomorphized it,” he wrote in his 1976 book Computer Power and Human Reason: From Judgment to Calculation. Here’s a sample conversation between a woman and Doctor, acting as a therapist:

  • Human: Men are all alike.

  • System: In what way?

  • Human: They’re always bugging us about something or other.

  • System: Can you think of a specific example?

  • Human: Well, my boyfriend made me come here.

  • System: Your boyfriend made you come here.

  • Human: He says I’m depressed much of the time.

  • System: I am sorry to hear you are depressed.
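
Doctor’s apparent empathy came from pattern matching, not understanding: a script of keyword rules paired with response templates, plus simple pronoun “reflection” so the program could echo a user’s own words back at them. Here’s a minimal, illustrative sketch of that approach in Python. The rules and helper names below are our own inventions covering only the sample transcript; Weizenbaum’s actual scripts ranked keywords and carried many more decomposition and reassembly rules.

```python
import re

# Pronoun "reflections" let the program echo statements back,
# e.g. "my boyfriend" becomes "your boyfriend". (Illustrative subset.)
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my"}

# (regex, response template) pairs, tried in order. These hypothetical
# rules cover only the transcript above.
RULES = [
    (r"\bi am (.*)", "I am sorry to hear you are {0}."),
    (r"\b(\w+) made me (.*)", "Your {0} made you {1}."),
    (r"\ball alike\b", "In what way?"),
    (r"\balways\b", "Can you think of a specific example?"),
]

def reflect(fragment):
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.split())

def respond(statement):
    """Return the first matching rule's template, filled with reflected text."""
    s = statement.lower().replace("i'm", "i am").strip(".!? ")
    for pattern, template in RULES:
        match = re.search(pattern, s)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # default when no keyword matches

print(respond("Men are all alike."))                     # In what way?
print(respond("Well, my boyfriend made me come here."))  # Your boyfriend made you come here.
```

Feed it the transcript above and it reproduces most of Doctor’s replies; anything it doesn’t recognize falls through to a generic prompt, which is a big part of why the illusion eventually breaks.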

Weizenbaum warned against the “multiple projects in the United States devoted to enabling computers to understand human speech.” He believed this would result in widespread monitoring of conversations, with the ability to pick out potentially concerning discussions and surface them to government agents.

Moreover, given the cash required to create large-scale speech-processing systems, only government bodies or huge corporations could undertake them, he said. He urged teachers and computer science students to consider whether projects are “morally repugnant” and whether it’s in their best interest to work on them.

“Not only can modern man’s actions affect the whole planet that is his habitat, but they can determine the future of the entire human species,” he wrote. “Man, particularly the scientist and engineer, has responsibilities that transcend his immediate situation.”


2. Stephen Hawking (1942-2018)

Stephen Hawking in 2016.
(Credit: Bryan Bedder / Stringer / Getty Images)

British-born super genius Stephen Hawking had a uniquely intimate relationship with AI: it powered the computerized voice that replaced the one ALS took from him. Not unlike ChatGPT, Hawking’s speech system used machine learning to combine a database of the English language with Hawking’s own historical speech patterns. With this information, it predicted his next words, which he scanned and selected with his eyes and by squeezing his cheek muscles.
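
The word prediction at the heart of that interface can be approximated with a simple n-gram language model: count which words tend to follow which in a body of text, then rank candidates for the next word. Below is a toy sketch under that assumption. The corpus and function names are invented for illustration; the production system, built by Intel with a predictive engine from SwiftKey trained on Hawking’s own writing, was far more sophisticated.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """For each word, count which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def suggest(model, word, k=3):
    """Return up to k candidate next words, most frequent first."""
    return [w for w, _ in model[word.lower()].most_common(k)]

# Hypothetical stand-in for a user's historical writing.
corpus = ("the universe is expanding and the universe is vast "
          "black holes are not so black and the universe began small")
model = train_bigrams(corpus)
print(suggest(model, "universe"))  # ['is', 'began']
print(suggest(model, "the"))       # ['universe']
```

Real predictive keyboards extend this idea with longer contexts, smoothing, and personalization, but the ranking principle is the same.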

Yet while Hawking relied on AI, he warned against it, too. “The primitive forms of artificial intelligence we already have have proved very useful, but I think the development of full artificial intelligence could spell the end of the human race,” he said in a 2014 interview with the BBC. “It will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

He later expanded on this in a 2015 Ask Me Anything (AMA) Reddit thread. Someone asked Hawking whether fears of the AI we see in movies like Terminator are misplaced, given that the code has no motive beyond solving the problems humans designed it to solve.

In response, Hawking said the problem with AI will come in the future when a system can create and solve its own problems. “The real risk with AI isn’t malice but competence,” Hawking said. “A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. Please encourage your students to think not only about how to create AI, but also about how to ensure its beneficial use.”


3. Bill Gates

Bill Gates in 2013.
(Credit: Will Murray / Contributor / Getty Images)

Microsoft is going all-in on artificial intelligence, with a $10 billion investment in ChatGPT creator OpenAI, and AI-enhanced additions to its Bing search engine and Edge browser. Its long-departed co-founder Bill Gates, however, has reservations about the tech, though he’s come around a bit on some of its benefits.

In a 2015 Reddit AMA, Gates said: “I am in the camp that is concerned about super intelligence.”

“First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well,” he said. “A few decades after that, though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

With the rise of ChatGPT, Gates seems more focused on the potential benefits of language-based AI. “It’s pretty stunning that what I’m seeing in AI just in the last 12 months is every bit as important as the PC, the PC with GUI [graphical user interface], or the internet,” he told Forbes last month. “As the four most important milestones in digital technology, this ranks up there.”

When it comes to the risks of AI, Gates said the “near-term issue with AI is a productivity issue. It will make things more productive and that affects the job market. The long-term issue, which is not yet upon us, is what people worry about: the control issue. What if the humans who are controlling it take it in the wrong direction? If humans lose control, what does that mean? I believe those are valid debates.”


4. Meg Whitman

Meg Whitman in 2019.
(Credit: Alberto E. Rodriguez / Stringer / Getty Images)

Meg Whitman, the former CEO of both eBay and Hewlett-Packard, took to the stage at Davos 2017 to discuss concerns over AI replacing human jobs at an alarming rate.

“Jobs will be lost, jobs will evolve and this revolution is going to be ageless, it’s going to be classless and it’s going to affect everyone,” said Whitman, as reported by Reuters. “Think about the current technology we have and add robotics, more automation, artificial intelligence.”

At that event, multiple tech leaders discussed the potential for AI to replace work as we know it. “I think what we’re reaching now is a time when we may have to find alternative careers through our lifetime,” said Microsoft CEO Satya Nadella, who has since made one of the industry’s biggest pushes into AI by introducing it into Bing search results.

Whitman sees most jobs being lost during the transition to more AI-powered technologies, and says the US may not be equipped to handle the dramatic change. “What do we do with the places where [artificial intelligence] is not going to be an employer?” she said, according to The Wall Street Journal. In her opinion, it’s “up to business and academia to manage the transition,” and the current American educational system is “out of step of what’s required.”

Despite her concerns, one of the main focuses of Whitman’s tenure at HP was developing a stronger presence in artificial intelligence products to compete with the likes of Amazon, Google, and IBM. A month before she stepped down in 2017, HP announced its first artificial intelligence data storage product, and the company maintains its AI focus to this day.


5. Jaron Lanier

Jaron Lanier in 2013.
(Credit: Chelsea Lauren / Stringer / Getty Images)

Considered a founding father in the field of virtual reality, Jaron Lanier got his start at Atari in 1983, then founded VPL Research, the first company to sell VR goggles. He now speaks out against modern internet culture, particularly the gamification of human interactions.

“We cannot have a society in which if two people wish to communicate the only way it can happen is if it’s financed by a third person who wishes to manipulate them,” he said in a 2018 TED Talk. “I don’t believe our species can survive unless we fix this.”

Lanier pushed for a culture around technology “that is so beautiful, so meaningful, so deep, so endlessly creative, so filled with infinite potential that it draws us away from committing mass suicide,” he said in his TED Talk.

He acknowledged that this talk of mass suicide is “a rather horrifying line,” but added: “I still believe that creativity as an alternative to death is very real and true, maybe the most true thing there is.”


6. Elon Musk

The ‘Tesla Bot,’ an AI robot Musk is creating to staff the company’s factories.
(Credit: Tesla)

These days, Elon Musk is a big fan of AI. Last year, he said Tesla’s ability to solve AI-based self-driving is “really the difference between Tesla being worth a lot of money or worth basically zero.”

That’s a far cry from 2014, though, when he likened AI to “summoning the demon.”

“I don’t think anyone realizes how quickly artificial intelligence is advancing…it’s something that’s detrimental to humanity,” he said during an appearance at MIT. “I’m increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level just to make sure that we don’t do something very foolish.”

In 2017, Musk joined with 100 other AI leaders, including Stephen Hawking, to warn the United Nations about the dangers of AI, particularly as it relates to weaponry and warfare. “As companies building the technologies in Artificial Intelligence and Robotics that may be repurposed to develop autonomous weapons, we feel especially responsible in raising this alarm,” they wrote in a letter.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter added. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

Musk hasn’t backtracked on that just yet, but he did unveil a “Tesla Bot” at the company’s second annual AI Day presentation last year. It’s early days, but when it’s ready for action, the bot is intended to assist on factory floors (not the battlefield, hopefully).


7. Timnit Gebru

Timnit Gebru in San Francisco in 2018.
(Credit: Kimberly White / Stringer / Getty Images)

While working at Google in 2020, Timnit Gebru co-authored a paper titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” The paper highlights the drawbacks of AIs trained on staggering amounts of text data—like ChatGPT—though it was written two years before ChatGPT and focused on the language models used in Google search at the time.

According to a summary from MIT, the paper argues that these types of models ingest a seemingly infinite and ever-expanding set of data from the web, making the presence of racist, sexist, and otherwise abusive language inevitable and nearly impossible to root out. The vastness of the dataset may also make it difficult to surface nuances in language that play a role in social justice movements, which may have a small digital footprint.

Similarly, such models may fail to understand cultural norms in certain communities or nuances across languages. The researchers cite a 2017 example in which Facebook mistranslated a Palestinian man’s post, which said “good morning” in Arabic, as “attack them” in Hebrew, leading to his arrest.

Gebru also predicted a problem that’s become apparent with ChatGPT in recent months: large language models excel at mimicking realistic human speech, making it easy to fool people and spread falsehoods, from faked college papers to misinformation about COVID-19 or political candidates.

Although Gebru co-authored the paper with six others, including multiple researchers at Google, she emerged at the center of a controversy that led to her leaving the company (she claims she was forced out). Head of Google AI Research Jeff Dean reportedly said the paper did not “meet the bar” to publish formally under Google’s name, though he wrote in an internal memo that it “surveyed valid concerns with large language models, and in fact many teams at Google are actively working on these issues.”


What Do You Think?

The individuals on this list are both intimately familiar with AI and deeply skeptical of it. The pros and cons will keep revealing themselves in time. As we all become more educated on what AI can do, the question is: where do we want to go with it, and why? Let us know in the comments.
