What Is Artificial Intelligence (AI)?

In September 1955, John McCarthy, a young assistant professor of mathematics at Dartmouth College, boldly proposed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

McCarthy called this new field of study “artificial intelligence” and suggested that a two-month effort by a group of 10 scientists could make significant advances in developing machines that could “use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.”

At the time, scientists optimistically believed we would soon have thinking machines doing any work a human could do. Now, nearly seven decades later, advances in computer science and robotics have helped us automate many of the tasks that previously required the physical and cognitive labor of humans. But true artificial intelligence, as McCarthy conceived it, continues to elude us.

What Is AI?

A great challenge with artificial intelligence is that it’s a broad term, and there’s no clear agreement on its definition.

McCarthy proposed that AI would solve problems the way humans do: “The ultimate effort is to make computer programs that can solve problems and achieve goals in the world as well as humans,” he said.

Andrew Moore, Dean of Computer Science at Carnegie Mellon University, provided a more modern definition of the term in a 2017 interview with Forbes: “Artificial intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.”

But our understanding of “human intelligence” and our expectations of technology are constantly evolving. Zachary Lipton, the editor of Approximately Correct, describes the term AI as “aspirational, a moving target based on those capabilities that humans possess but which machines do not.” In other words, the things we ask of AI change over time.

For instance, in the 1950s, scientists viewed chess and checkers as great challenges for artificial intelligence. But today, very few would consider chess-playing machines to be AI. Computers are already tackling much more complicated problems, including detecting cancer, driving cars, processing voice commands, generating text, and writing computer code.

Narrow AI vs. General AI

Although the first generation of AI scientists and visionaries believed we would eventually be able to create human-level intelligence, several decades of AI research have shown that replicating the complex problem-solving and abstract thinking of the human brain is supremely difficult. 

For one thing, we humans are very good at generalizing knowledge and applying concepts we learn in one field to another. We can also make relatively reliable decisions based on intuition and with little information. Over the years, human-level AI has become known as artificial general intelligence (AGI), or strong AI.

The initial hype and excitement surrounding AI drew interest and funding from government agencies and large companies. It became evident that human-level intelligence was not right around the corner, though, and scientists were hard-pressed to reproduce even the most basic functionalities of the human mind. In the 1970s, unfulfilled promises and expectations eventually led to the “AI winter,” a long period during which public interest and funding in AI faded.

It took many years of innovation and a revolution in deep-learning technology to revive interest in AI. But even now, despite enormous advances in artificial intelligence, none of the current approaches to AI can solve problems in the same way the human mind does, and most experts believe AGI is at least decades away.

On the flip side, “narrow” or “weak” AI doesn’t aim to reproduce the functionality of the human brain and instead focuses on optimizing a single task. Narrow AI has already found many real-world applications, such as recognizing faces, transforming audio to text, recommending videos on YouTube, and displaying personalized content in the Facebook News Feed.

Many scientists believe that we will eventually create AGI, but some have a dystopian vision of the age of thinking machines. In 2014, renowned English physicist Stephen Hawking described AI as an existential threat to mankind, warning that “full artificial intelligence could spell the end of the human race.”

In 2015, Y Combinator President Sam Altman and Tesla CEO Elon Musk, two other believers in AGI, co-founded OpenAI, a nonprofit research lab that aims to create artificial general intelligence in a manner that benefits all of humankind. OpenAI has created several popular AI models, including ChatGPT and GPT-4. (Musk has since departed, and OpenAI has drawn criticism for becoming a closed, for-profit organization.)

Others believe that artificial general intelligence is a pointless goal. “We don’t need to duplicate humans. That’s why I focus on having tools to help us rather than duplicate what we already know how to do. We want humans and machines to partner and do something that they cannot do on their own,” says Peter Norvig, Director of Research at Google.

Scientists such as Norvig believe that narrow AI can help automate repetitive and laborious tasks and help humans become more productive. For instance, doctors can use AI algorithms to examine X-ray scans at high speeds, freeing the doctors up to see more patients. Another example of narrow AI is fighting cyberthreats: Security analysts can use AI to find signals of data breaches in the gigabytes of data being transferred through their companies’ networks. A very successful application of AI has arisen—surprisingly—in programming: Developers are using AI assistants such as GitHub Copilot to generate much of their code and to boost their productivity considerably.

Rule-Based AI vs. Machine Learning

Early AI-creation efforts focused on transforming human knowledge and intelligence into static rules. Programmers meticulously wrote code (if-then statements) for every rule that defined the behavior of the AI. The advantage of rule-based AI, which later became known as “good old-fashioned artificial intelligence” (GOFAI), is that humans have full control over the design and behavior of the systems they develop.

Rule-based AI is still very popular in fields where the rules are clear-cut. One example is video games, in which developers want AI to deliver a predictable user experience.
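
To make that concrete, here is a minimal sketch of what rule-based behavior can look like: a hypothetical video-game guard whose every decision comes from hand-written if-then rules. The function, thresholds, and actions are invented purely for illustration.

```python
# A minimal sketch of rule-based (GOFAI-style) behavior: a hypothetical
# video-game guard whose every decision is a hand-authored if-then rule.
def guard_action(distance_to_player: float, player_visible: bool, health: int) -> str:
    """Return the guard's next action based on explicit, human-written rules."""
    if health < 20:
        return "retreat"   # rule 1: survival takes priority over everything else
    if player_visible and distance_to_player < 5:
        return "attack"    # rule 2: engage when the player is close
    if player_visible:
        return "chase"     # rule 3: otherwise close the distance
    return "patrol"        # default rule: nothing seen, keep patrolling

print(guard_action(distance_to_player=3.0, player_visible=True, health=80))  # -> attack
```

Because a programmer spelled out every rule, the guard’s behavior is completely predictable, which is exactly why game developers often favor this approach.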

The problem with GOFAI is that contrary to McCarthy’s initial premise, we can’t precisely describe every aspect of learning and behavior in ways that can be transformed into computer rules. For instance, defining logical rules for recognizing voices and images—a complex feat that humans accomplish instinctively—is one area where classic AI has historically struggled.

An alternative approach to creating artificial intelligence is machine learning. Instead of developing rules for AI manually, machine-learning engineers “train” their models by providing them with a massive number of samples. The machine-learning algorithm analyzes and finds patterns in the training data, then develops its own behavior. For instance, a machine-learning model could train on large volumes of historical sales data for a company and then make sales forecasts.
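
As a rough sketch of that idea, the snippet below fits a simple model to two years of made-up monthly sales figures using the scikit-learn library and asks it for next month’s forecast. The data and the library choice are illustrative assumptions, not drawn from any real company.

```python
# Sketch: fit a simple model to (made-up) monthly sales and forecast the next month.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(1, 25).reshape(-1, 1)  # 24 months of sales history
sales = 100 + 5 * months.ravel() + np.random.default_rng(0).normal(0, 10, size=24)

model = LinearRegression()
model.fit(months, sales)  # "training": the model finds the underlying trend itself

next_month = np.array([[25]])
print(f"Forecast for month 25: {model.predict(next_month)[0]:.1f} units")
```

No one writes a rule saying “sales grow by roughly five units a month”; the model infers that pattern from the examples it is given.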

Deep learning, a subset of machine learning, has become popular in the past few years. It’s especially good at processing unstructured data such as images, video, audio, and text. For instance, you could create a deep-learning image classifier and train it on millions of available labeled photos, such as the ImageNet dataset. The trained AI model will be able to recognize objects in images with an accuracy that often surpasses humans. Advances in deep learning have pushed AI into many complicated and critical domains, including medicine, self-driving cars, and education.
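
As an illustration, the sketch below loads a ResNet-50 network that the torchvision library ships pre-trained on ImageNet and uses it to label a photo. It assumes torchvision 0.13 or later, and the image file name is a placeholder.

```python
# Sketch: classify a photo with a ResNet-50 pre-trained on ImageNet (torchvision >= 0.13).
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()

preprocess = weights.transforms()        # resize, crop, and normalize as the model expects
image = Image.open("photo.jpg")          # placeholder path to any photo
batch = preprocess(image).unsqueeze(0)   # add a batch dimension

with torch.no_grad():
    probabilities = model(batch).softmax(dim=1)

best = probabilities.argmax(dim=1).item()
print(weights.meta["categories"][best])  # human-readable ImageNet label
```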

One of the challenges with deep-learning models is that they develop their own behavior based on training data, which makes them not only complex but also opaque: Often, even deep-learning experts have a hard time explaining the decisions and inner workings of the AI models they create.

Examples of Artificial Intelligence

Here are some of the ways AI is bringing tremendous changes to different domains.

Self-driving cars: Advances in artificial intelligence have brought us very close to making the decades-long dream of autonomous driving a reality. AI algorithms are one of the main components that enable self-driving cars to make sense of their surroundings, taking in feeds from cameras installed around the vehicle and detecting objects such as roads, traffic signs, other cars, and people.

Digital assistants and smart speakers: Siri, Alexa, Cortana, and Google Assistant use artificial intelligence to transform spoken words to text and map the text to specific commands. AI helps digital assistants make sense of different nuances in spoken language and synthesize human-like voices.

Translation: For many decades, translating text between different languages was a pain point for computers. But deep learning has helped create a revolution in services such as Google Translate.

Generating text: Recently, large language models (LLMs), a type of deep-learning architecture that specializes in text processing, have sparked much interest in AI. LLMs such as ChatGPT and GPT-4 have been trained on hundreds of billions of words. They can produce human-like text and accomplish difficult tasks such as writing articles, composing poems, and summarizing text. To be clear, AI still has a long way to go before it masters human language. But so far, the advances are spectacular.
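
ChatGPT and GPT-4 themselves are available only as hosted services, but the basic text-generation interface can be sketched locally with a much smaller open model. The example below uses GPT-2 through the Hugging Face transformers library purely to show the mechanics; GPT-2 is far less capable than the models named above.

```python
# Sketch: text generation with a small open language model (GPT-2) via Hugging Face transformers.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Artificial intelligence is",
    max_new_tokens=40,   # length of the continuation to generate
    do_sample=True,      # sample words rather than always picking the most likely one
)
print(result[0]["generated_text"])
```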

Writing software code: Some LLMs specialize in programming. Codex, developed by OpenAI, can produce software code based on a prompt provided by the developer. GitHub has used Codex to create Copilot, an AI programming assistant in integrated development environments (IDEs).

Facial recognition: Facial recognition is one of the most popular applications of artificial intelligence. It has many uses, including unlocking your phone, paying with your face, and detecting intruders in your home. But the increasing availability of facial-recognition technology has also given rise to concerns regarding privacy, security, and civil liberties.

Medicine: From detecting skin cancer and analyzing X-rays and MRI scans to providing personalized health tips and managing entire healthcare systems, artificial intelligence is becoming a key enabler in healthcare and medicine. AI won’t replace doctors, but it could help bring about better health services, especially in underprivileged areas where AI-powered assistants can take some of the load off the shoulders of the few general practitioners serving large populations.

Generating images: Generative deep-learning models have seen tremendous advances in recent years. Models such as DALL-E, Stable Diffusion, and Midjourney take a textual description as input and create an image that fits the description.
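
As a rough sketch, the snippet below turns a text prompt into a picture with an open-source Stable Diffusion model through the Hugging Face diffusers library. It assumes a CUDA-capable GPU, and the model identifier, prompt, and output file name are only examples.

```python
# Sketch: text-to-image generation with Stable Diffusion via the diffusers library (GPU assumed).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example model identifier; others work the same way
    torch_dtype=torch.float16,         # half precision to fit on a consumer GPU
).to("cuda")

image = pipe("an astronaut riding a horse, watercolor").images[0]
image.save("astronaut.png")
```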

The Future of AI

In our quest to crack the code of AI and create thinking machines, we’ve learned a lot about the meaning of intelligence and reasoning. And thanks to advances in AI, we are accomplishing tasks alongside our computers that were once considered the exclusive domain of the human brain.

Some of the emerging fields where AI is making inroads include music and arts, where AI algorithms are manifesting their own unique kind of creativity. There’s also hope AI will help fight climate change, care for the elderly, and eventually create a utopian future where humans don’t have to work at all.

There’s also fear that AI will cause mass unemployment, disrupt the economic balance, trigger another world war, and drive humans into slavery.

We still don’t know the directions AI will take. But as the science and technology of artificial intelligence continues to improve at a steady pace, our expectations and definition of AI will shift, and what we consider AI today might become the mundane functions of tomorrow’s computers.
