Be Careful What You Post: How to Tell the Difference Between Fact and Fake News

Fake news, including online misinformation and disinformation, is dangerous in myriad ways, both in the US and abroad.

People have made themselves sick and avoided vaccines based on false information, while groups like QAnon have spread viral misinformation in the last few years, sometimes leading to violence.

More recently, the Justice Department and local officials have expressed concern about potential voter intimidation at ballot drop boxes ahead of the 2022 US mid-term elections.

“We are deeply concerned about the safety of individuals who are exercising their constitutional right to vote and who are lawfully taking their early ballot to a drop box,” Arizona officials said in a statement after two armed individuals dressed in tactical gear set up shop at a ballot drop box in Mesa. “Don’t dress in body armor to intimidate voters as they are legally returning their ballots.”

The drop box effort is not limited to Arizona, the New York Times reports, and conspiracy theories about drop boxes have been circulating online for several years. People are being falsely told that “ballot mules” are stuffing drop boxes with fake ballots or tampering with the boxes themselves. Thus far, there’s no evidence that this is the case.

We’ve known for years to be wary of misleading information on the internet, yet it continues to be a problem—especially around election time. Why does it continue to spread? How can we get better about not sharing it ourselves? There are best practices but no easy solutions.


How to Check Online Information

Online misinformation abounds, especially in an election year. Bei Yu, a professor at Syracuse University’s School of Information Studies, urges people to slow down before sharing.

“When I share something to my social network the first thing I train myself to think is ‘Is this information useful to my friend or relative?’ I find that is a very good experience for myself and I also find that I share less after that,” she says.

In the moments before you share an article or social media post, here’s what to consider:

1. Look Out for Strong Emotional Triggers

Does the content leave you feeling outraged? Terrified? Upset? Chances are it was engineered to do just that so you’ll share it and spread the message without even thinking. In a video called “This Video Will Make You Angry,” YouTuber CGP Grey likens emotionally charged messages to “thought germs” looking for new brains to infect. Sharing a viral post spreads the message of that post like a sneeze spreads the flu.

Research shows emotional messages spread more widely within our networks because they get more engagement. When those messages are on polarizing issues—like gun control, abortion, or COVID—they tend to stay within networks of people who believe the same thing, creating an echo chamber of increasingly extreme rhetoric. This also means the people who need to see fact checks the most can be almost totally cut off from them.

Angry messages get shared widely, but so do “feel-good” posts designed to play on impulses other than outrage. That’s because people’s reasons for sharing bad information vary, from outrage to boosting their own self-image to informing others. So we might share a story that seems warm and fuzzy because we want others to know about it, or because we think it will make us look better to those in our network. But these posts can be far more complex than they appear.

2. Pay Attention to Who Is Sharing the Information

The Interactive Media Bias Chart shows where your favorite outlet sits on the political spectrum.
(Credit: Ad Fontes Media)

Look beyond what is being shared to see who is doing the sharing. Just because you trust the person sharing a piece of news doesn’t mean they’ve done their due diligence. Before you hit share, double-check the information, especially if it’s particularly controversial or outrageous.

This applies to individuals and media outlets. Everyone makes mistakes, but when inaccurate information is published, do the outlets you’re sharing issue corrections or double down on bad info? If you’re unsure, the Media Bias Chart can be a useful tool to see where an outlet lies on the political spectrum.

3. Try to Find Corroborating Stories

If a story seems especially outlandish, look for additional coverage and read multiple sources. If the article you see has only one source, dig deeper into the publication; the story may not be true. Do a web search on the article’s author. Read the publication’s About page. Look up the website’s publisher to see what their views are. You might find strong evidence of bias in the article itself, the site’s info, or both.

4. Rule Out Satire or Parody

Check whether the article is satire. You don’t want to be the person who shares something from The Onion as if it were fact. Check the site’s About page and the comments for clues as to whether a particularly ridiculous or outrageous-sounding article is from a comedy writer, not a journalist. Watch out for parody Twitter accounts, too.

TrustServista
(Credit: Lance Whitney)

There are a number of free tools that can help you investigate a story’s veracity. Install a browser extension like TrustServista, for example, which uses artificial intelligence and other analytics to gauge the trustworthiness of a news article.

Alex Mahadevan, director of Poynter’s MediaWise project, also recommends MediaWise en Español and Factchequeado for Spanish-language fact-checking because “disinformation targeting Latino communities is a big issue that still doesn’t get enough attention,” he says.

You can also go a step further and brush up on your critical thinking skills. Take a course, like Calling Bullshit or How to Spot Misinformation Online from Poynter. There are even games like Fakey designed to help you learn to recognize a fake news piece.

Is Content Moderation a Losing Battle?

Social media platforms have built-in systems for flagging content, but it can be like playing whack-a-mole. Ban one word or phrase and people will come up with another term. Kick one group off a platform and another will soon take its place. Tag a tweet or post as false or misleading, and the account holder will cry censorship.

That said, these are billion- and trillion-dollar companies we’re talking about. They have the resources to tackle the problem. Critics argue that behemoths like Facebook and Instagram value profit over security, something the companies deny, but much of their action has been reactive, taken with an eye toward staying out of lawmakers’ crosshairs.

Experts, however, are skeptical that allowing social media companies to police themselves will ever work. “I believe in separating content moderation from the platform because I believe platforms conducting their own moderation is a conflict of interest,” says Syracuse Professor Yu. “I think it should be done by a third party.”

Cailin O’Connor, co-author of The Misinformation Age, agrees. She says we need an outside entity to regulate the platforms we use every day—like an “EPA for the internet.”

“Social media platforms are removing endless amounts of bots and sock puppets…but I think we need regulation to take them that extra step of the way,” says O’Connor. Especially when it comes to the accounts “that get a huge amount of engagement,” meaning social platforms are “incentivized to leave [them] on even though they’re misleading.”

There is no magic bullet that will remove bad information from the internet entirely, but we aren’t helpless. These same experts think there should be more friction added to the process of sharing information. Social media companies may agree with that. When Twitter served up a prompt that asked people to read stories before re-tweeting them, it resulted in 40% more article opens, the company said in 2020. Facebook tested something similar last year.

But as social media experts told PCMag’s Max Eddy recently, Twitter in particular may know a lot about the problem of misinformation but it’s not necessarily equipped to deal with it.

Coordinated disinformation efforts will always change tactics to evade detection, and we need to adjust with them. “A lot of solutions don’t last forever,” says O’Connor. “Maybe the big picture is just us constantly trying to solve this problem, and that’s okay.”
