Yes, We Can Regulate Online Toxicity: Here’s a Roadmap

At age 16, Chris Nasca took his own life. Trying to understand why, his mother looked through his still-active TikTok account. He had searched for terms such as basketball and weightlifting. But the videos that surfaced for him on the platform featured much darker topics: despair and suicide.

Chris’ parents are now suing TikTok parent company ByteDance for serving up harmful content “to maximize user engagement and increase profits.”

Social media corporations—from TikTok to Meta and Twitter—are controlled by a select few, and they have largely resisted regulation. We are asked to trust them anyway, to believe they care about their users and have our best interests at heart, despite repeated breaches of public trust.

In 2021, The Wall Street Journal reported that Meta’s own research found that its Instagram platform was harming the mental health of some users, particularly teenage girls. When lawmakers and academics tried to obtain the underlying research, the company reportedly provided incomplete data. (Meta accused the Journal of “cherry-picking selective quotes from individual pieces of leaked material.”)

On Twitter, hate speech spiked after Elon Musk acquired the company, according to researchers. (Musk has sued over those reports.) But the fabled “Twitter pile-on” predates Musk.

On TikTok, meanwhile, users describe a doom loop: they’re fed objectionable content, report it, and then keep seeing it anyway, because TikTok interprets the time spent on a video while reporting it as active engagement. Instead, people end up playing whack-a-mole, trying to block associated hashtags.
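
To picture how that loop can form, here’s a deliberately simplified, hypothetical sketch of a dwell-time-based engagement signal. The names and numbers are purely illustrative, not TikTok’s actual code; the point is only that a score built on raw time-on-screen can’t tell an enthusiastic viewer from someone who lingered on a video to report it.

```python
# Hypothetical, simplified illustration; not any platform's real ranking code.
from dataclasses import dataclass

@dataclass
class ViewEvent:
    video_id: str
    seconds_on_screen: float  # includes time spent inside the report flow
    reported: bool

def engagement_score(event: ViewEvent) -> float:
    # Naive scorer: more time on screen is read as more interest.
    # Nothing here discounts the seconds a user spent reporting the video.
    return event.seconds_on_screen

def should_recommend_similar(event: ViewEvent, threshold: float = 10.0) -> bool:
    return engagement_score(event) >= threshold

# A user who watched for 3 seconds and then spent 20 seconds reporting the video
# looks exactly as "engaged" as a fan who happily watched for 23 seconds.
report_view = ViewEvent("v123", seconds_on_screen=23.0, reported=True)
print(should_recommend_similar(report_view))  # True: more of the same gets queued up
```

Breaking the loop would mean explicitly down-weighting or discarding views that end in a report, rather than folding that time into the same signal that decides what to show next.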

Thus far, social media companies have walked away with slaps on the wrist. Yes, Meta had to pay out several billion dollars, but that’s a drop in the bucket for a company that made $116 billion in 2022 alone.

The US Federal Trade Commission goes after anticompetitive, deceptive, and unfair business practices. But that’s across the board, not just for social media or even technology companies. We need a new institution dedicated to mitigating the toxicity of the online world—a Digital Environmental Protection Agency (DEPA).


How to Build a Digital EPA

Given Congress’ inaction on privacy legislation, creating a DEPA wouldn’t be easy. Lobbyists would flock to Capitol Hill, PR pros would revive the tried-and-true “stifling innovation” argument, and tech CEOs would smile and nod before Congress while pushing to kill it behind the scenes.

DEPA architects would have their work cut out for them, too. Their solution couldn’t infringe on free speech and would ideally be free from political and corporate bias.

Ilia Tretiakov spent several years working in communications for the Canadian government before starting his own content-creation agency, So Good Digital. When I asked him who should run the DEPA, he proposed “a mix of academics with backgrounds in technology, legal specialists, ethicists, and lawmakers”—including those who have worked at social media firms.

“Ideally, I think this regulatory agency would be an independent commission of experts,” Tretiakov said. “These individuals … should [be chosen] based on merit and expertise in the field, because social media has grown so much. We’re talking about actual technology experts. We’re talking about privacy experts, in the case of Google and AI.”

But what good is a commission of experts with no enforcement power? A DEPA would probably require the weight of the federal government behind it to impose meaningful punishment on companies that break the rules—a balancing act that would be difficult to pull off, to put it mildly.


“A completely independent agency with the authority to impose penalties could be powerful, but it may also be considered an overreach on private enterprise,” according to Tretiakov, “especially if the regulatory standards are set too high.”

Lawmakers could play a role here, but they should be in the minority and have real knowledge of how the internet, social media platforms, and online misinformation actually work.


We Already Have a Roadmap

Andrew Tutt discussed similar ideas in his 2016 paper An FDA for Algorithms, and many others have written on why we need a new institution to regulate the internet.

In a 2020 paper for the Harvard Kennedy School’s Shorenstein Center, entitled New Digital Realities; New Oversight Solutions, former FCC Chairman Tom Wheeler, Phil Verveer, and Gene Kimmelman outline what they call the Digital Platform Agency (DPA).

The DPA would adapt old-school regulatory principles to the digital marketplace, separate from existing regulators such as the FCC and FTC.

“Continuing to rely on a handful of dominant digital companies to not only make the rules but also drive the economy can no longer work,” they argued. “The government, as the representative of the public interest, cannot be a spectator to this new economy.” 

According to the paper, the DPA would combine elements such as antitrust law with two important common-law principles: the duty of care and the duty to deal.

The duty of care is a provider’s obligation to mitigate the negative consequences of using its good or service. Automakers are the most obvious example here, as they have a responsibility to build safe cars.

Other countries are already taking steps in this direction when it comes to tech giants. The UK’s Online Safety Bill would hold social media companies to the duty-of-care standard if passed, though critics say it’s in danger of encroaching too much on free speech in its current form.


The second principle, the duty to deal, says providers of an essential service must ensure equal and impartial access to that service. In 1860, that principle was applied to telegraph lines, stipulating that messages be transmitted impartially, in the order in which they were received. The Harvard paper contends that holding tech companies to the same standard would prevent them from using their monopoly on data assets to control innovation in the market by buying up or silencing competitors.

Combined with those principles would be the concept of the information fiduciary, developed by Yale Law School professor Jack Balkin. He contends that the massive amounts of personal data we provide to online companies are a public resource and thus come with a duty of care, meaning they must be handled in a way that’s best for the public good.

Philip Napoli, professor at Duke University’s Sanford School of Public Policy and author of Social Media and the Public Interest: Media Regulation in the Disinformation Age, also says we should treat social media platforms as information fiduciaries.

“If the answer to ‘who owns our user data’ is in fact us, then we can make the argument that these aggregations of user data are a public resource, and that provides a pathway…to saying that actually, these platforms have an obligation to serve the public interest,” said Napoli.

Of course, the DEPA would have to accomplish all this regulation while also preserving our right to free speech. Napoli proposed an approach that holds companies accountable to certain standards rather than one that micromanages what users can and can’t say on the platforms. The DEPA could, for example, require that a certain number of staff, or a percentage of total employees, be devoted to practices such as mitigating misinformation and harmful content.

Social media companies have said this kind of regulation would kill innovation, hinder their ability to make a profit, and set the US behind other countries such as China. But that doesn’t have to be the case, as long as the DEPA’s mandate applies only to companies above a certain threshold of market dominance. Companies including Meta, Twitter, and Alphabet would have an obligation to use their control over public data for the public good.

This policy could, in fact, promote competition. When a few companies control the market and stifle or buy out any competitor that could challenge them, that doesn’t foster innovation — it supports the status quo. As the Harvard paper puts it, “the dominant companies that grew out of dorm rooms and garages today choke off the ability of new innovators to do the same thing.”

All that aside, if harm is being done—and there are plenty of instances showing it has been—the obligation to mitigate that harm outweighs whatever slight impact those measures might have on a tech giant’s profits. What is forgotten in discussions over regulation is that even if Meta or Alphabet took stringent measures, hired scores of researchers, and invested in AI to help weed out misinformation and harm, those companies would still be raking in billions in net profit per year. The argument that requiring these companies to follow the same rules as other, similar industries would somehow kill innovation and bankrupt them is objectively ridiculous.


Government Regulators Can Only Do So Much

Most people think government regulators already handle this type of thing. And that’s true—to a point. The FCC and FTC have the power to issue fines, but the rules those agencies apply to internet companies were designed for a different era and different technology.

The FCC, for example, was created in 1934 to regulate communications by wire and radio. It later added the internet to its purview. According to Napoli, the FCC had more authority during the Obama years, but it lost some of that under Trump, when broadband was reclassified as an information service rather than a telecommunications service.

Section 230 of the Communications Decency Act is often cited as a tool the FCC has to wield its authority against social media platforms that get out of line. It states that social media companies aren’t liable for harmful information shared on their platforms as long as they make “a good faith effort to remove it in accordance with the law.”

But Section 230 doesn’t define what “good faith” actually means, nor does it incentivize platforms to remove bad information. It also doesn’t grant the FCC any authority over the platforms in question, since that would conflict with free speech.


Since the FTC regulates business practices writ large, it can dictate some requirements for what people on the internet can do—when it involves making money. Those YouTube videos with a “contains sponsored content” pop-up? That’s an FTC requirement. 

As Devin Coldewey wrote for TechCrunch, “The agency’s most relevant responsibility to the social media world is that of enforcing the truthfulness of material claims.” The FTC fined Facebook $5 billion because the company misrepresented its handling of people’s data, not because there are any specific regulations that apply to social media platforms. As Coldewey also points out, the FTC is largely a reactive agency that can’t do anything unless there’s a problem. When it does act, it moves at a bureaucratic pace that’s far outstripped by the speed of the internet.

So while both commissions do set up guardrails for social media companies, they can’t and don’t directly regulate them. We need a DEPA: a regulator solely focused on the digital world that can act both proactively and reactively to fill in the current gaps.


It’s Past Time for Something New

We often hear social media CEOs talk about how hard it is to do what people are asking; how difficult it would be to mitigate hate speech and misinformation when literally billions of posts go up each day. And that’s a fair point. 

But these leaders have some of the most powerful technology in the world at their disposal. They have the money to hire an army of fact checkers. They can put serious effort into redesigning their algorithms so they don’t maximize engagement regardless of what the content is. Most social media companies haven’t done these things.

The choices these companies make tell us where their priorities actually lie. According to its 2022 fourth-quarter report, Meta spent about $1 billion a month on the metaverse. That money could’ve gone toward a safer platform, more staff, or really, any number of things more productive than a crappy VR world nobody asked for. These companies aren’t going to invest the time or money in making their platforms safer because it isn’t in their best interest.

Creating an agency like the DEPA would be difficult, but it’s possible, and it’s sorely needed. Implemented correctly, it could set standards for social media companies to adhere to, monitor compliance with those standards, and enforce punishments when they fall short. A digital EPA could also work with social media giants to build an experience that is truly safer for everyone. If tech companies care as much about their users as they say they do, they should welcome the chance to make that happen.
