Like many internet users, I love schadenfreude and few things have given me more pleasure to read than yesterday’s Twitter whistleblower story.
It’s a veritable feast of terrible things. The company allegedly has half its 500,000 server fleet running an insecure operating system that’s no longer supported by vendors. And the site has allegedly experienced one security incident per week. Delicious!
As thrilling as that is, there are other parts of the story that are a lot less entertaining. An internal report says in no uncertain terms that Twitter is completely unequipped to cope with misinformation and disinformation on its platform. That’s more than a little worrying, especially with the US midterm elections coming up. (In the company’s defense, it says it’s working hard in advance of the midterms, which is good to hear, but this week’s report doesn’t inspire confidence.)
Twitter has been around for 16 years, and its disinformation problem has been extremely obvious for at least six years. I spent nine years on the platform before it ceased to be fun. It’s clear to me that the time has come to bid farewell to the whole thing. Take the site, wrap it in a heavy wet blanket, and drop it into the sea—ideally before it can damage any more elections.
Misinformation Misfires
The Washington Post broke the story about the Twitter whistleblower—former head of Twitter security Peiter Zatko, aka Mudge. Along with the whistleblower complaint and a report authored by Zatko outlining the many security problems at Twitter, the Post also included an internal report on Twitter’s attempts to squash mis- and disinformation on the platform. The Post reports that the document was commissioned by Zatko from an outside group, allegedly the Alethea Group.
The report, which is based on employee interviews and examinations of internal documents and processes, reveals a bare-minimum effort to curb Twitter mis- and disinformation. It shows a dedicated staff undermined by understaffing, underfunding, and upper management’s near-pathological need to do nothing other than squash potentially embarrassing “fires.” From the report:
“Interviewees described a largely reactive approach to misinformation, disinformation, and spam in which action is taken on content and threats only if it is flagged by reporters or news headlines, partners, or political officials due to the lack of people and sufficient tools to do proactive analysis.”
This largely reactive approach created numerous policies in response to a crisis but, the report notes, “with no clear strategy for implementation.”
The internal disconnection at Twitter seems to create situations where Twitter either fails to recognize problems on the platform, or teams simply do not take responsibility because the problems don’t fit narrow definitions. Again, from the report:
“Interviewees described several instances in which Twitter was slow to act on misinformation because teams did not see the topic or narrative as falling under their purview or fitting neatly into a particular threat actor they monitored, such as QAnon or Pizzagate.”
The process of identifying tweets that violate the company’s policies is apparently quite onerous. The report claims an employee needs to use five separate tools to label a single tweet. But even when tweets are correctly labeled as violating policies, the labels don’t actually stop the behavior. The report references another document, not included, that says Twitter’s systems for labeling policy-violating tweets are impractical and “do not dissuade repeat or malicious behavior.”
Stunningly, the report says that despite being ground zero for numerous national-scale disinformation campaigns since at least 2016, Twitter has no way to track malicious campaigns and no central repository for the information the company accrues about bad actors.
Election Protection
Despite all its failings, the report notes that Twitter went to great lengths during the 2020 US presidential election to avoid a repeat of 2016, when the platform was a morass of election disinformation. The report says that the company dedicated 100 staff members from across different internal teams to serve as an “Election Squad.”
While this may have helped avoid the worst possible outcomes, the report says, “teams had to deprioritize all other work, including work on other critical global events, simply to keep up with the rapid pace of the US election-related content.”
Underlining how unsustainable this approach is, the report notes that “Twitter is unable to provide even a scaled back version of the election support that was deployed for the US 2020 election for the upcoming Japanese election, which has been identified as a priority for the company.” The report is undated, so it’s possible that’s a reference to Japan’s July elections.
The fact that disinformation wasn’t as prevalent on Twitter in 2020 as in 2016 seems more like a fluke than a win. If anything, Twitter’s success in 2020 underlines its other failings. For example, COVID-19 disinformation was rampant on the platform, and many officials continue to use Twitter to spread lies about the outcome of the 2020 US presidential election—perhaps because Twitter did not, or could not, muster the forces necessary to address them.
Shut It Down
Disinformation is not an easy problem to handle, and addressing it on a massive, global platform like Twitter is even harder. But if Twitter had its way, disinformation would probably never be solved at all. Many of the efforts chronicled in the report could be interpreted as performative. Putting out only publicly damaging fires, creating numerous ineffective policies that cannot be enforced, creating opportunities for the enforcers to pass off responsibility, and continuing to use tactics known to be ineffective—these are the actions of a company desperate to avoid making difficult changes.
Twitter calls Zatko’s version of events “a false narrative…riddled with inconsistencies and inaccuracies and lacks important context.” But an internal audit that cites other internal documents and actual employees is hard to dispute. On an instinctual level, the report explains many of Twitter’s more puzzling choices. Why does Twitter prioritize the release of new features no one asked for over banning Nazis? Why does it ignore behavior that violates its own policies? Because Twitter isn’t interested in solving its systemic problems.
The people on Twitter (the real people, not the bots, politicians, and disinformation accounts) cannot fix it. Elon Musk won’t fix Twitter. The company itself has proven it’s not up to the task. With a myriad of better, more empowering alternatives out there, and new platforms emerging as excellent gathering spaces for building communities, Twitter has had a good run.
Log off. Shut it down. Pull the plug. Bury it in a peat bog. Hurl it into space. Burn it on a pyre like Beowulf and let the sky swallow the smoke.
And toss Facebook on the pile while you’re at it.