Twitter is cracking down on the spread of "misleading" content ahead of the US election: users attempting to retweet anything similar to the New York Post's Hunter Biden leaks will now see a prompt warning that the material is "disputed," the company revealed on Friday, posting an image of the new warning screen.
The user will be able to click a button to "find out more" about why Twitter doesn't want the material shared, and then presumably post it anyway.
The new feature arrives in the aftermath of a major controversy over Twitter's censorship of a series of stories critical of Democratic presidential candidate Joe Biden, published by the New York Post based on emails supposedly extracted from his son Hunter's laptop.
Conservative politicians, lawmakers and press outlets have slammed Twitter for prohibiting users from even linking to the stories, and some users who shared details of them found themselves locked out of their accounts for reasons that ranged from sharing "hacked material" to posting "personal information" without permission.
The social media giant's guidelines for what is considered "misleading" are themselves somewhat nebulous, having grown since the start of the Covid-19 pandemic to encompass not just "disinformation" but also "disputed" content, a vague descriptor that could apply to most of what users post on the platform.
The Republican National Committee on Friday filed a complaint with the Federal Election Commission charging Twitter had illegally meddled on behalf of the Biden campaign when it squelched the spread of the Biden-laptop stories.
The platform had even briefly blocked a link to the US Congress website, when Republicans on the House Judiciary Committee attempted to skirt Twitter's censorship by reposting one of the banned articles on their official .gov page.

Twitter also apologized to its users for a prolonged outage on Thursday night, which left many speculating about whether the platform was testing an intensified form of censorship ahead of November's elections. The site blamed a "system change initiated earlier than planned" that had "affect[ed] most of our servers" - an explanation which likely did little to put conspiracy theories to rest.
Facebook subsidiary Instagram rolled out a feature similar to Twitter's 'wrongthink warning' last year, which alerts users when they are about to post something "potentially offensive." In April, Facebook began alerting users as to whether they'd shared, replied to, or otherwise interacted with posts that were later deemed to be "misinformation," specifically content concerning the novel coronavirus that had been "debunked" by the World Health Organization.
In June, the platform further expanded its wrongthink-alert system, warning users when they attempted to share articles more than 90 days old - regardless of their accuracy.
Facebook has also resorted to the somewhat subtler tactic of "shadowbanning," confirmed once again on Wednesday by its communications chief Andy Stone, a former Democratic Party staffer. He tweeted that the platform was restricting the spread of the New York Post's story until its fact-checkers could stamp their own judgment on the material.
Comment: The Federalist weighs in on Twitter's mad scramble to recover some credibility: