Misinformation and Changes on Twitter
Twitter recently announced a new policy aimed at reducing the spread of viral, misleading tweets on the platform. The crisis misinformation policy covers false claims and content shared during times of global or national crisis, and it includes specific changes to how users interact with Twitter. According to the company, once it has evidence that a claim is false, it will stop recommending the content on the Home timeline and in the Explore and Search tabs, limiting the reach and potential harm of the false information.
Twitter's announcement came shortly after Elon Musk claimed in a tweet that the platform's algorithm was manipulating its users. He noted that users can choose to see the latest tweets from the people they follow, but that this is not the default option. Musk has been engaged in a turbulent bid to purchase Twitter over the past few months and has accused the platform of harboring a high number of fake accounts and bots. Twitter responded that the proportion of fake accounts and bots was relatively low; Musk replied that he would conduct his own research.
In announcing the policy changes, Twitter also acknowledged the potential harm of algorithmically amplified content reaching its 229 million daily active users. According to the company, this risk is especially acute during a crisis and is not unique to Twitter, since most social media platforms use algorithms to amplify content. The company defined a crisis as a situation in which there may be "a widespread threat to life, physical safety, health, or basic subsistence". Content flagged as false or misleading will not necessarily be removed; instead, Twitter will hide it behind a warning overlay informing users that the content may not be factual and letting them choose whether to view it. Finally, the company clarified that the new misinformation policy will not cover personal anecdotes or first-person accounts, efforts to fact-check or debunk information, or strong commentary.