On Tuesday, Jan. 7, Mark Zuckerberg, the founder of Facebook and CEO of its parent company Meta (which also owns social media platforms like Instagram and Threads), announced that Facebook will stop fact-checking posts in order to “get back to their roots,” as he put it. Facebook had implemented fact-checking in December 2016 in order to “identify and address viral misinformation, particularly clear hoaxes that have no basis in fact.”
The move comes hot on the heels of Donald Trump’s victory in the 2024 presidential election. Whether the election influenced the decision is anyone’s guess, although the timing is suggestive: Trump did, in his book “Save America,” threaten to imprison the Meta CEO if he attempted to use his position to sway the results of the 2024 election.
Meta went on to amend its Hateful Conduct Policy as well, loosening restrictions on posts that might previously have been classified as hate speech.
Whatever the motivations, these steps have far-reaching implications for social media. I have written in the past about the potential of these platforms to contribute to the spread of misinformation or otherwise promote harmful behavior.
To be sure, Facebook’s fact-checking wasn’t perfect, nor were its anti-hate-speech policies, but they provided an important bulwark against misinformation and, at the very least, may have prompted some critical thinking when people were confronted with more questionable posts.
Critics argued that the measures interfered with people’s right to free speech, and perhaps there’s some merit to that in the abstract. However, Facebook, like other social media platforms, is a privately owned and managed site, free to police and regulate what it tolerates under its terms of service. That cuts both ways: the company is under no obligation to let people say whatever they want, and it is likewise under no obligation to police the platform except in its own self-interest.
The question then becomes one of philosophy and duty to the public interest, and therein lies the rub. For whatever reason, people tend to assign greater credibility to things they read online, especially if the information appears to come from an authoritative source, or even if it simply agrees with a personal bias or “seems” right. Malicious actors have been shown to exploit that tendency to push particular narratives, and studies have shown that misinformation has a clear effect on public opinion and people’s perceptions of a given issue.
Facebook and Instagram have roughly 3 billion and 2 billion monthly users, respectively; these are not small platforms. They also depend on ad dollars to keep the lights on, vast as they are, and most large advertisers aren’t happy when their products appear alongside vitriolic discussions laced with hate speech or propaganda. So one has to ask: why was this done now, and to what end?
The narratives we see spring up on social media over the next six months or so will be informative.