Social platforms have a responsibility to address misinformation as a systemic problem, instead of reacting to case after case.
A network of Facebook troll accounts operated by the Myanmar military parrots hateful rhetoric against Rohingya Muslims. Viral misinformation runs rampant on WhatsApp in Brazil, even as marketing firms there buy databases of phone numbers in order to spam voters with right-wing messaging. Homegrown campaigns spread partisan lies in the United States.
The public knows about each of these incidents because of reporting by news organizations. Social media misinformation is becoming a newsroom beat in and of itself, as journalists find themselves acting as unpaid content moderators for these platforms.
It’s not just reporters, either. Academic researchers and self-taught vigilantes alike scour through networks of misinformation on social media platforms, their findings prompting — or sometimes, failing to prompt — the takedown of propaganda.
It’s the latest iteration of a journalistic cottage industry that started out by simply comparing and contrasting questionable moderation decisions — the censorship of a legitimate news article, perhaps, or an example of terrorist propaganda left untouched. Over time, the stakes have become greater and greater. Once upon a time, the big Facebook censorship controversy was the banning of female nipples in photos. That feels like an idyllic bygone era never to return.
The internet platforms will always make some mistakes, and it’s not fair to expect otherwise. And the task before Facebook, YouTube, Twitter, Instagram and others is admittedly herculean. No one can screen everything in the fire hose of content produced by users. Even if a platform makes the right call on 99 percent of its content, the remaining 1 percent can still be millions upon millions of postings. The platforms are due some forgiveness in this respect.
It’s increasingly clear, however, that at this stage of the internet’s evolution, content moderation can no longer be reduced to individual postings viewed in isolation and out of context. The problem is systemic, currently manifested in the form of coordinated campaigns both foreign and homegrown. While Facebook and Twitter have been making strides toward proactively staving off dubious influence campaigns, a tired old pattern is re-emerging: journalists and researchers find a problem, the platform reacts and the whole cycle begins anew.
This week, a question from The New York Times prompted Facebook to take down a network of accounts linked to the Myanmar military. Although Facebook was already aware of the problem in general, the request for comment from The Times flagged specific instances of “seemingly independent entertainment, beauty and informational pages” that were tied to a military operation that sowed the internet with anti-Rohingya sentiment.
The week before, The Times found a number of suspicious pages spreading viral misinformation about Christine Blasey Ford, the woman who has accused Brett Kavanaugh of assault. After The Times showed Facebook some of those pages, the company said it had already been looking into the issue. Facebook took down the pages flagged by The Times, but similar pages that hadn’t yet been shown to the company stayed up.
It’s not just The Times, and it’s not just Facebook. Again and again, the act of reporting out a story gets reduced to outsourced content moderation.
By The Editorial Board

The editorial board represents the opinions of the board, its editor and the publisher. It is separate from the newsroom and the Op-Ed section.