Holding Publishers Responsible for Fake News

by Daniel Saunders, November 2018.

In the Black Mirror episode ‘Nosedive’, Lacie lives in a world in which even her smallest social interactions are rated and figured into a score. Lacie engages in rigorous self-discipline to keep her score high, but after a series of unfortunate events she loses control, her score plummets, and she is left a social outcast. ‘Nosedive’ is a cautionary tale about the dystopian potential of reputational scoring run amok. But do all scoring systems lead to the same place? Or are there better and worse ways of designing them?

York University’s very own Regina Rini recently published a piece in the New York Times that suggests a reputational scoring system for social media platforms as a solution to fake news. Here’s the gist of her argument: in interpersonal settings, we have techniques for assessing the credibility of our friends based on our history with them. If I know my friend loves to embellish his stories, I’ll discount them accordingly.

But when we move from interpersonal settings to social media, our ability to judge reliability is hampered. People often follow hundreds of accounts belonging to people they don’t know personally. We have neither a history with those digital friends nor the cognitive capacity to keep track of everyone’s reliability online.

Rini suggests social media platforms create a reputational tracking system that automates the type of recordkeeping we already employ in interpersonal settings.

This system would build on things social media platforms are already doing. Facebook allows users to report articles as fake. Reported articles are sent to an independent fact-checking organization, and if it confirms the article is fake, the article gets flagged. If a Facebook user later attempts to share a flagged article, they get a warning informing them that the content is disputed, but they are welcome to share it anyway.

Rini suggests that Facebook begin scoring users based on how frequently they share fake news, and then make those scores public. Imagine a little green dot on posts from people who have good track records, fading to red for those who don’t. This would give us a heuristic for how reliable our digital friends really are.
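
To make the mechanics concrete, here is a minimal Python sketch of how such a per-user score and dot color might be computed. The formula and the thresholds are my own illustrative assumptions, not part of Rini’s proposal or any actual Facebook feature.

```python
# Hypothetical sketch of a Rini-style user reliability score and dot color.
# The scoring formula and thresholds are illustrative assumptions only.

def reliability_score(flagged_shares: int, total_shares: int) -> float:
    """Fraction of a user's shared articles that were NOT flagged as fake."""
    if total_shares == 0:
        return 1.0  # no sharing history yet; assume a clean slate
    return 1.0 - flagged_shares / total_shares

def dot_color(score: float) -> str:
    """Map the score onto the green-to-red dot described above."""
    if score >= 0.9:
        return "green"
    if score >= 0.6:
        return "yellow"
    return "red"

# Example: a user who shared 50 articles, 12 of which were flagged by fact checkers.
score = reliability_score(flagged_shares=12, total_shares=50)
print(score, dot_color(score))  # 0.76 yellow
```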

Nearly everyone I’ve pitched Rini’s idea to has the same reaction: it sounds like a slippery slope to Black Mirror. I don’t think that’s quite right, but that reaction does point at a serious problem.

In ‘Nosedive’, we also meet Susan, a truck driver, who’s fed up with keeping up appearances and willingly allows her score to collapse. Although she suffers socially, she finds the freedom to say what’s on her mind liberating.

Many people express a similar sentiment in their rejection of political-correctness discourse. They feel suffocated by an elitist liberal culture and align with right-wing politicians who have the appearance of speaking candidly. They also happen to be the segment of the population that most frequently shares fake news. When reputational scores are attached to persons, those users will feel they are being unfairly targeted by a conspiracy to silence dissent. They may ignore Facebook’s warnings, believing they are sacrificing their reputation for a greater cause. Digital communities would likely emerge in which red dots are a badge of honor.

This would congeal political disagreement into durable scores that stick with people for all to see. The Black Mirror worries really kick in if reputational scores begin affecting people’s judgments about each other over more than just the news, potentially creating social isolation and intensifying partisan fragmentation even further.

This is a serious problem for Rini’s proposal. But it would be too hasty to throw the idea away; it’s very close to what I believe is the right solution.

Rini models her solution on something we already do offline. But we can tell a slightly different story about how we assess reliability during interpersonal interactions. When my friend makes a claim, I don’t have to judge it based only on her track record. I can also ask her where she read that. If she says the New York Times, I know that’s pretty reliable. If she says the New York Post, I’ll take it with a grain of salt.

Judging the track record of the source feels more or less natural depending on context. If my friend and I don’t talk politics very often, I may have an insufficient sample to assess her credibility. I certainly have a sense of how often some of my friends exaggerate their stories or pass on rumors without further investigation. But that intuition is calibrated by gossip and personal stories, not always by the news. It’s possible to be good at evaluating some types of content but not others. The problem of fake news is really about assessing people’s skill in reading journalism, not their skill in gossiping. If we only care about the former skill, we need a more narrowly targeted solution.

What I’m suggesting is that Facebook develop reputational scores for news platforms rather than users. Whenever fact checkers determine that a platform published fake news, that platform’s score is lowered. The scores are visible and attached to news articles. If the New Yorker misrepresents information more frequently than the Wall Street Journal, we should know that. Offloading these scores onto publications makes the whole system more palatable: individual users won’t feel like they are being personally targeted, and the scores cannot become markers of social respectability.

There’s a legitimate worry that misinformation campaigns could game this system by creating hundreds of new online platforms to carry out their operations. But there are ways to lower this risk. Newly created platforms should have lower scores by default and should be required to build their scores up through a history of good behavior. Platforms should score points for following established practices in journalistic ethics, like publishing corrections and retractions when they make mistakes. And websites that syndicate their content from other sources should have lower scores, to prevent misinformation campaigns from artificially inflating their platforms’ scores by copying factual content.
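
As a rough illustration of how these safeguards might fit together, here is a small Python sketch of a publisher score. The starting value, the point amounts, and the cap on syndication-heavy sites are illustrative assumptions, not a worked-out policy.

```python
# Hypothetical sketch of a publisher reputation score with the safeguards above.
# All numbers (starting score, bonuses, penalties, caps) are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Publisher:
    name: str
    score: float = 30.0       # new platforms start low and must earn trust
    syndicated: bool = False  # mostly republishes others' content

    def cap(self) -> float:
        # Syndication-heavy sites can't inflate their score by copying factual content.
        return 60.0 if self.syndicated else 100.0

    def record_fake_story(self) -> None:
        """Fact checkers confirmed a fabricated story: lower the score."""
        self.score = max(0.0, self.score - 10.0)

    def record_correction(self) -> None:
        """Reward established journalistic practice: a published correction or retraction."""
        self.score = min(self.cap(), self.score + 2.0)

    def record_accurate_story(self) -> None:
        """Slowly build the score up through a history of good behavior."""
        self.score = min(self.cap(), self.score + 0.5)

# Example: a new outlet builds a track record, then gets caught publishing one fake story.
outlet = Publisher("Example Daily")
for _ in range(40):
    outlet.record_accurate_story()
outlet.record_fake_story()
print(round(outlet.score, 1))  # 40.0
```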

Human fact checkers are slow, but early machine-learning experiments have shown promise in detecting fake news. There is an independent dystopian worry about outsourcing all fact checking to robots, but at least for stories that are in the process of going viral, AI can serve as a stopgap, rapidly decreasing the scores of the platforms attached to fake viral content.
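
Here is a minimal sketch of what that stopgap might look like; the confidence threshold, the virality threshold, and the penalty size are purely illustrative assumptions.

```python
# Hypothetical stopgap: provisionally dock a platform's score when an automated
# classifier flags a fast-spreading story, pending human fact-checking.
# The thresholds and penalty size below are illustrative assumptions.

def provisional_penalty(fake_probability: float, shares_per_hour: int) -> float:
    """Return a small provisional score deduction for high-confidence flags on viral stories."""
    if fake_probability >= 0.9 and shares_per_hour >= 1000:
        return 5.0  # reversed later if human fact checkers disagree
    return 0.0

print(provisional_penalty(0.95, 4000))  # 5.0
print(provisional_penalty(0.95, 50))    # 0.0 (not viral enough to warrant the stopgap)
```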

‘Nosedive’ isn’t chilling because it imagines a world of reputational tracking systems. It’s chilling because the scores are tied to individual people and are publicly visible. That kind of reputational tracking can be highly disruptive to social interaction. Holding publications accountable, by contrast, allows users to reliably assess information in a complex digital landscape without straining social relations. At any rate, something has to be done. Facebook’s and Twitter’s responses to coordinated misinformation have been lackluster at best. The proposal I’ve sketched is a promising solution to one of the most pressing problems of our day.
