Fake News on Social Networks

Manton Reece wrote a nice post last week about fake news on Instagram:

Twitter has retweets. Facebook has sharing. But Instagram has no built-in reposting. On Instagram, there’s no instantaneous way to share someone else’s post to all of your followers. […]

When you have to put a little work into posting, you take it more seriously. I wonder if fake news would have spread so quickly on Facebook if it was a little more difficult to share an article before you’ve read more than the headline. […]

Instagram was no accident. The only question: was it unique to photos, or can the same quality be applied to microblogging?

I don’t think it’s unique to photos, thankfully! As Manton describes, conscious decisions by Instagram encouraged certain behaviours above others, and I think you can do that no matter what your social network’s primary content is. Let’s look at this in a bit more detail.

First, it’s important to get at the real meat of what’s going on when we talk about “fake news.” It’s distinct from merely inaccurate information (although that’s part of it): what we’re really dealing with is disinformation, better known as propaganda.

Aside from the usual reasons propaganda exists, it exists on social networks like Facebook and Twitter because it can. It’s profitable and useful for the parties manufacturing and disseminating it, and to Facebook and Twitter, upon whose networks it propagates, it doesn’t really matter what the information is so long as it engages users. Facebook’s apathy to propaganda is regularly exploited.

Design Around It

So how could Facebook, Twitter, or a microblog network prevent it? The obvious first step is to use the tools which already work.

Facebook prohibits nudity on its platform and seems to have tools to defend against it (some combination of user flagging and automated detection). It should do the same for propaganda. Yes, propaganda is harder to recognize than nudity, but that’s no excuse not to try. This is a starting point.
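To make that concrete, here’s a minimal sketch (in TypeScript) of how user flags and an automated score might feed a review queue. Every name and threshold here is invented for illustration; nothing about Facebook’s actual moderation pipeline is known or assumed.

```typescript
// Hypothetical moderation signals for a single post. None of these
// fields correspond to any real Facebook API; they're placeholders.
interface ModeratedPost {
  id: string;
  flagCount: number;       // times users reported this as propaganda
  impressions: number;     // times the post was shown
  classifierScore: number; // 0..1 from an automated propaganda model
}

// Queue a post for human review when either signal is strong on its
// own, or when two weaker signals agree. Thresholds are made up.
function needsReview(post: ModeratedPost): boolean {
  const flagRate = post.impressions > 0
    ? post.flagCount / post.impressions
    : 0;
  if (flagRate > 0.05) return true;            // heavy user flagging
  if (post.classifierScore > 0.9) return true; // confident model
  return flagRate > 0.01 && post.classifierScore > 0.6;
}
```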

The next step is to design the interface to prevent it.

Maybe don’t let users retweet / share something with a link in it if they haven’t actually, you know, clicked the link. I bet this would be an easy win at curbing the spread of propaganda.
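Here’s a rough client-side sketch of that gate, again in TypeScript. The names (trackLinkOpened, canShare) are hypothetical; no real Twitter or Facebook API is being described.

```typescript
// Track which article links the user has actually opened this session.
const openedLinks = new Set<string>();

// Call when the user taps through to read an article.
function trackLinkOpened(url: string): void {
  openedLinks.add(url);
}

// A post with no link is always shareable; a post with a link is
// shareable only after the user has opened that link.
function canShare(post: { id: string; linkUrl?: string }): boolean {
  if (!post.linkUrl) return true;
  return openedLinks.has(post.linkUrl);
}
```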

For anything that gets past that, give users tools to help them reason about the content they’re seeing. Do people routinely report this content as fake / propaganda? Show it. Who is sharing this and how often do they share propaganda? Show it.
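A sketch of how those two signals might be computed and surfaced, with invented names and shapes:

```typescript
// One of a user's past shares: was it later flagged as propaganda?
interface ShareRecord {
  wasFlaggedAsPropaganda: boolean;
}

// How often viewers report this particular item as fake/propaganda.
function reportRate(reports: number, views: number): number {
  return views > 0 ? reports / views : 0;
}

// What fraction of this sharer's past shares were flagged.
function sharerTrackRecord(history: ShareRecord[]): number {
  if (history.length === 0) return 0;
  const flagged = history.filter(s => s.wasFlaggedAsPropaganda).length;
  return flagged / history.length;
}

// Put both numbers right next to the post instead of hiding them.
function contextLabel(reports: number, views: number,
                      history: ShareRecord[]): string {
  const r = Math.round(reportRate(reports, views) * 100);
  const t = Math.round(sharerTrackRecord(history) * 100);
  return `${r}% of viewers reported this; ` +
         `${t}% of this sharer's posts have been flagged.`;
}
```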

Get readers to evaluate what they’re seeing and sharing. You read it, what did you think about it? Why? Let other readers evaluate those answers.

Your user interface can encourage thoughtfulness or it can encourage mindlessness. But that is a choice you make when designing your interface.



Join the Discussion 👂🤔✍️

Jason Brennan
Hey, look at that: some people are taking matters into their own hands with a context-providing BS detector.

B.S. Detector is a rejoinder to Mark Zuckerberg's dubious claims that Facebook is unable to substantively address the proliferation of fake news on its platform. A browser extension for both Chrome and Mozilla-based browsers, B.S. Detector searches all links on a given webpage for references to unreliable sources, checking against a manually compiled list of domains. It then provides visual warnings about the presence of questionable links or the browsing of questionable websites.
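As a rough sketch of the approach that description implies (this is not B.S. Detector’s actual code; the domain list and warning styling are placeholders):

```typescript
// Hand-maintained list of unreliable domains. Placeholder entries only.
const UNRELIABLE_DOMAINS = new Set<string>([
  "example-fake-news.com",
  "totally-real-stories.net",
]);

function isUnreliable(href: string): boolean {
  try {
    const host = new URL(href).hostname.replace(/^www\./, "");
    return UNRELIABLE_DOMAINS.has(host);
  } catch {
    return false; // ignore malformed URLs
  }
}

// Content script: scan every link on the page and attach a visible
// warning next to questionable ones.
for (const a of Array.from(document.querySelectorAll<HTMLAnchorElement>("a[href]"))) {
  if (isUnreliable(a.href)) {
    const warning = document.createElement("span");
    warning.textContent = " ⚠️ questionable source";
    warning.style.color = "red";
    a.after(warning);
  }
}
```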

Now imagine amplifying that with a network the size of Facebook.

See also Neil Postman’s Crap Detector (from his fantastic book on education).
