If you think Facebook is full of dubious outrage bait, wait until you see the company's critics

Both social media and the news media have a business model that exploits outrage for clicks.

Consider a business model in which people's fury is exploited for clicks, where emotions like attachment and anger are prized and deliberately elicited, and where, if people seem uninterested, you know you've done a lousy job.

Of course, this applies to both Facebook and the journalistic organizations that criticize it. Journalists salivating over so-called whistleblower Frances Haugen's harmless revelations want you to believe that this approach is exclusive to social media sites, driven by the lust for the riches that come with growth. The Washington Post, for example, published a report this week on how Facebook's algorithms treated "angry" react emojis as more valuable than regular old "likes," pushing "more emotional and provocative content into users' news feeds":

Starting in 2017, Facebook's ranking algorithm treated emoji reactions as five times more valuable than "likes," internal documents reveal. The theory was simple: Posts that prompted lots of reaction emoji tended to keep users more engaged, and keeping users engaged was the key to Facebook's business.

However, the Post and others fail to mention that when choosing which content other users see, Facebook's algorithms prioritize the "love" emoji above the "like" emoji, and "angry" reactions were significantly less common than "love" reactions (429 million clicks per week vs. 11 billion). The algorithm assigns each post a score, which determines its placement in a given user's news feed, so content from people who express strong emotions gets surfaced to others.
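The mechanism the Post describes, reaction counts multiplied by per-emoji weights and summed into a ranking score, can be sketched in a few lines. This is an illustrative reconstruction only: the function names, the data layout, and every weight except the reported five-to-one reaction-to-like ratio are assumptions, not Facebook's actual code or values.

```python
# Illustrative sketch of reaction-weighted feed ranking. Based only on the
# reported detail that emoji reactions counted five times as much as "likes";
# all names and other values here are assumptions.

REACTION_WEIGHTS = {
    "like": 1,
    "love": 5,
    "haha": 5,
    "wow": 5,
    "sad": 5,
    "angry": 5,
}

def engagement_score(reactions):
    """Sum weighted reaction counts into a single ranking score."""
    return sum(REACTION_WEIGHTS.get(kind, 0) * count
               for kind, count in reactions.items())

def rank_feed(posts):
    """Order posts by descending score, as a news feed might."""
    return sorted(posts,
                  key=lambda post: engagement_score(post["reactions"]),
                  reverse=True)

posts = [
    {"id": "calm", "reactions": {"like": 100}},
    {"id": "outrage", "reactions": {"like": 10, "angry": 30}},
]
ranked = rank_feed(posts)
```

Under this weighting, a post with 10 likes and 30 angry reactions (score 160) outranks one with 100 likes (score 100), which is exactly the dynamic the internal documents reportedly warned about.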

So, pretty similar to how the news media work: in a crowded marketplace, online publications have strong incentives to write headlines and promotional copy that compel readers to click on a piece and then spend as many minutes as possible actively engaged with it. The same basic incentives are at play for Facebook engineers designing algorithms. But the Post and others have treated these revelations as somehow explosive, portraying Zuckerberg as Frankenstein and Facebook as his monster. This narrative, that Facebook deliberately sows division so profoundly that Congress ought to regulate it, is one with plenty of staying power. The media realized that when choosing how to frame coverage of Russian interference and the Cambridge Analytica scandal back in 2016–2018. (Ironically, covering Facebook in such a negative way might drive traffic for some of these news sites.)

Favoring "controversial" posts—including those that make users angry—could open "the door to more spam/abuse/clickbait inadvertently," a staffer, whose name was redacted, wrote in one of the internal documents. A colleague responded, "It's possible."

The warning proved prescient. The company's data scientists confirmed in 2019 that posts that sparked angry reaction emoji were disproportionately likely to include misinformation, toxicity and low-quality news.

However, there are plenty of items that can appear in someone's news feed and provoke entirely reasonable outrage: footage of police mistreating unarmed people; government suppression of protest movements such as those in Hong Kong; or revelations about data breaches and government eavesdropping. In each of these cases, the strong reactions elicited may have helped the news item go viral.

Haugen told the British Parliament earlier this week that "Facebook has been unwilling to accept even a little sliver of profit being sacrificed for safety," and that "anger and hate is the easiest way to grow on Facebook." She might have added that the same is true for media organizations.
