Twitter’s feud with President Trump may end up costing the social networking company, Wall Street analysts say.
By moving to flag and fact check tweets from the president, the Jack Dorsey-run company is opening itself to intense scrutiny of its practices in ways that could force it to expand the 1,500-strong content moderation team policing its platform.
“Any time a tech company — whether it’s Apple, Microsoft, Facebook or Twitter — puts themselves in the spotlight, there’s a lot of challenges that come with that as well,” Wedbush analyst Dan Ives told The Post.
Ives pointed out how Facebook added more than 10,000 content moderators after it emerged that the company failed to crack down on Kremlin-backed sources seeking to sway the 2016 US presidential election, including by purchasing ads on the site.
“From a volume and user perspective, Twitter trying to do this with full accuracy would be like trying to count grains of sand at the beach,” Ives said.
Twitter last month announced that it would begin adding fact-checking labels to disputed or misleading tweets about the coronavirus amid fears that hoaxes were running rampant across the Internet. It later extended those labels to misleading content about election integrity, first applying one to a Trump tweet claiming that mail-in ballots are “substantially fraudulent.”
The label, which included a link telling users to “get the facts about mail-in ballots,” prompted allies of the president to question why similar warnings hadn’t been placed on other prominent Twitter users’ tweets, including a Chinese government spokesperson who accused the United States of causing the coronavirus.
Twitter applies misinformation labels only to misleading tweets posted by public officials. Everyone else risks suspension or even an outright ban, depending on their track record.
But scores of public officials also use Twitter, including mayors, state officials, police officers, federal agencies — not to mention foreign leaders, experts noted.
“I think if you are starting to go down the road of fact checking, then presumably you’re going to need more fact checkers,” Third Bridge analyst Scott Kessler said. “This has been done before in various contexts and it definitely costs money.”
“For a company that seemingly is being pretty conservative in terms of the way that they’ve positioned themselves and they operate, this seems like it would be an additional obligation,” he added.
In a statement to The Post, a Twitter spokesperson said the company does not foresee making any additional hires to its team.
“We are staffed appropriately for the work. Protecting the public conversation is work that is done by teams across the company, including product, trust and safety, curation and Twitter Service,” the spokesperson said.
But analysts are skeptical given that Twitter’s content review operation is so much smaller than that of its competitors. Facebook, for example, has about 35,000 people working on “safety and security” — and Mark Zuckerberg has so far refused to follow in Twitter’s footsteps when it comes to scrutinizing posts by public officials.
In private comments from July 2019 leaked to The Verge, Zuckerberg questioned whether Twitter could expand its content moderation even if it wanted to.
“I mean, they face, qualitatively, the same types of issues. But they can’t put in the investment. Our investment on safety is bigger than the whole revenue of their company,” Zuckerberg said.
How large Twitter’s flagging operation should be is not entirely clear, in part because the process remains a bit of a black box. Following the uproar tied to Trump, the company has said its decisions are made by a team of executives, including Twitter’s general counsel and vice president of Trust and Safety. Twitter CEO and co-founder Dorsey is informed before actions are taken.
Twitter has also revealed that its team was notified about Trump’s tweet by a third-party nonprofit partnering with its election integrity hub, and that the tweet ping-ponged between a slew of higher-ups until Dorsey eventually signed off on the flag.
When reached by The Post and asked for more details about its flagging system, a spokesperson pointed to a company blog post about exceptions to its rules and declined to give any additional information.
The blog post says that the only accounts that can break Twitter’s rules and not be suspended or deleted entirely are those from elected and government officials “given the significant public interest in knowing and being able to discuss their actions and statements.”
Twitter says that it evaluates every case “individually and in a way that accounts for context and history,” noting that “this is new territory for everyone” and that it has yet to set a precedent.
The San Francisco-based company also announced last June that it would begin flagging abusive tweets from world leaders. At the time, it pledged to use the tool only on “rare” occasions when a tweet egregiously violates its policies.
With Post wires.