Facebook has reportedly told staffers to "steel [themselves]" against "more bad headlines" in the "coming days" amid the ongoing publication of critical news articles citing internal reports leaked by whistleblower Frances Haugen.

In a memo sent on Saturday, which was seen by Axios, Facebook Vice-President of Global Affairs Nick Clegg apparently warned employees that the news coverage would likely contain "mischaracterizations of our research, our motives and where our priorities lie." He instructed them to "listen and learn from criticism when it is fair, and push back strongly when it's not."

"But above all else... We should keep our heads held high and do the work we came here to do," Clegg reportedly told staffers in the memo. The warning came as reports emerged about how the company does not understand how its own algorithms work and struggles to combat hate speech, among other failings.

Frances Haugen, Facebook whistleblower. © Jabin Botsford/Pool via AP
Earlier this month, Frances Haugen, a former product manager on Facebook's civic misinformation team, testified before the US Congress that the company prioritizes "profit over safety," fails to take adequate precautions to protect its 2.9 billion users, and downplays the harm it can cause to society. She said it has repeatedly misled both investors and the public.

The thousands of pages of internal memos Haugen leaked offer a glimpse into the tech giant's inner workings. According to several documents, employees had spoken out about how its central algorithms reward outrage and hate speech. A June 2020 memo noted that it was "virtually guaranteed" that Facebook's key systems show "systemic biases based on the race of the affected user" - suggesting that content from users of some racial backgrounds is prioritized over others.

Another note, from a research group in March 2021, showed that Facebook acts on as few as 3-5% of hate speech cases and about 0.6% of cases involving violent content. One memo countered Facebook's claims about its artificial intelligence programs' ability to spot hate speech and abuse, noting that it is "extraordinarily challenging" for algorithms to understand the context of language. It said the platform would struggle to remove anything beyond 10-20% of such content.

By 2019, however, Facebook had apparently decided to rely more heavily on AI content moderation, and began to cut funding for human review of hate speech. According to the documents, things came to a head during the January 6, 2021 Capitol riot, when the company allegedly struggled to curb the spread of hate speech and misinformation on its platform.

Memos leaked by Haugen detail how Facebook turned off some emergency safeguards too soon after the November 2020 election, only to rush to switch them back on after Trump supporters stormed the US Capitol. The hate-speech problem is apparently much worse in non-English-speaking countries - one document reveals that in 2020 the company devoted only 13% of its budget for developing misinformation detection algorithms to combating threats outside the US.

However, Clegg wrote in his memo that Facebook had invested $13 billion and employed over 40,000 people to "do one job: keep people safe on Facebook." That claim was repeated by the tech giant's spokesperson Joe Osborne, who told the Financial Times that the "premise" behind the critical reportage was "false."

"Yes, we're a business and we make profit, but the idea that we do so at the expense of people's safety or well-being misunderstands where our own commercial interests lie," Osborne said.

Roughly two dozen media outlets have apparently accessed the materials leaked by Haugen, who is set to testify before the UK Parliament on Monday. Her leaks have been trumpeted by critics of Facebook, who accuse it of being too slow and too restrained in quashing harmful content on its platform.

Last week, Facebook Vice-President of Communications John Pinette accused the news outlets of running an "orchestrated 'gotcha' campaign" and said the company would "correct the record," since a "curated selection" of documents could not be used to "draw fair conclusions" about it.