Social media giant Facebook is inviting external auditors to review its Community Standards Enforcement Report, the company announced today while releasing the sixth edition of the report. The company has issued a Request for Proposal (RFP) to external auditors, who will conduct an “independent audit” of the metrics Facebook uses for these reports.
The company said it “hopes” to begin the audit in 2021 and will let the auditors publish their assessments of the reports too. Until now, the company had been working with a group of “international experts” to determine whether the metrics used in its Community Standards Enforcement Report were accurate.
Further, the company said it took down 7 million pieces of harmful misinformation around COVID-19 between April and June. In addition, Facebook applied warning labels to 98 million pieces of misinformation related to the pandemic. The report covers 12 content review policies on Facebook and 10 on Instagram.
Facebook said it had sent its human moderators home in March this year, but has since brought many of them back online, working from home. “We’ll continue using technology to prioritize the review of content that has the potential to cause the most harm,” the company said in a blog post.
The company added that it acted on fewer pieces of content related to suicide, self-injury and child exploitation, since it relies “heavily” on human reviewers for moderating such content. “Despite these decreases, we prioritized and took action on the most harmful content within these categories,” the company claimed. “Our focus remains on finding and removing content while increasing reviewer capacity as quickly and as safely as possible,” it added.
The number of appeals it could process from users over content takedowns was also lower because of the unavailability of human moderators, the company said.
Meanwhile, Facebook claimed the proactive detection rate of its AI algorithms for hate speech improved from 89% in the previous quarter to 95% this quarter. The company took down 9.6 million pieces of hate speech in Q1, which increased to 22.5 million this quarter. It attributed this increase, in part, to its automation technologies, which Facebook extended to Spanish, Arabic and Indonesian content this quarter. The detection rate for hate speech on Instagram increased from 45% to 84%.