Facebook Parent Meta Impacted Palestinians’ Human Rights, Report Says – CNET

What’s happening

Meta released a report that shows how the social media giant impacted human rights in the Israeli-Palestinian conflict in May 2021.

Why it matters

Content moderation in languages outside of English has been an ongoing challenge for social media companies. Meta is making changes in response to the findings.

Facebook’s parent company Meta made content moderation mistakes that impacted the human rights of Palestinians during an outbreak of violence in the Gaza Strip in May 2021, a report released Thursday shows.

Meta asked consulting firm Business for Social Responsibility (BSR) to review how the company’s policies and actions affected Palestinians and Israelis after the Oversight Board, which examines some of the social media company’s toughest content moderation decisions, recommended it do so.

The report showed that Meta’s actions removed or reduced the ability of Palestinians to enjoy their human rights “to freedom of expression, freedom of assembly, political participation, and non-discrimination.” It also underscores the ongoing challenges the company faces when it comes to moderating content in languages other than English. Meta owns the world’s largest social network, Facebook, as well as photo-and-video service Instagram and messaging app WhatsApp.

BSR said in the report that it spoke with affected stakeholders, many of whom shared “their view that Meta appears to be another powerful entity repressing their voice.”

The findings outline several content moderation errors Meta made amid the Israeli-Palestinian conflict last year. Social media content in Arabic “had greater over-enforcement,” resulting in the company mistakenly removing posts from Palestinians. BSR also found that the “proactive detection rates of potentially violating Arabic content were significantly higher than proactive detection rates of potentially violating Hebrew content.”

Hebrew content experienced “greater under-enforcement” because Meta didn’t have what’s known as a “classifier” for “hostile speech” in that language. Having a classifier helps the company’s artificial intelligence systems automatically identify posts that likely violate its rules. Meta also lost Hebrew-speaking employees and outsourced content moderation. 
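To make the idea of a per-language “hostile speech” classifier concrete, below is a minimal sketch of a text classifier built with scikit-learn. This is not Meta’s system; the training examples, labels, model choice, and flagging threshold are all hypothetical assumptions used purely for illustration of how such a classifier scores posts.

```python
# Minimal sketch of a per-language "hostile speech" classifier, for illustration only.
# NOT Meta's system: the data, labels, model, and threshold below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = likely violates policy, 0 = benign.
train_texts = [
    "example of a hostile post",
    "example of a benign post about daily life",
]
train_labels = [1, 0]

# Turn raw text into TF-IDF features and fit a simple linear model.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

# At moderation time, new posts get a probability of violating policy;
# posts above a threshold would be routed to automated action or human review.
new_posts = ["another post to score"]
scores = classifier.predict_proba(new_posts)[:, 1]
for post, score in zip(new_posts, scores):
    flagged = score > 0.5  # the threshold here is an arbitrary assumption
    print(f"score={score:.2f} flagged={flagged} :: {post}")
```

The point of the sketch is that a classifier of this kind has to be trained separately for each language; without a Hebrew-language model, violating Hebrew posts were less likely to be detected proactively, which is the under-enforcement gap BSR describes.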

Meta also mistakenly pulled down content that didn’t violate its rules. The human rights impact of these errors was more severe “given a context where rights such as freedom of expression, freedom of association, and safety were of heightened significance, especially for activists and journalists,” the report stated.

The report also pointed out other major content moderation mistakes on Meta’s platforms. For example, Instagram briefly banned #AlAqsa, a hashtag used to reference the Al-Aqsa Mosque in Jerusalem’s Old City. Users also posted hate speech and incitement to violence against Palestinians, Arab Israelis, Jewish Israelis and Jewish communities outside the region. Palestinian journalists also reported that their WhatsApp accounts were blocked.

BSR, though, didn’t find intentional bias at the company or among Meta employees but did find “various instances of unintentional bias where Meta policy and practice, combined with broader external dynamics, does lead to different human rights impacts on Palestinian and Arabic speaking users.”

Meta said it’s making changes to address the problems outlined in the report. The company, for example, said it will continue to develop and deploy machine learning classifiers in Hebrew.

“We believe this will significantly improve our capacity to handle situations like this, where we see major spikes in violating content,” Miranda Sissons, Meta’s director of human rights, said in a blog post.