Meta Revises 'Dangerous' Designation, Censorship Of Arabic Word 'Shaheed'

Meta used to remove content whenever the word was used to refer to individuals designated under its Dangerous Organizations and Individuals policy
The Oversight Board of Facebook's parent company, Meta, on Tuesday published an opinion that the Arabic word "shaheed" (martyr) should not, on its own, qualify for censorship under the company's "Dangerous Organizations and Individuals" policy. 

In a post, the Oversight Board said that Meta had approached it in February 2023 to review how the company moderates the Arabic term "shaheed" when used to refer to individuals deemed dangerous.

The board urged Meta to end its blanket censorship of content that used the term and instead adopt a contextual moderation approach.
 
"The Board finds that Meta's approach to moderating content that uses the term "shaheed" to refer to individuals designated as dangerous substantially and disproportionately restricts free expression," the executive summary of the recommendation read, adding that up until now, Meta was interpreting all uses of the word "shaheed", referring to individuals it has designated as "dangerous" as violating and removed the content.

It added that Meta defines "shaheed" as an honorific used across different communities to refer to someone who has died unexpectedly or honourably, and that the company acknowledges the term has multiple meanings and is commonly translated as "martyr" in English.

It further noted that "shaheed" is likely to account for more content removals under the Community Standards than any other single word or phrase on its platforms. 

The board noted that acts of terrorist violence have severe consequences – destroying the lives of innocent people, impeding human rights and undermining the fabric of our societies. However, it said that any limitation on the freedom of expression to prevent such violence must be necessary and proportionate, given that undue removal of content may be ineffective and even counterproductive. 

The board found that Meta's current restrictive approach to moderation, based on concerns about how the word "shaheed" could be used to praise or approve of terrorism, has led to widespread and unnecessary censorship. Removing content simply for using "shaheed" disregards the word's linguistic complexity by treating it as equivalent only to "martyr". 
 
The term, however, has non-violating uses in areas such as news reporting and, most importantly, in discussions of terrorism, as well as in describing victims of violence. 
This blanket ban is leading to over-enforcement, which disproportionately impacts Arabic speakers and speakers of languages that have borrowed "shaheed" as a loanword. 

"The word "shaheed" is sometimes used by extremists to praise or glorify people who have died while committing violent terrorist acts. However, Meta's response to this threat must also be guided by respect for all human rights, including freedom of expression."

The board noted that it had paused publication of its advisory to review the word's usage in the context of Israel's war on the occupied territories of Palestine, "to ensure its recommendations were responsive to the use of Meta's platforms and the word 'shaheed' in this context". The board also disclosed that Meta designates Hamas as a Tier 1 organisation under its obscure but layered Dangerous Organisations and Individuals policy.

"Shaheed" is a common term used by Muslim communities worldwide, as well as in regions where Arabic is commonly used. To better protect users, Meta should only remove the word "shaheed" when linked to signals of violence (like weapon imagery) or for breaking other rules (e.g. glorifying designated individuals). This would still keep harmful material off Meta's platforms but reduce over-enforcement and accidental removals of non-violating content. 

"These policies, enforced accurately, mitigate the dangers resulting from terrorist use of Meta's platforms. Accordingly, the board recommends that Meta end its blanket ban on the use of the term "shaheed" to refer to individuals designated as dangerous and modify its policy for a more contextually informed analysis of content, including the word" it recommended.