Censorship by Facebook | Meta Content Moderation Policies Under Fire | Human Rights Watch

Raza Rumi discusses Meta's content moderation policies in the context of the Israel-Palestine conflict.

In a comprehensive report, Human Rights Watch (HRW) places Meta, the parent company of Facebook and Instagram, under scrutiny over its content moderation policies, specifically in the context of the Israel-Palestine conflict. The findings, based on extensive research and data analysis, document a pattern of practices that restrict users' freedom of expression and demand urgent attention.

One of the key concerns highlighted in the HRW report is the lack of transparency surrounding government requests for content removal. Meta regularly takes down content under its Community Standards and in compliance with local laws. However, the report points out that a substantial amount of content removal is prompted by "voluntary" requests from governments. These requests, often originating from non-judicial bodies such as law enforcement agencies, bypass formal legal procedures and therefore lack the transparency and accountability that judicial oversight provides.

The report singles out the Israeli government's proactive approach to seeking content removal from social media platforms. The Israeli Cyber Unit, operating within the State Attorney's Office, routinely issues "voluntary" removal requests to Meta, and compliance rates have been consistently high, reaching 92% in 2018. In 2021, the Cyber Unit issued 5,990 removal or restriction requests to Meta platforms, with an 82% compliance rate.

Recent data, as of November 14, 2023, indicates a surge in content takedown requests related to the Israel-Palestine hostilities. The State Attorney's Office reportedly sent 9,500 content removal requests to major social media platforms, with nearly 60% of them, roughly 5,700 requests, directed at Meta. Despite this considerable volume, there is no transparency about which specific policies the targeted content allegedly violated or what the outcomes of these requests were.

Automation emerges as a significant factor contributing to the challenges outlined in the report. Meta relies heavily on automated tools for content moderation: over 90% of the content it deems violative is detected proactively by these tools rather than through user reports. However, the report underscores the limitations of automation in understanding context, which leads to the removal of non-violative content. Users reported instances where their peaceful comments on the Israel-Palestine conflict were erroneously flagged and removed.
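To illustrate why context-blind automation produces such errors, consider a deliberately simplified sketch. This is not Meta's actual system, which is not public; the flagged terms, matching rule, and example post below are all hypothetical, chosen only to show how keyword-style matching can sweep up peaceful speech.

```python
# Minimal illustrative sketch (not Meta's actual system): a context-blind
# keyword filter of the kind whose failure mode the HRW report describes.
# The term list, matching rule, and example are hypothetical.

# Hypothetical terms an automated classifier might treat as signals of
# policy-violating content, regardless of how they are used.
FLAGGED_TERMS = {"martyr", "resistance", "al-aqsa"}

def flags_post(text: str) -> bool:
    """Return True if any flagged term appears, ignoring context entirely."""
    words = text.lower().split()
    return any(term in words for term in FLAGGED_TERMS)

# A peaceful, newsworthy comment is flagged exactly like violating content,
# because the matcher cannot distinguish reporting or advocacy from abuse.
peaceful = "journalists reported from al-aqsa mosque during friday prayers"
print(flags_post(peaceful))  # True: non-violative content swept up
```

A production classifier is of course far more sophisticated than this, but the underlying failure mode the report documents is the same: a system that scores surface features of a post cannot reliably distinguish reporting, advocacy, or condemnation from content that actually violates policy.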

The human rights implications of content censorship, especially in the Palestine context, are a focal point of the report. Article 19 of the International Covenant on Civil and Political Rights (ICCPR) guarantees the right to freedom of expression, which extends to online spaces. The report argues that unduly restricting or suppressing peaceful content supporting Palestine infringes on this fundamental right.

Content restrictions and "shadow banning," where a user's content becomes less visible without explanation, are described as particularly distressing. The report notes that Meta does not formally acknowledge the practice of shadow banning, leaving those affected in the dark and without adequate access to complaint mechanisms and remedies.

HRW's report concludes with a series of recommendations for Meta: overhaul the Dangerous Organizations and Individuals (DOI) policy, increase transparency around both government takedown requests and automated moderation, and improve access to remedies for users whose content is removed. The report underscores the responsibility of social media companies to align their content moderation policies with international human rights standards, ensuring a more transparent, accountable, and rights-respecting digital space.