Social media firms have responded to allegations of “shadow banning” users over Palestinian-related content amid the conflict in Gaza, saying the implication that they “deliberately and systemically suppress a particular voice” is false.
They have been accused of blocking certain content or users from their online communities since the war between Israel and Hamas began in October.
Queen Rania Al Abdullah of Jordan, for example, criticized major platforms for allegedly limiting Palestinian-related content about the war.
“It can be nearly impossible to prove that you have been shadow-banned or censored. Yet, it is hard for users to trust platforms that control their content from the shadows, based on vague standards,” Queen Rania told the Web Summit in Doha.
The platforms have also been criticized for relying too heavily on “automated tools for content removal to moderate or translate Palestine-related content,” according to a Human Rights Watch report on the subject.
Hussein Freijeh, the vice president of MENA for Snapchat, told CNBC’s Dan Murphy at Web Summit Qatar last week that these firms have “a really important role to play in the region.”
“We have all the algorithms in place to moderate the content,” Freijeh added, saying the platform also uses a “human component to moderate that content to make sure that it’s safe for our community.”
As an information war plays out online between pro-Palestinian and pro-Israeli narratives, platforms like Snapchat and Meta-owned Instagram and Facebook have become key sources for users seeking content and information about the conflict.
Foreign journalists are not allowed to report from the besieged Gaza Strip, blocking coverage from international media outlets. Journalists have begged Israel to rethink access, arguing that on-the-ground reporting is “imperative.”
Middle East depends on social media
The Middle East is one of the youngest regions in the world, and according to a UNESCO report from 2023, “young people in the Middle East and North Africa region now get their information from YouTube, Instagram and Facebook.”
According to the OECD, more than half the population (55%) of the Middle East and North Africa is under 30, and nearly two-thirds rely on social media for news.
Dozens of Instagram users, who asked to keep their identities private, have told CNBC that posts or stories featuring ground footage of the war in Gaza, or social commentary by Palestinian or pro-Palestinian voices, received less engagement than their other posts unrelated to the war.
Those same users said such posts often take longer to reach followers, or are sometimes skipped in a sequence of stories. They also told CNBC that some posts were deleted by Instagram, and that they were informed the posts failed to follow “community guidelines.”
One Instagram user told CNBC that the alleged “shadow banning” on their account and others in their network did not begin on Oct. 7, saying they had seen their content limited during earlier flare-ups of violence between Israelis and Palestinians, namely the forced removal of families from the East Jerusalem neighborhood of Sheikh Jarrah in 2021. CNBC has not independently verified these claims.
Meta also rolled out a “fact-checking” function on Instagram in December of last year, fueling speculation that the social media site was censoring certain content.
A Human Rights Watch report on Meta’s alleged censorship, published in December 2023, found that “the parent company of Facebook and Instagram has a well-documented record of overbroad crackdowns on content related to Palestine.”
The report added: “Meta’s policies and practices have been silencing voices in support of Palestine and Palestinian human rights on Instagram and Facebook in a wave of heightened censorship of social media.”
The report documented more than 1,000 “takedowns” of content on Instagram and Facebook across over 60 countries between October and November 2023.
A Meta spokesperson told CNBC the HRW report “ignores the realities of enforcing our policies globally during a fast-moving, highly polarized and intense conflict, which has led to an increase in content being reported to us.”
“Our policies are designed to give everyone a voice while at the same time keeping our platforms safe. We readily acknowledge we make errors that can be frustrating for people, but the implication that we deliberately and systemically suppress a particular voice is false.”
Speaking more broadly, the Meta spokesperson told CNBC that “Instagram is not intentionally limiting people’s stories reach,” and that it does “not hide/deprioritize posts from a user’s followers based on whether a hashtag tagged to the post is blocked.”
In addition, Meta uses “technology and human review teams to detect and review content that may go against our Community Guidelines. In instances where we recognize that a decision has been inaccurate, we will restore the content.”
Meta also told CNBC that “given the higher volumes of content being reported to us, we know content that doesn’t violate our policies may be removed in error.”