AFTER A SERIES of Israeli airstrikes against the densely populated Gaza Strip earlier this month, Palestinian Facebook and Instagram users protested the abrupt deletion of posts documenting the resulting death and destruction. It wasn’t the first time Palestinian users of the two giant social media platforms, which are both owned by parent company Meta, had complained about their posts being unduly removed. It’s become a pattern: Palestinians post sometimes graphic videos and images of Israeli attacks, and Meta swiftly removes the content, providing only an oblique reference to a violation of the company’s “Community Standards” or, in many cases, no explanation at all.

Not all the billions of users on Meta’s platforms, however, run into these issues when documenting the bombing of their neighborhoods.

Previously unreported policy language obtained by The Intercept shows that this year the company repeatedly instructed moderators to deviate from standard procedure and treat graphic imagery from the Russia-Ukraine war with a light touch. Like other American internet companies, Meta responded to the invasion by rapidly enacting a litany of new policy carveouts designed to broaden and protect the online speech of Ukrainians, specifically allowing their graphic images of civilians killed by the Russian military to remain up on Instagram and Facebook.

No such carveouts were ever made for Palestinian victims of Israeli state violence — nor do the materials show such latitude provided for any other suffering population.

“This is deliberate censorship of human rights documentation and the Palestinian narrative.”

“This is deliberate censorship of human rights documentation and the Palestinian narrative,” said Mona Shtaya, an adviser with 7amleh, the Arab Center for the Advancement of Social Media, a civil society group that formally collaborates with Meta on speech issues. During the recent Israeli attacks on Gaza, between August 5 and August 15, 7amleh tallied nearly 90 content deletions or account suspensions on Meta platforms related to the bombings, noting that reports of censored content are still coming in.

Marwa Fatafta, Middle East North Africa policy manager for Access Now, an international digital rights group, said, “Their censorship works almost like clockwork — whenever violence escalates on the ground, their takedown of Palestinian content soars.”

Instances of censored Palestinian content reviewed by The Intercept include the August 5 removal of a post mourning the death of Alaa Qaddoum, a 5-year-old Palestinian girl killed in an Israeli missile strike, as well as an Instagram video showing Gazans pulling bodies from beneath rubble. Both posts were removed with a notice claiming that the imagery “goes against our guidelines on violence or dangerous organizations” — a reference to Meta’s company policy against violent content or information related to its vast roster of banned people and groups.

Meta spokesperson Erica Sackin told The Intercept that these two posts were removed under the company’s Dangerous Individuals and Organizations policy, which censors content promoting federally designated terrorist groups. Sackin did not respond to a follow-up question about how an image of a 5-year-old girl and a man buried in rubble promoted terrorism.

Palestinians in Gaza who post about Israeli assaults said their posts don’t contain political messages or indicate any affiliation with terror groups. “I’m just posting pure news about what’s happening,” said Issam Adwan, a Gaza-based freelance journalist. “I’m not even using a very biased Palestinian news language: I’m describing the Israeli planes as Israeli planes, I’m not saying that I’m a supporter of Hamas or things like these.”

RIGHTS ADVOCATES TOLD The Intercept that the exemptions made for the Russia-Ukraine war are the latest example of a double standard between Meta’s treatment of Western markets and the rest of the world: evidence, they said, that Meta has given the Ukrainian cause special treatment since the beginning of the war, much as Western media coverage of the conflict has more broadly.

Though the majority of users on social platforms owned by Meta live outside the United States, critics charge that the company’s censorship policies, which affect billions worldwide, tidily align with U.S. foreign policy interests. Rights advocates emphasized the political nature of these moderation decisions. “Meta was capable to take very strict measures to protect Ukrainians amid the Russian invasion because it had the political will,” said Shtaya, “but we Palestinians haven’t witnessed anything of these measures.”

Meta’s public-facing Community Standards rulebook says: “We remove content that glorifies violence or celebrates the suffering or humiliation of others because it may create an environment that discourages participation” — noting a vague exception for “graphic content (with some limitations) to help people raise awareness about these issues.” The Violent and Graphic Content policy places a blanket ban on gruesome videos of dead bodies and restricts the viewing of similar still images to adults 18 years and older.

In an expanded, internal version of the Community Standards guide obtained by The Intercept, the section dealing with graphic content includes a series of policy memos directing moderators to deviate from the standard rules or bring added scrutiny to bear on specific breaking news events. A review of these breaking news exceptions shows that Meta directed moderators to make sure that graphic imagery of Ukrainian civilians killed in Russian attacks was not deleted on seven different occasions, beginning at the immediate onset of the invasion. The whitelisted content depicts acts of state violence akin to those routinely censored when carried out by the Israeli military, including multiple specific references to airstrikes.

According to the internal material, Meta began instructing its moderators to deviate from standard practices to preserve documentation of the Russian invasion the day after it began. A policy update on February 25 instructed moderators not to delete video of some of the war’s earliest civilian casualties. “This video shows the aftermath of airstrikes on the city of Uman, Ukraine,” the memo reads. “At 0.5 seconds, innards are visible. We are making an allowance to MAD this video” — a reference to the company practice “Mark As Disturbing,” or attaching a warning to an image or video rather than deleting it outright.

“It’s always been about geopolitics and profit for Meta.”

On March 5, moderators were told to “MAD Video Briefly Depicting Mutilated Persons Following Air Strikes in Chernigov” — another instruction to deviate from standard speech rules. “Though video depicting dismembered persons outside of a medical setting is prohibited by our Violent & Graphic Content policy,” the memo says, “the footage of the individuals is brief and appears to be in an awareness raising context posted by survivors of the rocket attack.”

The graphic violence exceptions are just a few of the many ways Meta has quickly adjusted its moderation practices to accommodate the Ukrainian resistance. At the outset of the invasion, the company took the rare step of lifting speech restrictions around the Azov Battalion, a neo-Nazi unit of the Ukrainian military previously banned under the company’s Dangerous Individuals and Organizations policy. In March, Reuters reported that Meta temporarily permitted users to explicitly call for the death of Russian soldiers, speech that would also normally violate the company’s rules.

Rights advocates emphasized that their grievance is not with the added protections for Ukrainians but with the absence of similar special steps to shield besieged civilians from Meta’s erratic censorship apparatus nearly everywhere else in the world.

“Human rights is not a cherry-picking exercise,” said Fatafta. “It’s good they have taken such important measures for Ukraine, but their failure to do so for Palestine emphasizes further their discriminatory approach to content moderation. It’s always been about geopolitics and profit for Meta.”

HOW EXACTLY META decides which posts are celebrating gruesome wartime death and which are raising awareness of it is never explained in the company’s public overview of its speech rules or the internal material reviewed by The Intercept.

A January 2022 blog post from Meta notes that the company uses a “balancing test that weighs the public interest against the risk of harm” for content that would normally violate company rules, but it provides no information about what that test actually entails or who conducts it. Whether an attempt to document atrocities or mourn a neighbor killed in an airstrike is deemed glorification or in the public interest is left to the subjective judgment calls of Meta’s overworked and sometimes traumatized content contractors, who are tasked with making hundreds of such decisions every day.

Few would dispute that the images from Ukraine described in the Meta policy updates — documenting the Russian invasion — are newsworthy, but the documents obtained by The Intercept show that Meta’s whitelisting of material sympathetic to Ukraine has extended even to graphic state propaganda.

The internal materials show that Meta has on multiple occasions whitelisted Ukrainian state propaganda videos that highlight Russian violence against civilians, including the emotionally charged “Close the Sky” film Ukrainian President Volodymyr Zelenskyy presented to Congress in March. “Though the video depicting mutilated humans outside of a medical setting is prohibited by VGC policy the footage shared is in an awareness-raising context posted by the President of Ukraine,” said a March 24 update distributed to moderators.

On May 13, moderators were told not to delete a video posted by the Ukrainian Defense Ministry that included graphic depictions of burnt corpses. “The video very briefly depicts an unidentified charred body lying on the floor,” the update says. “Though video depicting charred or burning people is prohibited by our Violent & Graphic Content policy … the footage is brief and qualifies for a newsworthy exception as per OCP’s guidelines, as it documents an on-going armed conflict.”

“Meta is replicating online some of the same power imbalances and rights abuses we see in the real world.”

The internal materials reviewed by The Intercept show no such interventions for Palestinians — no whitelisting of propaganda designed to raise sympathies for civilians or directives to use warnings instead of removing content depicting harm to civilians.

Critics pointed to the disparity to question why online speech about war crimes and human rights offenses committed against Europeans seems to warrant special protections while speech about abuses committed against others does not.

“Meta should respect the right for people to speak out, whether in Ukraine or Palestine,” said Omar Shakir, Israel and Palestine director at Human Rights Watch. “By silencing many people arbitrarily and without explanation, Meta is replicating online some of the same power imbalances and rights abuses we see in the real world.”

While Meta has declined to extend such latitude to graphic content posted by Palestinian civilians, it has intervened to keep imagery of the Israeli-Palestinian conflict online when doing so favored the occupying Israeli military. In one instance, Meta took steps to ensure that a video depicting an attack against a member of the Israeli security forces in the occupied West Bank stayed up: “An Israeli Border Police officer was struck and lightly wounded by a Molotov cocktail during clashes with Palestinians in Hebron,” an undated memo distributed to moderators reads. “We are making an exception for this particular content to Mark this video as Disturbing.”