The American news outlet The Intercept reported that Facebook had approved for publication on its platform a series of advertisements inciting murder and violence against Palestinians.
The report underlined that these advertisements were presented in both Arabic and Hebrew, flagrantly violating the policies of Facebook and its parent company, Meta. Some of the advertisements contained explicit calls for violence, advocating a “holocaust for the Palestinians” and the elimination of “the women, children and the elderly of Gaza.” Others described Gaza's children as “future terrorists.”
Experimental Advertisements To Test Facebook Content Moderation
The “7amleh” group, affiliated with the Arab Center for Social Media Development, created experimental advertisements on Facebook and shared them with The Intercept to expose the issue. The advertisements bypassed the platform's moderation, and Facebook approved them for publication.
“Approving these advertisements is the latest in a series of failures by Meta towards the Palestinian people. Throughout this crisis, we witnessed an ongoing pattern of clear bias and discrimination against Palestinians,” said Nadeem Al-Nashif, the founder of the “7amleh” group.
The idea of “7amleh” to test Facebook's machine learning moderation system emerged last month, when Al-Nashif discovered a Facebook advertisement explicitly calling for the assassination of the American activist Paul Larudee, one of the founders of the Free Gaza Movement. "It is time to assassinate Paul Larudee, this anti-Semitic terrorist and human rights defender from the United States," Facebook's automatic translation of the text read. The advertisement was reported to Facebook and subsequently removed.
An Israeli Right-Wing Group Published Inciting Advertisements
The advertisement was published by “Ad Kan,” an Israeli right-wing group founded by former Israeli army and intelligence officers to combat “anti-Israel organizations,” which it alleges are financed by antisemitic sources.
Calling for the assassination of a political activist is considered a violation of Facebook's advertising rules. The appearance of Ad Kan's sponsored post on the platform indicates that Facebook approved it, despite those rules. The advertisement likely passed through Facebook's automated filtering, which is based on machine learning, facilitating the rapid execution of its global advertising operations.
Meta Uses Algorithmic Moderation To Remove Arabic Posts
Facebook claims it has implemented a system designed to review all advertisements before publication. The company has increasingly relied on automated text-scanning programs to enforce its speech rules and moderation policies following criticism of its human review system, which relied primarily on the labor of external contractors.
According to The Intercept, these techniques allow the company to avoid business issues associated with human moderators while also obscuring the decision-making process behind secretive algorithms.
An external review commissioned by Meta last year revealed that although the company frequently utilized algorithmic censorship to remove posts in Arabic, it lacked a similar algorithm to identify “hostile speech," such as racist content and violent incitement. Following the audit, Meta asserted that "it had 'introduced a classification tool for offensive Hebrew speech' aimed at proactively detecting the most offensive Hebrew content."
Incitement to Violence on Facebook
As the Israeli war on the Palestinians in Gaza continues, Al-Nashif said he was disturbed by the explicit call in the advertisement to assassinate Larudee. He worried that similar paid advertisements might contribute to violence against Palestinians.
The Intercept points out that incitement to violence moving from social media platforms into the real world is not merely hypothetical. In 2018, United Nations investigators found that violent incitement posts on Facebook played a decisive role in the Rohingya genocide in Myanmar.
Although the post regarding Paul Larudee was quickly removed, that did not explain how the advertisement had been approved in the first place. In light of Facebook's assurances about its safeguards, Al-Nashif and “7amleh,” who officially cooperate with Meta on issues of moderation and freedom of expression, were puzzled.
“We learned from what happened to the Rohingya in Myanmar that Meta had a track record of not doing enough to protect marginalized communities, and that its advertising management system was particularly weak,” Al-Nashif said.
Meta Fails the “7amleh” Test
The company's Community Standards Guidelines, which advertisements are supposed to comply with in order to be approved, prohibit not only text that calls for violence, but also any statements that dehumanize people based on race, religion, or nationality. However, confirmation emails shared with The Intercept reveal that Facebook approved each advertisement.
Although “7amleh” informed The Intercept that the organization had no intention of actually running these advertisements and would have removed them before they were scheduled to appear, it believes the approvals demonstrate that the platform's handling of non-English speech is fundamentally flawed, despite those languages being used by large numbers of its users.
The “7amleh” center also tested the same advertisements in Arabic, and as with the Hebrew versions, all of them were approved. They resembled posts from a Facebook profile called “Migrate Now,” which runs advertisements calling on “Arabs in Judea and Samaria” to emigrate to Jordan “before it is too late.” “7amleh” considered this coded language and a clear use of intimidation that should have no place on Meta's platforms, and noted that “Meta should not benefit financially from groups that run these types of hate advertisements.”
Facebook spokeswoman Erin McPike confirmed that the advertisements were approved by mistake, saying in statements reported by The Intercept, “Despite our ongoing investments, we know that there will be examples of things we miss or we take down in error, as both machines and people make mistakes.” She added, “This is why advertisements can be reviewed multiple times, including once they are published.”
Approving These Advertisements Underlines a General Problem
According to Facebook documents, automated software-based screening is the “primary method” used to approve or reject advertisements. However, it remains unclear whether the "hostile speech" algorithms used to detect violent or racist posts are also used in the advertisement approval process.
In its official response to last year's audit, Facebook stated that "the new classification tool for Hebrew will significantly improve its ability to handle significant increases in abusive content, such as related to outbreaks of conflict between Israel and Palestine."
However, based on the experience of “7amleh," this classifier either does not work well, or for some reason, is not used to screen advertisements.
Regardless, the fact that these advertisements were approved points to a general problem: Meta claims it can effectively use machine learning to deter explicit incitement to violence, when it clearly cannot, Al-Nashif said.
Al-Nashif added, “We know that Meta’s Hebrew classifiers are not working effectively, and we have not seen the company respond to almost any of our concerns.” He further explained, “Because of this inaction, we feel that Meta may bear at least partial responsibility for some of the harm and violence that Palestinians are suffering from in the real world."
Arab activists on Facebook and Instagram have long faced the blocking or deletion of their pro-Palestinian posts, a pattern that intensified during the recent war on the Gaza Strip.