
How Has Israel’s Use of AI Impacted the Lives and Narratives of Palestinians?

Misbar's Editorial Team
News
5th March 2024
AI is increasingly used against Palestinians and their narratives (Getty)

On February 22, 7amleh, the Arab Center for the Advancement of Social Media, published a position paper on the 'Impacts of AI Technologies on Palestinian Lives and Narratives.' Below are the main points it contained.

The Impact of AI-Generated Content on Palestinians

The Center underlined in the paper the dangers of using AI to generate misleading or propaganda content for cultural or political expression, which can affect cultural production, freedom of expression, and public opinion.

According to the paper, there is a noticeable pro-Israeli narrative bias in such content due to biases coded into the data sets collected from websites. This results in the reproduction of existing internet biases and their consolidation, spreading them more widely.

As an example, stickers depicting children holding weapons appeared simply by using the words 'Palestinian' or the phrase 'Muslim Palestinian child' to create AI-generated stickers on WhatsApp, a Meta-owned app.

Large Language Models (LLMs), including ChatGPT and Bard, also demonstrated tendencies to show selective, misleading, or censored information when queried about Palestine.

A report by Motherboard, a tech magazine, previously exposed Adobe's involvement in selling fake, AI-generated images of Gaza and Israel.

The Center emphasized that media outlets monitoring anti-Palestinian AI bias are crucial in raising awareness about the dangers of these biases and advocating for the development of ethical AI using diverse and fair data sets.

While some users and activists share AI-generated content to express support for a given side, recipients unfamiliar with artificial intelligence often cannot distinguish fake content from real, and can thus be unwittingly misled. This is compounded by the fact that tools for detecting AI-generated content are still not fully reliable.

The proliferation of AI-generated content could also foster a constant state of distrust, casting doubt on the veracity of authentic images and footage.

According to the classifications of the Business, Human Rights, and Technology Project established by the Office of the High Commissioner for Human Rights, content generated using artificial intelligence can impact Palestinian human rights and narratives. This impact includes causing physical and psychological harm, entrenching stereotypes, negative portrayals, and biases, manipulating public opinion, and restricting freedom of expression.

AI-Enhanced Content Moderation Threatens Palestinian Content

The paper also cautions about the impact of biased data sets on the transparency of AI-enhanced content moderation processes, currently used by platforms such as Meta, LinkedIn, and TikTok. The 7amleh Center documented violations against content supporting the Palestinian cause and observed a trend of excessive moderation of Arabic content, in contrast to the limited moderation of Hebrew content on Meta platforms.

Furthermore, the paper warns that failure to address this discrimination could lead to its expansion, resulting in reduced transparency and accountability of social media companies. This could broaden the suppression of freedom of expression concerning Palestinian narratives, involving partial blocking of reach and deletion of posts and accounts.

The paper highlights the unknown role of AI-enhanced content management technologies in enforcing government laws imposed on social media platforms, such as Israel's anti-terrorism law. This potential usage may transform social media platforms into tools for censoring opposition speech and suppressing freedom of opinion and expression.

AI-enabled algorithms, influenced by potentially inaccurate or biased data, alter the chances of content appearing in recommendation systems. This exposes users to the risk of manipulation and diminishes their ability to access accurate and unbiased information and knowledge.

Impact of AI Tech Use on Surveillance Systems of Palestinians

Artificial intelligence technologies are used to create and develop governmental and non-governmental surveillance systems, often resulting in human rights violations.

The purpose of these technologies is to monitor, track, and trace individuals by collecting as much sensitive data as possible, including facial recognition data, communication history, social media activity, and surveillance camera footage. Israel is one of the pioneers in this field, with one of its most prominent programs being “Pegasus,” spyware capable of remote installation on smartphones. Additionally, facial recognition technologies are used at military checkpoints.

AI Technology Used To Automate the Israeli War on Palestinians

Israel tends to incorporate artificial intelligence technologies into its weaponry, with some now capable of identifying targets and firing autonomously. Deliberately tested on Palestinians, these technologies are then marketed as weapons proven in real-world contexts.

Previously, Israel used artificial intelligence-powered drones for targeting Palestinians in the West Bank, alongside the 'Smart Shooter' self-firing rifle. In its war on Gaza, an automated targeting system was employed, selecting targets based on their perceived involvement in the fighting.

This raises ethical concerns due to the inherent weaknesses of artificial intelligence, casting doubt on its ability to distinguish civilians from combatants. The replacement of human decision-making with artificial intelligence also results in a shift of responsibility from humans to machines, allowing humans to evade accountability.
