Following the announcement of the arrest of Pavel Durov, founder of the Telegram messaging application, at a French airport, and the suspension of the X platform in Brazil, global opinion has become sharply polarized. Coverage of the first event emphasized freedom of expression, while coverage of the second focused on national security concerns. To understand the rationale behind these actions, it is essential to revisit the background of the conflict between governments and major social media companies.
The story is rooted in Donald Trump's 2015 declaration of his presidential candidacy and his campaign's connection with Cambridge Analytica, a political consulting firm that played a significant role in his 2016 run. The firm built its services on social media data, collecting and leveraging this information to influence public opinion in various ways, including in support of Trump's campaign.
Cambridge Analytica Used Facebook Data to Influence American Electorate
In 2014, Cambridge Analytica collaborated with Alexander Kogan, a Moldovan-born American researcher, who ran an online survey of approximately 270,000 Americans, paying each participant $5.
The company's primary goal was not the survey results themselves but exploiting the Facebook login feature to access sensitive user data, such as Likes and personal information, which was later used for political purposes in the U.S. elections. After gathering the data, Kogan developed a psychometric model that sorted users into hypothetical groups based on personal traits such as their likes, dislikes, triggers, and purchasing habits. Beyond the initial 270,000 participants, Cambridge Analytica also obtained data on the participants' friends, bringing the total to roughly 87 million users, about a quarter of Facebook's users in the United States.
This information allowed the company to target American voters with political content and advertisements specifically designed to sway their opinions, to the benefit of Donald Trump in the presidential election he went on to win.
In 2018, the U.S. Department of Justice and the Federal Bureau of Investigation (FBI) announced an investigation into Cambridge Analytica, marking the first major rift between social media companies and governments.
Mark Zuckerberg, Facebook's CEO, publicly apologized on live television, describing the incident as a breach of trust. In response, Facebook quickly updated its data protection systems to reassure investors and announced that it would begin implementing the European Union's General Data Protection Regulation (GDPR).
Extremist Groups Use Social Media for Propaganda and Recruitment
In 2014, while the data that would later serve Donald Trump's campaign was being harvested, another group was using similar tactics to advance ISIS's agenda. The emergence of ISIS triggered unprecedented shock, and its propaganda campaigns spread rapidly across platforms like Facebook, YouTube, and Twitter within hours before being removed. Despite the platforms' efforts to delete violent content and ISIS symbols, the campaigns' impact was significant: they effectively promoted the idea that life in the so-called Islamic State was preferable to life in the West, targeting European families in particular.
Social Media Platforms Stepped Up Content Removal in 2016
In 2016, Twitter (now X) announced the suspension of 325,000 accounts promoting ISIS content. Despite these measures, ISIS turned to the Telegram messaging application as a secure alternative and relied heavily on it for global communication and propaganda.
ISIS chose Telegram because the application offers encrypted communication, takes a near-absolute stance on free speech, and maintains a strict user data protection policy. It also features a self-destruct option that deletes messages automatically, making it extremely difficult for Telegram or any external party to access users' conversations.
As a result, the application gained significant popularity in countries where freedom of expression is restricted, such as Russia and Iran. It also became a refuge for illegal activity, including drug trafficking, money laundering, and more.
Numerous Examples of Criminal Activity on Telegram
Telegram has become an attractive platform for criminal activity because of its privacy features. It supports groups of up to 200,000 members and file uploads of up to 2 gigabytes, features that allow individuals or organizations to shape public opinion on a scale comparable to mainstream media.
Telegram faced intense criticism following the 2015 terrorist attacks in France and the 2016 attacks in Brussels, as ISIS used the platform to share videos of the operations. The criticism intensified in 2017, when it emerged that Telegram was the primary communication tool for ISIS-inspired "lone wolf" attacks in Europe. Further criticism arose after a video containing instructions for a terrorist attack in Spain was posted on the platform that year.
Questions have been raised about the platform's responsibility, though some argue that, without Telegram, attackers would simply have turned to burner phones. Unlike mobile networks, which retain user data and respond to government requests with tools for tracking threats, Telegram operates with minimal oversight. Facebook, for instance, employs some 15,000 content moderators and uses artificial intelligence to monitor content, while Telegram has fewer than 100 employees.
Even so, Pavel Durov, Telegram's founder, has occasionally responded to government requests, shutting down accounts linked to Hamas at the request of the Israeli government and, more recently, other accounts at the request of the British government.
The second significant rift between social media platforms and governments occurred in the United Kingdom in 2016. Widespread public exposure to misinformation about Islam and refugees on Twitter influenced political opinions in rural areas and contributed to support for Brexit.
The European Union and the Rules on Disinformation
In 2018, the European Union introduced its Code of Practice on Disinformation, setting standards for combating misleading information in recognition of its significant impact on cohesion and security in Europe.
The next major rift between social media platforms and governments came with the outbreak of the COVID-19 pandemic in late 2019 and 2020. The scale of the global threat became evident as more than 100 million posts fed a massive wave of misinformation, much of which appeared to originate from Russian campaigns and "electronic armies," including automated accounts known as "bots." The challenge for researchers and authorities then became determining whether a given account was a bot.
Detecting Fake Accounts: Essential Tools for Identification
A significant obstacle arose from changes the platforms themselves made. Twitter, for instance, once allowed third parties broad access to its data, which researchers used to detect fake accounts. After Elon Musk acquired the platform, one of his first steps was to prioritize profit, sharply restricting third-party access to Twitter data. This affected research institutions and companies behind tools like Botometer, which were used to assess whether an account was genuine or fake.
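The kind of signal such tools rely on can be illustrated with a minimal sketch. The Python example below is purely hypothetical and is not Botometer's actual method; the AccountSnapshot fields, thresholds, and weights are assumptions chosen for illustration only.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AccountSnapshot:
    """Minimal public-profile fields an observer might collect for an account (illustrative)."""
    created_at: datetime
    followers: int
    following: int
    posts: int
    has_default_avatar: bool

def bot_likelihood(account: AccountSnapshot, now: datetime | None = None) -> float:
    """Return a rough 0-1 score from simple heuristic signals.

    Illustrative only: real systems combine many more content, timing,
    and network features with trained classifiers.
    """
    now = now or datetime.now(timezone.utc)
    age_days = max((now - account.created_at).days, 1)

    score = 0.0

    # Very young accounts posting at high volume are a common bot signal.
    posts_per_day = account.posts / age_days
    if posts_per_day > 50:
        score += 0.35
    elif posts_per_day > 20:
        score += 0.2

    # Following far more accounts than follow back suggests automated follow spam.
    if account.following > 10 * max(account.followers, 1):
        score += 0.25

    # Accounts created within the last month weigh more heavily.
    if age_days < 30:
        score += 0.2

    # Never customizing the profile picture is a weak but cheap signal.
    if account.has_default_avatar:
        score += 0.2

    return min(score, 1.0)

if __name__ == "__main__":
    suspect = AccountSnapshot(
        created_at=datetime(2024, 8, 1, tzinfo=timezone.utc),
        followers=12,
        following=4800,
        posts=9500,
        has_default_avatar=True,
    )
    print(f"Heuristic bot score: {bot_likelihood(suspect):.2f}")
```

Heuristics like these are only a starting point, which is why losing access to platform data mattered so much to the researchers who built and validated more sophisticated detection models.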
Russian Strategy for Spreading Misinformation
Scrutiny has intensified amid growing reports of foreign misinformation campaigns. Several investigations have found that many misleading narratives about COVID-19 originated in Russia, and other reports indicate that Russia created thousands of social media accounts impersonating Americans to promote pro-Moscow policies and spread fabricated articles and false information.
It is noteworthy that Russia, the largest country in the world by land area, is home to over 190 ethnic groups speaking more than 100 languages. Within this diverse context, Russian intelligence relies heavily on policies focused on security and information control.
Sources suggest that this policy aims to maintain national stability by monitoring internal opposition and promoting nationalist narratives that unify the population. Simultaneously, Russia spreads narratives internationally to influence global public opinion, enhance its geopolitical position, and highlight the failures of Western systems.
From a Western perspective, Russia's efforts are seen as global propaganda aimed at supporting its ruling regime, consolidating state control over information, and directing it to serve its interests. With rising tensions between Russia and the West over the Ukraine conflict, Russia’s strategy, from a Western viewpoint, aims to destabilize Western societies by targeting shared values and experiences that form the social fabric.
Through social media, Russia has developed methods to target Western communities by creating accounts that impersonate Western citizens and promote conspiracy theories and misinformation that undermine trust in Western systems. According to Western sources, this strategy seeks to create internal divisions and foster suspicion, potentially leading some individuals to adopt views contrary to their national interests.
European Digital Services Act (DSA) to Combat Misinformation
In 2022, the European Union enacted the Digital Services Act (DSA) to regulate digital services, focusing on consumer protection and rights. The DSA enhances safeguards against illegal content and misinformation, boosts transparency in digital advertising and algorithms, and strengthens user control over online experiences.
Elon Musk’s Impact on X Platform
Recent reports indicate that Elon Musk's acquisition of Twitter and his focus on increasing profits have led to the removal of many content restrictions and a reduction in moderation. Musk has reinstated accounts previously banned for spreading misleading or inflammatory content, and the deep cuts to Twitter's staff have left the platform open to manipulation by a range of actors, including global intelligence agencies deploying electronic armies to influence public opinion.
Despite Musk’s advocacy for absolute free speech, recent reports accuse him of double standards in his management of X. This includes actions taken against journalists who reported on him, such as a tech reporter from The New York Times and Taylor Lorenz of The Washington Post.
U.K. Disruptions and Violence Due to Misinformation Campaigns
Recent riots and disturbances erupted in several British cities, nearly a month after the European Union announced that X (formerly Twitter) was not in compliance with the Digital Services Act. These disturbances stemmed from a large-scale misinformation campaign on X targeting refugees and Muslim communities in the U.K.
The misinformation campaign heightened tensions and led to attacks on Islamic religious sites and refugee centers in Britain. These incidents underscore the risks associated with misinformation on social media, particularly amid reduced content moderation and delays in implementing effective measures to curb the spread of inflammatory content.
Regulating Social Media’s Political Impact
In the United States, both Donald Trump and Elon Musk have faced significant scrutiny for spreading misleading information and images doctored with deepfake technology depicting Vice President Kamala Harris, who is also a candidate in the current election.
Additionally, false information about Algerian boxer Imane Khelif has circulated. These incidents have sparked debate over the role of influential figures and social media platforms in spreading misinformation and its impact on public opinion.
They also underscore the challenges regulators face in addressing deepfake technology, which is increasingly used to disseminate false information and influence public opinion, complicating efforts to combat fake news in the digital age.
After Founder’s Arrest, Telegram Users Shift To Other Platforms
Less than a month after the riots in the U.K., French authorities arrested Pavel Durov, founder of Telegram. An EU spokesperson clarified that the European Union was not involved in the arrest, noting that Telegram's active user base in the EU falls below the 45 million threshold at which the bloc's strictest digital platform obligations apply. The French president also denied any political motive behind the arrest, emphasizing that the matter will be decided by the courts.
Durov's arrest caused a sharp drop in the value of the cryptocurrency associated with Telegram, reflecting diminished confidence in the platform. Following the arrest, the global hacking group Lapsus announced that its members would move to the Jabber platform, citing concerns that Durov might cooperate with French authorities by handing over data on Telegram users subject to arrest warrants in exchange for his own freedom.
Influencers and Public Figures Support Telegram Founder
Amid these rapidly evolving events, Durov's arrest, and recurring legal troubles, pressure has mounted on Elon Musk, owner of X. Musk recognizes that what happened to Durov could eventually extend to him, particularly given the volume of misinformation disseminated through his platform and the latitude X affords electronic armies.
Durov's arrest quickly became a global issue tied to freedom of expression. Musk tweeted, "#FreePavel." Meanwhile, right-wing influencer Andrew Tate used the situation to promote his own views. Durov's arrest in France was not an isolated incident but reflects a growing global trend toward regulating social media platforms in line with national security interests.
Recent Developments in Social Media Platforms’ Confrontations With Governments
- Recently, the European Commission determined that X does not comply with the Digital Services Act.
- In the U.K., the government issued a warning to Elon Musk following recent incidents of violence and tensions, as well as Musk’s public criticism of the British Prime Minister and government actions.
- In Brazil, Musk defied a court order to close 100 accounts spreading misinformation, resulting in a ban on X in the country and the loss of access to Brazil’s market, which includes about 45 million users.
- Conversely, X complied with the Indian government's request to close accounts linked to protesters from the farmers' movement opposing Prime Minister Modi's administration.
- X has been banned in Pakistan.
- Venezuelan President Nicolás Maduro has ordered the closure of X in Venezuela.
Global Shift Toward Broader Social Media Regulation
The debate over balancing free speech with controlling misinformation has become a central issue in discussions about social media platforms. While platforms like Telegram offer a venue for free expression under repressive regimes, they face significant challenges in preventing misuse, as highlighted by the recent arrest of Pavel Durov in France.
These events signify a global shift towards stricter regulation of social media, reflecting a trend in international realism that views global politics as a constant struggle for power and interests. Countries are increasingly focusing on controlling the internet and social media as vital components of national security.
As the digital landscape evolves, the pairing of cyber armies with artificial intelligence has given rise to sophisticated automated tools, such as advanced bots, that fuel misinformation and illegal activity. With globalization waning and a multipolar world emerging, nations are intensifying efforts to safeguard their digital security, making social media central to modern geopolitical conflicts.
However, state control and official oversight of social media may not be sufficient to manage misinformation or prevent its use in illegal activities. Additionally, government restrictions can sometimes undermine freedom of expression and democracy.
In response to these challenges, attention is turning to new policies for combating misinformation and regulating social media: expanding the role of civil society organizations, strengthening independent fact-checking platforms, running media literacy campaigns that explain how algorithms work, and training students to identify misinformation. Proposals also call for more balanced laws that reconcile free speech with the need to address misinformation on digital platforms.