Experts Concerned About the Malicious Use of AI

Ouissal Harize
Technology
21st July 2023
Hackers can use AI to make their phishing more compelling (Getty)

In a rapidly evolving digital landscape, artificial intelligence (AI) has emerged as a powerful tool, revolutionizing industries and everyday life. But as with every technological advance, the benefits come with risks. According to Sami Khoury, head of the Canadian Centre for Cyber Security, cybercriminals have already adopted AI for malicious activities such as hacking, phishing, and spreading disinformation online. The revelation adds urgency to growing concerns about AI's potential misuse by rogue actors.

The Infiltration of AI in Cybercriminal Activities

As reported by Reuters, Khoury highlighted several ways AI is being used to make cyberattacks more effective. One significant area is phishing: AI enables cybercriminals to compose personalized, convincing messages that deceive recipients into revealing sensitive information or clicking on malicious links. By leveraging AI, hackers can make their phishing attempts more targeted and sophisticated, increasing their chances of success.

AI is also being employed to develop malicious code, paving the way for more potent and evasive cyber threats. Because AI models advance so quickly, security experts struggle to stay ahead of the threats they enable, and each new capability hands cybercriminals more sophisticated methods of exploiting vulnerabilities.

The Dark Side of Language Processing Models

Large language models (LLMs) are a prime example of AI technology being turned to cybercriminal ends. LLMs such as OpenAI's ChatGPT, trained on vast amounts of text data, can produce highly realistic dialogue and documents, making them potential tools for convincingly impersonating individuals or organizations, even when the attacker has little command of English.

Several cyber watchdog groups have issued warnings about AI-generated content, cautioning that LLMs could enable cyberattacks beyond criminals' current capabilities. Britain's National Cyber Security Centre has expressed concern about the potential misuse of LLMs, and the European police agency Europol has published a report highlighting the impersonation risks these models pose.

AI for Malicious Intent

The malicious use of AI is no longer merely theoretical. According to Reuters, a former hacker recently discovered an LLM that had been trained on malicious content and challenged it to draft a persuasive email designed to trick someone into making a cash transfer. The model produced a compelling three-paragraph message urgently requesting the target's help with an "important" payment to be made within 24 hours, demonstrating how AI-generated content could fool even cautious individuals.

The Ongoing Battle Against AI-Powered Cyber Threats

While the use of AI to craft malicious code is still in its early stages, experts are deeply concerned by how quickly the technology is advancing. That pace makes it increasingly difficult to gauge the full extent of AI's malicious potential before cybercriminals deploy it, obliging cybersecurity professionals to adapt constantly and develop innovative defensive strategies.

Misbar’s Sources:

Reuters

Verdict

Medium
