The debate over artificial intelligence and its potential use in financial crime intensified in China after a fraudster used "deepfake" technology to trick a victim out of 4.3 million yuan (about $622,000).
“Deepfake” Fraud in China: Over $600,000 Lost in a Scam
According to the authorities in Baotou, a city in Inner Mongolia, the fraudster used AI-powered face-swapping technology to impersonate the victim's friend during a video call, convincing him to transfer the 4.3 million yuan.
On April 20, a man named Guo, the legal representative of a technology company in Fuzhou, Fujian province, was targeted by the scam. He received a video call via WeChat from an individual claiming to be his friend and asking for help.
The imposter "friend" explained that he was participating in a bidding process in another city and requested to use Guo's company's account to submit a bid of 4.3 million yuan. The imposter assured Guo that he would repay the amount promptly.
As supposed proof of payment, the fraudster sent Guo a bank account number and a screenshot of a purported bank transfer voucher indicating that the funds had already been deposited into Guo's company account. Guo then transferred the requested 4.3 million yuan to the provided account in two separate payments.
After completing the transfers, Guo became suspicious and decided to contact his actual friend for verification. To his surprise, Guo's friend denied making a video call to Guo or asking for any money to be transferred.
Realizing that he had fallen victim to a fraudster, Guo immediately contacted the Fuzhou police. Upon investigating, the authorities discovered that the fraudulent account was held at Mengshang Bank in Baotou. Consequently, the Fuzhou police promptly reached out to their counterparts in Baotou to intervene and halt the payment.
“The person on the other side didn't ask me to lend him money during the chat. He said that he would transfer the money first, and then what I needed to do was transfer his money to his company's account,” Guo said.
“He chatted with me via video call, and I also confirmed his face and voice in the video. That's why we let our guard down,” he added.
China's Police Warn of AI Fraud
The public security bureau of Baotou, Inner Mongolia autonomous region, is taking measures to raise public awareness about a form of fraud that relies on artificial intelligence to mimic faces and voices.
In light of this new form of AI fraud, the police have issued a warning to the public. Individuals are strongly advised to be cautious about sharing personal biometric data such as facial images and fingerprints, and to avoid readily disclosing identification and banking information.
Furthermore, the authorities emphasize the importance of verifying the other party's identity through multiple communication channels, such as a separate phone call, before proceeding with any fund transfer. The police also urge people to report suspected fraudulent activity promptly, reminding the public to remain vigilant and take the necessary precautions against such scams.
China's AI Debate Intensifies with the Rise of “Deepfake” Fraud
The use of "deepfake" technology in a financial scam in northern China has raised concerns about the potential role of artificial intelligence (AI) methods in facilitating such illicit activities.
As reported by Reuters, the incident triggered a significant discussion on the microblogging platform Weibo regarding the risks posed to online privacy and security. The hashtag "#AI scams are exploding across the country" gained considerable attention, accumulating over 120 million views on Monday.
The discussions on Weibo highlighted the alarming realization that scammers can exploit various forms of media, including photos, voices, and videos, to carry out fraudulent activities. One user expressed concern over whether information security regulations can effectively keep pace with the techniques employed by these individuals.
The widespread engagement on Weibo reflects growing awareness of the threats posed by AI-driven scams and raises important questions about whether existing security measures can keep up with evolving fraud techniques.
Earlier this month, Chinese authorities took their first enforcement action under a groundbreaking artificial intelligence law, arresting a man accused of utilizing ChatGPT to create a fabricated news article about a train crash.