As artificial intelligence technologies advance at an unprecedented pace, deepfake technology has emerged as a dangerous tool for spreading misinformation, particularly in the medical field. Scammers use sophisticated manipulation of the images and voices of renowned doctors and medical experts to produce convincing videos. These fabricated materials are used to promote unverified products, jeopardizing patients' health and public trust in the healthcare system.
Health Fraud: How Trust in Doctors Is Exploited
In November, a fake video surfaced on Facebook featuring Professor Jonathan Shaw, deputy director of clinical and population health at Melbourne's Baker Heart and Diabetes Institute, endorsing a dietary supplement called Glyco Balance.
The fraudulent video exploited the professor's credibility to spread false claims about the product's ability to treat diabetes. This deception led some patients to abandon scientifically proven medications in favor of the unverified supplement.
One victim, Michael, 79, spent $340 on the product, believing it was a safe alternative. He later discovered that the supplement had never undergone scientific evaluation and realized he had fallen victim to a scam that destabilized his health.
Australia's Therapeutic Goods Administration confirmed that Glyco Balance is not listed in the Australian Register of Therapeutic Goods and has not been evaluated for safety or efficacy. Health authorities have warned against replacing approved medications with unverified supplements and urged the public to verify medical information online, especially as digital platforms increasingly serve as a primary source of health information.
Deepfake Deception: A Growing Global Threat to Medical Integrity
An investigation published by the British Medical Journal in July 2024 exposed the use of deepfake technology to produce fake videos featuring prominent U.K. doctors endorsing dubious products.
One target of this scheme was Dr. Hilary Jones, a well-known media doctor, who featured in a fake video promoting a drug for high blood pressure during what was made to look like a segment on the "Lorraine" show.
Dr. Jones denied the video’s authenticity, emphasizing that numerous products—such as “cannabis gummies” and treatments for diabetes and hypertension—had exploited his name without his consent. He was not the only victim; other doctors, including the late Michael Mosley, were similarly targeted.
Retired physician Dr. John Cormack commented that fake videos have become a favored tool for medical fraudsters, noting that they are far cheaper to produce than genuine products are to develop. “Rather than investing in research and development, scammers resort to creating highly convincing fake videos to deceive the public,” he said.
Weak Technical and Regulatory Response to Deepfake Challenges
Despite the serious risks posed by deepfake technology, efforts to combat it have not kept pace with how quickly such content spreads. For instance, it took Meta nine days to remove the fake video attributed to Professor Jonathan Shaw. Separately, as part of a pilot program, the company has removed more than 8,000 fraudulent ads.
In February, Sir Nick Clegg, Meta’s president of global affairs, acknowledged the significant challenges the company faces in combating fake media, especially after prominent figures such as Joe Biden and Rishi Sunak were targeted by AI-generated content. Clegg said Meta is developing new tools to automatically detect AI-modified images, but the company faces substantial hurdles in applying these tools to videos and audio recordings.
He added that Meta has begun using identifiers to label AI-generated images, but this technology has yet to be implemented for videos or audio. Clegg emphasized that Meta is collaborating with other companies to develop more efficient technical solutions but faces numerous technical and commercial obstacles, particularly given the substantial revenue generated by fraudulent advertising content.
Health and Psychological Impacts of Medical Fraud
Medical fraud through deepfake technology not only causes direct harm to health but also erodes trust in the healthcare system. When patients realize they’ve been scammed, their reliance on credible medical advice diminishes, putting their health at greater risk.
Health experts emphasize that the issue extends beyond individual victims and affects the entire system. Dr. John Cormack, a retired physician and contributor to the British Medical Journal, said, “Deepfake technology presents a new challenge for doctors and healthcare institutions, as it has become alarmingly easy to falsify their statements and exploit their credibility for financial gain.”
How To Protect Patients From Digital Fraud
- Verify Information Sources: Cross-check content with trusted sources to confirm its authenticity.
- Examine Details: Look for discrepancies in videos, such as lip movements that do not align with the audio.
- Consult Healthcare Professionals: If a video features a well-known doctor, contact them directly to verify the content.
- Report Fake Content: Use the reporting tools available on social media platforms to flag fraudulent material.
By adopting these measures, both patients and healthcare providers can take proactive steps to mitigate the risks posed by digital fraud and safeguard trust in the medical community.