Streaming services, including Apple Music, Spotify, Deezer, and Tidal, removed a song that used artificial intelligence to clone the singers' voices. YouTube also said it had removed the hit after receiving a takedown notice.
For its part, Universal Music Group released a statement condemning the “infringing content created with generative AI,” criticizing the song and saying it violated copyright law.
Growing Concerns Over the Impact of AI on the Music Industry
Although the song, “Heart On My Sleeve,” was made with a deepfake voice, it went viral, and listeners liked it before it was pulled from the streaming services.
The content creator, known as Ghostwriter, claimed the song was created by software trained on the musicians' voices.
It was first released by the TikTok user @ghostwriter977 and credited to Ghostwriter on streaming services, where it gained hundreds of thousands of streams. The track was believed to feature AI-generated fake vocals from two superstars: Drake and The Weeknd. The creator confirmed the claim and said the trick was accomplished using artificial intelligence. However, it is still unclear whether the entire song was generated with AI software or just the vocals.
And although the song was a work of AI, many music fans seemed impressed. Billboard reported some comments from the TikTok video before it was pulled from the platform. One fan called it “the first AI song that has actually impressed me.” Another user said Ghostwriter was “putting out better Drake songs than the superstar himself.” A third fan said that “AI was getting dangerously good.” Meanwhile, the creator wrote on YouTube: “This is just the beginning.”
As the growing success of “Heart On My Sleeve” raised concerns over the use and impact of AI on the music industry, Universal Music Group urged streaming platforms to block AI companies from accessing the services’ songs to “train their machines and software.” Last month, a large coalition of industry organizations warned that AI should not be used to replace human artistry.
According to Billboard, the representatives for the two superstars declined to comment, and the Universal Music Group said the streaming platforms have a “fundamental legal and ethical responsibility to prevent the use of their services in ways that harm artists.”
For the group, the training of generative AI on artists' music and the availability of infringing AI-generated content raise the question of which side of history stakeholders in the music ecosystem want to be on: the side of artists and creative expression, or the side of deepfakes and fraud.
Universal Music Group said it has been pursuing its own AI-related innovation. “We’re encouraged by the engagement of our platform partners on these issues – as they recognize they need to be part of the solution,” its spokesperson said.
Following this case, an intellectual property lawyer told the BBC that the relationship between copyright law and artificial intelligence is not straightforward: U.K. law gives singers certain rights over their performances, including the right to make copies of recordings. However, a “deepfake voice, which does not specifically copy a performance, will most likely not be covered.” According to the expert, it could instead be a protected work in its own right.
“Current legislation is nowhere near adequate to address deepfakes and the potential issues in terms of intellectual property and other rights.”
Tony Rigg, a lecturer in music industry management at the University of Central Lancashire and music industry advisor, said: “The use of AI in the music industry is a double-edged sword, with tensions arising from its potential to undermine the value of human creativity, juxtaposed with its potential to augment it.”
Deepfake Audio: The New Era of AI
The term deepfake refers to the underlying technology of deep learning, a form of AI that teaches itself to solve problems from large sets of data. It is used to swap faces in videos and other digital content to create realistic-looking fake media.
Artificial intelligence can imitate not only human faces and facial expressions but also voices. Technology companies have recently focused on generating unique human voices with AI software that imitates real-life speech, complete with emotion.
Our previous blog post about deepfake audio explained how AI voice synthesis can help in entertainment production, but it can also be used to spread fake news. Artificial intelligence allows the manipulation of sounds and vocals.
The technology can generate non-existent human voices with emotional inflection, and it can imitate real people's voices from a given dataset using machine learning algorithms.
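To make the idea of software-driven vocal manipulation concrete, here is a deliberately simple toy sketch: it pitch-shifts a waveform by naive resampling with NumPy. This is not how voice-cloning systems like the one behind "Heart On My Sleeve" work (those rely on neural networks trained on large speech datasets); it only illustrates that audio is just numbers a program can reshape. The `pitch_shift` function and the 440 Hz test tone are illustrative choices, not taken from any real cloning tool.

```python
import numpy as np

def pitch_shift(signal: np.ndarray, factor: float) -> np.ndarray:
    """Naively pitch-shift audio by resampling.

    factor > 1 raises the pitch (and shortens the clip).
    Real voice-cloning systems use learned models instead
    of simple resampling, but the principle is the same:
    audio samples are data that code can transform.
    """
    # Step through the original samples faster (or slower).
    positions = np.arange(0, len(signal), factor)
    # Linearly interpolate between the original sample points.
    return np.interp(positions, np.arange(len(signal)), signal)

# A one-second 440 Hz sine tone sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)

# Shift it up an octave: the dominant frequency doubles to ~880 Hz,
# and the clip becomes half as long.
shifted = pitch_shift(tone, 2.0)
```

Even this crude transform audibly changes a voice; the leap from here to convincing imitation is supplying a model trained on hours of a target singer's recordings.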
Audio deepfakes look like a new channel for spreading fake news. With so many synthetic vocals in circulation, most fact-checkers may struggle to debunk them, especially since affordable, widely available tools for identifying AI-generated vocals do not yet exist.