R. Lance Hill, the original “Road House” screenwriter, is suing Metro-Goldwyn-Mayer Studios and Amazon Studios, alleging they ignored his claim to the copyright in the 1986 screenplay.
Hill, who wrote the script under the pen name David Lee Henry, argues that “United Artists attained the 1986 grant from Hill well after the screenplay had been completed,” challenging the rights Amazon claims through MGM.
Why Is Amazon Being Sued?
The core of the lawsuit is the alleged use of artificial intelligence. Hill accuses Amazon of using AI to replicate actors’ voices during the SAG-AFTRA strike, rushing to complete the “Road House” remake before his copyright reclamation. Amazon denies this, stating, “The film does not use any AI in place of actors’ voices,” and labels Hill’s lawsuit as “completely without merit.”
What’s at Stake for the Film?
The dispute has cast a shadow over the remake’s release. With its debut scheduled for March 21 on Amazon Prime Video, the legal battle has sparked wider industry debates.
Director Doug Liman expressed disappointment, opting to boycott the film’s premiere due to disagreements over its distribution, highlighting tensions between traditional cinema and streaming platforms.
What Are the Larger Consequences For This?
The case raises difficult questions about copyright law, AI technology, and actor rights within Hollywood.
Hill’s attorney, Marc Toberoff, noted, “Hill had neither an employment nor a contractual relationship with United Artists when he wrote the screenplay,” emphasising the importance of authorial rights. The outcome could influence future dealings between writers, studios, and emerging technologies.
What’s Next for the Legal Battle?
The entertainment industry is watching closely as this case unfolds. Amazon maintains, “We look forward to defending ourselves against these claims,” standing firm against Hill’s allegations. However it is resolved, the case could reshape how creative content is produced and protected.
What Is a Voice Deepfake?
A voice deepfake uses artificial intelligence to create a fake voice recording. This technology analyses characteristics like pitch and cadence from real voice samples and then generates new audio that sounds like the original person.
Tools like Microsoft’s VALL-E can mimic a human voice from just a three-second clip. “This model should be trained with voice recordings of the speaker,” experts explain, underscoring how little source audio the technology needs.
How Is a Voice Deepfake Made?
Creating a voice deepfake involves using text-to-speech technology. First, the original voice is recorded and divided into samples. These are then fed into a neural network which learns to imitate the voice’s unique features.
Finally, a generative model uses this data to create new, fake recordings. “The process for the generation of this type of deepfake,” notes Elvira Carrero from Mobbeel, “involves analysing small audio samples and learning to imitate the original voice.”
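As a toy illustration of that analyse-then-imitate loop, the sketch below uses plain NumPy rather than a real neural network or TTS system: it averages the magnitude spectrum of a few short “voice” clips (a stand-in for training) and then shapes random-phase noise with that envelope (a stand-in for the generative model). All names and numbers here are illustrative assumptions, not part of any actual deepfake tool.

```python
import numpy as np

def learn_envelope(samples, n_fft=256):
    """Average the magnitude spectra of short clips (stand-in for 'learning' a voice)."""
    spectra = [np.abs(np.fft.rfft(s, n_fft)) for s in samples]
    return np.mean(spectra, axis=0)

def generate(envelope, n_frames=10, n_fft=256, seed=0):
    """Shape random-phase noise with the learned envelope (stand-in for generation)."""
    rng = np.random.default_rng(seed)
    frames = []
    for _ in range(n_frames):
        phase = np.exp(1j * rng.uniform(0, 2 * np.pi, envelope.shape))
        frames.append(np.fft.irfft(envelope * phase, n_fft))
    return np.concatenate(frames)

# "Record" three 256-sample clips of a synthetic 440 Hz tone at 8 kHz.
t = np.arange(256) / 8000
clips = [np.sin(2 * np.pi * 440 * t + p) for p in (0.0, 1.0, 2.0)]
fake = generate(learn_envelope(clips))  # new audio with similar spectral shape
```

A real system replaces both steps with deep networks trained on far more data, but the division of labour (analyse samples, then synthesise new audio with matching characteristics) is the same.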
How Can You Spot a Voice Deepfake?
Speech Clarity:
- Deepfake voices might slur words. This happens because the AI struggles to create words and phrases the real person never said.
- “Deepfake voices may slur certain words,” notes Siwei Lyu, a researcher from the University at Buffalo.
Natural Tone:
- The voice may lack appropriate emotion for the context. It might sound real but flat and dry.
- Lyu advises, “Check whether the voice carries the correct emotion for the situation.”
Background Noise:
- Deepfake recordings often have extra noise, such as static or high-frequency crackling sounds.
- “Listen carefully for background noises that shouldn’t be there,” suggests Lyu.
Fullness:
- Deepfakes might sound less ‘full’ compared to real voices, particularly at higher frequencies.
- Lea Schönherr from Ruhr University Bochum found, “Deepfake audio might lack the range of frequencies found in natural human speech.”
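One rough way to quantify that “fullness” cue, assuming synthetic audio carries less high-frequency energy, is to compare the share of spectral energy above a cutoff. This NumPy sketch is illustrative only; the 4 kHz cutoff and the test signals are assumptions, not taken from the research:

```python
import numpy as np

def high_freq_ratio(signal, sample_rate, cutoff_hz=4000):
    """Fraction of total spectral energy above cutoff_hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
    return spectrum[freqs >= cutoff_hz].sum() / spectrum.sum()

rate = 16000
t = np.arange(rate) / rate
# A broadband "natural-like" signal versus a band-limited "deepfake-like" one.
full = np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 6000 * t)
limited = np.sin(2 * np.pi * 300 * t)
print(high_freq_ratio(full, rate) > high_freq_ratio(limited, rate))  # True
```

A real detector would compare such ratios against statistics gathered from genuine recordings of the speaker rather than a fixed threshold.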
Content and Context:
- Evaluate whether what’s being said is unusual or unexpected.
- Experts advise considering if the voice is asking for things like money or passwords.
Deep-Learning Algorithms:
- These algorithms analyse unique voice characteristics that are hard for deepfakes to replicate accurately.
- Researchers recommend using deep-learning models for more detailed voice analysis.
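To illustrate the kind of per-voice characteristics such models consume, the sketch below computes two standard, easy-to-implement descriptors: spectral centroid (a measure of “brightness”) and zero-crossing rate. The feature choice is an assumption for illustration, not a specific published detector; a deep-learning system would learn far richer features from raw audio.

```python
import numpy as np

def voice_features(signal, sample_rate):
    """Two simple descriptors that could feed a learned voice classifier."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
    centroid = (freqs * spectrum).sum() / spectrum.sum()  # spectral "brightness" in Hz
    zcr = np.mean(np.abs(np.diff(np.sign(signal))) > 0)   # zero-crossing rate
    return np.array([centroid, zcr])

rate = 16000
t = np.arange(rate) / rate
# A pure 220 Hz tone: its spectral centroid sits at roughly 220 Hz.
features = voice_features(np.sin(2 * np.pi * 220 * t), rate)
```

Comparing such feature vectors between a suspect clip and verified recordings of the same speaker is the basic idea behind automated voice analysis.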