The threat of deepfakes is growing as the AI tools used to create them become widely accessible. There was a 245% increase in deepfakes worldwide from 2023 to 2024, an uptick spurred in part by upcoming election cycles, according to verification provider Sumsub. The corporate sector is feeling the effects, too: a recent Business.com survey found that 10% of companies have faced deepfake-aided fraud, such as attacks using cloned voices.

The trend, unsurprisingly, has been a windfall for companies marketing tools to defend against deepfakes and technologies used to produce them. One of those companies, Pindrop, on Wednesday announced that it secured a $100 million, five-year loan from Hercules Capital, which CEO Vijay Balasubramaniyan says is being earmarked for product development and hiring.

“With advancements in generative AI, voice cloning in particular has become a powerful tool,” Balasubramaniyan told TechCrunch. “Deepfake detection leveraging AI detection technologies is now needed by every call center to stay one step ahead of fraudsters.”

Pindrop builds deepfake detection and multi-factor authentication products for businesses in banking, finance and related industries. The company claims its tools can verify the identity of contact center callers, for example, with higher accuracy than rival solutions.

“Pindrop leverages a dataset of over 20 million utterances, both synthetic and genuine, to train the AI models to differentiate between genuine human voices and synthetically generated ones,” Balasubramaniyan said. “We’ve also trained over 330 text-to-speech (TTS) models to help identify TTS models used to create the deepfake.”
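
Pindrop hasn't published its architecture, but the basic recipe behind this kind of detector is well established: extract time-frequency features from short audio clips and fit a binary classifier on labeled genuine and synthetic utterances. The sketch below is purely illustrative; the feature choice, model and file lists are assumptions, not a description of Pindrop's system.

```python
# Illustrative only: a toy genuine-vs-synthetic speech classifier.
# Feature choice (log-mel statistics) and model are assumptions, not Pindrop's.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def clip_features(path, sr=16000, n_mels=64):
    """Summarize a clip as the mean/std of its log-mel spectrogram bands."""
    y, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)
    return np.concatenate([log_mel.mean(axis=1), log_mel.std(axis=1)])

def train_detector(genuine_paths, synthetic_paths):
    """`genuine_paths` / `synthetic_paths` are hypothetical lists of .wav files."""
    X = np.stack([clip_features(p) for p in genuine_paths + synthetic_paths])
    y = np.array([0] * len(genuine_paths) + [1] * len(synthetic_paths))
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    return clf
```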

Bias is a common problem in deepfake detection models. Many audio models are skewed toward Western, American-accented voices and perform poorly on other accents and dialects, which can lead a detector to flag a legitimate voice as a deepfake.
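
One way to surface that kind of skew, at least in principle, is to report a detector's error rates per accent or dialect group rather than a single aggregate number. The snippet below is a generic illustration with hypothetical labels and groups, not a description of how any vendor audits its models.

```python
# Illustrative bias check: false positive rate (genuine voices wrongly flagged
# as deepfakes) broken out by accent group. Labels and groups are hypothetical.
import numpy as np

def fpr_by_group(y_true, y_pred, groups):
    """y_true: 0 = genuine, 1 = synthetic; y_pred: detector output; groups: accent label per clip."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {}
    for g in np.unique(groups):
        mask = (groups == g) & (y_true == 0)              # genuine clips from this group
        if mask.any():
            rates[g] = float((y_pred[mask] == 1).mean())  # share wrongly flagged
    return rates

# A detector that treats accents evenly should show roughly equal rates here.
print(fpr_by_group([0, 0, 0, 0, 1], [0, 1, 0, 0, 1],
                   ["us", "us", "india", "india", "us"]))
```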

It’s up for debate whether synthetic training data — training data generated by AI models themselves — mitigates or exacerbates the biases. For what it’s worth, Balasubramaniyan thinks the former and claims that Pindrop’s voice authentication products focus on the “acoustic and spectro-temporal features” of voices as opposed to pronunciation or language.
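
"Acoustic and spectro-temporal features," in this context, means statistics computed from the signal's time-frequency content rather than from words or pronunciation. A rough, generic illustration of such a representation (a log-mel spectrogram plus its frame-to-frame deltas) follows; the features Pindrop actually uses are not public.

```python
# Generic example of "spectro-temporal" features: a log-mel spectrogram plus
# its deltas, capturing how the spectrum changes over time. Not Pindrop's code.
import numpy as np
import librosa

def spectro_temporal_features(path, sr=16000, n_mels=64):
    y, _ = librosa.load(path, sr=sr, mono=True)
    log_mel = librosa.power_to_db(
        librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels))
    delta = librosa.feature.delta(log_mel)             # first-order temporal change
    delta2 = librosa.feature.delta(log_mel, order=2)   # second-order change
    # Stack into one (3 * n_mels) x frames matrix; no transcript or language model involved.
    return np.vstack([log_mel, delta, delta2])
```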

“AI-based voice recognition systems tend to display biased results towards tonal, accent, vernacular and dialect differences which could have racial implications,” Balasubramaniyan said. “These biases are derived from homogeneity of the data that is used to train the systems that may lack in representation of various ethnic, racial, gender or other differences, thus limiting the variety of data that the AI systems are trained on.”

Regardless of its products’ efficacy, Pindrop has made substantial inroads since 2011, when Balasubramaniyan, an ex-Googler, founded the company with former Barracuda Networks chief research officer Paul Judge and Mustaque Ahamad. To date, the Atlanta-based firm, which employs around 250 people, has raised $234.77 million in venture capital from investors including Vitruvian Partners, CapitalG, IVP and Andreessen Horowitz.

Asked why Pindrop opted for debt as opposed to equity this time around, Balasubramaniyan said that it was an “attractive option” to “efficiently raise growth capital” without diluting Pindrop’s equity. (That’s a common strategy.)

The proceeds from the loan will enable Pindrop to bring its tech to new sectors, Balasubramaniyan added, such as healthcare, retail, media and travel.

“With the advent of generative AI, we are seeing a massive demand for our solutions worldwide, and plan to look at expanding into countries that are seeing significant threats due to deepfakes,” Balasubramaniyan said. “Pindrop is positioned to help companies protect themselves and their consumers from rising fraud and deepfake threats with fraud prevention, authentication and liveness solutions.”


