
What you’re hearing may not be real: The AI voice tool is being abused

It was only a matter of time before AI voice-generation tools became a plaything for internet trolls. The beta version of ElevenLabs, an AI-based voice synthesis startup founded by former Google and Palantir employees, has already been abused.

4chan members used ElevenLabs to create fake voices of Emma Watson, Joe Rogan, and other celebrities saying racist, transphobic, and violent things. The company acknowledged the problem on Twitter, reporting that “the number of cases of abuse of voice cloning has increased” and that it is implementing additional safeguards to address the issue.

Fake celebrity voices used for racist rhetoric

The clips uploaded to 4chan mainly target celebrities, but given the high quality of the generated audio and the apparent ease of creating it, the risks of “deepfake” audio clips are only beginning. In one clip created with ElevenLabs’ beta tool, a voice identical to that of actress Emma Watson reads a passage from Mein Kampf (My Struggle). In another, a voice closely resembling Ben Shapiro’s makes racist remarks. A third example features Rick Sanchez delivering violent rhetoric against Morty from Rick and Morty. (Justin Roiland, who voices both Rick and Morty, was recently charged with felony domestic violence.)

The clips range from harmless to violent, transphobic, homophobic, and racist. A 4chan post containing a wide variety of clips also included a link to ElevenLabs’ beta tool, suggesting the company’s software was used to create them. ElevenLabs offers both “speech synthesis” and “voice cloning” features on its official website. For voice cloning, ElevenLabs generates a clone of a voice from a clean sample recording longer than one minute.
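ElevenLabs exposes this speech synthesis feature through a public REST API as well as the website. A minimal sketch of how such a request might be assembled, assuming the `api.elevenlabs.io` text-to-speech endpoint and using placeholder values for the voice ID and API key (real values come from an ElevenLabs account after a voice has been created or cloned):

```python
import json
import os
import urllib.request

API_BASE = "https://api.elevenlabs.io/v1"  # ElevenLabs public API base URL


def build_tts_request(voice_id: str, text: str, api_key: str) -> urllib.request.Request:
    """Assemble a text-to-speech request for a given (stock or cloned) voice.

    voice_id and api_key are placeholders here; the endpoint returns the
    synthesized speech as MP3 audio when the request is actually sent.
    """
    url = f"{API_BASE}/text-to-speech/{voice_id}"
    payload = {"text": text}
    headers = {
        "xi-api-key": api_key,          # account API key
        "Content-Type": "application/json",
        "Accept": "audio/mpeg",         # response body is MP3 audio
    }
    return urllib.request.Request(
        url, data=json.dumps(payload).encode("utf-8"), headers=headers
    )


# Build the request only; sending it requires a real key and voice ID.
req = build_tts_request(
    "VOICE_ID_PLACEHOLDER",
    "Hello from a synthetic voice.",
    os.environ.get("ELEVEN_API_KEY", "KEY_PLACEHOLDER"),
)
```

The low barrier is the point: a few lines of HTTP glue plus a minute of sample audio is all the clips described above would have required.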

It’s getting harder to believe what we see and hear online

Perhaps the emergence of “deepfake” audio clips should come as no surprise: we saw a similar phenomenon a few years ago, when advances in artificial intelligence and machine learning were first used to produce fake videos of celebrities.

Between fake videos, fake voices, and fake gestures, what we see and hear on the internet is drifting ever further from reality. Of course, these technologies were not developed for such purposes; ElevenLabs’ official website lists intended uses such as audio newsletters, audiobook narration, and video voiceovers. At this point, Edgar Allan Poe’s words come to mind: “Believe only half of what you see, none of what you hear.”
