ElevenLabs
At ElevenLabs, ensuring our systems are developed, deployed, and used safely is our priority. Over the past year, our AI voices have supported a wide range of uses: from narrating audiobooks and voicing news articles, to animating video game characters, assisting in film pre-production, enabling language localization in entertainment, and creating dynamic voiceovers for social media and advertising. They’ve also given back voices to those who have lost them and aided individuals with accessibility needs in their daily lives. We believe these examples highlight what’s best about AI's transformative potential, but to fully realize it, we must address and mitigate the associated risks. We see AI safety as inseparable from innovation and detail our safety measures below:
Before a Professional Voice Clone is produced, we require you to complete a voice captcha mechanism. If there’s a match, your request is sent for fine-tuning. If all attempts are invalid, you’ll need to reach out via our help center to have your voice verified manually.
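At a high level, a voice captcha works by comparing the verification recording against the enrolled voice samples. The sketch below illustrates that idea with a simple embedding-similarity check; the function names, toy embeddings, and threshold are illustrative assumptions, not ElevenLabs' actual implementation.

```python
# Hypothetical sketch of a voice-captcha match: the verification recording's
# voice embedding is compared against the enrollment samples. Embeddings,
# names, and the 0.85 threshold are illustrative, not ElevenLabs' values.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def captcha_passes(captcha_embedding, enrollment_embeddings, threshold=0.85):
    """Return True if the captcha recording matches any enrollment sample."""
    return any(
        cosine_similarity(captcha_embedding, e) >= threshold
        for e in enrollment_embeddings
    )

# Toy embeddings: the captcha recording closely matches the first sample,
# so the request would proceed to fine-tuning.
enrolled = [[0.9, 0.1, 0.4], [0.2, 0.8, 0.5]]
print(captcha_passes([0.88, 0.12, 0.41], enrolled))  # True
```

A real system would derive the embeddings from a speaker-verification model rather than hand-written vectors, but the pass/fail decision follows the same shape.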
To increase transparency, last June we released the AI Speech Classifier, which allows anyone to upload an audio sample and assess whether it was AI-generated audio from ElevenLabs. Our goal is to help prevent the spread of misinformation by making the source of audio content easier to assess.
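A detection classifier like this typically returns a probability that the sample is AI-generated, which is then mapped to a human-readable verdict. The thresholds and labels below are assumptions for illustration, not the AI Speech Classifier's actual output format.

```python
# Illustrative interpretation of a detection classifier's probability output.
# The cutoffs and verdict strings are assumptions, not ElevenLabs' behavior.
def interpret_score(probability: float) -> str:
    """Map a model's AI-generated probability to a human-readable verdict."""
    if probability >= 0.98:
        return "very likely AI-generated"
    if probability >= 0.50:
        return "possibly AI-generated"
    return "unlikely AI-generated"

print(interpret_score(0.99))  # very likely AI-generated
```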
We use technical safeguarding solutions to detect and identify problematic and harmful content that violates our terms of service, such as hate speech, self-harm, sexual abuse of minors, fraud, and scams on our platforms. We are continuously expanding our text moderation efforts and actively testing new ways to counteract cases of misuse, such as the creation of political content that could either affect participation in the democratic process or mislead voters.
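A text-moderation gate of this kind can be pictured as a classifier run over the input before synthesis, with prohibited categories blocking the request. The category names and stand-in classifier below are illustrative assumptions, not our production moderation system.

```python
# Minimal sketch of a pre-synthesis text-moderation gate. The category set
# and the toy classifier are illustrative stand-ins, not a real model.
BLOCKED_CATEGORIES = {"hate_speech", "self_harm", "csam", "fraud"}

def moderate(text: str, classify) -> bool:
    """Return True if the text may proceed to synthesis."""
    flagged = classify(text)  # a classifier returning a set of category labels
    return not (flagged & BLOCKED_CATEGORIES)

# Stand-in classifier: flags anything containing a known scam phrase.
def toy_classifier(text):
    return {"fraud"} if "wire transfer now" in text.lower() else set()

print(moderate("Welcome to the audiobook.", toy_classifier))          # True
print(moderate("Wire transfer now to claim your prize", toy_classifier))  # False
```

In practice the classifier would be a trained model covering many languages and categories, but the gate-before-synthesis structure is the same.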
While our terms already prohibit using our platform to impersonate or harm others, we are taking the added measure of introducing a ‘no-go voices’ safeguard. This safeguard is designed to detect and prevent the creation of voice clones that mimic political candidates actively involved in presidential or prime ministerial elections, starting with those in the US and the UK. We are working to expand this safeguard to other languages and election cycles. We also aim to continually refine this measure through practical testing and feedback.
We are inviting fellow AI companies to collaborate with us on establishing a comprehensive method for AI content detection. If you're interested in either a partnership or an integration, please connect with us at legal@elevenlabs.io.
If you come across content which raises concerns, and you believe it was created on our platform, please report it here.
In accordance with our terms and community rules, we take appropriate action based on the violation, which could include warnings, removal of voices, account bans, and, in appropriate cases, reporting to authorities.