ElevenLabs

AI Safety

Introduction

At ElevenLabs, ensuring our systems are developed, deployed, and used safely is our priority. Over the past year, our AI voices have supported a wide range of uses: from narrating audiobooks and voicing news articles, to animating video game characters, assisting in film pre-production, enabling language localization in entertainment, and creating dynamic voiceovers for social media and advertising. They’ve also given back voices to those who have lost them and aided individuals with accessibility needs in their daily lives. We believe these examples highlight what’s best about AI's transformative potential, but to fully realize it, we must address and mitigate the associated risks. We see AI safety as inseparable from innovation and detail our safety measures below:

Safety Initiatives

1. Voice Captcha

When creating a Professional Voice Clone, we require users to complete a voice captcha before the clone is produced: you record a short verification prompt, and we check that the recording matches the voice samples you submitted. If there’s a match, your request is sent for fine-tuning. If all verification attempts are unsuccessful, you’ll have to reach out via our help center to have your voice verified manually.
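
For illustration, below is a minimal sketch of the kind of similarity check a voice captcha could rely on, assuming speaker embeddings have already been computed for the captcha recording and for the uploaded samples. The function names and the threshold are hypothetical placeholders, not our production implementation.

    import numpy as np

    # Hypothetical similarity cutoff, for illustration only.
    SIMILARITY_THRESHOLD = 0.85

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity between two speaker embeddings."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify_voice_captcha(captcha_embedding: np.ndarray,
                             enrollment_embeddings: list[np.ndarray]) -> bool:
        """Return True if the captcha recording matches the enrolled voice samples.

        captcha_embedding: embedding of the user's live captcha recording.
        enrollment_embeddings: embeddings of the samples uploaded for cloning.
        """
        scores = [cosine_similarity(captcha_embedding, e) for e in enrollment_embeddings]
        return max(scores) >= SIMILARITY_THRESHOLD

If every captcha attempt scored below the threshold, the request would fall through to the manual verification path described above.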

2. AI Detection

To increase transparency, last June we released the AI Speech Classifier, which allows anyone to upload an audio sample and check whether it was generated with ElevenLabs. Our goal is to help prevent the spread of misinformation by making the source of audio content easier to assess.
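
As a sketch of how such a check might be wired into a workflow, the snippet below uploads an audio file and reads back a score. The endpoint URL and the response field are illustrative assumptions, not a documented ElevenLabs API; the classifier itself is available as a web tool.

    import requests

    # Hypothetical endpoint and response shape, for illustration only.
    CLASSIFIER_URL = "https://example.com/ai-speech-classifier"

    def probability_ai_generated(path: str) -> float:
        """Upload an audio file and return the reported probability that it is AI-generated."""
        with open(path, "rb") as audio_file:
            response = requests.post(CLASSIFIER_URL, files={"audio": audio_file})
        response.raise_for_status()
        return response.json()["probability_ai_generated"]

    if __name__ == "__main__":
        score = probability_ai_generated("sample.mp3")
        print(f"Probability the sample was AI-generated: {score:.1%}")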

3. Text Moderation

We use technical safeguards to detect and identify problematic and harmful content that violates our terms of service - such as hate speech, self-harm, sexual abuse of minors, fraud, and scams - on our platforms. We are continuously expanding our text moderation efforts and actively testing new ways to counteract misuse, such as the creation of political content that could either affect participation in the democratic process or mislead voters.
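
To illustrate the general shape of a pre-synthesis text screen: the categories, patterns, and example prompt below are simplified placeholders, not our production rules; real moderation pipelines typically layer ML classifiers on top of rules like these.

    import re

    # Simplified placeholder patterns; these stand in for real moderation rules.
    BLOCKED_PATTERNS = {
        "fraud_and_scams": re.compile(r"\b(wire transfer code|one-time passcode)\b", re.IGNORECASE),
        "impersonation": re.compile(r"\bthis is your bank calling\b", re.IGNORECASE),
    }

    def moderate_text(prompt: str) -> list[str]:
        """Return the policy categories the prompt appears to violate."""
        return [category for category, pattern in BLOCKED_PATTERNS.items()
                if pattern.search(prompt)]

    violations = moderate_text("Please read out your one-time passcode now.")
    if violations:
        print("Blocked before synthesis; flagged categories:", violations)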

4. Voice Moderation

While our terms already prohibit using our platform to impersonate or harm others, we are taking the added measure of introducing a ‘no-go voices’ safeguard. This safeguard is designed to detect and prevent the creation of voice clones that mimic political candidates actively involved in presidential or prime ministerial elections, starting with those in the US and the UK. We are working to expand this safeguard to other languages and election cycles. We also aim to continually refine this measure through practical testing and feedback.
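
As a rough sketch of how a ‘no-go voices’ screen could work, assuming the same kind of speaker-embedding comparison as in the voice captcha sketch above (the blocklist structure, threshold, and helper name are hypothetical, not our actual safeguard):

    from typing import Optional

    import numpy as np

    # Hypothetical similarity cutoff for blocking, for illustration only.
    NO_GO_THRESHOLD = 0.80

    def matched_no_go_voice(candidate_embedding: np.ndarray,
                            protected_embeddings: dict[str, np.ndarray]) -> Optional[str]:
        """Return the protected voice the candidate clone matches, or None.

        protected_embeddings maps protected individuals (e.g. election candidates)
        to reference speaker embeddings; a sufficiently similar request is blocked.
        """
        for name, reference in protected_embeddings.items():
            similarity = float(np.dot(candidate_embedding, reference) /
                               (np.linalg.norm(candidate_embedding) * np.linalg.norm(reference)))
            if similarity >= NO_GO_THRESHOLD:
                return name
        return None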

We are inviting fellow AI companies to collaborate with us on establishing a comprehensive method for AI content detection. If you're interested in either a partnership or an integration, please connect with us at legal@elevenlabs.io.

How can you report concerns?

If you come across content that raises concerns and you believe it was created on our platform, please report it here.

What action do we take in misuse cases?

In accordance with our terms and community rules, we take action based on the violation, which may include warnings, removal of voices, account bans and, in appropriate cases, reporting to authorities.

How do we cooperate with regulatory agencies and law enforcement?

  1. As part of our commitment to Trust and Safety, ElevenLabs has established policies concerning cooperation with governmental authorities, including law enforcement agencies. In appropriate cases, this may include reporting or disclosing information about abusive or illegal content, as well as responding to lawful inquiries from law enforcement and other governmental entities.
  2. ElevenLabs stores, maintains, and processes user data as described in our Privacy Policy and Terms of Service. We will only disclose user data in a manner that is consistent with those terms and with applicable law in relevant jurisdictions.
  3. ElevenLabs will comply with lawful non-emergency requests for disclosure that are accompanied by appropriate legal process and addressed to the correct entity.
    1. Law enforcement authorities in the United States may submit non-emergency legal process requests to ElevenLabs, Inc., by emailing the legal process to: legal@elevenlabs.io
    2. Law enforcement authorities in the EU may submit non-emergency legal process requests to ElevenLabs Sp. z o.o. by emailing the legal process to EU-legal@elevenlabs.io. Pursuant to Article 11 of the DSA, ElevenLabs Sp. z o.o. has designated EU-legal@elevenlabs.io as the single point of contact for direct communications with the European Commission, Member States’ Authorities, and the European Board for Digital Services in connection with the application of the DSA. We accept legal processes in English and in Polish. Where required by applicable law, international legal processes may require submission through a Mutual Legal Assistance Treaty.