Senators Request Safety Records from AI Chatbot Apps

Two senators have sent a letter to multiple AI companion companies requesting information about their safety practices, including details about internal safety assessments and timelines for the implementation of guardrails, as CNN reported yesterday.

The action follows the filing of two high-profile child welfare lawsuits against the Google-tied chatbot startup Character.AI, which has been accused in court filings by three families of facilitating the sexual and emotional abuse of minor users, allegedly resulting in severe mental and emotional suffering, violence, behavioral changes, and one death. (Google and Character.AI cofounders Noam Shazeer and Daniel de Freitas are also named as defendants in the lawsuits.)

Drafted by Democratic senators Alex Padilla of California and Peter Welch of Vermont, the letter cites details of the accusations against Character.AI as cause for alarm, calling specific attention to concern for minor users of AI companion apps and directly referencing Sewell Setzer III, a 14-year-old user of Character.AI who died by suicide in February 2024 after extensive and extraordinarily intimate interactions with the platform’s anthropomorphic chatbots.

“We write to express our concerns regarding the mental health and safety risks posed to young users of character- and persona-based AI chatbot and companion apps,” reads the letter, “including Character.AI.”

Setzer’s death, first reported by The New York Times, made headlines in October after his mother, Megan Garcia, filed the first of the two ongoing child welfare suits against the chatbot startup. The second complaint was filed in Texas in December on behalf of two more families whose minor kids are said to have experienced significant mental and physical harm as a result of using the service. One of them, who was 15 when he started using the app, began physically self-harming after discussing self-injury with a Character.AI bot.

Both lawsuits, which together argue that Character.AI and its benefactor Google knowingly released a dangerous and untested product into the marketplace, have made waves amongst the public, particularly parents. And now, it looks like lawmakers on Capitol Hill are paying attention.

Citing “recent reports of self-harm associated with this emerging application category, including the tragic suicide of a 14-year-old boy,” the letter asks that recipients “respond in writing outlining what steps you are taking to ensure that the interactions taking place on your products — between minors and your artificial intelligence tools — are not compromising the mental health and safety of minors and their loved ones.”

Per a press release, the letter was sent to Character.AI, Chai Research Corp, and Replika maker Luka, Inc.

Replika, which has been a player in the digital companion space for many years, is currently facing a Federal Trade Commission complaint from advocacy groups alleging that it’s engaged in deceptive marketing practices aimed at hooking vulnerable users. Other Replika controversies include its alleged role in encouraging a troubled young man in the UK to attempt to assassinate the late Queen Elizabeth II with a crossbow, as well as men using the app to abuse virtual girlfriends.

All three companies offer a version of a similar product: access to emotive, lifelike chatbots designed to embody specific personas. (Think characters like “goth girlfriend,” ersatz AI versions of celebrities, fake therapists or other professionally-styled bots, or virtually any fictional character in existence.)

In some cases, users carry on extensive fictional roleplays with the bots; others treat the characters like trusted confidantes, with users frequently developing emotional, romantic, or sexual relationships with the characters.

But while companion apps have proliferated amid the AI boom, experts have consistently warned that the same design features that make them so engaging — their penchant for sycophancy and flattery, always-on availability, and human-like tenor, to name a few — may put vulnerable users at a heightened risk for harm.

The senators’ letter speaks to these concerns, writing that the “synthetic attention” such bots give to users “can, and has already, led to dangerous levels of attachment and unearned trust stemming from perceived social intimacy.” This trust, they add, can “cause users to disclose sensitive information about their mood, interpersonal relationships, or mental health, which may involve self-harm and suicidal ideation — complex themes that the AI chatbots on your products are wholly unqualified to discuss.”

The lawmakers are requesting a few different pieces of information from the AI companies. They first ask that the firms provide them with the “current and historical” safety guardrails enacted in their products — and, importantly, a timeline of their implementation. (Character.AI, for example, has historically been extremely reactive to apparent gaps in safety guardrails, repeatedly promising to add new safety features after controversies arise.)

The senators are also requesting that the companies provide them with information about the data used to train their AI models, and how that training material “influences the likelihood of users encountering age-inappropriate or other sensitive themes.”

The companies are further asked to disclose details about safety personnel, as well as a description of the services and support provided to safety-oriented staffers like content moderators and AI red-teamers, whose work often necessitates contact with sensitive or disturbing material.

Luka and Chai did not respond to CNN’s request for comment. In a statement, Character.AI told CNN that the company takes the senators’ concerns “very seriously.”

“We welcome working with regulators and lawmakers,” the company added, “and are in contact with the offices of Senators Padilla and Welch.”

As it stands, like other generative AI firms, these companion companies have operated within a largely unregulated federal landscape. To that end, the senators’ letter is still exploratory, and is among the first steps by Capitol Hill lawmakers to investigate the safety measures — and perhaps more tellingly, founding safety practices and principles — at industry leaders like Character.AI, Replika, and Chai.

But it’s a step nonetheless. And given that Character.AI, in particular, has repeatedly declined to provide us with information about how it’s assessed the safety of its platform for minor users, we’ll be paying close attention to what happens next.

More on Character.AI and safety: Character.AI Says It’s Made Huge Changes to Protect Underage Users, But It’s Emailing Them to Recommend Conversations With AI Versions of School Shooters

