Topline
Character.ai will ban children under age 18 from having “open-ended chats” with its chatbots, the company said in a blog post on Wednesday, a change that comes as the AI company faces bipartisan regulatory pressure and multiple lawsuits from parents of teenagers who died by suicide, or attempted it, after using the chatbot.
The change comes as the company faces lawsuits from multiple families of children who died by suicide, or attempted it.
Key Facts
The ban will take effect on Nov. 25, the company said, and in the weeks before then the company will limit underage users to two hours per day on the platform.
The company said it was building a new experience for children, which would still allow them to use Character.ai to create “videos, stories, and streams” on the platform, but not to interact with its characters in an open-ended chat.
Character.ai said it was taking the step after evaluating questions raised by news reports and regulators, insisting “it’s the right thing to do” while also apologizing to its under-18 user base.
Character.ai says it has more than 20 million active users, but CEO Karandeep Anand told CNBC that only 10% of those are under age 18.
What Is Character.ai?
Character.ai is an AI company that uses large language models to offer individual chatbots with personalities based on specific “characters.” The company says it offers “millions” of these chatbots to users, including historical figures and fictional characters. It was founded by two former Google engineers, Noam Shazeer and Daniel de Freitas, both of whom returned to Google last year after signing a deal to license Character.ai’s large language models.
Key Background
Character.ai is facing multiple lawsuits from the families of children who died by suicide, or attempted it, after using its chatbots. The first was a wrongful death lawsuit filed last year in Florida by the parents of a 14-year-old who died by suicide. The teenager had spent hours communicating with a chatbot based on the character Daenerys Targaryen from “Game of Thrones,” the New York Times reported. Lawyers for the company have argued that the conversations the teenager had with the chatbot are speech protected by the First Amendment, but a federal judge initially rejected that argument when the company tried to dismiss the lawsuit earlier this year. The case goes to trial in November. Character.ai did not mention the lawsuits in the Wednesday blog post, and did not immediately return a request for comment on the matter from Forbes.
Are Lawmakers Stepping In?
Character.ai said the new changes were in part a response to questions from regulators, and the company is one of several currently under scrutiny. Character Technologies, Character.ai’s parent company, is among several AI companies, including OpenAI, xAI, Meta and Alphabet, facing a Federal Trade Commission probe into how their chatbots interact with children. On Tuesday, a group of senators unveiled new legislation that would ban companies from offering “AI companions” to minors. The bill was authored by a bipartisan group of lawmakers, including Senators Richard Blumenthal, D-Conn., Chris Murphy, D-Conn., Mark Warner, D-Va., Josh Hawley, R-Mo., and Katie Britt, R-Ala., and would also mandate that chatbots acknowledge their “non-human status” and lack of professional credentials to all users, regardless of age.
