AI Chatbot Startup Expands Safety Team Amid Legal Challenges
Character.AI, a prominent AI chatbot startup, is bolstering its trust and safety staff in response to multiple lawsuits and growing public scrutiny over the welfare of minors on its platform. The company faces lawsuits in Florida and Texas alleging that its AI companions emotionally and sexually abused underage users.
Jerry Ruoti, head of trust and safety at Character.AI, recently announced the expansion of the company’s safety team. A job listing for a “trust and safety associate” outlines responsibilities resembling those of a social media moderator: reviewing flagged content, removing inappropriate material, and responding to user safety inquiries.
The lawsuits allege that interactions with the platform’s AI caused severe mental suffering, physical violence, and, in one case, a user’s suicide. Google, which has close ties to Character.AI, is also named as a defendant, along with the company’s cofounders.
Reports have highlighted troubling content accessible to minors on Character.AI, including chatbots discussing harmful topics and bots dedicated to mass violence and school shootings. Despite these concerns, a Character.AI spokesperson maintains that the hiring push is not directly related to the ongoing litigation.
Safety is at the core of the legal challenges, with plaintiffs arguing that the platform’s design is inherently dangerous for minors. The Texas complaint describes Character.AI as posing a “clear and present danger” to youth. Matthew Bergman, founder of the Social Media Victims Law Center, likened the product’s release to environmental pollution.
In response to the litigation, Character.AI has emphasized its commitment to maintaining a safe community. The company says it is developing a separate experience for teen users that reduces exposure to sensitive content and is introducing new safety features for users under 18, on top of its existing content restrictions and filters.
As Character.AI continues to invest in safety features and moderation, it remains to be seen whether additional content moderation staff can meaningfully improve platform safety. The ongoing legal battles and public scrutiny will likely shape how safety measures evolve across AI-driven platforms.