The attorneys general of California and Delaware on Friday warned OpenAI they have “serious concerns” about the safety of its flagship chatbot, ChatGPT, especially for children and teens.
The two state officials, who have unique powers to regulate nonprofits such as OpenAI, sent the letter to the company after a meeting with its legal team earlier this week in Wilmington, Delaware.
California AG Rob Bonta and Delaware AG Kathleen Jennings have spent months reviewing OpenAI's plans to restructure its business, with an eye on “ensuring rigorous and robust oversight of OpenAI’s safety mission.”
But they said they were concerned by “deeply troubling reports of dangerous interactions between” chatbots and their users, including the “heartbreaking death by suicide of one young Californian after he had prolonged interactions with an OpenAI chatbot, as well as a similarly disturbing murder-suicide in Connecticut. Whatever safeguards were in place did not work.”
The parents of the 16-year-old California boy, who died in April, sued OpenAI and its CEO, Sam Altman, last month.
OpenAI didn’t immediately respond to a request for comment on Friday.
Founded as a nonprofit with a safety-focused mission to build better-than-human artificial intelligence, OpenAI had recently sought to shift more control from its nonprofit to its for-profit arm, but dropped those plans in May after discussions with the offices of Bonta and Jennings and other nonprofit groups.
The two elected officials, both Democrats, have oversight of any such changes because OpenAI is incorporated in Delaware and operates out of California, where it has its headquarters in San Francisco.
After dropping its initial plans, OpenAI has been seeking the officials' approval for a “recapitalization,” in which the nonprofit’s existing for-profit arm will convert into a public benefit corporation that has to consider the interests of both shareholders and the mission.
Bonta and Jennings wrote Friday of their “shared view” that OpenAI and the industry need better safety measures.
“The recent deaths are unacceptable,” they wrote. “They have rightly shaken the American public’s confidence in OpenAI and this industry. OpenAI – and the AI industry – must proactively and transparently ensure AI’s safe deployment. Doing so is mandated by OpenAI’s charitable mission, and will be required and enforced by our respective offices.”
The letter to OpenAI from the California and Delaware officials comes after a bipartisan group of 44 attorneys general warned the company and other tech firms last week of “grave concerns” about the safety of children interacting with AI chatbots that can respond with “sexually suggestive conversations and emotionally manipulative behavior.”
The attorneys general specifically called out Meta for chatbots that reportedly engaged in flirting and “romantic role-play” with children, saying they were alarmed that these chatbots “are engaging in conduct that appears to be prohibited by our respective criminal laws.”
Meta, the parent company of Facebook, Instagram and WhatsApp, declined to comment on the letter. The company recently rolled out new controls that aim to block its chatbots from discussing self-harm, suicide, disordered eating and inappropriate romantic topics with teens, and instead direct them to expert resources.
The attorneys general said the companies would be held accountable for harming children, noting that in the past, regulators had not moved swiftly to respond to the harms posed by new technologies.
“If you knowingly harm kids, you will answer for it,” the Aug. 25 letter ends.
——
The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.