Following teen suicides, companion chatbots will need to refer CA users to 988
Published in News & Features
SACRAMENTO, Calif. — Gov. Gavin Newsom signed the state’s first law regulating AI companion chatbots on Monday, but it wasn’t the one some online safety advocates and parents had urged him to approve.
For weeks, parents and safety advocates have been calling on Newsom to back Assembly Bill 1064, which would prohibit the rollout of any AI companion chatbot that could possibly harm a child, including by engaging in sexually explicit conversations or encouraging self-harm, suicide or violence.
Among the advocates were the parents of Adam Raine, a 16-year-old from Rancho Santa Margarita who killed himself in April after discussing his suicidal thoughts with the general-purpose chatbot ChatGPT.
With only hours left to sign legislation, Newsom did not comment on AB 1064.
Instead, he took action on Senate Bill 243, a bill that would require companion chatbots to direct users to suicide crisis lines if they express self-harm or suicidal ideation. The governor signed the bill among a slate of other regulations intended to protect children online, signaling that while AB 1064 has not yet been vetoed, its chances of being signed are slim.
“We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way,” Newsom said in a news release announcing the signing of SB 243 and seven other bills, including one that will add warning labels to social media beginning in 2027.
A July study by Common Sense Media found nearly three in four teens had used companion chatbots at least once, and about one in three had used them for social interaction and relationships.
Among other things, SB 243 would require operators of companion chatbots to make sure the bot does not bring up suicide or self-harm, and to respond to a user’s discussion of those topics with a crisis services phone number. It also requires them to disclose to a user that companion chatbots may not be suitable for some minors, and, if the operator knows the user is a minor, remind them to take a break every three hours.
If a company violates these requirements, a user who is harmed can sue for damages.
Critics of SB 243 argued it was watered down significantly after industry input. Tech Oversight California, Common Sense Media and the California chapter of the American Academy of Pediatrics withdrew their support during the last days of the legislative session.
Of particular concern was a provision that does not require chatbot operators to begin annually reporting data to the Office of Suicide Prevention until July 1, 2027.
“Kids are dying now, and mentally unstable people are dying now,” said Sacha Haworth, executive director of the Tech Oversight Project. “That, to me, was an unacceptably long wait.”
Tech industry groups and the California Chamber of Commerce registered opposition to the bill. The Electronic Frontier Foundation argued the language was too broad, and would invite First Amendment lawsuits and other legal action.
However, the bill did not face as much opposition as AB 1064, which critics warned would break the technology integral to creating a functional AI chatbot.
“This bill will strongly disincentivize companies to make any chatbot products or services available to minors in California,” the Computer & Communications Industry Association wrote in its opposition letter.
Newsom has until midnight on Monday to sign or veto the remaining bills on his desk.
____
©2025 The Sacramento Bee. Visit sacbee.com. Distributed by Tribune Content Agency, LLC.