California has enacted a new law requiring AI chatbots to clearly disclose to users that they are talking to artificial intelligence, not a human being.
The bill, Senate Bill 243, was signed by Governor Gavin Newsom in October 2025. It is described as the first law of its kind in the United States to target "AI companion" tools: chatbots that people turn to for emotional support, friendship, or casual conversation.
Under the new rule, any AI chatbot that could mislead a user into thinking it's a real person must provide a "clear and visible disclosure" that it is AI-powered. In practice, users will know from the start of a conversation that they are speaking with a machine, not a human operator.
The law also includes a safety component. Operators of AI companion apps will be required to submit annual reports to the California Office of Suicide Prevention explaining how their chatbots detect, and respond to, users who express thoughts of self-harm or distress. The reports will be reviewed and made public to verify that proper safeguards are in place.
The policy reflects California's growing focus on AI transparency and user protection. While it doesn't apply to every kind of AI system, it sets a precedent that could prompt other states, or even federal lawmakers, to take similar steps.
As AI becomes more lifelike in tone and behavior, the state aims to ensure people always know when they’re chatting with code instead of a real companion.