California just passed a slew of laws meant to direct the development of generative AI tools towards the greater good. Once again, California is taking the lead in belling the cat where, ideally, we would want legislation to come at the federal level. In this article, I will look at the rationale behind two of these regulations and their potential impacts, positive and negative.
A Move in the Right Direction
To show my cards up front: I believe that, for the most part, these regulations are pointed in the right direction, and it is commendable that California has moved on what is a technologically sophisticated area. It is also commendable that legislators bucked pressure from the tech industry, much of which is headquartered in California, to pass these laws. On the other hand, some proposed regulations that had more teeth fell by the wayside due to industry lobbying.
For concrete illustrations of the possible negative effects of AI, one only has to look at a few wildly popular, sharply creative offerings from Hollywood. Any such list must include Minority Report from way back in 2002; boy, were they prescient about the worst manifestations of predictive AI! On the generative AI side, two movies that predated the ongoing revolution in the field are Her (2013) and Ex Machina (2014). The common theme is that, left unregulated, AI will take humankind down in a battle of unequals. Driven in part by such Hollywood offerings, there has been longstanding and growing unease about the epochal changes being brought about by AI.
Can We Watermark AI-generated Content?
In this landscape, California has stepped up to pass a slew of laws, culminating in October. The first one I will consider is the AI Transparency Act, AB 853. Broadly put, it aims to let us determine whether a piece of content is AI generated. It requires generative AI systems to include a disclosure in AI-generated image, video, or audio content. This is essentially a hidden digital marker that conveys information such as the provider’s name, the AI system’s name and version, the date of creation, and a unique identifier. The disclosure must be permanent or “extraordinarily difficult” to remove. The AI service provider must also offer a free, public detection tool capable of determining whether content a user uploads is AI generated.
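To make the requirement concrete, here is a minimal sketch, in Python, of the kind of metadata such a latent disclosure might carry. The field names, the JSON manifest format, and the make_disclosure helper are my own illustrative assumptions; the statute specifies categories of information, not a wire format.

```python
# A minimal sketch (field names and manifest format are assumptions, not
# the statutory text) of what a latent disclosure payload might carry.
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class LatentDisclosure:
    provider_name: str    # name of the covered provider
    system_name: str      # name of the generative AI system
    system_version: str   # version of the system that produced the content
    created_at: str       # time and date of creation (ISO 8601)
    content_id: str       # unique identifier for this piece of content

def make_disclosure(provider: str, system: str, version: str) -> bytes:
    """Serialize a disclosure manifest so it can be embedded in the media."""
    payload = LatentDisclosure(
        provider_name=provider,
        system_name=system,
        system_version=version,
        created_at=datetime.now(timezone.utc).isoformat(),
        content_id=str(uuid.uuid4()),
    )
    return json.dumps(asdict(payload)).encode("utf-8")

if __name__ == "__main__":
    blob = make_disclosure("ExampleAI Inc.", "example-image-gen", "2.1")
    print(blob.decode("utf-8"))
```

Serializing the manifest is the easy half; embedding it in the media so that it survives editing and re-encoding is where the hard part begins, as we will see shortly.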
As developers and consumers of AI, with content creators and educators being prominent categories, we have long wished for such technology. Wouldn’t it be great for an investigative journalist to figure out whether some video is related to what she is investigating, or is from a different time or place, or is altogether fake, before she runs with it? Wouldn’t it be great for a college professor to know whether the assignment essays she is reading are LLM generated? So this regulation is, hands down, a move in the right direction.
Now comes the question of technical feasibility. There are two main open technical questions, which make this regulation less than 100% enforceable. The first is whether a digital watermark can really be made hard to remove or tamper with. Research on digital watermarking dates back at least 25 years, and it remains an active cat-and-mouse game; the bottom line is that this is not a solved problem. The second issue is remixing AI-generated content, perhaps even from multiple providers. Say I generate a fantasy image of my favorite basketball player jumping on Mars to do a slam dunk. Then I generate, perhaps using a different tool, my favorite scientist blocking the dunk. And then, just because I have time on my hands, I juxtapose the two. Would the tool from either provider be able to tell the provenance of the finished image? What if I juxtapose with something from a real-world scene that I captured with my camera? The answer is that it is not a slam dunk, yet. But hey, if we wait for 100% solutions before regulation can move, we will get nowhere.
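To see why robustness is the sticking point, consider this toy demonstration. It is a strawman least-significant-bit scheme of my own, not anything a real provider ships, but it shows how a single mild re-encoding can erase a naive watermark.

```python
# Toy illustration, not a real watermarking scheme: hide payload bits in the
# least significant bit of each pixel, then simulate a lossy re-encode by
# quantizing pixel values. The recovered bits come back essentially random.
import numpy as np

rng = np.random.default_rng(0)

image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in "photo"
bits = rng.integers(0, 2, size=image.size, dtype=np.uint8)   # payload to hide

# Embed: overwrite the least significant bit of every pixel.
marked = (image & 0xFE) | bits.reshape(image.shape)

# Simulate a mild lossy re-encode (e.g., recompression) by quantizing.
recompressed = (marked // 4 * 4).astype(np.uint8)

# Extract and compare.
recovered = (recompressed & 1).flatten()
error_rate = np.mean(recovered != bits)
print(f"bit error rate after re-encoding: {error_rate:.2f}")  # ~0.5, payload gone
```

Serious schemes spread the signal more cleverly precisely to survive such transformations, but cropping, compositing, and adversarial edits are what keep the cat-and-mouse game going.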

Chatbots as Companions?
The second regulation is titled “Companion chatbots,” SB 243. One of the compelling arguments that moved legislators to act was the series of tragic cases of teenage suicides after long interactions with chatbots. Among other things, the law will require companies to remind users that they are interacting with a chatbot and not a human. For minor users, a reminder must pop up at least every three hours of continued interaction with a chatbot. Companies will also be required to maintain and implement a protocol to prevent self-harm content, refer users to crisis service providers, and publish details of how the protocol works on their websites. The law also requires companies to submit annual reports to the California Office of Suicide Prevention beginning in July 2027, which must disclose, among other things, the number of crisis service referrals the chatbot has made. This last bit should be useful to mental health providers and researchers who can make progress in this area.
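To make the mechanics concrete, here is a rough sketch, in Python, of the bookkeeping such a protocol implies: screen each message for self-harm signals, issue the periodic “you are talking to an AI” reminder to minors, and tally crisis referrals for the annual report. Beyond the three-hour interval, everything here (the class, the keyword screen, the specific crisis line) is an illustrative assumption of mine, not a compliance recipe.

```python
# Rough sketch of per-session bookkeeping for a companion chatbot operator.
# The three-hour interval is in the bill; names and the keyword screen are
# illustrative assumptions, not a compliance implementation.
import time

REMINDER_INTERVAL_SECONDS = 3 * 60 * 60  # remind at least every three hours
SELF_HARM_KEYWORDS = {"hurt myself", "end my life"}  # placeholder for a real classifier

class CompanionSession:
    def __init__(self, user_is_minor: bool):
        self.user_is_minor = user_is_minor
        self.last_reminder = time.monotonic()
        self.crisis_referrals = 0  # tallied for the annual report

    def pre_response_checks(self, user_message: str) -> list[str]:
        notices = []
        # 1. Self-harm protocol: refer the user to crisis services.
        if any(kw in user_message.lower() for kw in SELF_HARM_KEYWORDS):
            self.crisis_referrals += 1
            notices.append("If you are in crisis, please contact the 988 Suicide "
                           "& Crisis Lifeline (call or text 988).")
        # 2. Periodic reminder for minors that they are talking to an AI.
        now = time.monotonic()
        if self.user_is_minor and now - self.last_reminder >= REMINDER_INTERVAL_SECONDS:
            self.last_reminder = now
            notices.append("Reminder: you are chatting with an AI, not a person. "
                           "Consider taking a break.")
        return notices
```

None of this is technically exotic; the burden is organizational (maintaining the protocol, publishing it, and reporting the referral counts every year).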
This regulation is a watered-down version of a previous bill, AB 1064, that was vetoed by the Governor of California this October. That bill would have prohibited companies from making a companion chatbot available to a minor unless they could guarantee that a whole set of tragic circumstances could not occur. This is a case where I believe the original bill would have done more harm than good, because it would have been practically impossible to provide guarantees of the kind asked for, and that would have effectively barred kids from using chatbots. For anyone who has seen kids using chatbots for everything from school homework to planning social events, the lack of chatbots would have been tantamount to some other earth-shattering consequence, like not being able to access social media during school. Data from the advice and research group Internet Matters, from July, says that two-thirds of 9-to-17-year-olds have used AI chatbots. I will take their word for it, but will also state for the record that I am yet to meet anyone from the other one-third.

To Sum Up
So, to sum up: this entire development is positive, a set of regulations that does a reasonable job of understanding the nuances of generative AI and directing it towards the greater good, despite deep-pocketed lobbying from the industry. But it also highlights the pressing need for regulation at the federal level. Imagine each of the 50 states coming up with its own set of regulations, one by one, and those regulations lacking a nicely totally ordered property (under which satisfying the most stringent state’s rules would automatically satisfy those of all other states). Rather, there would be partially overlapping sets of regulations, some even contradictory. And spare a thought for those poor AI companies; well, most are valued at multiple billions of dollars, so billions-of-dollars-poor AI companies. They would have to craft 50 different versions of their products. This is clearly not viable. So, to make sure that AI technology continues to flourish and that it benefits large parts of society, we need to move on regulation at the federal level. Let’s get moving … hopefully after we have solved a few other pressing issues, like getting the government reopened.