California effort to regulate AI fails. Where does AI regulation go from here?

California Gov. Gavin Newsom made headlines Sunday, Sept. 29, after he vetoed a sweeping artificial intelligence safety bill. So what comes next for AI regulation in the country and how do America’s efforts match up against other governments?

The proposed California law would have required safety testing of large AI systems. It would have also given the state’s attorney general power to sue companies over serious harm caused by their tech, and it would have required a sort of “kill switch” that would turn off AI models in case of emergency.

“I do not believe this is the best approach to protecting the public from real threats posed by the technology,” Newsom said in a statement explaining his opposition. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it.”

It’s very clear that the harms of AI today are toward consumers and toward our democratic institutions, not sort of pie-in-the-sky sci-fi fantasies about computers making super viruses.

Patrick Hall, Assistant Professor of Decision Sciences, George Washington University

Newsom’s decision comes just a few months after the European Union implemented its AI Act in August. That law establishes a tiered system based on perceived risk.

For instance, minimal-risk systems like OpenAI’s ChatGPT would only need to adhere to transparency provisions and EU copyright laws. But higher-risk systems, like AI models that try to predict whether a person might commit a crime, will be fully banned as of February 2025.

These algorithms are becoming a bigger and bigger part of our lives, and I do think it’s time to regulate them.

Patrick Hall, Assistant Professor of Decision Sciences, George Washington University

To discuss how to regulate AI across state and national borders, Straight Arrow News talked to Patrick Hall, an assistant professor of decision sciences at George Washington University.

The following transcript has been edited for length and clarity. Watch the exchange in the video above.

Simone Del Rosario: Patrick, what was in this bill that the governor of California sent back, and how would it have changed the AI landscape in the state?

Patrick Hall: I think that there are a lot of good things on the table for this California bill, in particular, mandatory testing before systems were released; the ability for the government to take enforcement actions when harms do occur related to AI systems; the notion of a kill switch or the ability to turn a system off quickly; whistleblower protections. There were good things there.

I think that the issue was that the focus of the law was on so-called frontier models. And these are sort of the largest AI models developed by the largest AI companies. It’s a very narrow scope. And then also it really only focused on a sort of small aspect of the performance of AI systems that has come to be known, sort of confusingly, as AI safety.

AI safety really concentrates on things like preventing systems from being used to make bioweapons, preventing catastrophic risk, and I think that was where the bill went wrong.

AI can be a dangerous technology, but I think that it’s very clear that the harms of AI today are toward consumers and toward our democratic institutions, not sort of pie-in-the-sky sci-fi fantasies about computers making super viruses. So I think that’s where the bill went wrong: its focus on catastrophic risk.

Simone Del Rosario: Do you agree with the tech companies that said this bill would have stifled innovation because of the things they would have to do before developing these systems, or is that just an excuse they make?

Patrick Hall: My opinion there is that it is an excuse, but it would certainly have cut into their revenues in terms of these AI systems, which are probably already under a great deal of stress. I try to explain to people that these generative AI systems require industrial-scale investments in computation, tens [to] hundreds of millions of dollars or more. So they’ve already spent a lot of money on these systems. Whenever you have a sort of regulatory burden, that, of course, increases the amount of money that you have to spend. But since we’re talking about the biggest, richest companies in the world, I do think it’s a little bit of an excuse.

Simone Del Rosario: I am curious: had this bill passed, or if California decides to move forward with different but similar legislation regulating AI when the rest of the country hasn’t, could this change how tech companies operate in the state of California?

Patrick Hall: Certainly you could see tech companies leave the state of California. I’m not sure how realistic that is, though. What tends to happen is almost a different scenario, where most of the larger firms would take the California regulation – or any large state’s regulation, whether California, New York, Illinois or Texas – and apply the obligations to meet it across the entire United States.

I’d say that’s actually the more likely outcome, and perhaps another reason some of the tech firms did not like this bill: they knew it would not only affect their behavior and revenues in California, but was likely to affect their behavior and revenues throughout the country.

Simone Del Rosario: Let’s extrapolate that out even more because the EU has passed AI regulation, the AI Act, over there. These are multinational companies that have to adhere to rules in the EU. So how does that affect business in America? And how is the proposed regulation in California different from what we see in the EU?

Patrick Hall: One thing that I would like to emphasize is that EU citizens and citizens of other countries with strong data privacy laws or AI regulations really have a different experience online than Americans, and have many more protections from predatory behaviors by tech companies than we as Americans do.

What it boils down to is that tech companies are able to extract a lot more data and sort of conduct a lot more experiments on Americans than they are able to on EU citizens and citizens of other countries in the world that have strong data privacy or AI regulations.

I think it’s a fully different online experience in Europe these days than it is in the U.S. The EU AI Act is a fairly different kind of law. It’s a much broader law and it’s a law that doesn’t focus only on so-called frontier models or only on large models. It doesn’t focus only on safety. It focuses on all types of uses of AI, and it has several different risk tiers, where models in different risk tiers or systems in different risk tiers have different compliance burdens. So it’s a much more holistic law.

Simone Del Rosario: Do we need to have an AI act of our own for a federal response to this?

Patrick Hall: It’s a very good question. I think the answer is yes, eventually. AI in 2024 is very data-driven, so it’s very hard to have good AI regulation without good data privacy regulation. The EU is quite far ahead of us in that they have a strong, overarching data privacy regulation, the GDPR, and after they passed that, they were able to pass an AI Act.

Now it doesn’t have to be done in that order. I’m not saying that the Europeans have done everything right. I’m not saying that they won’t stifle innovation. Certainly, they will to a certain degree, but we have a lot of catching up to do as well. We need to start thinking about data privacy and broader regulation of AI, certainly, and those two may have to be done together. It’s just hard to do AI regulation without data privacy regulation because 2024 AI is so data driven.

We as voters need to make it clear to our representatives that these types of regulations are important, and we need to make clear the harms we’re experiencing, anything from privacy violations to inconveniences to more serious negative outcomes.

These algorithms are becoming a bigger and bigger part of our lives, and I do think it’s time to regulate them. And I’d also make it clear that we have good models for regulating algorithms on the books in consumer finance, employment decision-making and medical devices, and any of these would be a better model to start from than the sort of, quote-unquote, AI safety direction.
