
The race to regulate AI hits snag; politicians don’t understand the tech

Sep 19, 2023


Should government have a role in regulating artificial intelligence? When Senate Majority Leader Chuck Schumer posed that question at a closed-door meeting with tech executives, he said, “every single person raised their hands, even though they had diverse views.”

The overnight sensation of ChatGPT put a timer on government oversight, as politicians scrambled to convene hearings on Capitol Hill about the need for regulation.

I just hope that this time around, we will do a better job than we did with social media.

Aleksandra Przegalińska, AI senior research associate, Harvard University

The U.S. is behind the curve on regulating AI compared with the European Union, which passed draft legislation known as the AI Act this summer. The AI Act was first proposed by the European Commission in 2021, more than a year before OpenAI released ChatGPT.

Countries and commissions face many challenges when it comes to regulating the fast-moving technology. To start, government regulation has never been able to keep up with technological advances, and politicians have regularly displayed an inability to understand the field. So why are some of tech’s biggest executives pushing for regulation?

AI expert Aleksandra Przegalińska joins Straight Arrow News from Warsaw, Poland, for an in-depth conversation about the future of AI, from job replacement to developing consciousness. Watch the full conversation here.

Below is an excerpt from that interview.

Simone Del Rosario: All of this conversation has really been sparked by the question of who’s going to regulate AI, and the urgency behind that effort. Who do you think should be regulating something like this, though? You have to admit that politicians aren’t really the most well-versed in the most groundbreaking technology.

Aleksandra Przegalińska: That is correct. And we saw that with social media and Mark Zuckerberg’s hearings in the Senate. There was a bit of a mismatch in terms of digital competencies.

But ultimately, I think it has to be a collective effort of many different stakeholders because this technology is not only about technology. It’s not about IT people and how they’re going to use it, but it’s a technology that is very, very broad. It’s a general-purpose technology, you could even say.

It’s the type of technology that will penetrate so many different professions and tasks: accountants, health care professionals, people working in various organizations, in business, in consultancy. Wherever you look, you will see AI. So in that way, I think it has to be a collective effort.

I do regret a bit that this regulation is happening this late, because many people from the AI field had been calling for regulation before ChatGPT, way before ChatGPT. And we already knew there would be some problems because some of these systems are just not explainable. They’re like black boxes. They are very difficult to understand, and yet we use them.

We want them to make decisions about important things, like whether to approve or decline someone’s loan at a bank. So we really need to understand what these systems are doing. And that has been a problem since way before ChatGPT.

But now I am sort of glad that there’s at least a debate. And I do hope that this time around, the politicians will come better prepared for these types of discussions. They do have experts. They can talk to many people.

I observed what’s been going on at the White House. There was a meeting between Kamala Harris and many representatives of those companies that are building generative tools, generative AI.

There was a hearing at the Senate where one of the senators said that Sam Altman should tell everyone how to regulate AI. And I don’t think that’s necessarily the best way to go. We need at least a couple of rounds of consultations. Many companies have to be involved, but also NGOs, civil society, and researchers who work not just at private companies but also at universities.

There are many people with good ideas so it has to be a dialogue. And I just hope that this time around, we will do a better job than we did with social media.
