The EU is close to regulating AI. What about us?

The rollout of commercial AI tools in recent months has shown that the technology can make mistakes: it can hallucinate, exhibit bias and discrimination, and be misused by humans.

These fears have sent governments scrambling to draft regulation for a technology that many don’t fully understand yet.

The European Union is already preparing to vote on a proposal for AI regulation, while the United States is only just beginning the process.

“My worst fears are that we cause significant — we, the field, the technology industry — cause significant harm to the world,” OpenAI CEO Sam Altman said at a congressional hearing in early May.

With a wave of new AI products coming out every week, the road to regulation can be as steep as the learning curve.

Suresh Venkatasubramanian has an insider’s perspective on where the U.S. is in this process. Venkatasubramanian is a professor of computer science and data science at Brown University. Previously, he worked as the assistant director for science and justice in the White House Office of Science and Technology Policy. There, Venkatasubramanian co-authored the Blueprint for an AI Bill of Rights.

The Blueprint for an AI Bill of Rights is a set of principles for guiding the development and use of AI. Published in October 2022, it calls for safe and effective systems, protections against algorithmic discrimination, data privacy, notice and explanation, and human alternatives.

“We think of it in terms of we should be regulating businesses, entities that do things that impact people, because the government is the keeper of the public interest,” Venkatasubramanian said. “Think about credit. Think about housing, you think about policing and criminal justice. These are specific areas where the public is impacted, and now increasingly impacted by the use of technology, whether it’s AI machine learning or what have you.”

The Blueprint for an AI Bill of Rights is not legally binding, but it provides examples of the risks Americans face without responsible innovation in the AI realm.

Without transparency, for example, a child welfare investigation could be opened against parents based on an algorithm’s output, without the parents being notified or given an opportunity to contest the decision.

A predictive policing system could identify people at greater risk of being involved in gun violence and put them on a watch list. Those people might receive no explanation for how they ended up on such a list.

Venkatasubramanian says different agencies are working internally to develop guidelines for using AI within their scopes. For example, in May, the Equal Employment Opportunity Commission issued guidance on how employers can use AI for recruitment and retention without violating federal anti-discrimination law.

While U.S. guidelines and legislation are lagging behind commercial innovation, Venkatasubramanian doesn’t think that’s a huge issue.

“The fact that products are coming out quickly does not mean that things are changing very quickly,” he said. “ChatGPT changed a lot of things. I agree. But it hasn’t changed the fundamentals of how we should think about regulating systems that affect us all.”

Concerned about the lack of regulation, tech leaders Elon Musk and Steve Wozniak, along with other industry insiders, signed an open letter calling for a pause on large AI experiments while policymakers catch up.

There’s no telling how long it will take Congress to develop and pass concrete legislation, but an assortment of proposals is already on the table.

Some lawmakers have proposed a ban on facial recognition technology, while others are calling for more transparency and accountability from AI developers. It is still too early to say what form AI regulation will take in the United States, but it’s clear that the conversation has begun.
