First-of-its-kind AI law goes into effect in EU; US companies in the crosshairs

The world’s first artificial intelligence (AI) legislation went into effect Thursday, Aug. 1, in the EU. The AI Act, as it’s known, will regulate how companies develop and use the technology.

The law is facing criticism that it could discourage innovation before it even happens. But the European Commission didn’t pass it overnight. In fact, the law was first proposed back in 2020.

“It’s been drafted for the past few years and ChatGPT happened in the meantime,” Aleksandra Przegalinska, a senior research associate at Harvard University, told Straight Arrow News in July 2023.

After tweaks to adjust to the ever-changing generative AI reality, the commission passed the law in May of this year.

“It’s a regulation that looks at AI from the perspective of risk, mainly,” Przegalinska explained. “It says, okay, most of the applications of artificial intelligence that we have seen so far, we could call them minimal risk; but there are others that are high risk and there is also a way of using artificial intelligence that we would rather ban; like social scoring, for instance, or surveillance systems of different kinds.”

Last year during the State of the EU address, European Commission President Ursula von der Leyen spoke about the need to quickly regulate AI.

“AI is a general technology that is accessible, powerful and adaptable for a vast range of uses — both civilian and military,” von der Leyen said in September. “And it is moving faster than even its developers anticipated. So we have a narrowing window of opportunity to guide this technology responsibly.”

The AI Act separates types of technology into four different categories:

  • Prohibited AI systems will be banned as of February 2025. This could apply to AI that tries to predict whether a person might commit a crime based on their characteristics, or one that scrapes the internet to bolster facial recognition systems.
  • High-risk AI systems have the highest regulatory burden outside of those that are outright banned. This includes AI used for critical infrastructure like electrical grids, systems that make employment decisions, and self-driving vehicles. Companies with AI that falls into this category will have to disclose their training datasets and prove human oversight.
  • Minimal-risk systems make up the largest chunk of innovation, at about 85%. This is what’s known as “general-use AI.” The category includes generative AI like OpenAI’s ChatGPT or Google’s Gemini. For these types of AI, creators will need to make sure their models adhere to EU copyright rules and take proper cybersecurity precautions to protect users. These rules take effect in 12 months.
  • The fourth category is no risk. This is pretty self-explanatory and is for any AI use that doesn’t fall into the other three categories.

“We Europeans have always championed an approach that puts people and their rights at the center of everything we do,” von der Leyen said in a video posted to X. “So with our Artificial Intelligence Act, we create new guardrails not only to protect people and their interests but also to give business and innovators clear rules and certainty.”

While the rules were created to protect citizens of the EU, American tech companies will likely be the most affected by them.

In recent years, Microsoft, Google, Amazon, Apple and Facebook-parent Meta have spent massive amounts of money developing AI models.

The rules will be enforced by the European Commission’s AI Office. A spokesperson for the commission said the office will employ around 140 people.

If a company fails to comply with the new rules, it could face fines of $41 million or up to 7% of its global revenue, whichever is higher. And the regulatory environment could force these tech giants to make a big decision.

Meta already announced it wouldn’t make its Llama AI model available in the EU. But that’s not because of the AI Act; the company was already worried about the bloc’s General Data Protection Regulation (GDPR).

Member states have until August 2025 to establish the bodies that will handle enforcement of the law in their countries.

Meanwhile, companies that already have a commercially available product like ChatGPT will have a 36-month grace period to come into compliance.


Simone Del Rosario:

First comes the innovation. Then comes the regulation.

In a global first, a brand new Artificial Intelligence law is now in effect in the EU as of Thursday.

The legislation is known as the AI Act. It’ll regulate how companies develop and use the new technology, which critics say could crush innovation before it happens.

It’s been a long time coming; it was first proposed by the EU back in 2020.

Aleksandra Przegalińska:

It’s been drafted for the past few years and ChatGPT happened in the meantime.

Simone Del Rosario:

After some tweaks to adjust to AI’s new generative reality, it passed in May of this year.

Aleksandra Przegalińska:
It’s a regulation that looks at AI from the perspective of risk, mainly, right? Where it says: Okay, most of the applications of artificial intelligence that we have seen so far, well, we could call them minimal risk, but there are others that are high risk and there is also a way of using artificial intelligence that we would rather ban, right? Like social scoring, for instance, or surveillance systems of different kinds.

Simone Del Rosario:

Last year during the State of the EU address, European Commission President Ursula von der Leyen talked about the need for speed when it comes to regulating AI.

Ursula von der Leyen:

“AI is a general technology that is accessible, powerful and adaptable for a vast range of uses – both civilian and military. And it is moving faster than even its developers anticipated. So we have a narrowing window of opportunity to guide this technology responsibly.”

Simone Del Rosario:

The AI Act separates AI systems into four buckets based on risk level.

Prohibited AI systems will be banned as of February 2025. This applies to, say, AI that tries to predict whether a person might commit a crime based on their characteristics. Or one that scrapes the internet to bolster facial recognition systems.

The next level is high-risk AI systems, which’ll have the highest regulatory burden outside of those that are outright banned. This includes AI that is used for critical infrastructure like electrical grids, systems that make employment decisions, and self-driving vehicles.

Companies with AI that falls into this category will have to disclose their training datasets and prove human oversight.

Next are minimal-risk systems, which make up the largest chunk of innovation at about 85%. This is what’s known as “general-use AI.” It includes generative AI like OpenAI’s ChatGPT or Google’s Gemini. For these types of AI, creators will need to make sure their models adhere to EU copyright rules and take proper cybersecurity precautions to protect users. These rules take effect in 12 months.

The fourth category is No Risk. That’s pretty self-explanatory and is any AI use that doesn’t fall into the other three categories.

Ursula von der Leyen:

“We Europeans have always championed an approach that puts people and their rights at the center of everything we do. So with our Artificial Intelligence Act, we create new guardrails not only to protect people and their interests but also to give business and innovators clear rules and certainty.”

Simone Del Rosario:

While the first-of-its-kind legislation is there to protect EU citizens, American tech companies will feel the brunt of the regulation.

In recent years Microsoft, Google, Amazon, Apple and Facebook-parent Meta have dumped huge amounts of money into developing AI models.

If a tech company fails to comply with the EU’s new rules, it could face fines of $41 million or up to 7% of its global revenue, whichever is higher.

It could result in companies taking their ball and going home. Meta has already said it wouldn’t make its Llama AI model available in the EU. But not because of the AI Act; the company was already worried about the bloc’s General Data Protection Regulation.

American tech giants have faced the ire of EU regulators, who have imposed billions in fines in recent years.

There is a grace period, though. Many systems already commercially available like ChatGPT will have 36 months to come into compliance.

For the latest AI stories straight to your phone, download our Straight Arrow News app.