AI routinely gets facts wrong when people use it for news: Report


Summary

AI accuracy issues

According to a large international study involving public broadcasters from 18 countries, AI assistants such as ChatGPT, Copilot and Gemini misrepresent news content nearly half the time.

Performance of AI assistants

The report stated that Google Gemini had the highest percentage of significant sourcing issues, with 72% of its responses affected, citing inconsistencies in how sources were presented.

Public use of AI news

A Reuters Institute study indicates that 7% of online news consumers use AI for news, a figure rising to 15% among people under 25.


Full story

A large international study found that artificial intelligence (AI) assistants like ChatGPT, Copilot and Gemini misrepresent news content nearly half the time, just as a growing number of people are turning to AI for their news.

New study

The 69-page report, coordinated by the European Broadcasting Union and the BBC, covered 22 public broadcasters from 18 countries, among them NPR in the U.S., and spanned a range of languages and territories.

Journalists involved in the study submitted sets of questions to ask the AI assistants, then assessed more than 3,000 responses.

It found that 45% of AI responses had at least one significant issue when providing information about news events, and 81% had some form of issue. Those issues ranged from factual errors to incorrect sourcing.

Notably, 20% contained major accuracy issues, including “hallucinations” and outdated information.

Google Gemini gave researchers the most error-prone performance, with 72% of its responses containing significant sourcing issues. Every other assistant was below 25%.


“Gemini was especially striking in this regard, as it varied greatly in how sources were presented: sometimes without links, sometimes with inline references and only rarely with direct links,” the report reads. “These changing output formats appeared highly inconsistent and therefore stood out the most.”

One example used was asking the assistants, “Who is the Pope?”

ChatGPT, Copilot and Gemini all answered Pope Francis, even though the correct answer was Pope Leo XIV. Copilot, despite naming the wrong pope, also identified the day of Francis’ death.

While the findings show serious errors with AI, they represent a slight improvement over a BBC study from earlier this year.

The BBC study, which tested mostly the same AI assistants, found that 51% of answers to news-related questions contained significant errors. The latest study’s authors cautioned, however, that the two datasets don’t allow an apples-to-apples comparison.

More people turning to AI

The findings come as more people turn to AI for news, though it remains a marginal source compared with other ways people get their news.

A study from Reuters Institute found 7% of online news consumers used AI to get their information, and that number rose to 15% for people under the age of 25.

Only 2% of people who responded to a Pew Research Center survey said they use AI to get their news “often.”

Fewer than 1% of Americans said they preferred to get their news from AI rather than other news sources.

Among those who do use AI to get the news, 33% said they find it hard to determine what is true and what isn’t. About half also said they sometimes come across news they believe is inaccurate.

What can be done

“AI developers need to take this issue seriously and rapidly reduce errors, in particular accuracy and sourcing errors,” the new report reads. “They have not prioritized this issue and must do so now.”

The report also said publishers need greater control over whether AI assistants can use their content and how it gets used.

Finally, the report said AI developers need to be held accountable for the quality of their products.

“While industry-led solutions are preferable, policymakers and regulators should urgently consider how the news content in AI assistants can be improved further,” the report reads.

While the report emphasizes that developers and regulators must take the lead in solving the problem, it also suggests that consumers should take matters into their own hands and understand the current limitations of this technology.

Cole Lauterbach (Managing Editor) contributed to this report.


Why this story matters

The accuracy and reliability of AI chatbots in delivering news are in question after a global report found frequent factual and sourcing errors, raising concerns as more people turn to these tools for information.

AI news accuracy

A multinational study identified frequent factual and sourcing errors in major AI chatbots, emphasizing the need to address how accurately these tools provide news content.

Public trust in information

As more individuals use AI for news, reported difficulties in determining accuracy highlight challenges in public trust and the risk of misinformation.

Accountability and regulation

Calls for AI developers, publishers and regulators to address quality and control reflect broader debates about responsibility and oversight in emerging digital media technologies.

Get the big picture

Synthesized coverage insights across 76 media outlets

Behind the numbers

The study analyzed more than 3,000 responses from four major AI assistants in 14 languages and found that 45% of the answers had at least one major issue, with serious sourcing or factual problems appearing in around a third of all cases.

Oppo research

Media industry stakeholders are pressing for stronger regulation, transparency and independent monitoring of AI tools, while tech companies acknowledge 'hallucinations' but emphasize ongoing improvement efforts.

Quote bank

Jean Philip De Tender stated, "When people don't know what to trust, they end up trusting nothing at all, and that can deter democratic participation." Peter Archer said, "People must be able to trust what they read, watch and see."


Media landscape


76 total sources

Key points from the Left

  • Leading AI assistants misrepresent news content in nearly half of their responses, according to a study by the European Broadcasting Union and the BBC that assessed 3,000 responses across 14 languages.
  • The study found that 45% of AI responses contained at least one significant issue, with 81% showing some form of problem.
  • Gemini, Google's AI assistant, had serious sourcing issues in 72% of its responses, the most of all assessed tools.
  • The report urges AI companies to improve accountability and accuracy in news sourcing.


Key points from the Center

  • On October 22, 2025, the European Broadcasting Union and BBC released a study showing 45% of AI answers from ChatGPT, Microsoft's Copilot, Google's Gemini, and Perplexity had significant issues.
  • The study enlisted 22 public service media organizations across 18 countries and 14 languages, posing the same 30 news-related questions between late May and early June during publishers' two-week content access window.
  • Accuracy failures, including hallucinations and outdated facts, accounted for 20% of errors, while sourcing errors affected 31% of responses; Gemini, Google's AI assistant, had issues in 76% of replies.
  • The broadcasters and media organizations behind the study are calling on national governments and AI companies to act, launching the 'Facts In: Facts Out' campaign and the News Integrity in AI Assistants Toolkit.
  • Only 7% of online news consumers use AI chatbots for news, rising to 15% among under-25s. Jean Philip De Tender warned, 'This research conclusively shows that these failings are not isolated incidents.'


Key points from the Right

  • A study by the European Broadcasting Union shows that AI assistants made errors about news events 45% of the time.
  • The report found that 45% of AI answers contained at least one significant issue, with Gemini performing the worst at 76%.
  • Many AI assistants confused news with parody, raising concerns about trust and accuracy.
  • The report indicates that AI assistants misrepresent news, emphasizing the importance of reliable information for democratic participation.



Powered by Ground News™
