Gemini 3 and other chatbots scrutinized as unreliable news gatherers



Summary

Latest AI update

Google’s Gemini 3, the company’s latest AI update, joins other tools like Microsoft’s Copilot and OpenAI’s Atlas in shaping how people access information.

AI in newsgathering

Jeffrey Blevins, professor at the University of Cincinnati, says AI can be a helpful first step in research but is prone to errors, including factual mistakes and engagement-driven biases.

AI boom in users

Studies show widespread AI use, with 62% of Americans interacting with it weekly, yet many AI-generated news responses contain inaccuracies, according to an international study.


Full story

Artificial intelligence tools like Google’s Gemini, Microsoft’s Copilot, and OpenAI’s new AI-powered browser, Atlas, are quickly becoming the primary way people search for information. But Jeffrey Blevins, a professor at the University of Cincinnati and a faculty fellow of the Center for Cyber Strategy and Policy, says consumers should be cautious when using AI to gather news. The warning comes as Google rolls out Gemini 3, the latest major update to its AI model.

Blevins told Straight Arrow News that even calling these systems “intelligent” can be misleading. 

“I’ve never been comfortable with the term ‘intelligence,’” he said. “‘Artificial’ I’m good with, and to me, ‘algorithm’ is just a much better moniker for that.”


First step, not final source

He said AI can be a helpful starting point when researching news or politics, but it is far from reliable as a sole source of information. 

“We should absolutely not be relying on it. At best, it’s a first step,” Blevins said. “Then, I need to be willing to take the next action steps and go to different sources to verify that.”

Blevins pointed to recent high-profile AI errors in newsrooms, including an AI-generated summer reading list published by the Chicago Sun-Times that included mismatched authors and several nonexistent books. 

“Sometimes the titles weren’t perfect, and sometimes they were books or authors that didn’t exist at all,” he said. “So, that’s a pretty big miss.”

Beyond factual mistakes, Blevins warns that AI platforms are designed to maximize engagement. 

“There’s a commercial interest here, and that is to keep you engaged,” he said.

AI use expansion

Recent data shows just how widespread AI use has become. A Pew Research Center survey found that 62% of Americans interact with artificial intelligence at least several times a week, highlighting how these tools are quickly becoming part of daily life. 

Meanwhile, an international study examining AI-generated responses to news prompts found that 45% contained at least one significant issue, and 81% had some form of problem. Researchers said the findings underscore concerns about relying on AI systems for timely or factual news. 

As AI tools continue to expand, Blevins says the responsibility falls on consumers to verify information through trusted, established sources rather than the fastest or most convenient ones.


SAN provides
Unbiased. Straight Facts.

Don’t just take our word for it.


Certified balanced reporting

According to media bias experts at AllSides

AllSides Certified Balanced May 2025

Transparent and credible

Awarded a perfect reliability rating from NewsGuard

100/100

Welcome back to trustworthy journalism.

Find out more

Why this story matters

The widespread use and rapid adoption of AI tools for information searching raises concerns about accuracy, reliability and the need for users to verify news from trustworthy sources.

AI reliability

As noted by University of Cincinnati professor Jeffrey Blevins, AI systems have demonstrated factual errors and inaccuracies, which highlights the risk of relying solely on these technologies for news and information.

Consumer responsibility

According to Blevins, users must take extra steps to verify information found through AI, emphasizing the importance of consulting multiple and established sources rather than accepting AI responses at face value.

Commercial interests

Blevins points out that AI platforms are designed to maximize user engagement, which may influence the information presented and underlines potential conflicts between commercial goals and accurate news delivery.
