AI is speeding up insurance claims, but at what cost?

Kennedy Felton Lifestyle Correspondent/Producer

  • Artificial intelligence is increasingly being used in health insurance to streamline claims and cut costs, but concerns about ethics, oversight and patient outcomes are intensifying as the technology reshapes the industry.
  • Some states, like California, have passed legislation restricting AI-driven claim denials, ensuring human oversight remains central to healthcare decisions.
  • While AI promises efficiency, risks like bias, lack of transparency, and potential errors raise significant questions about its role in medical decision-making.

Full Story

Artificial intelligence is transforming industries worldwide. Now, it’s making waves in health insurance, streamlining claims and customizing coverage.

But as AI becomes more involved in healthcare decisions, questions about oversight, ethics and patient outcomes are growing louder.

Stories like Dr. Elisabeth Potter’s show what’s at stake. The Austin-based plastic surgeon went viral in January after posting a TikTok video saying she received a call from UnitedHealthcare during a breast reconstruction surgery.

“I got a call into the operating room that UnitedHealthcare wanted me to call them about one of the patients that was having surgery today who’s actually asleep having surgery,” Potter said in the video.

Incidents like this highlight growing tensions between healthcare providers and insurers — and why companies are turning to AI for help.

The financial promise of AI in insurance

According to Newsweek, consulting firm McKinsey & Company estimates AI could help health insurers save between $150 million and $300 million in administrative costs — and up to $970 million in medical costs — for every $10 billion in revenue.

University of Pennsylvania professor Hamsa Bastani explained how the process works.

“When a claim comes in, an algorithm can review details like medical codes, patient history, and patterns of past claims to see whether the claim is valid, consistent with policy coverage,” Bastani told Newsweek.

If a claim appears routine, an automated payout may follow. If not, it’s flagged for a human reviewer.

AI regulation varies by state

Because health insurance is regulated at the state level, there is no national policy standard for AI. That’s why some states—including California—are passing their own legislation.

In 2024, Governor Gavin Newsom signed Senate Bill 1120, which prohibits insurance companies from using AI to deny claims outright. At least 10 other states are considering similar legislation, according to NBC News.

The Mercury News reports that 26% of insurance claims in California were denied last year. And in 2023, the American Medical Association found that insurer Cigna denied more than 300,000 claims using an AI-assisted review system.

California State Senator Josh Becker, who authored the bill, explained why human judgment still matters.

“An algorithm cannot fully understand a patient’s unique medical history or needs, and its misuse can lead to devastating consequences,” Becker said.

“SB 1120 ensures that human oversight remains at the heart of healthcare decisions, safeguarding Californians’ access to the quality care they deserve.”

The risks behind the tech

AI in health insurance promises efficiency, but it also raises concerns about fairness. These systems learn from data, which can reflect bias based on race, gender, or income.

Experts also point to the “black box” problem—when algorithms make decisions without clear explanations. This can make it nearly impossible for patients to understand why their claim was denied.

Another expert told Newsweek that insurance claim evaluators need a deep understanding of the technology to ensure patients aren’t put at risk.

Even the most advanced AI can miss critical context, and when that happens, patients pay the price.

Dr. Potter’s case escalates

In Dr. Potter’s case, even the human process has its pitfalls. After she posted the original TikTok video, she says UnitedHealthcare followed up with a legal letter — and later denied her cancer patient’s hospital stay.

Potter has continued to share updates with her followers, adding fuel to an already heated conversation.

With new laws taking shape across the U.S., one thing is clear: lawmakers are trying to ensure that AI helps the healthcare system without hurting the people it serves.

More recently, Arizona introduced a bill that would prohibit AI from being the sole factor in decisions to deny, delay or modify healthcare services.

As the use of AI in health insurance grows, the debate over how — and if — it should replace human decision-making is just getting started.

[KENNEDY FELTON]

Artificial intelligence is making its mark across industries from retail to finance. Now, some health insurance companies are using it to speed up claims and tailor coverage. But how likely is it that your next policy will be powered by AI?

“I got a call into the operating room that UnitedHealthcare wanted me to call them about one of the patients that was having surgery today who’s actually asleep having surgery,” said Dr. Elisabeth Potter, a plastic surgeon.

Stories like these reveal the tension between healthcare providers and insurance companies and the urgent need for smarter, more efficient systems. That’s where AI is starting to step in. According to Newsweek, consulting firm McKinsey and Company estimates that AI could save between $150 million and $300 million in administrative costs and as much as $970 million in medical costs for every $10 billion in revenue.

“When a claim comes in, an algorithm can review details like medical codes, patient history, and patterns of past claims, to see whether the claim is valid, consistent with policy coverage,” said Hamsa Bastani, a University of Pennsylvania professor, speaking to Newsweek.

If the claim looks straightforward, a payout may be automated. Otherwise, it’s kicked to a human reviewer. Because health insurance is regulated at the state level, there’s no one-size-fits-all rule. That’s why states like California are creating their own laws to restrict how AI is used in reviewing claims. In 2024, Governor Gavin Newsom signed a bill into law banning AI from denying insurance claims outright, joining at least ten other states pushing similar legislation.

The Mercury News reports nearly 26% of claims in California were denied last year. In 2023, the American Medical Association found insurer Cigna denied more than 300,000 claims through an AI-assisted review process.

“An algorithm cannot fully understand a patient’s unique medical history or needs, and its misuse can lead to devastating consequences. SB 1120 ensures that human oversight remains at the heart of healthcare decisions, safeguarding Californians’ access to the quality care they deserve,” said California State Senator Josh Becker, who authored the bill.

Experts say AI systems learn from data, which can be biased based on race, gender, or income. And the technology itself is often a black box, making it nearly impossible for patients to understand why a claim was denied. Another AI expert told Newsweek that claim evaluators need a deep understanding of the systems they’re using, or patients could be at risk. Even the most advanced AI can miss context, and when that happens, it’s the patients who pay the price.

But in the case of Dr. Elisabeth Potter, the human route may not always be the best either. Since posting her video about leaving surgery to take a call from UnitedHealthcare, she says they sent her a legal notice and even denied her cancer patient’s hospital stay.

“The gentleman said he needed some information about her, wanted to know her diagnosis, and whether her inpatient stay should be justified,” Dr. Potter added.