UnitedHealth’s artificial intelligence erroneously denies claims, lawsuit says


For years, important decisions about who received health insurance coverage were made in the back offices of health insurance companies. Now some of these life-changing decisions are being made by artificial intelligence programs.

At least that’s what the two families who sued UnitedHealth Group this week allege, claiming the insurance giant used new technology to deny or shorten rehabilitation stays for two elderly men in the months before their deaths.

They say UnitedHealth’s artificial intelligence (AI) makes “rigid and unrealistic” decisions about what patients need to recover from serious illnesses, denying them care at skilled nursing and rehabilitation centers that Medicare Advantage plans should cover, according to a federal lawsuit filed in Minnesota by the estates of two elderly Wisconsin patients. The lawsuit, which seeks class-action status, says the insurer illegally allowed AI to override doctors’ recommendations for these men and patients like them. The families say such reviews should be carried out by medical professionals.

The families allege in the lawsuit that the insurance company is denying care to elderly patients who are unlikely to fight back, even though there is evidence that AI does a poor job of assessing people’s needs. The company used algorithms to set coverage terms and override doctors’ recommendations, despite the AI program’s strikingly high error rate, the lawsuit says.

According to court documents, more than 90% of the patient claim denials that were appealed were overturned through internal appeals or by a federal administrative law judge. In reality, though, few patients questioned the algorithm’s determinations: only a small percentage – 0.2% – chose to challenge claim denials through the appeals process. The vast majority of people covered by UnitedHealth’s Medicare Advantage plans “either pay out-of-pocket or forego the remainder of their prescribed post-acute care,” the lawsuit says.

Attorneys representing the families suing the Minnesota-based insurance giant said the high number of denials was part of the insurance company’s strategy.

“They are putting their own profits ahead of the people they have a contract with and are then legally obligated to protect,” said Ryan Clarkson, a California lawyer whose law firm has filed several lawsuits against companies that use AI. “It’s that simple. It’s just greed.”

UnitedHealth told USA TODAY in a statement that naviHealth’s AI program, which is cited in the lawsuit, is not used to determine coverage.

“The tool serves as a guide to inform providers, families and other caregivers about what type of support and care the patient may need both in the facility and upon returning home,” the company said.

Coverage decisions are based on Centers for Medicare & Medicaid Services criteria and the consumer’s insurance plan, the company said.

“This lawsuit is without merit and we will defend ourselves vigorously,” the company said.

Complaints of this kind are not new. They are part of a growing number of legal disputes.

In July, the Clarkson law firm filed a lawsuit against CIGNA Healthcare, alleging the insurer used AI to automate claim denials. The firm has also initiated proceedings against ChatGPT maker OpenAI and Google.

Families pay for expensive care that the insurer denies

The plaintiffs in this week’s lawsuit are the relatives of two deceased Wisconsin residents, Gene B. Lokken and Dale Henry Tetzloff, both of whom were covered by UnitedHealth’s private Medicare plans.

In May 2022, Lokken, 91, fell at home and broke his leg and ankle, requiring a brief hospital stay and then a month in a rehabilitation facility to heal. Lokken’s doctor then recommended physical therapy so he could regain strength and balance. The Wisconsin man spent less than three weeks in physical therapy before the insurer canceled his coverage and recommended he be discharged and sent home to recover.

A physical therapist described Lokken’s condition as “paralyzed” and “weak,” but his family’s requests for additional therapy coverage were denied, according to the lawsuit.

Despite the rejection, his family decided to continue treatment. Without insurance, the family had to pay $12,000 to $14,000 a month for about a year of therapy at the facility. Lokken died at the facility in July 2023.

The other man’s family also alleged that necessary rehabilitation services were denied by the AI algorithm.

Tetzloff was recovering from a stroke in October 2022 when his doctors recommended that the 74-year-old be moved from a hospital to a rehabilitation facility for at least 100 days. The insurer initially sought to end coverage after 20 days, but the family appealed. The insurer then extended Tetzloff’s stay for another 20 days.

The man’s doctor had recommended additional physical and occupational therapy, but his insurance coverage ended after 40 days. The family spent more than $70,000 on his care over the next ten months. Tetzloff spent his final months in an assisted living facility, where he died on October 11.

Ten appeals over rehab after hip surgery

The legal action came after Medicare advocates began raising concerns about the routine use of AI technology to deny or reduce care for older adults under private Medicare plans.

In 2022, the Center for Medicare Advocacy examined several insurers’ use of artificial intelligence programs in rehabilitation and home health settings. The advocacy group’s report concluded that AI programs often made coverage decisions that were more restrictive than what Medicare would have allowed, and that the decisions lacked the necessary level of nuance to evaluate the unique circumstances of each individual case.

“We have seen more treatments that would have been covered under traditional Medicare being denied outright or terminated prematurely,” said David Lipschutz, deputy director and chief insurance attorney at the Center for Medicare Advocacy.

Lipschutz said some older adults who appeal denials may receive a reprieve, only to have their coverage cut off again. He gave the example of a Connecticut woman who sought a three-month stay at a rehabilitation center while recovering from hip replacement surgery. She filed and won ten appeals after her insurer repeatedly attempted to terminate her coverage and limit her stay.

Keeping humans in the loop

Legal experts not involved in these cases said artificial intelligence is becoming a fertile target for people and organizations seeking to limit or shape the use of new technologies.

Gary Marchant, faculty director of the Center for Law, Science and Innovation at Arizona State University’s Sandra Day O’Connor College of Law, said an important consideration for health insurers and others deploying AI programs is ensuring that humans remain part of the decision-making process.

While AI systems can be efficient and complete simple tasks quickly, programs left on their own can also make mistakes, Marchant said.

“Sometimes AI systems are unreasonable and lack common sense,” Marchant said. “You have to have a person in the loop.”

In cases where insurance companies use AI to drive claims decisions, Marchant said a key legal factor could be how much deference a company gives to an algorithm.

The lawsuit against UnitedHealth says the company limited employees’ “discretion to deviate from the algorithm.” Employees who deviated from the AI program’s predictions faced disciplinary action or termination, the lawsuit says.

Marchant said one factor to keep in mind in the UnitedHealth cases and similar lawsuits is how closely employees are required to follow an AI model.

“There clearly has to be a way for the human decision maker to override the algorithm,” Marchant said. “This is just a huge problem in AI and healthcare.”

He said it is important to consider the consequences of how companies set up their AI systems. Companies should think about how much deference they give an algorithm, knowing that AI can process massive amounts of data and be “incredibly powerful” and “incredibly accurate,” he said, while leaders should also consider that AI “sometimes… could just be completely wrong.”

Ken Alltucker is on X, formerly Twitter, at @kalltucker or can be emailed to [email protected].
