
Artificial Intelligence (AI) is increasingly present in daily life as a tool to streamline processes. In health insurance, algorithms are being used to review claims and issue denials, yet these systems do not account for the special circumstances of each case, generating discrimination and inequality.
Specialists pointed out during a briefing held by Ethnic Media Services that healthcare AI often reflects racial and economic biases, which increasingly determine who receives treatment and who does not.
Artificial intelligence is thus being used to deny applications for health insurance, posing a health risk in cases requiring personalized medical assessment. An investigation by ProPublica this year revealed that insurers are now routinely denying millions of claims using AI.
Dr. Katherine Hempstead, a policy officer at the Robert Wood Johnson Foundation, explained that AI cannot reliably resolve health insurance coverage questions on its own, since each case presents a wide range of possibilities.
“There are many different contexts and the rules are not the same for each insurance company and this creates a feeling of mistrust,” added Hempstead.
She also said that while more people are enrolled in Medi-Cal, not everyone has the same access to medications or services; each case is different, and members are often left with a sense of inequality.
In addition, she mentioned that insurance policies are increasingly corporate and less human, which erodes patients' confidence in the system, since requests and some services are denied through automated AI-driven processes.
Dr. Miranda Yaver, an assistant professor of health policy and management at the University of Pittsburgh, conducted a study for her book “Coverage Denied: How Health Insurers Drive Inequality in the United States,” to be published in spring 2026, explaining these health insurance inequalities.
Yaver is concerned that as Artificial Intelligence gains ground in the health field, an error in a medical case could put a life at risk. Some patients appeal denied claims and win, but not always, and those who most need care are often the least able to appeal, so equity and opportunity are reduced.
“AI has its advantages, but it is also important to think about the implications of these tools, which on the one hand, if they work well, can help us speed up the processes to provide the care that is needed, but on the other hand, they can destabilize, especially marginalized groups, essentially the most vulnerable groups,” said Yaver.
Josh Becker, a California senator and author of SB 1120, the Physicians Make Decisions Act, explained the importance of this law, which limits the scope of AI by requiring physicians to make the final decisions.
The Physicians Make Decisions Act addresses concerns about medical decision-making by prioritizing patient well-being rather than letting automated systems make decisions that require a trained doctor. The bill aims to close critical gaps in the medical system.
“The algorithm does not have the capacity to make personal and individual decisions, which only doctors can carry out,” he said.
He commented that artificial intelligence could eventually help the medical and health insurance fields evaluate such cases, but at the moment AI is being promoted as a tool to increase efficiency and cut costs, which creates many threats to health.
Becker shared the case of a doctor who denied 60,000 claims in a single month, evidence of a worrying system in which patients are often not even given the opportunity to obtain the treatment they need.