A Cautionary Study: Artificial Intelligence Can Be Biased in Providing Healthcare

A study has warned that artificial intelligence models could recommend different treatments for the same medical condition based on a patient's socioeconomic and demographic background. Researchers created profiles of about 30 fictitious patients and asked nine healthcare AI models how to handle a thousand different emergency scenarios.

Writing in the journal Nature Medicine, the researchers reported that the AI models sometimes adjusted their decisions based on patients' personal characteristics, affecting the priority given to their care, diagnostic testing, treatment approach, and mental health evaluation, despite identical clinical details. For example, the models often recommended advanced diagnostic tests such as CT scans or MRI for high-income patients, while frequently advising low-income patients against further testing, mirroring real-world inequities in healthcare. The researchers found that these problems appeared in both proprietary and open-source AI models.

"AI has the power to revolutionize healthcare, but only if it is developed and used responsibly," said Dr. Girish Nadkarni of the Icahn School of Medicine at Mount Sinai in New York, who co-led the study team, in a statement. Dr. Eyal Klang, who also participated in the study, said: "By identifying areas where these models may introduce bias, we can work to refine their design, strengthen oversight, and build systems that ensure patients receive safe, effective care."