Human vs. AI Bias in Medicine: Which Poses the Greater Risk?

Written by Mireia Cuevas Crespo (Reporter)

In the rapidly evolving field of modern healthcare, biases in patient care are becoming an increasingly controversial issue.

And more often than not, AI is deeply linked to those concerns.

While AI technologies are being developed to assist in many areas of healthcare, including cancer research, vaccine production, and the diagnosis and treatment of depression, scepticism persists about their ability to offer sound advice or match the empathy of human clinicians. As a result, many patients still prefer to rely on human doctors: individuals who can physically interact with them, understand complex personal health concerns, and offer tailored advice.

But does this human touch really guarantee bias-free care? Are doctors as objective as we think?

AI in Medicine: A Two-Sided Coin

The fact that AI systems are not immune to biases is undeniable.

Recent investigations have shown that AI-powered tools can exhibit prejudice when applied in healthcare settings.

For instance, two recent studies conducted by researchers at Charles Sturt University (CSU; AU) found that AI image-generation tools produced gender and ethnicity bias when creating graphics for medical settings.

The studies, published in a Sage journal and the International Journal of Pharmacy Practice respectively, showed that AI-generated depictions of pharmacists and undergraduate medical students in Australia were biased against women and ethnic minorities.

These results did not reflect the actual healthcare workforce in the country, where, according to the CSU researchers, more than 64% of pharmacy professionals are women.

Moreover, an excessive reliance on technology in healthcare can dehumanize patient-doctor interactions. Interpersonal communication is considered vital to delivering high-quality care, and using AI to mediate doctors' interactions with patients might diminish empathetic care, leaving a process that feels perfunctory, impersonal, and mechanical.

Beyond these concerns, AI also has a track record of inconsistencies in device regulation and patient data privacy.

Recently, lax regulatory standards have allowed AI systems to be deployed without appropriate safety and efficacy testing, raising significant concerns about potential device malfunctions and inaccuracies.

Furthermore, there are serious privacy issues surrounding the way AI systems gather and handle patient data, which could potentially expose private medical records and undermine patients’ confidence in medical professionals.

Human Bias: A Longstanding Concern in Healthcare

Lately, healthcare has seen an increasing proliferation of training programs designed to improve physicians’ communication skills in patient care.

However, healthcare professionals are, at their core, human beings. As such, they are susceptible to making mistakes and can exhibit biases just as much as AI-powered devices.

Research suggests that while doctors aim to treat all patients equally, cultural preconceptions often lead to unintentional biases in decision-making, exacerbating inequities in current healthcare systems.

Additionally, burnout caused by work overload is an issue that is closely linked to increased rates of depression and anxiety among healthcare professionals.

Consequently, poor mental health among doctors could decrease the quality of patient care provided and increase the likelihood of inherent biases.

According to recent research from Lehigh University and Seattle University, this erosion of trust in human medicine could prompt patients to seek alternative sources of medical advice, such as AI technologies.

The study, published in the journal Computers in Human Behavior, found that patients were more open to AI-driven medical recommendations when they were made aware of the biases inherent in human healthcare decisions.

This concept, known as “bias salience”, refers to making individuals more aware of biases in decision-making, a process that has been shown to alter people’s perceptions.

“This increased receptiveness to AI occurs because bias is perceived to be a fundamentally human shortcoming. As such, when the prospect of bias is made salient, perceptions of AI integrity—defined as the perceived fairness and trustworthiness of an AI agent relative to a human counterpart—are enhanced.”

Rebecca J. H. Wang, associate professor of marketing at the Lehigh College of Business.

What the Future Holds

The concerns outlined above emphasize that neither human nor AI-powered medicine is free from errors. So, to answer the initial question of which may pose the greater risk, the most accurate response is likely that both do.

As long as AI continues to raise significant concerns about inherent bias and device reliability in healthcare, the successful implementation of AI-driven devices as trustworthy tools for enhancing patient care will remain a distant goal.

However, human medicine has exhibited its own flaws regarding bias, despite good intentions, and a clinician’s mental state can significantly affect the quality of care provided. This suggests that clinicians may benefit from AI-driven technologies, not as replacements for human healthcare, but as tools to enhance their practice.

Therefore, to fully leverage the potential of AI to advance modern medicine, further research must address critical issues around regulatory shortcomings and patient data protection.

In the near future, achieving a fair balance between human medicine and technological advancements will be essential for current healthcare systems to operate at peak performance while continuing to safeguard patients’ needs.