Can AI be racially biased?

Written by Emma Hall (Digital Editor)

In a recently published book, ‘Hidden in White Sight’, a data scientist explores the extent to which social discrimination is deeply embedded within AI algorithms used in everyday life.

You might assume that a machine cannot discriminate. After all, how can algorithms be biased when they are not capable of critical thinking or of rash decisions driven by human emotion? Humans are the ones who make endless errors and are prone to prejudice, not machines.

Yet ultimately, an algorithm reflects its designers and how it was developed; if the data a program is trained on is biased, that bias will be imitated in the algorithm’s decision-making. Recently, Calvin D Lawrence, a leading data science expert at IBM (NY, USA), has demonstrated that the AI technology used within judicial and policing organizations carries a large number of ingrained biases derived from systemic or institutional preferences and human prejudices.
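To make that mechanism concrete, the short sketch below (not taken from Lawrence’s book; all data, group labels and variable names are invented for illustration) trains a simple model on synthetic ‘historical’ decisions that were biased against one group. Even though the protected attribute is never given to the model directly, a correlated proxy feature lets it reproduce the gap.

```python
# Illustrative sketch only: a toy model trained on historically biased
# decisions reproduces that bias, even without seeing the protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (0 or 1) and a correlated proxy feature,
# e.g. a neighborhood code that partially reveals group membership.
group = rng.integers(0, 2, size=n)
proxy = group + rng.normal(0, 0.5, size=n)   # leaks group information
merit = rng.normal(0, 1, size=n)             # the signal we actually care about

# Historical decisions were biased: group 1 was approved less often
# than its merit alone would justify.
historical_approval = (merit - 0.8 * group + rng.normal(0, 0.3, size=n)) > 0

# Train only on the proxy and the merit score; group itself is excluded.
X = np.column_stack([proxy, merit])
model = LogisticRegression().fit(X, historical_approval)

pred = model.predict(X)
for g in (0, 1):
    print(f"predicted approval rate, group {g}: {pred[group == g].mean():.2f}")
# The gap between the two rates shows the historical bias being imitated.
```

Simply dropping the protected attribute does not remove the bias, which is why outcomes have to be checked group by group.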

AI is an integral part of modern-day society, employed in almost every area of life. However, it is becoming increasingly apparent that AI utilized in the majority of organizations is systemically racist.

For example, a number of hospital algorithms employed to make clinical decisions about which patients require medical care base their judgments on race and often misdiagnose people of color. Clinical trials are another illustration of how medicine and race can intersect negatively: people of color are generally hugely under-represented, meaning that clinical trial data, an extensive source of information for healthcare algorithms, is also unrepresentative.

Non-medical demographic factors such as age, ethnicity, race, gender, education, income and employment have long been social determinants of health. AI has the potential to be a social equalizer and to empower a system based on fairness; instead, it has become the opposite. Unfortunately, human biases have crept into AI algorithms through training on data that is unrepresentative, unrealistic and inaccurate.
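The under-representation problem can be sketched in a similarly illustrative way: if one group contributes only a small fraction of the training data, and the relationship between features and outcomes differs for that group, the resulting model tends to be noticeably less accurate for it. The groups, weights and sample sizes below are hypothetical.

```python
# Illustrative sketch only: under-representation in training data
# translates into a higher error rate for the under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, w):
    # Synthetic patients; the outcome depends on the features via weights w,
    # which differ between the two hypothetical groups.
    X = rng.normal(0, 1, size=(n, 3))
    y = (X @ w > 0).astype(int)
    return X, y

w_a = np.array([1.0, -0.5, 0.8])   # majority group
w_b = np.array([0.8, 0.5, 0.2])    # under-represented group

# 95% of the training examples come from group A, 5% from group B.
Xa, ya = make_group(9500, w_a)
Xb, yb = make_group(500, w_b)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Held-out, balanced evaluation sets reveal the accuracy gap.
for name, w in [("group A", w_a), ("group B", w_b)]:
    Xt, yt = make_group(2000, w)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
```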

Lawrence’s recent book, ‘Hidden in White Sight: How AI Empowers and Deepens Systemic Racism’, discusses the extent to which social determinants are involved in AI decision-making across numerous systems, including healthcare, education, banking, employment, law, media, loan applications and entertainment. These decisions disproportionately target black and ethnic minority groups and can negatively affect these individuals. However, AI software developers can act to address these internal biases and help flip the switch.

Having designed AI-based software for decades for organizations such as NASA, IBM, Sun Microsystems and the US Army, Lawrence offers his expertise on how to construct fairer systems. He recommends that technologists and developers thoroughly test the quality of their AI systems and provide full dataset transparency, the right to erasure of personal information, a straightforward way for individuals to review the data documented under their names, and viable opt-outs.
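A minimal sketch of what such testing might look like in practice is given below. The metric names (demographic parity gap, equal opportunity gap) are standard fairness measures rather than anything specific to Lawrence’s recommendations, and the predictions and groups are made up.

```python
# Illustrative sketch only: a simple pre-deployment audit comparing
# model outcomes across groups, using standard fairness gap metrics.
import numpy as np

def audit(y_true, y_pred, group):
    """Per-group selection rates and true-positive rates, plus gap metrics."""
    report = {}
    for g in np.unique(group):
        mask = group == g
        report[int(g)] = {
            "selection_rate": y_pred[mask].mean(),
            "true_positive_rate": y_pred[mask & (y_true == 1)].mean(),
        }
    per_group = [v for k, v in report.items()]
    rates = [v["selection_rate"] for v in per_group]
    tprs = [v["true_positive_rate"] for v in per_group]
    report["demographic_parity_gap"] = max(rates) - min(rates)
    report["equal_opportunity_gap"] = max(tprs) - min(tprs)
    return report

# Made-up predictions: the model is less likely to flag true positives in group 1.
rng = np.random.default_rng(2)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = ((y_true == 1) & (rng.random(1000) > 0.2 + 0.3 * group)).astype(int)

print(audit(y_true, y_pred, group))
```

Large gaps in either metric would be a signal to revisit the training data and the model before anyone is affected by its decisions.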

“This is not a problem that just affects one group of people, this is a societal issue. It is about who we want to be as a society and whether we want to be in control of technology, or whether we want it to control us,” Lawrence argued. “I would urge anyone who has a seat at the table, whether you’re a CEO or tech developer or somebody who uses AI in your daily life, to be intentional with how you use this powerful tool.”