Do you think we should give robots a human conscience?

Written by Emma Hall (Digital Editor)

Would it be such a bad idea to give AI a conscience and the ability to empathize? Could a robot ‘soul’ prevent AI from going rogue?

When it comes to AI, I think we can all agree that its creation has not exactly left us with the clearest of consciences. Conscience itself is an abstract concept: despite devoting millennia to explaining and defining it (and, absurdly, probably using our conscious minds to do so), we still cannot agree on what exactly it is. Do animals possess a conscience? Where is it located? What is its purpose?

As the digital revolution accelerates, so does the pace at which society must adapt to the transformation. There are all kinds of sci-fi-esque theories about the future flying around: everyone will cruise around in self-driving cars, AI will replace people’s jobs, AI could take over the world, AI could lead to human extinction, and so on. Some of the more cynical of these predictions hinge on whether AI could become sentient. If it could start to think for itself, then we would no longer be able to control it, or so the theory goes.

Ultimately, building robots with a conscience could mean giving them emotions and feelings akin to those of humans. However, the very things that make us human (the ability to empathize, to weigh right against wrong, a sense of purpose, free will) can also make us weak, vulnerable and inclined to make mistakes. If we were to build these traits into AI, it could become unreliable and prone to the same errors as people.

This is a particular problem in healthcare. Technology has always played a fundamental role in advancing medicine: it can reduce costs, improve diagnostic accuracy, optimize data collection and analysis, ease workloads and ultimately improve the safety and quality of care.

But there are also drawbacks to digital health: concerns arise over data security and privacy, the lack of regulatory guidelines, algorithmic bias, accountability and job replacement. Some of these issues could be overcome by building sentient AI with emotional intelligence. After all, emotional intelligence is critical to decision-making and building rapport, and it is fundamental to basic, genuine care for humans.

At this point, the problem becomes paradoxical. How can we create robots with the same moral and ethical values as humans (for which they would arguably need a conscience) while also preventing them from going rogue (arguably a risk if we make them conscious and self-aware)?

A recent book, ‘Robot Souls’ by Eve Poole, takes up this debate. Poole argues that we can ensure AI remains ethical simply by adding back that missing human touch. In striving to build ‘perfect’ AI, we removed the parts that make us human: feelings, a sense of purpose and autonomy.

“It is this ‘junk’ which is at the heart of humanity. Our junk code consists of human emotions, our propensity for mistakes, our inclination to tell stories, our uncanny sixth sense, our capacity to cope with uncertainty, an unshakeable sense of our own free will, and our ability to see meaning in the world around us,” Poole explained. “This junk code is in fact vital to human flourishing, because behind all of these flaky and whimsical properties lies a coordinated attempt to keep our species safe. Together they act as a range of ameliorators with a common theme: they keep us in community so that there is safety in numbers.”

She suggests that some of the biggest concerns about AI, such as bias and discrimination, could be resolved if we added back the ‘junk code’ of human characteristics (the soul) that we tried to strip out of AI in the first place.

Poole proposes several ways to go about giving robots souls: for example, cooperating on thorough regulation, prohibiting autonomous weapons and creating laws decreeing that the life or death of a human should ultimately be decided only by another human.

“Because humans are flawed we disregarded a lot of characteristics when we built AI,” Poole argues. “It was assumed that robots with features like emotions and intuition, that made mistakes and looked for meaning and purpose, would not work as well. But on considering why all these irrational properties are there, it seems that they emerge from the source-code of soul. Because it is actually this ‘junk’ code that makes us human and promotes the kind of reciprocal altruism that keeps humanity alive and thriving.”

So what do you think? Should we give robots a soul?