Hold the rAIns: call for AI R&D to cease until it is appropriately regulated

Written by Emma Hall (Digital Editor)

An international group of public health experts and doctors has highlighted several threats that AI could pose to humanity if proper governance and regulation are not put in place.

Ethics refers to the moral principles that distinguish right from wrong, principles that AI arguably often lacks. It is well known that AI can benefit medicine, including by automating tasks, helping to store and analyze huge datasets and advancing disease diagnosis. Yet, despite its increasingly integral role across many sectors of modern society, AI systems can also present serious threats to humanity that are essential to address. We must react swiftly and adapt to these transformational technological advances as they arise.

There are signs, however, that AI regulation is finally being taken seriously, according to a paper published in BMJ Global Health. An international group of public health experts and doctors has now highlighted three important threats that the misuse of AI and failures of regulation pose to human health. They urge a halt to AI research and development until appropriate governance and policies are established to protect society. The three threats are as follows:

1. Control and manipulation of people:

The first threat stems from AI's capacity to rapidly collate and analyze huge datasets of personal information. This creates opportunities to manipulate individuals' behavior and destabilize democracy; such harm has already been documented in the 2013 and 2017 Kenyan elections, the 2017 French presidential election and the 2016 US presidential election. Governments and other powerful actors may also employ AI surveillance to repress and exploit people more directly. Specifically, the team references China's Social Credit System, which draws on large stores of data about individuals' movements, criminal records and financial transactions. This is by no means the only example: the team emphasizes that at least 75 other governments and authorities are currently developing AI surveillance networks.

The risks relating to medicine include “the potential for AI errors to cause patient harm, issues with data privacy and security and the use of AI in ways that will worsen social and health inequalities”, the article authors highlight.

Within healthcare systems, bias against patients with darker skin is a significant concern. For example, an AI-driven pulse oximeter was found to overestimate blood oxygen levels in patients with darker skin, resulting in the undertreatment of their hypoxia. Harms such as these may deepen social division and conflict, with direct and indirect negative effects on public health.

2. Amplifying and dehumanizing lethal weapon capacity:

A second threat involves the use of Lethal Autonomous Weapon Systems (LAWS), which can independently identify and engage human targets. Disturbingly, LAWS require no human operation and so reduce human agency and responsibility when injuring or killing people. “LAWS can be attached to small mobile devices, such as drones, and could be cheaply mass produced and easily set up to kill ‘at an industrial scale’,” the article authors caution.

3. Declining human labor:

Although there are clear advantages to automating unsafe, tedious and unpleasant labor with AI, job losses can also exacerbate unequal wealth distribution and shift income from labor to the owners of capital.

“Furthermore, we do not know how society will respond psychologically and emotionally to a world where work is unavailable or unnecessary, nor are we thinking much about the policies and strategies that would be needed to break the association between unemployment and ill health,” the authors point out.

Additionally, self-improving AI (artificial general intelligence, or AGI) poses a significant threat to humanity: “We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power, whether deliberately or not, in ways that could harm or subjugate humans is real and has to be considered,” explain the authors.

“With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing. The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimize risk and harm and maximize benefit.”

The solution:

AI must be effectively regulated, and policies must be established to ensure that AI benefits society while the threats it poses are contained. Transparency and accountability must remain central to these efforts, and for this to happen, international unity and collaboration are essential. Clinicians, in particular, have a fundamental responsibility to recognize and raise awareness of health misinformation generated by AI.

“If AI is to ever fulfil its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions and dilute power so that there are effective checks and balances.”

A UK group unrelated to the study, comprising health professionals, medical charities and other individuals, has also urged the government to update the Online Safety Bill to combat health misinformation.

“One key way that we can protect the future of our healthcare system is to ensure that internet companies have clear policies on how they identify the harmful health misinformation that appears on their platforms, as well as consistent approaches in dealing with it,” the group noted in an open letter to Chloe Smith MP, Secretary of State for Science, Innovation and Technology. “This will give users increased protections from harm and improve the information environment and trust in the public institutions.”