GIANT Health London 2024: The NHS National AI Conference
This year’s GIANT (Global Innovation and New Technology) event took place on 9th and 10th December 2024 at London’s Business Design Centre (UK). With a jam-packed agenda, there was plenty to see and do over the two days. Attendees could browse the latest health tech innovations across the exhibition stalls, and listen as a range of experts delivered presentations and panel discussions on all things AI. This article shares major insights from the most exciting talks of the NHS National AI Conference track on the 10th.
Key Highlights
AI has transformative potential – AI in healthcare is moving fast. There are digital tools being developed to detect skin cancer from a smartphone, and exciting opportunities for live disease monitoring on a national and international scale. AI could speed up diagnoses, forecast waves of infection and efficiently manage workloads, improving work-life balance and ensuring patients can get the care they need.
Barriers to AI – There are still significant hurdles to integrating AI systems into healthcare: clinicians distrusting AI systems, inadequate training on how to use new technologies, limited scalability of models, and training methods that need more high-quality data.
Future outlook – Common themes ran through this year’s talks: experts believe clinicians need better training in AI, extending to future clinical professionals and perhaps even non-clinical workers; they highlight challenges with scalability, which could exacerbate health inequity if handled poorly; and they stress the need for more clinical data to adequately train AI models.
Cough Radar | Dr Mikael Kågebäck
Dr Mikael Kågebäck (Chief Technology Officer at Sleep Cycle) kicked off the day by introducing Cough Radar, an AI-based app that lets users monitor the prevalence of coughing in their area.
The digital tool builds a comprehensive ‘coughing’ map of the UK by integrating machine learning algorithms, geolocation data, and sleep sound analysis recorded from consenting users’ mobile phones.
This digital map contains 70 billion pixels (each corresponding to 7.5 km), covering 80% of the UK. Users can zoom in and out of the map, providing both regional and national healthcare information at the touch of a button. Crucially, the data collected using Cough Radar fairly represents the UK’s population, spanning different age ranges and ethnic groups.
Coughing rates are ranked in a percentile system, where rates above the 75th percentile are deemed elevated. This lets people gauge the risk of respiratory infections in their area, helping to reduce the spread of disease.
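The percentile flagging described above can be sketched in a few lines. This is a hypothetical illustration of the general technique, not Sleep Cycle’s actual algorithm; the function name, threshold handling, and sample data are all invented for the example.

```python
import numpy as np

def is_elevated(region_history, current_rate, threshold_pct=75):
    """Flag a region's coughing rate as elevated if it exceeds the given
    percentile of that region's historical rates.
    Illustrative logic only; not Cough Radar's actual implementation."""
    cutoff = np.percentile(region_history, threshold_pct)
    return current_rate > cutoff

# A year's worth of simulated nightly cough-rate readings for one region
history = [0.8, 1.1, 0.9, 1.4, 1.0, 2.3, 1.2, 0.7, 1.6, 1.1, 0.9, 1.3]
print(is_elevated(history, 2.0))  # True: above the 75th percentile
print(is_elevated(history, 1.0))  # False: within the normal range
```

The key design point is that the 75th-percentile cutoff is computed per region from that region’s own history, so “elevated” is relative to local baselines rather than a single national threshold.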
Impressively, Cough Radar can also predict waves of disease up to two weeks earlier than ‘official’ data. NHS institutions, such as pharmacies or GP practices, can use these predictions to anticipate surges in resource demand, helping everyone to access the care they need.
We caught up with Dr Kågebäck after his talk to discuss Cough Radar.
Understanding Healthcare Professionals’ Experiences with AI | Nora Colton
Nora Colton (Founding Director of the UCL Global Business School of Health) sheds light on the viewpoints of those working in the clinical profession, and how this relates to the future of AI in healthcare.
“Global AI in healthcare markets is expanding fast and furiously. We need to think about how we then adapt our healthcare settings to make use of this.”
It’s no secret that the potential of AI for detecting diseases early on is exciting. However, automated technology could also be used in clerical settings, cutting back time spent on administrative tasks. This could free up time for healthcare professionals to see more patients, while simultaneously encouraging better work-life balance for NHS workers.
Moreover, AI could be used for predictive analyses (using tools like Cough Radar) so that clinics can anticipate their workloads and provide enough resources – for example, scheduling more staff and stocking up on equipment or medications when infection rates are on the rise.
“There’s lots to gain by thinking about how we can embed more technology into healthcare,” says Nora.
However, while AI can theoretically expedite clinical workflows, the reality is often starkly different: professionals are having to work overtime to learn these new technologies, and Nora reports that perceptions of AI are generally quite negative – negativity that has only deepened in recent times.
She explains that the poor reception of AI has also been compounded by other factors, such as professionals feeling less confident in their own decision-making, increased screen time, anxiety about job security, and distrust in AI:
- 3% of healthcare workers worry about the reliability of AI, i.e., lack of explainability when making decisions
- 4% expressed concerns over job security
- 30% distrust AI systems due to lack of transparency and explainability, i.e., black box models
- 40% worry about legal and ethical issues – for example, who is responsible when AI makes mistakes
Nora goes on to stress how the clinical deployment of AI could actually exacerbate health inequity; regions that are sufficiently resourced will benefit from these new technologies, while underfunded areas struggle to keep up if they do not have access to the same tools.
She concludes with some final remarks: “Automation and AI hold transformative potential in healthcare organizations. But, the dual role of enabler and disruptor will require really proactive strategies to mitigate the challenges and make sure that we see the benefits.”
Panel Discussion: The Human Equation vs. the Algorithm | Professor Shafi Ahmed | Dr Sarah Fothergill | Dr Susanne Gaube | Dr Aqil Jaigirdar
Hosted by multi-award-winning surgeon Professor Shafi Ahmed, a panel of three experts further explores the nuances of AI and the challenges that different industries are facing. Shafi opens the talk by framing AI as a new milestone in human existence:
“The trajectory of the world changes in different directions. For example, the advent of fire, the motor car, the plane, electricity, the computer. At each point in existence, moving along has suddenly been transformed – AI is one of those points.”
The panel goes on to discuss the opportunities and concerns surrounding this new age. For example, Dr Susanne Gaube discusses how AI should, in the future, reduce the time spent on mundane tasks and leave more room for meaningful patient interactions, although this is not necessarily the case right now. Shafi echoes this sentiment, noting that he currently spends only 20% of his time actually engaging with patients.
Dr Aqil Jaigirdar further adds that freeing up clinicians’ time using AI could bridge the gap between healthcare in the developing and developed world. Currently, there are five doctors for every 10,000 people in Bangladesh; augmenting the work of these doctors would leave them with more capacity to actually see and treat patients. This could have critical implications for achieving health equity on a global scale.
The panel went on to highlight another problem in the future of AI: the training processes of these systems need stronger validation methods, since only 5% of all AI-related healthcare papers actually use patient data:
“There are almost no prospective clinical trials showing the real-life benefit of these systems, especially when implemented in clinical care,” says Susanne.
As such, the accuracy of AI tends to drop significantly when translated from experimental to real-world contexts, perhaps worsened by insufficient training for clinicians, leaving them ill-equipped to use and understand the new technologies at their disposal.
AI models also need significant adjustment for the locations in which they are deployed; for example, if an AI-based imaging technology is rolled out to two regions with distinct populations, its training datasets must reflect this heterogeneity to ensure fair representation of all groups. This limits the scalability of these tools.
Finally, the panelists drew attention to clinicians’ distrust of AI: they tend to reject correct decisions made by AI systems, even when the clinicians themselves are wrong.
“We need to change our approach,” says Susanne. “We need to explain the decision strategy of the system to the users, so they can better verify the decisions and outcomes.”
Check4Cancer: Using Digital Healthcare to Train AI | Professor Gordon Wishart
There has been an increase of close to 170% in the number of skin cancer cases in the past 30 years and a 40% rise in the last decade. With this in mind, Professor Gordon Wishart (CEO and CMO of Check4Cancer) draws attention to how AI can help alleviate the strain on NHS clinics to handle these demands:
“We have a major pre-existing workforce problem that has been made worse by the pandemic,” he says, going on to reveal that a study from three years ago found 24% of consultant positions vacant in the healthcare system.
He continues, “There’s a huge issue with trying to recruit workforce into dermatology; this isn’t just a UK problem, it’s worldwide.”
In fact, according to the Association of American Medical Colleges, there is expected to be a shortage of around 120,000 physicians by 2034. However, with skin cancer currently accounting for well over 50% of all cancer diagnoses, and with this incidence predicted to rise, clinics need support now more than ever to keep up.
Currently, individuals with suspected skin cancer have to wait up to 18 months for a diagnosis, and even longer for an operation. Fortunately, the work being done at Check4Cancer could significantly cut down waiting times.
Part of Check4Cancer’s strategy is to deliver ‘tele-dermatology’ care, providing patients with remote access to consultations and pathways to specialized care. This lets doctors identify and triage the most urgent cases, freeing up appointments.
The company is further using the data collected through its digital services to train AI models to more easily detect skin cancer.
So far, they have collected almost 80,000 high-quality images from just under 20,000 patients, which have been used to train the AI to identify cancerous features in photographs. The model then integrates this visual assessment with seven pre-determined risk factors to prioritize highly suspicious cases that need addressing.
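The triage step – combining an image-based assessment with pre-determined risk factors – might look something like the sketch below. This is a deliberately simplified toy: the weighting scheme, score ranges, and risk-factor names are assumptions for illustration, not Check4Cancer’s actual (unpublished) model.

```python
def triage_priority(image_score, risk_factors, image_weight=0.7):
    """Combine an AI-derived malignancy score (0-1) for a lesion photo
    with the fraction of positive binary risk factors into one priority.
    Hypothetical weighting; not Check4Cancer's actual method."""
    risk_fraction = sum(risk_factors) / len(risk_factors)
    return image_weight * image_score + (1 - image_weight) * risk_fraction

# Seven illustrative binary risk factors (e.g. age band, lesion change,
# family history) - names and values are invented for this example
patient_a = triage_priority(0.92, [1, 1, 0, 1, 0, 0, 1])
patient_b = triage_priority(0.35, [0, 0, 0, 1, 0, 0, 0])
assert patient_a > patient_b  # higher priority -> reviewed sooner
```

Whatever the real formula, the design idea is the same: the image model alone does not decide; clinical metadata shifts borderline cases up or down the review queue.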
Gordon highlights that using real-world clinical data is what sets Check4Cancer apart in the dermatological AI space – addressing the very shortage of patient-derived training data that Dr Susanne Gaube stressed during the earlier panel discussion.
Closing the talk, he outlines the company’s next steps, emphasizing that the model has yet to be validated in darker skin tones – a necessary step to ensure the AI performs equally well across all populations before wider deployment.
Gordon stopped by for an interview with us to discuss Check4Cancer:
FMAI’s interview with Gordon Wishart discussing Check4Cancer.
DERM: AI in Action | Dr Keith Grimes | Dr Joshua Luck | Professor David Lowe
Hosted by Dr Keith Grimes (Founder of Curistica), Dr Joshua Luck (AI and Digital Health Fellow at Skin Analytics), and Professor David Lowe (Health Innovation Academic, University of Glasgow) discuss Skin Analytics’ DERM, the only UKCA-approved Class IIa AI as a Medical Device for skin cancer.
DERM has been supported by the NHS since 2016 and is already having a palpable impact. Between December 2023 and September 2024, DERM discharged 20–40% of patients across 20 NHS institutions, correctly identifying benign lesions 99.8% of the time.
This has saved 95% of avoidable urgent suspected cancer referrals, allowing dermatologists to prioritize highly suspicious cases and reduce delays for patients who need immediate treatment.
Shaping the Future of AI Training | Dr Keith Grimes | Dr Alex Aubrey | Professor Nick Fuggle
Nearing the end of the day, Dr Alex Aubrey and Professor Nicholas Fuggle – again hosted by Dr Keith Grimes – discuss the pressing need for better education around AI so that these systems can be used effectively.
“We’ve got all these technologies coming in now, but we have to be competent in using these tools. It’s not optional,” Keith stresses.
He further highlights an important issue: “Even students going to medical school now don’t have any formal training as part of the law.”
Crucially, in February 2025, EU legislation comes into force that holds employers accountable for teaching their staff how to properly use high-risk AI products, with potential fines of up to 35 million euros if this training is inadequate.
The experts discussed their involvement with initiatives aiming to expand the clinical curriculum so that healthcare professionals are empowered to use the new technology that is coming into clinics.
For example, Nicholas mentions a workshop that he attended in October earlier this year, as part of the Clinical AI Interest Group. The event brought together different stakeholders from the NHS, representatives from the Royal Colleges, and specialists in regulatory and ethical affairs to address what should be included in the curriculum.
He outlines that there are three main aspects to consider:
- Who will the curriculum target? For example, deciding whether training should extend to medical students during their undergraduate studies and postgraduate students, or be limited to those continuing their medical education. Non-clinical NHS staff may also benefit from training.
- What will the curriculum contain? Should different groups (i.e., surgeons versus GPs) receive tailored training? This will depend on the ‘who’ part of the curriculum.
- How exactly will this education be delivered?
The three considered the significant challenges involved in scaling curriculum changes. For example, delivering different depths and breadths of training will require thorough logistical planning and sufficient resources; adequate funding and enough experts in the field are needed to ensure that AI training is equal across the country.
They also considered that the fast pace of development in AI means it is difficult to forecast how educational training might need to adapt in the future.
“It [AI] is moving at one hell of a pace,” says Nicholas. “The way in which we engage with AI will rapidly change; the curriculum has to be left flexible enough to include the developments that we will see within the field.”
Closing Remarks
GIANT gave us plenty of insights into the expanding role of AI in healthcare; for example, AI-based software to accurately rule out cancerous lesions and identify urgent referral cases from images in the blink of an eye. This will reduce the volume of patients that have to be seen in person, dramatically speeding up treatment delivery in cases where time is of the essence.
Experts who attended the event also debated the predictive power of AI apps to mitigate disease spread, manage clinical workloads, and guide resource acquisition. Specifically, Cough Radar offers an unprecedented opportunity for live disease monitoring, at regional, national, and global scales.
Despite this excitement in the field, GIANT brought to our attention some key hurdles on the horizon. Namely, these are:
- Ensuring equal access to these technologies so that health inequity is not worsened by AI
- Improving the clinical validity of models using real-world data
- Building trust between healthcare professionals, as well as members of the public, surrounding AI systems
- Providing education and proper training to current NHS workers and future medics, inspiring confidence to efficiently use AI technology
Addressing these challenges will help to propel the age of AI forward – we at Future Medicine AI look forward to watching it unfold!
Want to see more insights from the healthcare AI space? Sign up FREE for our newsletter here, and follow us on LinkedIn to join the conversation.