Medical AI, Robotics and Technology Conference 2025: Future Medicine AI Round-Up

The Medical AI, Robotics and Technology Conference 2025 brought together clinicians, policymakers, technologists, and innovators to explore the evolving role of AI and robotics in healthcare.
With the NHS facing increasing demand and workforce pressures, the event provided an opportunity to examine how these technologies influence diagnostics, surgery, clinical decision-making, and administrative workflows.
Throughout the day, speakers discussed the progress that has been made and the challenges that lie ahead, including regulation, patient safety, workforce adoption, and the need for strong clinical evidence. What emerged was a balanced and thoughtful view of how AI and robotics can enhance healthcare delivery while addressing real-world constraints.
AI and the Future of the NHS
Speaker: Hulya Mustafa, Director of Digital Policies and Programmes, Department of Health and Social Care.
The opening keynote by Hulya Mustafa set a measured, pragmatic tone. As Director of Digital Policies and Programmes at the Department of Health and Social Care, she is at the forefront of AI governance in the NHS. Mustafa was quick to emphasize that AI is not just another technological breakthrough—it is a pivotal shift in how healthcare operates.
“The trajectory of the world changes in different directions,” she said, pointing to historical milestones: the invention of electricity, the motor car, the computer. “At each point in existence, moving along has suddenly been transformed. AI is one of those points.”
The NHS has already made strides in AI adoption, particularly in diagnostics. Mustafa highlighted an ongoing £21 million investment in AI-driven imaging, including a groundbreaking AI-powered breast cancer screening trial involving 700,000 women across 30 NHS sites. The aim is to reduce radiologists’ workloads, speed up early detection, and ultimately improve survival rates.
Yet despite this progress, barriers remain. Mustafa pointed to scalability issues, workforce scepticism, and a lack of standardized regulations as major hurdles. She introduced the AI and Digital Regulation Service, a new initiative aimed at helping the NHS navigate AI compliance and safety protocols more efficiently.
“The NHS has always been a leader in medical innovation, but we need to get better at moving AI from research into real-world use,” she stressed.
Her final message was clear: AI should support, not replace, healthcare professionals. The focus must remain on patient safety, transparency, and ethical deployment.
A New Era: Robotics in Clinical Medicine
The integration of robotics and AI into healthcare is transforming the landscape of medical procedures, particularly in fields like interventional oncology and minimally invasive surgery. While the potential of these technologies is undeniable, their successful implementation raises a host of challenges, from precision and clinical adoption to regulatory hurdles and legal responsibility.
In the realm of interventional oncology, Dr. Ed Johnston from the Royal Marsden NHS Foundation Trust shared how robotics is already revolutionizing cancer care. Traditionally, minimally invasive tumor ablation has been an effective alternative to surgery for small tumors (<3cm), but precision has always been a concern. Dr. Johnston explained how manual needle placement has long been subject to variation depending on the surgeon’s expertise, but robotic-assisted systems now provide millimetre-level accuracy, significantly reducing recurrence rates and improving survival outcomes.
“With robotics, we can now achieve millimeter-level accuracy, reducing recurrence rates and improving survival outcomes.”
Dr. Ed Johnston, Academic Consultant in Interventional Oncology, Royal Marsden NHS Foundation Trust.
Using real-world examples from his own practice, he described how robotic systems are enabling complex, multi-needle positioning, allowing the treatment of larger tumors, up to 10cm, that were previously considered inoperable.
Dr Mark Slack, Co-Founder and Chief Medical Officer of CMR Surgical, took the audience behind the scenes of creating the Versius robotic-assisted surgical system. While the clinical potential of robotics is well established, the journey from concept to widespread clinical adoption, and to surgical robots that surgeons trust and willingly adopt, has proven complex.
Dr. Slack detailed the technical challenges, regulatory requirements, and the critical importance of clinician acceptance in ensuring the success of robotic surgery:
“Building a robot isn’t just about engineering,” Slack emphasized. “It’s about navigating complex regulation, designing robust training, ensuring clinical adoption, and—most importantly—proving it actually improves outcomes for patients.”
He also highlighted the importance of addressing “invisible work” such as ergonomics, usability, and reliability, ensuring that robotic systems assist rather than complicate procedures. Although Versius has been used in over 71,000 procedures, Slack acknowledged the challenges of making robotic systems affordable, intuitive, and seamlessly integrated into existing surgical workflows. The ultimate challenge, he noted, is proving the value of robotic systems in every operation and overcoming resistance from clinicians.
It was a reminder that innovation doesn’t stop at invention: it requires persistent effort to translate into meaningful clinical change. We caught up with Dr Slack to discuss the challenges of implementing and adopting robotic surgery in more detail.
Adding a crucial perspective to the discussion, Flora McCabe, Head of Advocacy and Risk Management at Lockton Companies, addressed the legal and ethical questions surrounding the growing use of AI and robotics in surgery.
As AI- and robotics-assisted procedures become more common, a key question emerges: when an AI- or robotics-driven decision causes harm, is the surgeon, the hospital, or the software developer accountable?
She highlighted real-world incidents, such as the tragic death of a patient during robot-assisted heart surgery and a case in which a surgical robot mistakenly removed a woman’s ovaries, both with no clear accountability.
McCabe stressed the urgent need for stricter regulations, including mandatory training for surgeons, defined liability for errors, and clear consent protocols to ensure the safe and ethical use of AI and robotics in patient care.
GoodSAM: AI-Powered Emergency Response
Speaker: Dr. Mark Wilson, Neurosurgeon and Co-Founder of GoodSAM.
Dr Mark Wilson introduced the audience to GoodSAM, an AI-powered emergency response platform that is saving lives in real-time.
Wilson opened the talk with a stark statistic: “For every minute without CPR, survival rates drop by 10%.” Traditional emergency response systems are reactive and not always efficient, often taking precious minutes to identify a cardiac arrest and dispatch help.
GoodSAM is helping to close that critical gap. By using AI to analyse emergency calls in real-time, it can detect cardiac arrests and instantly alert the nearest trained responder—often reaching the scene before an ambulance. The platform can also guide bystanders step-by-step through CPR using a smartphone, ensuring life-saving action begins as quickly as possible.
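GoodSAM’s dispatch engine is of course proprietary, but the core idea Wilson described, ranking registered responders by distance to an incident and alerting the closest, can be sketched in a few lines of Python. The names, coordinates, and the simple straight-line (haversine) ranking below are purely illustrative assumptions, not GoodSAM’s actual logic:

```python
import math
from dataclasses import dataclass

@dataclass
class Responder:
    name: str
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_responders(incident_lat, incident_lon, responders, k=3):
    """Rank registered responders by straight-line distance to the incident."""
    return sorted(
        responders,
        key=lambda rsp: haversine_km(incident_lat, incident_lon, rsp.lat, rsp.lon),
    )[:k]

# Hypothetical example: cardiac arrest reported in central London.
pool = [
    Responder("Responder A", 51.5079, -0.0877),
    Responder("Responder B", 51.5155, -0.1419),
    Responder("Responder C", 51.5033, -0.1195),
]
for rsp in nearest_responders(51.5074, -0.1278, pool):
    print("alert:", rsp.name)
```

In practice, a platform of this kind would also weigh responder availability, training level, and travel mode, but distance-based alerting captures the essence of closing the gap before the ambulance arrives.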
Wilson emphasized that GoodSAM is not about replacing emergency services—it is about enhancing them:
“AI isn’t doing the CPR,” he said. “But it is making sure it starts as soon as possible.”
Heidi Health: AI That Listens and Acts
Speaker: Oscar Bown, Associate, Heidi Health.
Oscar Bown’s session on Heidi Health focused on how AI can alleviate the growing administrative burden on doctors. He introduced Heidi as an AI-powered medical scribe designed to listen during consultations, transcribe conversations in real time, and automatically generate structured clinical notes.
“Heidi is essentially an AI assistant that eliminates the need for doctors to spend hours typing up notes,” Bown explained. “It lets clinicians focus on their patients rather than their keyboards.” The impact has already been significant, with documentation time reduced by 85% in emergency departments and work-life balance for GPs improving by 45%.
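Heidi’s own pipeline was not described in technical detail, but the general pattern Bown outlined, turning a consultation transcript into a structured note, can be illustrated with a deliberately simplified sketch. The section names, keyword rules, and example sentences below are hypothetical stand-ins; a production scribe would rely on speech-to-text and a trained language model rather than keyword matching:

```python
from dataclasses import dataclass, field

# Toy mapping from transcript cues to SOAP note sections (illustrative only).
SECTION_CUES = {
    "Subjective": ["complains of", "reports", "feels"],
    "Objective": ["blood pressure", "temperature", "on examination"],
    "Assessment": ["likely", "consistent with", "diagnosis"],
    "Plan": ["prescribe", "refer", "follow up", "order"],
}

@dataclass
class SoapNote:
    sections: dict = field(default_factory=lambda: {k: [] for k in SECTION_CUES})

def draft_note(transcript_sentences):
    """Assign each transcript sentence to the first section whose cue it matches."""
    note = SoapNote()
    for sentence in transcript_sentences:
        lowered = sentence.lower()
        for section, cues in SECTION_CUES.items():
            if any(cue in lowered for cue in cues):
                note.sections[section].append(sentence)
                break
    return note

consultation = [
    "Patient reports three days of chest tightness on exertion.",
    "On examination, blood pressure is 142 over 90.",
    "Symptoms are consistent with stable angina.",
    "Refer for an exercise ECG and follow up in two weeks.",
]
for section, lines in draft_note(consultation).sections.items():
    print(section, "->", lines)
```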
However, Heidi’s ambitions extend beyond improving documentation. Bown revealed that the AI tool is being designed to integrate with electronic health records, enabling automated test ordering and preliminary diagnostic reports. This marks a shift from AI as a passive documentation tool to an active participant in medical decision-making.
“Doctors didn’t train for years just to spend half their time typing notes,” Bown said. “AI should be lifting that burden, not adding to it.” This theme of AI as a practical, data-driven tool returned later in the day, when Professor Mike Reed discussed predicting surgical risks before a patient even enters the operating room.
AI-Enabled Care and Clinical Learning
Speakers: Dr Daniel Kraft, Founder, NextMed Health, Digital Health & Continuum Health Ventures; Dr Ryan Kerstein, Lead for RCS (Eng) I-Hub & Consultant Plastic Surgeon; Dr Dmitriy Chernov & Dr Noam Roth, West Hertfordshire Teaching Hospitals NHS Trust.
As the day progressed, attention turned toward broader reflections on the role of AI and digital technologies in shaping healthcare’s future. Several speakers explored how these tools are moving from theoretical promise to practical application.
Dr Daniel Kraft encouraged the audience to imagine what care could look like if AI and digital tools were fully embedded into clinical pathways. He emphasized the potential of AI to support earlier diagnosis, personalize treatment, and enable ongoing monitoring, but cautioned that this will only succeed if AI is seamlessly integrated into clinical workflows. Rather than being seen as a standalone solution, AI should serve as part of a clinician’s toolkit, helping to improve patient care without adding unnecessary complexity.
Building on this, Dr. Ryan Kerstein reflected on how AI, robotics, and extended reality are already being used to enhance surgical training, diagnostics, and hospital operations. He urged a critical approach to adopting new technologies, reminding the audience that innovation must serve a purpose—whether improving outcomes, reducing costs, or supporting staff—rather than being introduced for its own sake.
Adding a practical application, Dr Dmitriy Chernov and Dr Noam Roth explored how AI can improve morbidity and mortality reviews, a vital but often overlooked part of healthcare where teams learn from adverse events. They described how AI can analyze large volumes of clinical cases quickly to identify patterns that may go unnoticed in traditional reviews. This could help organizations address systemic issues rather than focusing narrowly on individual cases.
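Chernov and Roth did not specify a particular technique, but one common way to surface recurring themes across large volumes of free-text incident reports is to cluster them. The toy example below, using TF-IDF features and k-means with scikit-learn, is an illustrative sketch rather than a description of their system:

```python
# Minimal sketch: group short incident narratives by shared vocabulary so that
# recurring themes (e.g. handover gaps, discharge medication errors) surface.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

reports = [
    "Delayed escalation of deteriorating patient overnight due to handover gap.",
    "Medication dose error at discharge; pharmacy review not completed.",
    "Handover gap at the weekend led to missed review of abnormal bloods.",
    "Discharge summary omitted anticoagulation plan; dose confusion followed.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(reports)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, report in zip(labels, reports):
    print(f"theme {label}: {report}")
```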
Together, these sessions highlighted that AI’s greatest value lies in augmenting human expertise, supporting safer care, and improving efficiency across the healthcare system.
Towards the Automation of Duodenal Biopsy Diagnosis with AI
Speaker: Florian Jaeckle, Chief Technology Officer, Lyzeum Ltd.
Florian Jaeckle turned the conversation to the challenges faced by pathologists, particularly in diagnosing immune disorders like coeliac disease.
His session highlighted how AI is beginning to address gaps in diagnostic capacity and how machine learning has the potential to advance the field of pathology. Jaeckle explained that coeliac disease, although common, is still one of the most frequently underdiagnosed conditions. Many patients experience years of unclear symptoms, repeated misdiagnoses, and ineffective treatments, which AI-driven diagnostics could help to overcome.
At the heart of the issue is the complex and subjective nature of biopsy interpretation, which relies heavily on the pathologist’s experience, training, and the quality of the sample. “We still depend on manual examination, and as with any human task, it introduces variability — sometimes at the cost of the patient’s health,” Jaeckle observed.
Lyzeum’s AI aims to tackle these inconsistencies by offering a fully digitalized pathology system, in which biopsies are scanned into high-resolution images and reviewed by AI models trained on thousands of annotated slides. From a single scanned slide image, the tool can quantify features like lymphocyte density and tissue architecture, identifying patterns that are invisible to the naked eye.
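Lyzeum’s models and training data were not presented in detail, but the general digital-pathology pattern Jaeckle described, scanning a slide and letting a model score it region by region, can be sketched as follows. The tiling, the placeholder scoring function, and the mean-pooling aggregation are illustrative assumptions only, not Lyzeum’s method:

```python
import numpy as np

def tile(slide: np.ndarray, size: int = 256):
    """Yield non-overlapping square patches from a scanned slide image."""
    h, w = slide.shape[:2]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            yield slide[y:y + size, x:x + size]

def patch_score(patch: np.ndarray) -> float:
    """Placeholder for a trained classifier's probability of abnormal tissue."""
    return float(patch.mean() / 255.0)  # stand-in; a real model goes here

def slide_level_score(slide: np.ndarray) -> float:
    """Aggregate patch scores; mean pooling is one simple choice."""
    return float(np.mean([patch_score(p) for p in tile(slide)]))

# Synthetic stand-in for a whole-slide image (real scans are orders of magnitude larger).
fake_slide = np.random.randint(0, 256, size=(1024, 1024, 3), dtype=np.uint8)
print(f"slide-level abnormality score: {slide_level_score(fake_slide):.2f}")
```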
What made Jaeckle’s presentation compelling was not just the technology itself, but the focus on equity and access. He emphasized that patients’ outcomes should not depend on which pathologist or hospital they happen to be assigned to. The AI system can triage normal from abnormal cases, allowing pathologists to focus on more challenging diagnoses — a potentially critical improvement in the NHS, where the workforce is increasingly stretched thin.
We interviewed Jaeckle to gain more insight into Lyzeum’s AI technology.
Predicting Joint Replacement Failure from Radiographs and its Wider Application
Speaker: Mr Vipin Asopa, Consultant Orthopaedic and Trauma Surgeon, Epsom and St Helier University Hospitals NHS Trust.
Mr. Vipin Asopa explored how AI is providing practical solutions to persistent clinical challenges, focusing on its potential to predict joint replacement failures before they happen.
Asopa highlighted that up to 15% of total hip replacements fail within 15 years, often due to aseptic loosening. For patients, this can mean facing complex revision surgeries that are both costly and life-altering.
“Joint replacement is meant to restore mobility and quality of life — but when it fails, patients can end up right back where they started,” he remarked.
To address this, Asopa and his team have developed an AI model trained on over 2,000 X-ray images, capable of detecting subtle early signs of implant failure, often before they are visible to the human eye. Impressively, the AI outperformed clinicians in spotting early failures.
The model’s predictive capability allows clinicians to identify patients at risk of implant failure well before symptoms develop, giving surgeons the opportunity to intervene early and potentially prevent costly complications.
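Asopa’s exact architecture was not described in the talk, but a binary radiograph classifier of this kind is commonly built with transfer learning. The sketch below, using a ResNet-18 backbone in PyTorch with dummy data, shows the general shape of such a model rather than the team’s actual implementation:

```python
import torch
import torch.nn as nn
from torchvision import models

# Binary classifier: radiograph -> {stable implant, early signs of loosening}.
model = models.resnet18(weights=None)          # pretrained weights could be used instead
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the 1000-class head with 2 classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 8 radiographs resized to 224x224.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy batch loss: {loss.item():.3f}")
```

In a real study the model would be trained on labelled radiographs and evaluated against clinician performance, as Asopa’s team reported doing.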
Looking ahead, Asopa shared ambitions to expand this AI model to total knee replacements and spinal conditions like scoliosis, broadening its clinical value.
Finding Our Way: AI for Surgical Risk Prediction
Speaker: Professor Mike Reed, Consultant Orthopaedic Surgeon & OPCI.ai.
The discussion moved on to another promising use of AI in orthopaedic surgery, this time focusing on predicting surgical risk long before patients reach the operating theatre. Professor Mike Reed presented OpenPredictor, an AI-driven tool designed to help NHS clinicians assess patient risk and ensure they are directed to the most suitable surgical setting.
OpenPredictor’s AI analyses a range of patient data, including blood tests, comorbidities, and demographics, to calculate a surgical risk score and recommend the appropriate care setting. Reed was keen to emphasize that this is not about taking away clinical decision-making but about empowering clinicians with better information to plan effectively. “We’re not trying to take the decision out of clinicians’ hands,” he stressed. “We’re trying to support them with better data, because when patients are put in the wrong setting, everybody loses.” It is a decision-support tool, not a decision-making tool, he reiterated firmly.
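OpenPredictor’s internals were not presented, but the basic idea of converting routine pre-operative data into a risk score can be illustrated with a generic sketch. The feature columns, example values, threshold, and logistic-regression model below are assumptions for illustration, not the tool’s actual method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical columns: age, haemoglobin (g/L), creatinine (umol/L), number of comorbidities.
X_train = np.array([
    [54, 141,  80, 0],
    [79, 102, 160, 3],
    [66, 128,  95, 1],
    [83,  98, 180, 4],
    [47, 150,  70, 0],
    [72, 110, 140, 2],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = higher-risk case in this toy dataset

# Standardise features, then fit a simple logistic-regression risk model.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

new_patient = np.array([[70, 115, 130, 2]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"predicted risk score: {risk:.2f}")
print("suggested setting:", "specialist centre" if risk > 0.5 else "elective hub")
```

Any tool of this kind would need calibration, validation on local data, and thresholds agreed with clinicians before it could responsibly influence where a patient is listed for surgery.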
By using AI in this way, Reed argued that we can significantly reduce last-minute cancellations, improve patient outcomes, and make smarter use of limited NHS resources. It was a message that clearly resonated with an audience well aware of the pressures facing surgical services.
Key Takeaways
- AI must complement, not replace, clinical expertise. Trust, transparency, and responsible use are essential for AI to succeed in healthcare.
- Robotics is transforming precision in surgery and oncology, but scaling adoption requires addressing costs, training, and workflow integration.
- Data-driven AI tools are moving care upstream, enabling earlier risk identification and prevention, as seen in surgical triage and joint failure prediction.
- Regulation and governance are critical, ensuring AI and robotics are safe, ethical, and effectively embedded in care pathways.
- AI is enhancing—not replacing—clinical decision-making, from M&M reviews to consent processes and diagnostics like biopsy interpretation.
- AI’s reach is broadening, with innovative uses in emergency response (GoodSAM) and clinical documentation (Heidi), helping to alleviate pressure on overstretched clinicians.
- Legal and ethical challenges must keep pace with innovation, as AI and robotics gain autonomy and influence patient care decisions.
The event showcased a healthcare system on the edge of an AI-driven shift—where AI and robotics are already delivering real value but require careful, ethical integration. These technologies offer powerful solutions to longstanding challenges like diagnostics backlogs and workforce strain. Yet, as many speakers reminded us, they are tools to augment human care, not override it. If embedded thoughtfully, AI and robotics can lead to a more efficient, safer, and patient-focused NHS—one that leverages innovation without losing sight of trust, transparency, and clinical judgment.