Australia’s Imaging Giant I-MED Under Investigation for Alleged Data Breach in AI Training

Written by Mireia Cuevas Crespo (Reporter)

The Office of the Australian Information Commissioner (OAIC) has opened an investigation into allegations that I-MED Radiology Network used private patient data to train a radiology AI tool without prior consent. I-MED, the largest diagnostic imaging provider in Australia, is accused of sharing chest X-ray data with health technology firm Harrison.ai without first informing patients of this use.

The Need For Reliable Data

AI-powered medical devices learn from existing data. To be effective, they rely on massive datasets to recognise patterns and generate reliable medical conclusions.

That is why insufficient or inaccurate data can lead to erroneous AI-driven predictions. To avoid this, medical organisations must ensure their AI initiatives have access to high-quality data so their models can be trained effectively.
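To make this concrete, below is a minimal, hypothetical sketch in PyTorch of how such a model learns from labelled imaging data. The synthetic tensors, tiny architecture, and binary labels are illustrative assumptions, not any vendor's actual pipeline; the point is simply that the model's weights are shaped entirely by the data it sees, so dataset size and label quality bound what it can learn.

```python
# Minimal, hypothetical sketch of an imaging model learning from data.
# Synthetic tensors stand in for real chest X-rays; nothing here reflects
# I-MED's or Harrison.ai's actual systems.
import torch
import torch.nn as nn

# Stand-in for a labelled X-ray dataset: 256 greyscale 64x64 "images"
# with a binary finding label (e.g. abnormal / normal).
images = torch.randn(256, 1, 64, 64)
labels = torch.randint(0, 2, (256,))

# A deliberately tiny classifier: the dependence on data is the point,
# not the architecture.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Each pass nudges the weights toward patterns in the training data,
# which is why insufficient or mislabelled data yields unreliable models.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```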

However, relying on real patient information for AI training often raises data privacy concerns if patients have not been adequately informed about how their medical records will be used.
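One way such concerns can be addressed in practice is to gate training data on explicit, recorded consent before any model ever sees it. The sketch below is purely illustrative; the record schema and field names (such as consented_to_ai_training) are assumptions, not any real provider's system.

```python
# Hypothetical sketch of filtering training data on recorded consent.
# The fields below are illustrative assumptions, not a real schema.
from dataclasses import dataclass

@dataclass
class ImagingRecord:
    study_id: str
    pixel_data_path: str
    consented_to_ai_training: bool

def eligible_for_training(records: list[ImagingRecord]) -> list[ImagingRecord]:
    """Keep only studies whose patients explicitly agreed to AI training."""
    return [r for r in records if r.consented_to_ai_training]

records = [
    ImagingRecord("CXR-001", "/data/cxr/001.dcm", True),
    ImagingRecord("CXR-002", "/data/cxr/002.dcm", False),
]
print([r.study_id for r in eligible_for_training(records)])  # ['CXR-001']
```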

I-MED: An Indicator of Underlying Patient Privacy Issues

I-MED Radiology Network is Australia’s biggest provider of imaging services, with over 250 facilities nationwide.

In addition to imaging diagnostics, I-MED also operates i-TeleRAD, a premier teleradiology service, and Annalise.ai, which specialises in AI and medical imaging. I-MED performs about 5 million procedures annually.

The fact that a medical imaging giant such as I-MED is facing AI-related privacy concerns highlights wider difficulties with the handling of patient data in the healthcare sector. Moreover, given that AI is still in the early stages of its integration into modern healthcare, the outcome of I-MED's case may set critical precedents for data protection standards across the sector.

Under the Australian Privacy Principles, personal information may generally only be used or disclosed for the primary purpose for which it was collected, or for a secondary purpose the individual would reasonably expect.

In the case of I-MED and Harrison.ai, it is uncertain whether training AI models on medical data falls within what patients would reasonably expect as a secondary use.

What Happens Next?

According to a spokesperson, the OAIC is “making preliminary inquiries with I-MED Radiology Network to ensure it is meeting its obligations under the Australian privacy principles.”

In response, I-MED has said that they “continuously review our processes and policies to address evolving legal, governance and community expectations,” and that the firm is “committed to complying with privacy legislation as it continues to evolve.”

Harrison.ai, for its part, reportedly said that patient consent and privacy were “not matters for Harrison to respond to.” The AI firm maintained that I-MED was responsible for ensuring compliance with privacy standards and that the data used for training met all legal requirements.

The investigation into I-MED, however, raises significant issues that organisations worldwide should consider. Given the widespread application of AI in modern medical procedures, including cancer treatment, vaccine production and depression treatment, there is an urgent need for robust regulation concerning patient data disclosure.

By adapting current governance to AI developments and monitoring evolving data protection regulations, healthcare systems could better safeguard patient privacy and uphold ethical standards, ultimately fostering trust in AI-driven medical solutions.