AI in IR
By Beth W. Orenstein
Radiology Today
Vol. 26 No. 2 P. 10

Experts see broader applications for AI as evidence suggests it can improve patient outcomes.

AI is a hot topic in IR, and rightly so. Consider that AI's increasing role in IR was the unofficial theme of the Society of Interventional Radiology (SIR) 2024 annual meeting in Salt Lake City a year ago. Bruce J. Tromberg, PhD, the director of the National Institute of Biomedical Imaging and Bioengineering at the National Institutes of Health, led a plenary session in which he said the potential applications of AI in all medical fields, including IR, are limitless.

In IR, Tromberg said the potential applications include everything from workflow efficiencies and automation protocols to device selection, navigation, risk assessment, disease prediction, and even drug selection.

At the meeting, Dania Daye, MD, PhD, an interventional radiologist at Massachusetts General Hospital and an associate professor at Harvard Medical School, said that with so many technologies evolving and emerging evidence that AI can positively affect patient outcomes, all physicians, including interventional radiologists, will have to use some AI tools or be left behind. “I am the first person to tell you that interventional radiologists have to embrace AI because eventually the interventional radiologist who is not going to embrace AI is not going to have the best patient outcomes,” Daye says in a follow-up interview with Radiology Today.

As a field, radiology is a leader in AI adoption. Data published by the FDA shows that more than three-quarters of AI-enabled medical devices that received marketing authorization up to the end of July 2023 are for applications in radiology. “But if you look at what’s been approved in AI, it is actually in diagnostic radiology and very, very few of what’s been approved is in interventional radiology,” Daye says. “IR is actually lagging behind significantly.”

Daye, Tromberg, and others believe that balance will change quickly over the next decade or so. “I call it the next frontier because I think in the next 10 years, there is going to be some increased focus on AI applications in IR,” Daye adds.

One of the reasons AI in IR lags, Daye says, is that the datasets for AI in IR are not as robust as those in diagnostic radiology. “It’s putting us behind our diagnostic colleagues,” she says. Very few interventional radiologists perform the same procedures the same way and collect the same number of images or the same projections, Daye explains. “So, it is very hard to have a uniform, labeled dataset from multiple institutions to train on.” Given the variability among practitioners, tasks such as intraprocedural planning are a particular challenge. Fortunately, Daye says, “there are a few of us who are doing work in this space and starting to recognize and come up with ways to get around these challenges.” For example, “some new computational techniques are coming out in the computer vision world where we can start to work around this particular challenge,” she says.
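Daye doesn't name the specific techniques, but one common family of workarounds for small, heterogeneous procedural-imaging datasets is heavy data augmentation that simulates operator-to-operator variability at training time. The pipeline below is a minimal, assumption-laden sketch using the open-source torchvision library; the specific transforms and parameters are illustrative, not a description of any group's method.

```python
# Illustrative only: one way to cope with variability in projection, dose,
# and framing across operators is to simulate that variability with
# augmentation during training. Parameters here are hypothetical.
import torchvision.transforms as T

# Hypothetical augmentation pipeline for single-frame procedural images.
augment = T.Compose([
    T.RandomRotation(degrees=15),                      # gantry-angle variation
    T.RandomResizedCrop(224, scale=(0.8, 1.0)),        # collimation/framing
    T.RandomAffine(degrees=0, translate=(0.1, 0.1)),   # table/patient shift
    T.ColorJitter(brightness=0.3, contrast=0.3),       # dose/exposure spread
    T.ToTensor(),                                      # to a model-ready tensor
])
```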

Establishing Connections
Tromberg notes that the National Institute of Biomedical Imaging and Bioengineering supports a multi-institutional collaborative initiative, the Medical Imaging and Data Resource Center (MIDRC), a federated data search, discovery, and analysis platform that allows researchers to query and analyze data from independent data repositories or resources related to medical imaging. The aim of MIDRC, which was initiated in late summer 2020 to help combat the global COVID-19 health emergency, is to foster machine learning innovation through data sharing for rapid and flexible collection, analysis, and dissemination of imaging and associated clinical data by providing researchers with unparalleled resources, Tromberg says.
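To make the idea of a federated query concrete, here is a hypothetical sketch of what searching such a data commons could look like. The endpoint path, query fields, and response shape are illustrative assumptions only, not MIDRC's actual API; researchers should consult MIDRC's own documentation.

```python
# Hypothetical sketch of querying a federated imaging data commons.
# The URL path, query fields, and response keys below are assumptions
# for illustration, not MIDRC's real interface.
import requests

BASE = "https://data.midrc.org"  # MIDRC's public portal host
query = {"modality": "CT", "body_part": "CHEST", "limit": 10}

# Federated commons typically expose a search service returning
# study-level metadata; this endpoint name is invented for the sketch.
resp = requests.get(f"{BASE}/api/v0/search/studies", params=query, timeout=30)
resp.raise_for_status()
for study in resp.json().get("studies", []):
    print(study.get("study_uid"), study.get("modality"))
```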

MIDRC is hosted at the University of Chicago and is co-led by the ACR, the RSNA, and the American Association of Physicists in Medicine. Although it started with COVID-19 and thoracic CT images, it has been pivoting to cancer and other chronic diseases. Tromberg sees MIDRC helping with algorithm development that could be widely distributed to all radiologists, whether interventional or diagnostic.

“It could really help improve the workflows and the connections between diagnostic radiology and interventional radiology,” Tromberg says.

MIDRC has more than a thousand users around the world, and hundreds of sites are contributing images, Tromberg says. A few dozen AI algorithms under development and validation show promise in helping diagnostic radiology and IR come together to improve patient outcomes. Tromberg envisions AI bringing diagnostic and interventional radiologists together to find the best treatments for patients. “There’s a huge amount of work and progress being made,” he says, “linking genetic and molecular markers that come from biopsy to features and characteristics seen in imaging.”

This process, known as radiopathomics, uses advanced feature analysis to extract quantitative data from medical images. Radiopathomics can feed predictive models that link imaging characteristics to the most effective treatments for specific conditions. “The interventionalist can treat the patient more effectively if they have information such as how stiff a mass is or know other genetic information about the mass beforehand,” Tromberg says. And AI may provide this information before treatment begins, he adds.
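The article describes radiopathomics only at a high level, but the imaging half of such a pipeline can be sketched with pyradiomics, an open-source feature extraction library, feeding a simple scikit-learn classifier. Every file path, training pair, and label below is a hypothetical placeholder, not data from the work Tromberg describes.

```python
# A rough sketch of the radiomics half of a radiopathomics pipeline:
# quantitative features are extracted from an image/mask pair and fed to
# a simple predictive model. pyradiomics and scikit-learn are real
# libraries; all file paths and labels below are placeholders.
import numpy as np
from radiomics import featureextractor
from sklearn.linear_model import LogisticRegression

extractor = featureextractor.RadiomicsFeatureExtractor()  # default feature set

def lesion_features(image_path: str, mask_path: str) -> np.ndarray:
    """Extract first-order, texture, and shape features for one lesion."""
    result = extractor.execute(image_path, mask_path)
    # Keep numeric feature values; skip the diagnostics metadata entries.
    return np.array(
        [v for k, v in result.items() if not k.startswith("diagnostics")],
        dtype=float,
    )

# Hypothetical training set: one (image, mask) pair per biopsy-proven lesion,
# labeled with a molecular marker found at pathology (the "pathomics" side).
training_pairs = [("lesion01_ct.nrrd", "lesion01_mask.nrrd"),
                  ("lesion02_ct.nrrd", "lesion02_mask.nrrd")]
marker_labels = [1, 0]  # placeholder marker status from pathology

X = np.stack([lesion_features(img, msk) for img, msk in training_pairs])
model = LogisticRegression(max_iter=1000).fit(X, marker_labels)
```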

Extending Reality
Another area of AI integration in IR is extended reality (XR). Interventional radiologists can wear XR headsets during procedures, making it easier to follow what's happening in real time than glancing at monitors that may be across the suite. Another SIR presenter, Judy Wawira Gichoya, MD, MS, an associate professor in the department of radiology and imaging sciences at Emory University School of Medicine in Atlanta, sees a role for XR headsets in training and educating interventional radiologists. If students were to wear headsets, she says, they could envision what's happening during a procedure much better.

“Because, when you think about it—not the primary operator or the second assistant—but if you’re the fourth assistant, you can barely hear what the two performers are doing,” she explains. “There’s so much going on in the room, and sometimes we know that the further away you are, you may not have that real experience of training.”

XR headsets put everyone on equal footing, she says. A headset could also be given to a referring physician who comes into the suite to observe a patient's procedure. “They are able to see what you do because sometimes, they don’t really understand it,” she says. Gichoya says she was part of a highly promising trial of XR headsets sponsored by Immertec.

Gichoya suspects that XR headsets wouldn’t necessarily be used in routine cases, but more likely for complex ones. “For example,” she says, “if you just need to do your dialysis line, you’re not going to use it unless it’s for a very challenging case. It’s not going to be as ubiquitous in their use as people may think.”

Still, Gichoya says, it could be useful when planning a procedure. “You could use it to try to understand where you should position yourself, how it’s going to work, and all that type of thing.”

One minor obstacle to overcome with headsets is appearance: some physicians may be concerned about how they look while wearing one. “You can imagine, if we do most of our procedures under moderate sedation and the patient sees you with a headset, you’d have quite a lot of explaining to do,” Gichoya says, laughing.

Report Assistance
AI-assisted reporting is very common now in radiology, Gichoya says. “There’s an impression generator. We’re also seeing this combination of AI that is extracting features and coming back saying, ‘Okay, here’s the preliminary report.’” This is an area that interventional radiologists are very interested in, Gichoya says. Interventional radiologists wouldn’t have to rely on their memory as much as they do now, she notes. “You could say, ‘This is a PICC line. It’s a 21-cm tip to curve, and it ends in the right atrium, and there are no complications.’ That should be enough information to generate a report.”
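As a concrete illustration of the structured dictation-to-report step Gichoya describes, here is a minimal, hypothetical sketch. The data fields and report template are illustrative assumptions, not any vendor's product; a commercial impression generator would use a trained model rather than a fixed template.

```python
# Minimal sketch: turning a few dictated procedural facts into a draft
# report, using Gichoya's PICC line example. All field names and the
# template are hypothetical.
from dataclasses import dataclass

@dataclass
class PICCPlacement:
    length_cm: int          # catheter length as dictated
    tip_location: str       # where the tip terminates
    complications: str = "none"

def draft_report(p: PICCPlacement) -> str:
    """Render the structured facts as a draft procedure report."""
    return (
        "PROCEDURE: PICC line placement.\n"
        f"FINDINGS: {p.length_cm}-cm catheter placed; tip terminates "
        f"in the {p.tip_location}.\n"
        f"COMPLICATIONS: {p.complications.capitalize()}.\n"
        "IMPRESSION: Successful PICC placement."
    )

print(draft_report(PICCPlacement(length_cm=21, tip_location="right atrium")))
```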

At the same time, Gichoya is a bit skeptical of AI-generated summaries. “In fact,” she says, “personally, I’ve moved away from most of those unless I have created them myself. This is because there’s something about making an impression of the material you are reading which allows you to gain a basic understanding of the content. We are already seeing loss of critical thinking and reading skills when ChatGPT is used.”

Gichoya says she would be open to using AI summaries to keep up to date with new publications in the peer-reviewed scientific literature and to listening to these summaries as podcasts. “You could use it as a screening tool, for example, and while you’re driving to work and thinking, ‘I wonder what JVIR published today,’ it could be rather useful,” she says. Gichoya notes that it’s better when the paper's authors have a role in generating the AI-assisted summaries. “The authors know the content; it’s not just autonomous,” she says. “It’s also a way for the work to be read. They don’t need to go back and record and keep attempting to record multiple times. They can interact in terms of prompting to make sure we end up with higher quality podcasts. That’s what I see when you think about AI and the bridge for understanding research.”

Improving Patient Education
Daye sees a role for AI in educating patients about the most common interventional procedures, as well. Daye and colleagues published a paper in January in the Journal of the American College of Radiology that found that IR procedural instructions generated by GPT-4 can improve health literacy and patient-centered care in IR by providing easily understandable explanations. The researchers selected 10 IR procedures and prepared GPT-4 prompts requesting instructions for each procedure in layman's terms. Four clinical physicians and nine nonclinical assessors evaluated the AI-generated instructions for clinical appropriateness, understandability, and clarity. The patient instructions available at radiologyinfo.org for the same procedures were evaluated the same way, and the two sets were compared using a paired t test. Patients rated the GPT-generated instructions for procedures including arterial embolization, dialysis catheter placement, ultrasound-guided biopsy, and nephrostomy tube placement as very good to excellent.
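For readers curious about the statistics, a paired t test compares two ratings of the same item, asking whether the mean difference across pairs differs from zero. The sketch below uses SciPy's real ttest_rel function; the scores are made-up placeholders, not the study's data.

```python
# Minimal sketch of the paired comparison described in Daye's study:
# each item is rated twice (GPT-4 instructions vs. radiologyinfo.org
# instructions), and a paired t test evaluates the mean difference.
# All rating values below are invented placeholders.
from scipy.stats import ttest_rel

gpt_scores  = [4.5, 4.0, 4.8, 4.2, 4.6, 4.4, 4.7, 4.1, 4.3, 4.5]
site_scores = [4.0, 3.8, 4.5, 4.1, 4.2, 4.0, 4.4, 3.9, 4.1, 4.2]

t_stat, p_value = ttest_rel(gpt_scores, site_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```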

Tromberg believes that radiology is a prime field to lead the way in AI and use it to improve its practice. “Radiologists are used to complex imaging technologies and mathematical methods for extracting image information content,” he says. That's why the FDA is approving algorithms from the radiology community at a much faster rate than from any other medical field, he adds. However, one challenge with AI is that algorithms can be based on biased datasets. There are many potential sources of bias, including the patient population, technologies, algorithms, and more, Tromberg notes. He also points out that MIDRC has done groundbreaking work on developing practical tools for identifying and mitigating bias while ensuring that validated, trustworthy AI is used in clinical practice.

Daye often says that she expects someday, and sooner rather than later, “AI is going to allow interventional radiologists to use the right procedure for the right patient for the most optimal outcome.” Already, she and her colleagues are seeing evidence of AI’s benefits in study after study. “Some evidence is coming up on the effect of AI on patient mortality,” she says. “And when you see evidence that using AI helps improve patient mortality rates, it will be impossible to ignore.”

— Beth W. Orenstein of Northampton, Pennsylvania, is a freelance medical writer and regular contributor to Radiology Today.