Reality Check
By Dave Yeager
Radiology Today
Vol. 18 No. 12 P. 12
Mixed reality is making inroads in radiology.
Although augmented reality (AR) had a shining moment in the spotlight last year with the release of Pokémon Go, AR and its sibling virtual reality (VR) have been part of popular culture for a long time, as anyone who remembers the movie The Lawnmower Man or the Max Headroom TV series can attest. During that time, less publicized though potentially more transformative uses of AR and VR have been quietly working their way into health care. Although still in the early stages, the technologies are poised to usher in a paradigm shift in medical image visualization within the next few years.
"What we are talking about is a very rapidly, dynamically changing field where there are some hardware limitations that exist today, but, in the near future, we are on the cusp of major improvements in performance, resolution, battery life, and the weight of heads-up displays [that are worn by physicians]," says Eliot Siegel, MD, FACR, FSIIM, a professor and vice chair of information systems at the University of Maryland School of Medicine, an adjunct professor of computer science at the University of Maryland, Baltimore County, and chief of radiology and nuclear medicine at the VA Maryland Health Care System in Baltimore.
AR consists of computer-generated images that are overlaid on a view of the user's natural environment. VR is a completely computer-generated representation of the viewer's environment. Collectively, they are sometimes referred to as mixed reality. Siegel and Vikash Gupta, MD, a fourth-year radiology resident at the University of Maryland, see many potential uses for mixed reality. They have been working with several vendors to develop a mixed reality visualization engine that can be used with Microsoft's HoloLens, and they plan to demonstrate some of its possibilities at RSNA 2017 in the Virtual Reality Showcase.
Leveling Up
Siegel and Gupta have identified three levels of mixed reality. Level one is seeing what another user sees. For example, a consultant remotely viewing what a surgeon sees in surgery would be a level one function. This type of use can be applied to various educational situations.
"The idea of being able to see through someone else's eyes is particularly exciting, and we've been looking at how we can use this technology for interventional radiology education," Gupta says. "As has been reported in many studies, mixed reality is great as a tool to teach people how to work within procedures in the interventional suite. So we're looking at how we can use it to get our junior residents acclimated to the IR suite and show them how image-guided interventions by radiologists work, to help them become more accustomed to IR when they actually get into the clinical setting."
The second mixed reality level is the ability to create virtual environments that allow users to replace physical monitors and displays with virtual ones. For example, if an interventionalist has a question about a patient's imaging or health data, he or she currently has to leave the operating theater and go to the control room to access the information, then rescrub before returning. A sterile head-mounted display would allow the interventionalist to view the images or data on the spot, with the ergonomic benefit of not having to turn his or her head to look at a monitor. This type of technology is what Siegel and Gupta's RSNA demonstration will address.
The third level of mixed reality is the ability to register patients' medical scans with their physical bodies in real time. Because patients move, Siegel says, reliably syncing 3D holograms of their internal anatomy with the space where it actually exists is a significant challenge that requires complex registration and intensive computation. Siegel, Gupta, and their colleagues have been investigating ways to do this for bone and lung lesions, and several companies have software that addresses the image registration problem in different ways.
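For readers curious what the registration step involves, a common building block is rigid point-based alignment: given landmarks identified in the scan and the same landmarks tracked on the patient, find the rotation and translation that best map one set onto the other. The sketch below uses the classic Kabsch algorithm via NumPy; it is a minimal illustration of the general technique, not the actual method used by any of the vendors mentioned here, and the function name and landmark coordinates are invented for the example.

```python
import numpy as np

def rigid_register(src, dst):
    """Find rotation R and translation t such that R @ src_i + t ~= dst_i.

    src, dst: (N, 3) arrays of corresponding landmark points, e.g. points
    picked in CT space (src) and the same anatomy tracked on the patient (dst).
    Implements the Kabsch algorithm (least-squares rigid alignment).
    """
    # Center both point sets on their centroids.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered points.
    H = (src - mu_s).T @ (dst - mu_d)
    # SVD gives the optimal rotation; the sign correction avoids reflections.
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

A real system must additionally re-solve this (or a deformable variant) continuously as the patient moves and breathes, which is where the "intensive computation" Siegel mentions comes in.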
20 Minutes Into the Future
One company that is exploring the use of mixed reality in surgery is EchoPixel. CEO and founder Sergio Aguirre, MSc, says physicians are using his company's True 3D software for surgical planning of cardiology, IR, and interventional cardiology procedures. When surgeons request a surgical plan, radiologists can create a Key Bookmark Scene DICOM file that contains life-sized holographic organs and tissues, a traced surgical approach, and identified access points. The file is then pushed to the operating room, where the surgeon can interact with patient-specific anatomy using glasses and a stylus.
Aguirre says IR procedures are being handled slightly differently. At Lahey Hospital & Medical Center in Burlington, Massachusetts, In Sup Choi, MD, is using True 3D in his catheter lab, where he treats brain aneurysms. When a patient is on the operating room table, a cone beam CT image is generated by a C-arm to determine the brain's vasculature, assess the size of the aneurysm, and calculate the optimal C-arm angle. The fluoroscopic image is then overlaid on the patient. Doctors at Brigham and Women's Hospital in Boston and Stanford University Medical Center in California are using the software in a similar way to better visualize liver tumor feeder vessels so they can deliver microsphere treatments. Although there is a lag time of a few seconds in the software, Aguirre says he expects the first real-time installation of True 3D to be done sometime in 2018.
Other areas where Aguirre has seen interest in the software are ultrasound and Doppler imaging. He says there has been particular interest in visualizing structural heart procedures and fetal hearts for surgery. He believes the software is useful not only for direct care but also for helping physicians communicate with their patients.
"As you give clinicians really precise anatomical representations of their patients, it's helping them to provide better care," Aguirre says. "Surgeons are using patient-specific data not only for informed consent before surgery but also postsurgery to show patients why they need to comply with treatment plans."
Another company that provides surgical overlays is Cydar Medical. Cydar's technology, which is 510(k) cleared and CE marked, overlays a patient's CT scan on top of live fluoroscopy to assist with endovascular surgery. James Gough, Cydar's chief of business development, says the cloud-based system matches the images in the cloud and returns them within four seconds.
Gough says Cydar decided to use cloud architecture because of its superior processing power; most hospital servers don't have enough power to generate these images at speeds that are useful for surgery. The software uses machine vision algorithms to identify and match vertebral anatomy and is compatible with any CT or fluoroscopy system, fixed or mobile.
To date, more than 500 surgeries have been performed in the United States and United Kingdom with the Cydar system. Although it is currently specialized for vascular surgery, Gough says the company plans to expand its use, medically and geographically.
"The core technology lends itself to other parts of the body, and we do intend to use it to overlay other types of cases, such as orthopedics and oncology. We see huge opportunities for increasing advanced visualization in other areas of medicine," Gough says. "Also, we're uncoupling the hardware from the software; we're agnostic as to what hardware we work with. We are trying to enable fusion imaging for the masses. This is technology that can be used in all sorts of medical centers, including in the developing world."
Novarad is also developing mixed reality technology. Steve Cvetko, PhD, Novarad's director of research and development, has developed an app that works with the HoloLens and projects patients' medical images onto a physician's field of view during surgery. The images are registered with the patient's anatomy. Cvetko says the physician can use finger gestures to manipulate the medical images and voice commands to turn them on or off. 3D landmarks can be used to identify the location and depth of areas of interest prior to surgery in the same ways that annotations are used on medical images.
In July, Novarad CEO Wendell Gibby, MD, used the HoloLens app to perform a percutaneous discectomy, a needle-based interventional procedure that is designed to decompress spinal discs without cutting a patient open. Surgeons usually use fluoroscopy to help visualize needle placement, but Gibby says that technology has some limitations. With the app, he says he was able to see skin, muscle, and bone as a hologram on the patient, helping him to optimize the injection. The patient had two disc herniations, and Gibby says the patient experienced significant symptom relief. Because the app is not yet 510(k) cleared, the surgery was backed up by conventional imaging.
Although the HoloLens is a little heavy—like heavy leaded glasses, Gibby says—and the lights in the surgical suite had to be turned down and up manually, Gibby thinks that this technology opens up a new era in surgical assistance. He was especially pleased that he was able to use the HoloLens to make the entry mark on the patient's skin, based on preoperative imaging, and the fluoroscopic image was within 3 mm of the mark.
"This is some of the coolest stuff I've seen in my career," Gibby says. "This tops them all. It just blows your mind."
Gibby believes that apps such as Novarad's can be a significant step forward in allowing surgeons and radiologists to collaborate more and reduce medical errors. How long it takes mixed reality to achieve this ideal is an open question, but there seems to be little doubt that it will.
"If you go to electronics shows, you'll see more heads-up data that are being displayed in cars because there are so many distractions now that are taking people's eyes away from the road and being associated with an increasing number of accidents. Being able to drive and see the detail on the road, keep your eyes on the road, and get information that you need is going to make us better, safer drivers, and I think the same is going to be true for interventional radiologists and surgeons," Siegel says. "They'll be better able to keep their eyes on the patients and what they're doing in the field of view, without distractions such as alarms or hunting for images and patient data."
— Dave Yeager is the editor of Radiology Today.