May 2016

Paging HAL: What Will Happen When Artificial Intelligence Comes to Radiology?
By Dave Yeager
Radiology Today
Vol. 17 No. 5 P. 12

The myth of Hephaestus' golden handmaidens illustrates mankind's centuries-long fascination with artificial intelligence (AI). The god of the forge created his handmaidens, who could talk and perform even the most difficult tasks, to assist him in his labors, and many people have since speculated about the possible uses of AI and the forms it might take. More recently, noted scientists and futurists, such as Ray Kurzweil; Stephen Hawking, CH, CBE, FRS, FRSA; and Elon Musk, have discussed, debated, and dissected the possibilities and pitfalls of AI. With many AI advances coming in the past few years, some people are beginning to wonder whether it will eventually replace radiologists.

"There have been so many strides made in pattern recognition and speech recognition. We've gone from debates about whether the computer would ever be able to handle speech recognition, which it can do surprisingly well now, to debates about whether a computer could ever beat a grandmaster or the world champion human at chess or the even more challenging board game Go, and it happened," says Eliot L. Siegel, MD, FSIIM, FACR, a professor and vice chair of research informatics at the University of Maryland School of Medicine, an adjunct professor of computer science at the University of Maryland, and chief of radiology and nuclear medicine at the VA Maryland Health Care System in Baltimore. "So many of these tasks that were once assumed to require human thinking, including interpreting image information, are now falling by the wayside because of advances in AI. There has also recently been an incredible increase, outside of the medical imaging world, in [large and small organizations] looking at extracting information from images."

Siegel, who participated in one of the first research studies that used IBM's Jeopardy! DeepQA system for medical analyses, says people often ask him what AI means for radiology. At the Society for Imaging Informatics in Medicine 2016 Annual Meeting, he will deliver the closing Dwyer Lecture and an accompanying session on the topic. The session will look at AI's history and current applications and attempt to separate hype from reality. Later this year at RSNA 2016, Siegel will debate Bradley J. Erickson, MD, PhD, a professor of radiology at the Mayo Clinic in Rochester, Minnesota, about whether AI will replace radiologists within the next 25 years. He hasn't yet decided which side he'll argue, but one thing seems clear: Whatever preconceived notions people may have about it, AI is currently sitting on radiology's doorstep.

Shall We Play a Game?
People often associate AI with self-awareness. Popular movies, such as 1968's 2001: A Space Odyssey, 1982's Blade Runner, and 2015's Ex Machina, have contributed to this conception. In reality, we may be decades away from machines that recognize themselves, but another important aspect of AI is the ability to learn; this is often referred to as machine learning.

In this regard, computing has come a long way. People may remember IBM's Deep Blue, the computer that defeated chess grandmaster Garry Kasparov in 1997. Although that was an impressive feat, a newer system has done something even more impressive: In March, Google DeepMind's AlphaGo defeated Lee Sedol, a 9-dan professional and one of the world's top Go players, in a best-of-five match. The final score was 4 to 1.

Why is AlphaGo's accomplishment more impressive than Deep Blue's? Chess is a more constrained game than Go, with far fewer possible move combinations. Because of those constraints, Deep Blue was able to analyze millions of potential positions and their outcomes, a tactic known as brute-force calculation. Go's sheer number of possible move combinations makes it impossible for any current computer to analyze every possible scenario. Along with strategic thinking, Go players often rely on experience and intuition, which is why many people assumed that it would take many more years before a machine could defeat a top human player.
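The ballpark figures below, computed with a few lines of Python, show the scale of that gap. The per-position and game-length numbers are the round estimates commonly quoted for the two games, not exact counts.

```python
# Rough, commonly quoted estimates: about 35 legal moves per position over
# roughly 80 moves in a chess game, versus about 250 moves per position over
# roughly 150 moves in Go. These are ballpark figures used only to show why
# exhaustively enumerating Go is hopeless.
chess_tree = 35 ** 80
go_tree = 250 ** 150

print(f"chess game tree: roughly 10^{len(str(chess_tree)) - 1} lines of play")
print(f"go game tree:    roughly 10^{len(str(go_tree)) - 1} lines of play")
```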

To cope with Go's enormous variability, AlphaGo's programmers used a machine learning technique called deep learning. Deep learning relies on "neural networks" that are more similar to human thought processes than traditional computing, according to a 2016 article published by Silver et al in the journal Nature. Rather than attempting to map out every possible move combination, deep learning uses a large but finite sample of data and, with some fine-tuning by humans, draws conclusions from that sample. In the case of AlphaGo, the computer was then able to simulate millions of games and incorporate that knowledge into its decision making.
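To make that contrast concrete, the sketch below trains a tiny neural network on a synthetic sample of labeled "positions" instead of enumerating outcomes. It is only an illustration of the general learning loop; the data are random, the architecture is far simpler than AlphaGo's, and none of it comes from DeepMind's actual code.

```python
# A minimal sketch of "learning from a sample" with a tiny neural network,
# written with NumPy. The data are synthetic and the network is far smaller
# than anything used for Go; the point is only the shape of the approach:
# fit weights to a finite sample, then generalize from it.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sample: 1,000 "positions" with 16 features each, labeled 0 or 1
# (say, losing vs. winning). A real system would train on millions of expert
# and self-play positions.
X = rng.normal(size=(1000, 16))
y = (X[:, :4].sum(axis=1) > 0).astype(float).reshape(-1, 1)

# One hidden layer with 32 units; the weights start random and are adjusted
# by gradient descent so the network's predictions fit the sample.
W1 = rng.normal(scale=0.1, size=(16, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 1));  b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(2000):
    # Forward pass: predict a win probability for every position in the sample.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight in the direction that reduces the
    # prediction error (binary cross-entropy loss).
    grad_out = (p - y) / len(X)
    grad_h = (grad_out @ W2.T) * (1.0 - h ** 2)
    W2 -= learning_rate * (h.T @ grad_out); b2 -= learning_rate * grad_out.sum(axis=0)
    W1 -= learning_rate * (X.T @ grad_h);   b1 -= learning_rate * grad_h.sum(axis=0)

print(f"training accuracy after 2,000 steps: {((p > 0.5) == y).mean():.2f}")
```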

Radiology's Handmaidens
Many people have suggested that bringing this type of machine learning to medical care could be helpful for identifying critical medical conditions sooner; this would potentially allow for earlier intervention and better outcomes. Which brings us back to radiology. Because humans vary, radiological images present a nearly endless variety of medical conditions, which radiologists need to identify correctly, based on strategic thinking, experience, and intuition. But what if machine learning algorithms could be applied to radiological images? In some cases, they can. Tools that use AI are beginning to find their way to the marketplace.

Enlitic is one of the companies using deep learning to enhance radiology tasks. The company has developed a lung nodule detector that it claims can achieve positive predictive values 50% higher than those of a radiologist. As the detection model analyzes images, it learns from them. It not only finds lung nodules but also provides a probability score for malignancy.
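Positive predictive value is a simple ratio, and the toy numbers below illustrate what a 50% higher PPV means. The counts are invented for illustration only; they are not Enlitic's reported results.

```python
# Positive predictive value (PPV) = true positives / (true positives + false
# positives): of everything flagged as a nodule, how much really is one.
# The counts below are invented purely to illustrate the metric.
def ppv(true_positives, false_positives):
    return true_positives / (true_positives + false_positives)

radiologist_ppv = ppv(true_positives=40, false_positives=60)  # 0.40
model_ppv = ppv(true_positives=60, false_positives=40)        # 0.60

print(f"radiologist PPV: {radiologist_ppv:.2f}")
print(f"model PPV:       {model_ppv:.2f} "
      f"({model_ppv / radiologist_ppv - 1:.0%} higher)")
```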

Enlitic is now conducting a trial of a model that detects wrist fractures. Igor Barani, MD, Enlitic's CMO, says as many as 30% to 40% of such fractures can be missed; this can result in improper healing and chronic pain. The model is being trained to find the fractures on X-ray images and overlay a heat map that highlights their location within a conventional PACS viewer. To test the technology's effectiveness, the trial presents multiple radiologists with each image both with and without the heat map annotation, in random order, so that their accuracy in the two conditions can be compared.
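A heat map overlay of this kind is typically rendered by blending a per-pixel suspicion map over the radiograph. The sketch below shows one common way to do that with matplotlib; the image and the scores are synthetic stand-ins, and this is not Enlitic's actual viewer integration.

```python
# A minimal sketch of overlaying a model's "heat map" on a radiograph with
# matplotlib. Both the image and the scores are synthetic; `fracture_scores`
# stands in for whatever per-pixel suspicion map a detection model produces.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)

# Stand-in for a grayscale wrist radiograph ...
xray = rng.normal(loc=0.5, scale=0.1, size=(512, 512)).clip(0, 1)

# ... and a stand-in for the model's per-pixel scores: a blob of high values
# where a fracture is "suspected", near zero elsewhere.
yy, xx = np.mgrid[0:512, 0:512]
fracture_scores = np.exp(-((yy - 300) ** 2 + (xx - 200) ** 2) / (2 * 30.0 ** 2))

fig, ax = plt.subplots(figsize=(5, 5))
ax.imshow(xray, cmap="gray")
# Only draw the overlay where the score is meaningful, so the underlying
# anatomy stays visible; alpha controls the transparency of the heat map.
overlay = np.ma.masked_where(fracture_scores < 0.2, fracture_scores)
ax.imshow(overlay, cmap="hot", alpha=0.5)
ax.set_axis_off()
plt.savefig("wrist_with_heatmap.png", dpi=150)
```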

"We have some very promising early results," Barani says. "We are actually broadening the scope of this project, beyond just fracture detection, with the specific goal of rolling out a clinical application in the summer."

The clinical application will encompass X-ray, CT, and possibly MRI and will search for a variety of medical conditions. Enlitic is working to incorporate ACR guidelines into it and is also exploring treatment planning and treatment recommendation applications. Barani says the long-term goal is to build a neural network that can evaluate the entire body and detect any pathologic state, as well as variations of normal anatomy, while integrating patient-specific factors; genomic, clinical, and imaging data; and other data that can assist physicians in making informed treatment decisions.

Medical providers are looking into the possibilities of deep learning as well. In 2015, teleradiology provider vRad partnered with AI software company MetaMind to identify key radiology elements associated with critical medical conditions. Because emergency departments (EDs) constitute a large part of vRad's business, the first tool from this partnership is an algorithm that identifies the presence of intracranial hemorrhage (ICH), a condition often seen in ED patients as a result of stroke or trauma. ICH can cause rapid deterioration if it is not treated promptly and with the correct protocols.

Shannon Werb, vRad's chief information officer and chief operating officer, says this type of deep learning tool requires large, highly targeted data sets. vRad interprets about 4,000 head CT scans daily; those images are compared in real time against the algorithm's baseline data to check for ICH. In March, vRad put the algorithm into a beta phase that will allow it to collect data to demonstrate outcomes, leading up to a filing with the FDA. The company has filed a patent for the use of deep learning technologies in a telemedicine platform and plans to file its updated 510(k) clearance submission for its PACS this summer.

Now that vRad has built the algorithm, Werb says, it can be adapted to other uses. vRad is working on algorithms to identify additional critical conditions, such as pulmonary embolisms and aortic tears, which are also often seen first in EDs. In the future, Werb believes it will be possible to enhance physician workflows by escalating cases with critical findings to the front of the radiologist's worklist, automatically alerting the radiologist and, based on the radiologist's dictation of the case, autodialing the ordering physician to facilitate the necessary critical findings conversation—within minutes of the diagnosis and without any manual intervention by the radiologist.
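The escalation workflow Werb describes amounts to a priority queue: studies an algorithm flags as likely critical jump ahead of routine work and trigger an alert. The sketch below models that idea in Python; the study identifiers, probability threshold, and notification hook are all hypothetical, not vRad's implementation.

```python
# A minimal sketch of worklist escalation: studies flagged as likely critical
# are read before routine studies and generate an alert. All names, scores,
# and thresholds here are hypothetical.
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class WorklistEntry:
    priority: int                       # 0 = critical, 1 = routine
    arrival: int                        # ties broken by arrival order
    accession: str = field(compare=False)
    finding: str = field(compare=False)

worklist = []
counter = itertools.count()

def notify_radiologist(entry):
    # Stand-in for paging/alerting; a real system would integrate with PACS.
    print(f"ALERT: {entry.accession} escalated ({entry.finding})")

def add_study(accession, ich_probability, threshold=0.8):
    """Push a study onto the worklist; escalate it if the model's ICH probability is high."""
    critical = ich_probability >= threshold
    entry = WorklistEntry(
        priority=0 if critical else 1,
        arrival=next(counter),
        accession=accession,
        finding="possible ICH" if critical else "routine",
    )
    heapq.heappush(worklist, entry)
    if critical:
        notify_radiologist(entry)

add_study("ACC-1001", ich_probability=0.05)
add_study("ACC-1002", ich_probability=0.93)   # jumps the queue
add_study("ACC-1003", ich_probability=0.10)

while worklist:
    nxt = heapq.heappop(worklist)
    print(f"read next: {nxt.accession} [{nxt.finding}]")
```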

"I believe we're going to start seeing deep learning penetration of the market, in a significant way, soon, but doing smaller tasks that enhance the ability of physicians to focus on delivering diagnoses faster," Werb says. "For example, accelerating workflows for study escalation and critical findings will happen this year. Eventually, these initial successes will help us deliver more complex applications. If, for example, we could integrate the ICH algorithm in a CT scanner so that it could be used when a patient is being scanned, the CT scanner could recognize the potential presence of ICH, and a doctor could look at the images immediately. Things like that will start happening in the next couple of years. Looking some years down the road, we may eventually be able to point the doctor's eyeballs to relevant findings or provide preliminary reports for physician review."

Another company that expects to see more AI-based products on the market is Merge Healthcare, which was acquired by IBM in October 2015 and will become part of the IBM Watson Health unit. Later this year, IBM/Merge will release an EMR summarization service that can condense hundreds of pages of notes for easier reading by diagnosing physicians and a retrospective audit service that can help health systems look at variability of care or identify patients who are at risk for certain conditions. At the Healthcare Information and Management Systems Society 2016 Annual Conference and Exhibition this year, they demonstrated prototype tools such as an iPhone app that can identify melanoma with accuracy in the high 90% range, a platform that can look at a heart image and determine whether a patient is a good candidate for an aortic valve replacement, and a tool that can automatically detect breast masses and document their location and size. Steve Tolle, chief strategy officer of IBM Watson Health Imaging, thinks these products will be available by 2017 or 2018.

By combining the ability of IBM's Watson supercomputer to recognize images and map organs with Merge's database of medical images, IBM/Merge is working on an image processing platform that can map the entire body. The platform requires approximately 5,000 to 10,000 studies per modality and per organ to train it. The IBM/Merge team is currently focusing on identifying conditions such as breast cancer, lung cancer, and COPD. Tolle says it's too soon to say when the platform will receive FDA approval.

Help Wanted
Radiological images are stored in DICOM format, and the volume of images is expanding continually. These factors—structured data and large databases—are two reasons that companies have chosen radiology as the starting point of their clinical AI efforts. Ironically, one of the factors that has fueled AI forays in radiology is also something of a limiting factor: Although the data exist, data silos make aggregating the necessary files from radiology, pathology, EHRs, and other sources time-consuming.
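As a concrete example of why DICOM's structure helps, every file carries machine-readable header fields (patient, study, modality) that can be indexed without touching the pixel data. A minimal sketch using the open-source pydicom library is below; the archive path is hypothetical.

```python
# A minimal sketch of indexing an image archive by its DICOM headers: the
# structured metadata can be read from each file without parsing free text
# or loading pixel data. The directory path is hypothetical.
from collections import defaultdict
from pathlib import Path

import pydicom

studies = defaultdict(list)

for path in Path("/data/imaging_archive").rglob("*.dcm"):
    ds = pydicom.dcmread(path, stop_before_pixels=True)  # headers only, fast
    key = (
        str(getattr(ds, "PatientID", "unknown")),
        str(getattr(ds, "StudyInstanceUID", "unknown")),
        str(getattr(ds, "Modality", "unknown")),
    )
    studies[key].append(path)

print(f"indexed {sum(len(v) for v in studies.values())} images "
      f"across {len(studies)} patient/study/modality groups")
```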

"I think the primary limitation is the amount of time we need to gather the curated data," Tolle says. "When we look at a curated data set, it's similar to what an academic medical center uses as a teaching file to train residents. For example, you want to know that the image is a mammogram where breast cancer was identified. We then did a biopsy, and the pathology [confirmed it]. And here's the patient's medical history: They have a sister or an aunt or a mother who has this gene mutation and, therefore, the patient is at risk for this condition. So we're taking all of that data per image series and using that to train the machine. It does take some time to do the work."

Obtaining the necessary data requires cooperation across the industry. As a large medical provider, vRad was able to use its own data, but it partnered with MetaMind to provide the AI software. Enlitic used publicly available CT data from the National Lung Screening Trial to develop its lung nodule detector but has since partnered with Capitol Health, one of the fastest-growing radiology practices in Australia, to provide the data that Enlitic needs to improve its models and the opportunities to test them in clinical practice. Likewise, IBM's acquisition of Merge Healthcare gave it access to a huge medical image database that can be used to train Watson.

Broader efforts to bring AI to radiology and health care will require still more data sharing, Siegel says. He notes that medical data are protected in many ways due to privacy concerns, and medical providers typically don't have easy access to this information. For example, patients are generally asked to opt in to have their data used for medical research. Siegel suggests that a more effective approach may be to change the policy so that health care organizations would instead give patients the option to opt out. In addition, some health care organizations still view patient data as proprietary information or fear liabilities associated with sharing it.

"There are many pockets of localized information, [and we need] larger libraries of data that cut across multiple systems that we would have permission to mine. But the culture, in this very early era of electronic medical records, doesn't really have a provision that allows us to create this concept of data that cuts across many different sources," Siegel says. "Clearly, we want privacy and security, but I think privacy and security for both clinical and research purposes, while critically important, have been overblown and overinterpreted to the point where hospitals are afraid to share data, even when it's in the patient's best clinical interest, and certainly for research purposes. I'd love to see some sort of legislation that makes it more clear, more apparent, and easier for medical providers to feel as though they are able to share that data securely and privately to advance medicine."

Streamlined access to medical data would speed AI's progress. And progress is needed because AI-based tools have the potential to help radiologists perform more efficiently. Despite any misgivings that people may have about the implications of AI technology, radiologists aren't going away any time soon.

"The World Economic Forum has estimated that it will take at least 300 years to train enough medical experts to meet the needs of the developing world, so the idea that we could have too many medical experts seems very unlikely," says Jeremy Howard, founder of Enlitic. "The important thing here is that the radiologist plus the tool is much more accurate than the tool alone. Yes, the tool can be more accurate than a radiologist alone, but the combination is more accurate than either."

— Dave Yeager is the editor of Radiology Today.