Head Start
By Beth W. Orenstein
Radiology Today
Vol. 20 No. 2 P. 16
An algorithm could aid radiologists in predicting who will develop Alzheimer's disease.
Around 5.7 million Americans are living with Alzheimer's disease (AD); the vast majority are older than 65, but at least 200,000 are younger and have early-onset disease, according to the Alzheimer's Association. As the population ages, the numbers are expected to grow. One forecast predicts that by 2050 more than 2% of the population of the United States and 1% of the world population will have AD, a progressive disease that is likely to end in death.
Currently, there is no definitive imaging or blood test for predicting progression of AD, nor is there a cure. Despite advances in diagnostic imaging technology, patients are diagnosed with probable AD and other forms of dementia on clinical grounds; a diagnosis can't truly be confirmed until autopsy.
Medication that is intended not as a cure but as a way to slow disease progression is in development. Being able to diagnose AD earlier would be a huge boon to patients and their families. Even better would be the ability to predict AD early.
"If we can find it earlier, we have an opportunity to jump in with therapeutic intervention," says Jae Ho Sohn, MD, MS, a resident physician and AI researcher in the department of radiology and biomedical imaging at the University of California San Francisco (UCSF) Medical Center. Sohn has a background in engineering and leads the Big Data in Radiology, or BDRAD, research group, a multidisciplinary team of physicians and engineers focusing on radiological data science.
At UCSF, one of Sohn's faculty mentors, Benjamin L. Franc, MD, MS, approached him about using deep learning to interpret PET brain scans acquired with fluorine-18 FDG (18F-FDG) to help distinguish between mild cognitive impairment (MCI) and AD and make earlier diagnoses of which patients are most likely to develop AD. While only a fraction of MCI patients is likely to advance to AD, there is currently no definitive marker and thus no way to know who will and who won't, Sohn says.
Pattern Recognition
Radiologists are good at finding specific biomarkers of disease on studies, but deep learning may help with more global and subtle processes, Sohn says. The idea of using deep learning to identify disease isn't new; deep learning applications are being explored in imaging for many diseases, including breast cancer detection with mammography, pulmonary nodule detection with CT, and hip osteoarthritis classification with X-rays, to name a few.
Typically, patients who already have AD show hypometabolism on 18F-FDG PET scans in regions of the posterior cingulate, parietotemporal cortices, and frontal lobes, while those with MCI show posterior cingulate and parietotemporal hypometabolism with variable frontal lobe involvement. Frontal lobe hypometabolism occurs later in the disease relative to the other findings. Because there is substantial overlap of findings—as both entities lie along a continuum—18F-FDG PET scans require specialists in nuclear medicine and neuroimaging to interpret using qualitative or semiquantitative readings, Sohn says.
Sohn was given a challenge: Could a deep learning algorithm be trained to predict features or patterns not evident on standard clinical review of 18F-FDG PET scans of the brain, and, if so, could it be used as an early prediction tool for AD, especially in conjunction with other biochemical and imaging tests? Sohn and colleagues devised a study to find the answer, which they determined to be yes. They published the results of their study in Radiology in November 2018.
The researchers analyzed more than 2,100 18F-FDG PET imaging studies from 1,002 patients enrolled in the Alzheimer's Disease Neuroimaging Initiative (ADNI), ADNI-2, and ADNI-Grand Opportunities. The scans were performed from May 2005 to January 2017, and many patients had multiple scans. The researchers trained the deep learning algorithm on 90% of the ADNI data set and tested it on the remaining 10%, as sketched below.
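As a rough illustration of that kind of 90/10 split, the sketch below groups scans by patient so that no patient's studies land in both the training and test sets. The column names, the toy data, and the use of scikit-learn's GroupShuffleSplit are assumptions for illustration only; the study's actual data pipeline is not described here.

    # Hypothetical sketch of a patient-level 90/10 split, so that multiple
    # scans from one patient never appear in both training and test sets.
    # Column names and data are illustrative, not from the actual study.
    import pandas as pd
    from sklearn.model_selection import GroupShuffleSplit

    # Each row is one 18F-FDG PET study; one patient may have several rows.
    scans = pd.DataFrame({
        "patient_id": ["P001", "P001", "P002", "P003", "P003", "P004"],
        "scan_path": [f"scan_{i}.nii" for i in range(6)],
        "label": ["MCI", "AD", "MCI", "MCI", "MCI", "AD"],
    })

    # Split at the patient level: test_size is a fraction of patients, not scans.
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.10, random_state=42)
    train_idx, test_idx = next(splitter.split(scans, groups=scans["patient_id"]))

    train_set, test_set = scans.iloc[train_idx], scans.iloc[test_idx]
    print(f"{len(train_set)} training scans, {len(test_set)} test scans")

Splitting by patient rather than by scan matters because patients with multiple scans would otherwise leak information from the training set into the test set.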
Promising Results
Another 40 18F-FDG PET scans from UCSF from 2006 to 2016 of patients who were not enrolled in the ADNI were used as an independent test set. The average follow-up for the ADNI group was 54 months by patient and 62 months by imaging study. The average follow-up for the independent test group was 82 months in the AD group, 75 months in the MCI group, and 74 months in the non-AD/MCI group. None of the patients in either set had been diagnosed with dementia of any type other than AD. A clinical diagnosis of AD was used as the ground truth.
Three board-certified nuclear medicine physicians with a combined 55 years' experience performed independent interpretation of the 40 18F-FDG PET studies from the independent test set. Their interpretations consisted of qualitative interpretation of the PET images in axial, sagittal, and coronal planes and semiquantitative regional metabolic analysis using a commercially available clinical neuroanalysis software package from MIM Software. The readers saw only the 18F-FDG PET imaging data, name, age, and date of the scan. In cases where the qualitative interpretations disagreed, two additional radiologists with a total of 14 years' experience provided input. The final clinical diagnosis was that of the majority of the five readers.
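A minimal sketch of that majority-rule adjudication, assuming each reader's call is recorded as a simple label (the calls below are invented for illustration):

    # Minimal sketch of majority-rule adjudication among five readers.
    # Reader calls are invented for illustration.
    from collections import Counter

    reader_calls = ["AD", "AD", "MCI", "AD", "MCI"]  # one call per reader
    final_call, votes = Counter(reader_calls).most_common(1)[0]
    print(f"Consensus interpretation: {final_call} ({votes} of {len(reader_calls)} readers)")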
The trained deep learning model was tested on two data sets: 10% of the ADNI scans and the independent test set. "We trained the algorithm by giving it a couple hundred cases that progressed to AD and a couple hundred that did not," Sohn says. "The machine learning model was able to teach itself to see which cases were likely to progress to Alzheimer's disease, and it did it better than humans on [receiver operating characteristic] space." All of the scans were of patients who had something "a little bit off" but were not yet clinically diagnosed with AD. "We were very pleased with the algorithm's performance," Sohn says. "It was able to detect every single case that advanced to Alzheimer's disease on the UCSF data set." However, he adds, it is important to remember that this was a small data set.
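Performance "in receiver operating characteristic (ROC) space" is conventionally summarized with an ROC curve and its area under the curve (AUC). The sketch below shows that calculation on made-up model scores using scikit-learn; it is not the study's actual evaluation code.

    # Hypothetical sketch of ROC analysis: compute the ROC curve and AUC from
    # model scores against the clinical ground truth. Scores below are made up.
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    # 1 = progressed to AD, 0 = did not; scores from a trained model.
    y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
    y_score = np.array([0.10, 0.35, 0.80, 0.65, 0.20, 0.90, 0.72, 0.70])

    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    auc = roc_auc_score(y_true, y_score)
    print(f"AUC = {auc:.2f}")

    # A human reader's binary calls occupy a single point in ROC space; a model
    # whose curve passes above that point offers a better sensitivity/specificity
    # trade-off than the reader achieved at that operating point.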
More importantly, the machine learning model was able to predict which patients would progress from MCI to AD six years earlier than a clinical diagnosis of AD could be made, he says. That's important, Sohn says, because it could mean patients can be started on treatments to slow the progression of their disease when it's still in its early stages. He is hopeful that after more large-scale external validation on a multi-institutional data set and model calibration, the algorithm may be integrated into clinical workflow and serve "as an important decision support tool to aid radiology readers and clinicians with early prediction of AD from 18F-FDG PET imaging studies."
Youngho Seo, PhD, a professor and director of nuclear imaging physics in the department of radiology and biomedical imaging at UCSF, says it's significant that the study was done using 18F-FDG PET imaging because it's an inexpensive nuclear tracer compared with others and can be done at most imaging facilities. FDG PET is already FDA approved for other indications, so patients may be able to undergo PET imaging to look for AD if their physicians request it as an add-on, he says.
Broadening the Scope
Carolyn C. Meltzer, MD, FACR, a professor and chair of the department of radiology and imaging sciences and associate dean for research at Emory University School of Medicine in Atlanta, agrees that the UCSF study is "very promising." The idea of looking at large numbers of imaging studies where the outcome is known for common patterns that can be detected and applied to a prognosis is a wonderful idea, she says. However, she warns, "it's still very early in artificial intelligence" and the researchers, while deservedly optimistic, should proceed with caution.
Fifteen years ago, Meltzer says, she did a machine learning study that compared MRI brain scans of patients with schizophrenia and normal controls. Similar to AD, she says, "there is no diagnostic pattern on MRI that is specific for schizophrenia." Using a machine learning approach, she and her team were able to "fully separate the brains of schizophrenics and the brains of normal individuals" and replicate the results in a small new data set. One concern was that the algorithm was opaque, and it was difficult to understand whether the elements used to distinguish the subject groups both had a physiologic basis and were robust across scanners and institutions.
If researchers can come up with "robust deep learning algorithms" that can be applied to different data sets, it would be extremely valuable, Meltzer says. As for the UCSF study, she says, "we need to interpret it with caution." It's a start, she says, "but we need to get at these more complex patterns that go across a disease and multiple brain scans, and, when we do, it will be very exciting."
Accumulations of beta-amyloid and tau proteins, which form abnormal clumps and tangles in the brain, are markers specific to AD. The researchers at UCSF used 18F-FDG PET for their initial study but are looking at other tracers that are more specific for AD because they target amyloid plaques and tau proteins. "We are looking at developing machine learning algorithms that target these molecular markers of AD more specifically and on those types of PET scans to get better performance," Sohn says.
The researchers started their project in late 2017 and are pleased with the progress they made in the first year, Sohn says. It's a great start, he says, and a good jumping-off point for more research along these lines. He adds that future plans also include bigger clinical trials with other data sets, as the small data set was one major limitation of their study.
"It's a really exciting journey for us and potentially for patients and families of loved ones with Alzheimer's disease," Sohn says. He is very excited and believes "there is a lot of potential for artificial intelligence and deep learning to improve radiologists' accuracy, not just in Alzheimer's disease but in many different types of scans across the board."
— Beth W. Orenstein, of Northampton, Pennsylvania, is a freelance medical writer and regular contributor to Radiology Today.