AI System Revolutionizes Image Analysis for Predicting Outcomes
A new artificial intelligence (AI)-based system is showing promise in accurately detecting changes and predicting outcomes from images taken over time. Developed by investigators at Weill Cornell Medicine, Cornell’s Ithaca campus, and Cornell Tech, the system, called LILAC (Learning-based Inference of Longitudinal imAge Changes), uses machine learning to analyze “longitudinal” image series: sequences of images of the same subject captured at different points in time.

The versatility of LILAC could have major implications across diverse medical and scientific applications, according to the study published in the Proceedings of the National Academy of Sciences on February 20. The system demonstrates a high degree of sensitivity and flexibility in identifying subtle changes and predicting outcomes like cognitive scores from brain scans.
“This new tool will allow us to detect and quantify clinically relevant changes over time in ways that weren’t possible before, and its flexibility means that it can be applied off-the-shelf to virtually any longitudinal imaging dataset,” said study senior author Mert Sabuncu, vice chair of research and professor of electrical engineering in radiology at Weill Cornell Medicine and professor in the School of Electrical and Computer Engineering at Cornell’s Ithaca campus and at Cornell Tech.
Overcoming Traditional Challenges in Image Analysis
Traditional approaches to analyzing such longitudinal data often require significant customization and pre-processing. Researchers typically have to prepare the image data before the main analysis: cropping to specific areas, correcting for varying angles, and adjusting for differences in size. LILAC, by contrast, is designed to be more flexible, performing these corrections automatically while identifying the relevant changes.
The study’s first author, Heejong Kim, an instructor of artificial intelligence in radiology at Weill Cornell Medicine and a member of the Sabuncu Laboratory, explained that because LILAC learns from the data itself which differences between images matter, it needs little task-specific tailoring:
“This enables LILAC to be useful not just across different imaging contexts but also in situations where you aren’t sure what kind of change to expect,” Kim stated.
Demonstrating LILAC’s Capabilities
Researchers demonstrated LILAC’s capabilities through several proof-of-concept studies. In one, the system was trained on hundreds of microscope images of in-vitro-fertilized embryos. It then successfully identified, with approximately 99% accuracy, which image from a pair was taken earlier in the developmental sequence.
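The pairwise ordering task can be sketched in miniature. The code below is an illustrative toy, not the authors’ implementation: LILAC applies a deep neural network with shared weights to real image pairs, whereas here the hypothetical “images” are short vectors whose brightness drifts with time, and the shared encoder is a single learned weight vector. The classifier sees the difference of the two encodings and predicts which image came first.

```python
import math
import random

random.seed(0)

def make_image(t, size=8):
    """Toy 'image' at time t: pixels with mean brightness t plus noise."""
    return [t + random.gauss(0, 0.3) for _ in range(size)]

def encode(img, w):
    """Shared encoder: the same weight vector w is applied to both images."""
    return sum(wi * p for wi, p in zip(w, img))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(pairs, labels, size=8, lr=0.05, epochs=200):
    """Logistic regression on encode(a) - encode(b); label 1 means a is later."""
    w = [0.0] * size
    for _ in range(epochs):
        for (a, b), y in zip(pairs, labels):
            p = sigmoid(encode(a, w) - encode(b, w))
            g = p - y  # gradient scale of the logistic loss
            w = [wi - lr * g * (pa - pb) for wi, pa, pb in zip(w, a, b)]
    return w

# Build training pairs from random time points.
pairs, labels = [], []
for _ in range(200):
    t1, t2 = random.uniform(0, 1), random.uniform(0, 1)
    pairs.append((make_image(t1), make_image(t2)))
    labels.append(1.0 if t1 > t2 else 0.0)

w = train(pairs, labels)

# Evaluate ordering accuracy on fresh pairs.
correct = 0
for _ in range(100):
    t1, t2 = random.uniform(0, 1), random.uniform(0, 1)
    a, b = make_image(t1), make_image(t2)
    pred_a_later = encode(a, w) > encode(b, w)
    correct += int(pred_a_later == (t1 > t2))
print(f"ordering accuracy: {correct}/100")
```

Scoring the *difference* of the two encodings makes the comparison antisymmetric by construction: swapping the pair flips the prediction, which is exactly the behavior an ordering task requires.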
Further tests showed that the AI could accurately order image pairs of healing tissue and detect differences in healing rates between treated and untreated tissue. It also predicted time intervals between brain MRIs of healthy older adults, as well as cognitive scores of patients with mild cognitive impairment, in both cases with less error than baseline methods.
The AI’s ability to pinpoint the most relevant image features for detecting changes is expected to provide new clinical and scientific insights.
“We expect this tool to be useful especially in cases where we lack knowledge about the process being studied, and where there is a lot of variability across individuals,” Sabuncu noted.
The researchers are now planning to test LILAC in a real-world setting, using it to predict treatment responses from MRI scans of prostate cancer patients. The source code for LILAC is available for free use.
This research received support through grants from the National Cancer Institute and the National Institute on Aging, both part of the National Institutes of Health. The aging-brain experiments used data from the OASIS-3 (Longitudinal Multimodal Neuroimaging) dataset.
Jim Schnabel is a freelance writer for Weill Cornell Medicine.