AI provides prognostic data for cancer patients
Although it has long been known that predicting outcomes in cancer patients requires consideration of various characteristics, including patient history, genes, and disease pathology, physicians struggle to combine this data when making decisions about patient care.
A new study by researchers in the Mahmood Lab at Brigham and Women's Hospital presents a proof-of-concept model that uses artificial intelligence (AI) to aggregate multiple forms of data from different sources to predict patient outcomes for 14 distinct types of cancer. The findings are published in Cancer Cell.
To diagnose cancer and predict its course, experts rely on several sources of information, including patient history, pathology, and genomic sequencing.
Although current technology allows clinicians to use this information to predict outcomes, manually integrating data from so many sources is difficult, and experts often find themselves making subjective judgment calls.
Experts analyze many pieces of evidence to predict how well a patient may do. These early assessments become the basis for decisions about enrollment in a clinical trial or specific treatment regimens. But that means this multimodal prediction happens at the level of the expert. We are trying to address the problem computationally.
Faisal Mahmood, Assistant Professor, Division of Computational Pathology, Brigham and Women’s Hospital
Mahmood is an associate member of the Cancer Program at the Broad Institute of Harvard and MIT.
Using these new AI models, Mahmood and his colleagues found a way to computationally combine multiple types of diagnostic data to produce more accurate outcome predictions.
The AI models demonstrate prognostic capability while also revealing which variables underpin their predictions of patient risk, a quality that could be exploited to discover new biomarkers.
The Cancer Genome Atlas (TCGA), a freely accessible resource that contains information on many types of cancer, was used by the researchers to create the models. They then created a deep learning-based multimodal algorithm that can extract prognostic data from a variety of data sources.
By first developing separate models for histology and genomic data, they were able to combine the two into a single integrated model that offers important prognostic information.
Finally, the scientists evaluated the model's performance by providing it with histology and genomic data for patients across 14 different cancer types. The results showed that the models predicted patient outcomes more accurately than models relying on a single data source.
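The late-fusion idea described above, training separate unimodal encoders and then merging their embeddings into one risk prediction, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' actual architecture: the layer sizes, the outer-product (Kronecker-style) fusion step, and all function names here are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, w):
    """Toy unimodal encoder: a single linear layer with tanh activation."""
    return np.tanh(x @ w)

# Hypothetical dimensions: 64 histology features and 32 genomic features,
# each embedded into an 8-dimensional representation. Weights are random
# stand-ins for what training would learn.
w_hist = rng.normal(size=(64, 8)) * 0.1
w_gen = rng.normal(size=(32, 8)) * 0.1
w_risk = rng.normal(size=(8 * 8,)) * 0.1

def fused_risk(histology, genomics):
    h = embed(histology, w_hist)    # histology embedding
    g = embed(genomics, w_gen)      # genomic embedding
    fused = np.outer(h, g).ravel()  # Kronecker-style fusion of the two modalities
    return float(fused @ w_risk)    # scalar risk score

patient_hist = rng.normal(size=64)
patient_gen = rng.normal(size=32)
print(fused_risk(patient_hist, patient_gen))
```

The design point is that each modality is compressed by its own encoder before fusion, so the combined model can weigh histology and genomics jointly rather than averaging two independent predictions.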
This study shows that it is possible to predict disease outcomes by integrating multiple forms of clinically informed data using AI. Mahmood says these models could help researchers find biomarkers that incorporate various clinical aspects and better understand the kind of data they need to identify distinct types of cancer.
The researchers also quantitatively assessed the contribution of each diagnostic modality for specific cancer types and the benefit of combining multiple modalities.
The AI models can also reveal which pathological and genetic characteristics drive their prognostic predictions. The scientists found that, without being prompted, the models used patients' immune responses as a predictive signal.
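One generic way to surface which input features drive a model's risk prediction is perturbation sensitivity: nudge each feature slightly and measure how much the score moves. The sketch below is an illustrative stand-in, not the attribution method used in the study; the toy model, weights, and function names are all assumptions.

```python
import numpy as np

def risk_score(x, w):
    """Toy risk model standing in for any black-box predictor."""
    return float(np.tanh(x @ w))

def perturbation_importance(x, w, eps=1e-3):
    """Estimate each feature's influence by nudging it and measuring the change."""
    base = risk_score(x, w)
    return np.array([
        abs(risk_score(x + eps * np.eye(len(x))[i], w) - base) / eps
        for i in range(len(x))
    ])

# Four hypothetical features; the second has the largest weight, so the
# sensitivity analysis should flag it as most influential.
w = np.array([0.0, 2.0, -1.0, 0.0])
x = np.array([0.3, -0.1, 0.5, 0.2])
imp = perturbation_importance(x, w)
print(imp.argmax())  # index of the most influential feature
```

Applied to a trained multimodal model, the same idea ranks histologic and genomic features by how strongly they sway the predicted risk, which is how an unexpected signal such as immune response can be noticed.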
This is an important finding since previous research has shown that patients with malignancies that evoke higher immune responses generally have better outcomes.
Although this proof-of-concept model demonstrates a novel use of AI in cancer care, this research represents only the beginning of the clinical application of such models.
Larger datasets need to be incorporated and these models need to be validated on several independent test cohorts before being used in the clinical setting. Mahmood plans to integrate more patient data in the future, including radiology scans, family history and electronic medical records, and eventually apply the model to clinical trials.
Mahmood added: “This work paves the way for larger studies of AI in healthcare that combine data from multiple sources. In a broader sense, our findings underscore the need for building computational pathology prognosis models with much larger datasets and downstream clinical trials to establish their utility.”
Mahmood and co-author Richard Chen are the inventors of a patent application for multimodal data fusion based on deep learning.
The study was funded by a National Science Foundation (NSF) Graduate Fellowship, the National Library of Medicine (NLM) Biomedical Informatics and Data Science Research Training Program (T15LM00709), and a National Human Genome Research Institute (NHGRI) Ruth L. Kirschstein National Research Service Award Training Fellowship in bioinformatics.
Chen, R. J., et al. (2022) Pan-cancer integrative histological-genomic analysis via multimodal deep learning. Cancer Cell. doi:10.1016/j.ccell.2022.07.004.