Empirical evaluation of language modeling to ascertain cancer outcomes from clinical text reports
Journal Article


Authors: Elmarakeby, H. A.; Trukhanov, P. S.; Arroyo, V. M.; Riaz, I. B.; Schrag, D.; Van Allen, E. M.; Kehl, K. L.
Article Title: Empirical evaluation of language modeling to ascertain cancer outcomes from clinical text reports
Abstract: Background: Longitudinal data on key cancer outcomes for clinical research, such as response to treatment and disease progression, are not captured in standard cancer registry reporting. Manual extraction of such outcomes from unstructured electronic health records is a slow, resource-intensive process. Natural language processing (NLP) methods can accelerate outcome annotation, but they require substantial labeled data. Transfer learning based on language modeling, particularly using the Transformer architecture, has achieved improvements in NLP performance. However, there has been no systematic evaluation of NLP model training strategies on the extraction of cancer outcomes from unstructured text. Results: We evaluated the performance of nine NLP models on the two tasks of identifying cancer response and cancer progression within imaging reports at a single academic center among patients with non-small cell lung cancer. We trained the classification models under different conditions, including training sample size, classification architecture, and language model pre-training. The training involved a labeled dataset of 14,218 imaging reports for 1112 patients with lung cancer. A subset of models was based on a pre-trained language model, DFCI-ImagingBERT, created by further pre-training a BERT-based model using an unlabeled dataset of 662,579 reports from 27,483 patients with cancer from our center. A classifier based on our DFCI-ImagingBERT, trained on more than 200 patients, achieved the best results in most experiments; however, these results were marginally better than simpler "bag of words" or convolutional neural network models. Conclusion: When developing AI models to extract outcomes from imaging reports for clinical cancer research, if computational resources are plentiful but labeled training data are limited, large language models can be used for zero- or few-shot learning to achieve reasonable performance. When computational resources are more limited but labeled training data are readily available, even simple machine learning architectures can achieve good performance for such tasks. © 2023, BioMed Central Ltd., part of Springer Nature.
Keywords: carcinoma, non-small-cell lung; lung neoplasms; diagnostic imaging; disease progression; clinical research; clinical outcomes; cancer; humans; natural language processing; computational linguistics; information extraction; electronic health records; transfer learning; language model; transformer-based language models; neural network models; convolutional neural networks; network architecture; learning algorithms
Journal Title: BMC Bioinformatics
Volume: 24
ISSN: 1471-2105
Publisher: BioMed Central Ltd
Date Published: 2023-01-01
Start Page: 328
Language: English
DOI: 10.1186/s12859-023-05439-1
PUBMED: 37658330
PROVIDER: scopus
PMCID: PMC10474750
Notes: Article -- Source: Scopus
MSK Authors: Deborah Schrag