Reproducible reporting of the collection and evaluation of annotations for artificial intelligence models
Type: Journal Article


Authors: Elfer, K.; Gardecki, E.; Garcia, V.; Ly, A.; Hytopoulos, E.; Wen, S.; Hanna, M. G.; Peeters, D. J. E.; Saltz, J.; Ehinger, A.; Dudgeon, S. N.; Li, X.; Blenman, K. R. M.; Chen, W.; Green, U.; Birmingham, R.; Pan, T.; Lennerz, J. K.; Salgado, R.; Gallas, B. D.
Article Title: Reproducible reporting of the collection and evaluation of annotations for artificial intelligence models
Abstract: This work puts forth and demonstrates the utility of a reporting framework for collecting and evaluating annotations of medical images used for training and testing artificial intelligence (AI) models in assisting detection and diagnosis. AI has unique reporting requirements, as shown by the AI extensions to the Consolidated Standards of Reporting Trials (CONSORT) and Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklists and the proposed AI extensions to the Standards for Reporting Diagnostic Accuracy (STARD) and Transparent Reporting of a Multivariable Prediction model for Individual Prognosis or Diagnosis (TRIPOD) checklists. AI for detection and/or diagnostic image analysis requires complete, reproducible, and transparent reporting of the annotations and metadata used in training and testing data sets. In an earlier work by other researchers, an annotation workflow and quality checklist for computational pathology annotations were proposed. In this manuscript, we operationalize this workflow into an evaluable quality checklist that applies to any reader-interpreted medical images, and we demonstrate its use for an annotation effort in digital pathology. We refer to this quality framework as the Collection and Evaluation of Annotations for Reproducible Reporting of Artificial Intelligence (CLEARR-AI). © 2024
Keywords: adult; aged; nuclear magnetic resonance imaging; diagnostic accuracy; breast cancer; image analysis; diagnostic imaging; information processing; intervention study; patient information; training; medical education; artificial intelligence; image processing; data collection method; digital pathology; data set; diagnostic test accuracy study; invasive ductal carcinoma; intelligence; workflow; reference standard; human; male; female; article; artificial intelligence validation; data availability; annotation study; reproducible research
Journal Title: Modern Pathology
Volume: 37
Issue: 4
ISSN: 0893-3952
Publisher: Nature Research  
Date Published: 2024-04-01
Start Page: 100439
Language: English
DOI: 10.1016/j.modpat.2024.100439
PUBMED: 38286221
PROVIDER: scopus
Notes: Source: Scopus
MSK Authors
  1. Matthew George Hanna