Deep learning for variational multimodality tumor segmentation in PET/CT
Journal Article


Authors: Li, L.; Zhao, X.; Lu, W.; Tan, S.
Article Title: Deep learning for variational multimodality tumor segmentation in PET/CT
Abstract: Positron emission tomography/computed tomography (PET/CT) imaging can simultaneously acquire functional metabolic information and anatomical information of the human body. How to rationally fuse the complementary information in PET/CT for accurate tumor segmentation is challenging. In this study, a novel deep learning-based variational method was proposed to automatically fuse multimodality information for tumor segmentation in PET/CT. A 3D fully convolutional network (FCN) was first designed and trained to produce a probability map from the CT image. The learnt probability map describes the probability of each CT voxel belonging to the tumor or the background, and roughly distinguishes the tumor from its surrounding soft tissues. A fuzzy variational model was then proposed to incorporate the probability map and the PET intensity image for accurate multimodality tumor segmentation, where the probability map acted as a membership-degree prior. A split Bregman algorithm was used to minimize the variational model. The proposed method was validated on a non-small cell lung cancer dataset with 84 PET/CT images.
Experimental results demonstrated that: (1) Only a few training samples were needed to train the designed network to produce the probability map; (2) The proposed method can be applied to small datasets, as commonly seen in clinical research; (3) The proposed method successfully fused the complementary information in PET/CT, and outperformed two existing deep learning-based multimodality segmentation methods as well as multimodality segmentation methods using traditional fusion strategies (without deep learning); (4) The proposed method performed well even for tumors with Fluorodeoxyglucose (FDG) uptake inhomogeneity and blurred tumor edges (two major challenges in PET single-modality segmentation) and complex surrounding soft tissues (a major challenge in CT single-modality segmentation), achieving an average Dice similarity index (DSI) of 0.86 ± 0.05, sensitivity (SE) of 0.86 ± 0.07, positive predictive value (PPV) of 0.87 ± 0.10, volume error (VE) of 0.16 ± 0.12, and classification error (CE) of 0.30 ± 0.12. © 2019 Elsevier B.V.
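The pipeline in the abstract can be illustrated with a toy sketch: a fuzzy region-based energy over a PET intensity image, with a CT-derived probability map acting as a membership-degree prior. This is NOT the paper's model — the actual method is 3D, uses an FCN-learned prior, includes regularization, and is minimized with a split Bregman scheme; here a simplified quadratic energy with a closed-form membership update stands in, and the "prior" is synthetic.

```python
import numpy as np

def fuzzy_variational_segment(pet, prior, beta=0.5, n_iter=50):
    """Toy fuzzy variational segmentation of a 2D PET intensity image.

    Alternately minimizes the simplified energy
        E(u) = sum  u^2 (I - c1)^2 + (1 - u)^2 (I - c0)^2 + beta (u - prior)^2
    over the fuzzy membership u in [0, 1] (closed-form update from dE/du = 0)
    and the region means c0 (background), c1 (tumor).  `prior` plays the role
    of the CT probability map; `beta` weights how strongly it is trusted.
    """
    u = prior.copy()
    for _ in range(n_iter):
        # Update region means from the current fuzzy memberships.
        w1, w0 = u ** 2, (1.0 - u) ** 2
        c1 = (w1 * pet).sum() / (w1.sum() + 1e-8)
        c0 = (w0 * pet).sum() / (w0.sum() + 1e-8)
        # Closed-form membership update for the quadratic energy.
        d1 = (pet - c1) ** 2
        d0 = (pet - c0) ** 2
        u = (d0 + beta * prior) / (d0 + d1 + beta + 1e-8)
        u = np.clip(u, 0.0, 1.0)
    return u

# Synthetic example: a bright "tumor" disk on a noisy background, with an
# imperfect prior standing in for the output of a hypothetical CT network.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:64, :64]
tumor = ((yy - 32) ** 2 + (xx - 32) ** 2) < 12 ** 2
pet = np.where(tumor, 3.0, 1.0) + 0.3 * rng.standard_normal((64, 64))
prior = np.clip(tumor.astype(float) + 0.2 * rng.standard_normal((64, 64)), 0, 1)

u = fuzzy_variational_segment(pet, prior)
seg = u > 0.5
dice = 2 * (seg & tumor).sum() / (seg.sum() + tumor.sum())
print(f"Dice vs. ground truth: {dice:.3f}")
```

Because the membership update is a closed-form ratio, each iteration is cheap; the split Bregman machinery in the paper is needed once a total-variation regularizer is added, which has no closed-form pointwise minimizer.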
Keywords: major clinical study; comparative study; positron emission tomography; sensitivity analysis; histology; soft tissue; computerized tomography; tumors; probability; image processing; positive predictive values; mathematical computing; fluorodeoxyglucose; predictive value; non-small cell lung cancer; positron emission tomography/computed tomography; image segmentation; fuzzy system; classification (of information); tumor segmentation; information fusion; radiological procedures; human; priority journal; article; deep learning; convolutional neural network; variational method; ordinary differential equations; anatomical information; PET/CT images; split Bregman algorithm
Journal Title: Neurocomputing
Volume: 392
ISSN: 0925-2312
Publisher: Elsevier B.V.
Date Published: 2020-01-01
Start Page: 277
End Page: 295
Language: English
DOI: 10.1016/j.neucom.2018.10.099
PROVIDER: scopus
PMCID: PMC7405839
PUBMED: 32773965
Notes: Article -- Export Date: 1 June 2020 -- Source: Scopus
MSK Authors: Wei Lu