A VGG attention vision transformer network for benign and malignant classification of breast ultrasound images
Type: Journal Article


Authors: Qu, X.; Lu, H.; Tang, W.; Wang, S.; Zheng, D.; Hou, Y.; Jiang, J.
Article Title: A VGG attention vision transformer network for benign and malignant classification of breast ultrasound images
Abstract: Purpose: Breast cancer is the most commonly occurring cancer worldwide. The ultrasound reflectivity imaging technique can be used to obtain breast ultrasound (BUS) images, which can be used to classify benign and malignant tumors. However, the classification is subjective and depends on the experience and skill of operators and doctors. Automatic classification methods can assist doctors and improve objectivity, but current convolutional neural networks (CNNs) are not good at learning global features, and vision transformers (ViTs) are not good at extracting local features. In this study, we proposed a visual geometry group attention ViT (VGGA-ViT) network to overcome their disadvantages. Methods: In the proposed method, we used a CNN module to extract local features and employed a ViT module to learn the global relationship among different regions and enhance the relevant local features. The CNN module, named the VGGA module, was composed of a VGG backbone, a feature-extraction fully connected layer, and a squeeze-and-excitation block. Both the VGG backbone and the ViT module were pretrained on the ImageNet dataset and retrained using BUS samples in this study. Two BUS datasets were employed for validation. Results: Cross-validation was conducted on two BUS datasets. On Dataset A, the proposed VGGA-ViT network achieved high accuracy (88.71 ± 1.55%), recall (90.73 ± 1.57%), specificity (85.58 ± 3.35%), precision (90.77 ± 1.98%), F1 score (90.73 ± 1.24%), and Matthews correlation coefficient (MCC) (76.34 ± 3.29%), which were better than those of all previous networks compared in this study. Dataset B was used as a separate test set; on it, the VGGA-ViT had the highest accuracy (81.72 ± 2.99%), recall (64.45 ± 2.96%), specificity (90.28 ± 3.51%), precision (77.08 ± 7.21%), F1 score (70.11 ± 4.25%), and MCC (57.64 ± 6.88%). Conclusions: In this study, we proposed the VGGA-ViT for BUS classification, which is good at learning both local and global features. The proposed network achieved higher accuracy than the compared previous methods. © 2022 American Association of Physicists in Medicine.
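The abstract's VGGA module ends in a squeeze-and-excitation (SE) block, a standard channel-attention mechanism: global-average-pool each channel ("squeeze"), pass the result through a two-layer bottleneck ending in a sigmoid ("excitation"), and rescale the channels by the resulting weights. The sketch below is a minimal NumPy illustration of that generic SE operation with random stand-in weights (`w1`, `w2`), not the authors' implementation:

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-excitation: reweight channels by learned attention.

    feature_map: (C, H, W) activation tensor.
    w1: (C//r, C) and w2: (C, C//r) -- illustrative bottleneck weights
    (reduction ratio r); in a trained network these are learned.
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: FC -> ReLU -> FC -> sigmoid, giving per-channel weights in (0, 1)
    s = np.maximum(w1 @ z, 0.0)
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))
    # Scale: reweight each input channel by its attention weight
    return feature_map * s[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 4
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
y = se_block(x, w1, w2)
print(y.shape)  # same shape as the input: (8, 4, 4)
```

Because the sigmoid weights lie strictly in (0, 1), the block can only attenuate channels, emphasizing informative ones relative to the rest.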
Keywords: controlled study; major clinical study; cancer diagnosis; diagnostic accuracy; sensitivity and specificity; breast cancer; classification; echomammography; diagnostic imaging; breast neoplasms; false negative result; correlation coefficient; computer assisted diagnosis; medical imaging; tumors; breast tumor; intermethod comparison; cancer classification; attention; image processing, computer-assisted; image processing; false positive result; diseases; tumor diagnosis; breast ultrasound; receiver operating characteristic; extraction; ultrasonography, mammary; kappa statistics; statistical tests; feature extraction; procedures; machine learning; natural language processing; ultrasonic applications; nonlinear dimensionality reduction; humans; human; female; article; deep learning; image classification; convolutional neural network; benign breast tumor; convolution neural network; breast ultrasound images; convolutional neural networks; DenseNet; transfer of learning; cross validation; breast ultrasound image; neural networks, computer; binary classification; high accuracy; breast tumour; F1 scores; global feature; local feature; InceptionResNetV2; InceptionV3; MobileNetV2; multiclass classification; residual neural network; SqueezeNet; visual geometry group attention vision transformer network; Xception
Journal Title: Medical Physics
Volume: 49
Issue: 9
ISSN: 0094-2405
Publisher: American Association of Physicists in Medicine  
Date Published: 2022-09-01
Start Page: 5787
End Page: 5798
Language: English
DOI: 10.1002/mp.15852
PUBMED: 35866492
PROVIDER: scopus
Notes: Article -- Export Date: 3 October 2022 -- Source: Scopus
MSK Authors: Jue Jiang