An attention-supervised full-resolution residual network for the segmentation of breast ultrasound images (Journal Article)


Authors: Qu, X.; Shi, Y.; Hou, Y.; Jiang, J.
Article Title: An attention-supervised full-resolution residual network for the segmentation of breast ultrasound images
Abstract: Purpose: Breast cancer is the most common cancer among women worldwide, and medical ultrasound is one of the most widely used imaging methods for breast tumors. Automatic breast ultrasound (BUS) image segmentation can measure tumor size objectively; however, various ultrasound artifacts hinder segmentation. We proposed an attention-supervised full-resolution residual network (ASFRRN) to segment tumors from BUS images. Methods: In the proposed method, Global Attention Upsample (GAU) and deep supervision were introduced into a full-resolution residual network (FRRN), where GAU learns to merge features at different levels with attention for deep supervision. Two datasets were employed for evaluation. One (Dataset A) consisted of 163 BUS images with tumors (53 malignant and 110 benign) from UDIAT Centre Diagnostic, and the other (Dataset B) included 980 BUS images with tumors (595 malignant and 385 benign) from the Sun Yat-sen University Cancer Center. The tumors in both datasets were manually segmented by medical doctors. For evaluation, the Dice coefficient (Dice), Jaccard similarity coefficient (JSC), and F1 score were calculated. Results: For Dataset A, the proposed method achieved higher Dice (84.3 ± 10.0%), JSC (75.2 ± 10.7%), and F1 score (84.3 ± 10.0%) than the previous best method, FRRN. For Dataset B, the proposed method also achieved higher Dice (90.7 ± 13.0%), JSC (83.7 ± 14.8%), and F1 score (90.7 ± 13.0%) than the previous best methods, DeepLabv3 and the dual attention network (DANet). For Dataset A + B, the proposed method achieved higher Dice (90.5 ± 13.1%), JSC (83.3 ± 14.8%), and F1 score (90.5 ± 13.1%) than the previous best method, DeepLabv3. Additionally, ASFRRN has only 10.6 M parameters, fewer than DANet (71.4 M) and DeepLabv3 (41.3 M). Conclusions: We proposed ASFRRN, which combines FRRN, an attention mechanism, and deep supervision to segment tumors from BUS images. It achieved high segmentation accuracy with a reduced parameter count. © 2020 American Association of Physicists in Medicine
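For reference, the evaluation metrics named in the abstract can be computed from binary segmentation masks as sketched below. This is a minimal illustrative example, not the authors' code; the function name and the assumption that predictions and ground truth are same-shaped boolean NumPy arrays are hypothetical. It also makes explicit why the reported Dice and F1 values coincide: for binary masks, F1 = 2TP / (2TP + FP + FN) is mathematically identical to the Dice coefficient.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Dice, Jaccard similarity coefficient (JSC), and F1 for binary masks.

    Illustrative sketch only; `pred` and `gt` are assumed to be boolean
    arrays of the same shape (tumor = True, background = False).
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)

    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()

    dice = 2.0 * intersection / (pred.sum() + gt.sum() + eps)
    jsc = intersection / (union + eps)

    # For binary masks, F1 = 2*TP / (2*TP + FP + FN) reduces to Dice.
    f1 = dice
    return dice, jsc, f1


if __name__ == "__main__":
    # Toy 4x4 masks: prediction covers a 2x2 region, ground truth a 2x3 region.
    pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
    gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:4] = True
    print(segmentation_metrics(pred, gt))  # Dice = 0.8, JSC ~ 0.667, F1 = 0.8
```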
Keywords: breast cancer; medical imaging; tumors; diagnosis; segmentation; diseases; breast ultrasound; ultrasonic imaging; image segmentation; deep learning; segmentation accuracy; dice coefficient; breast ultrasound images; attention mechanisms; breast ultrasound image; BUS; jaccard similarity coefficients; medical ultrasound imaging; parameter numbers
Journal Title: Medical Physics
Volume: 47
Issue: 11
ISSN: 0094-2405
Publisher: American Association of Physicists in Medicine  
Date Published: 2020-11-01
Start Page: 5702
End Page: 5714
Language: English
DOI: 10.1002/mp.14470
PUBMED: 32964449
PROVIDER: scopus
PMCID: PMC7905659
Notes: Article -- Export Date: 4 January 2021 -- Source: Scopus
MSK Authors: Jue Jiang