Development of CAD system for 3D breast ultrasound images

2003 ◽  
pp. 368-371 ◽  
Author(s):  
Takeshi Hara ◽  
Daisuke Fukuoka ◽  
Hiroshi Fujita ◽  
Tokiko Endo ◽  
Woo Kyung Moon

2020 ◽  
Vol 10 (5) ◽  
pp. 1830
Author(s):  
Yi-Wei Chang ◽  
Yun-Ru Chen ◽  
Chien-Chuan Ko ◽  
Wei-Yang Lin ◽  
Keng-Pei Lin

Breast ultrasound is not only a major modality for breast tissue imaging but also an important method of breast tumor screening: it is non-radiative, non-invasive, harmless, simple, and low-cost. The American College of Radiology (ACR) proposed the Breast Imaging Reporting and Data System (BI-RADS) to grade breast lesion severity far more finely than traditional diagnosis, using five criteria describing a mass: shape, orientation, margin, echo pattern, and posterior features. However, intensity differences and varying resolutions among different types of ultrasound imaging modalities mean that clinicians cannot always accurately identify BI-RADS categories or disease severities. To this end, this article used three different brands of ultrasound scanner to acquire the breast images for our experimental samples. The breast lesion was detected on the original image through preprocessing, image segmentation, and related steps. The severity of the breast tumor was then evaluated from the features of the lesion by our proposed classifiers according to the BI-RADS standard, rather than by the traditional assessment of severity as merely benign or malignant. In this work, following clinical practice, we focused on BI-RADS categories 2–5 after the segmentation stage. Moreover, several features related to lesion severity under the selected BI-RADS categories were fed into three machine learning classifiers, a Support Vector Machine (SVM), a Random Forest (RF), and a Convolutional Neural Network (CNN), combined with feature selection, to develop a multi-class BI-RADS-based assessment of breast tumor severity. Experimental results show that the proposed BI-RADS-based CAD system achieves identification accuracies of 80.00%, 77.78%, and 85.42% with the SVM, RF, and CNN, respectively.
We also validated the performance and adaptability of the classification across different ultrasound scanners. The results further indicate that the CNN achieves F-scores above 75% (i.e., prominent adaptability) when samples from the various BI-RADS categories are tested.
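The pipeline described above (lesion features, feature selection, then a multi-class classifier over BI-RADS categories 2–5) can be sketched as follows. This is a minimal illustration with synthetic placeholder features, not the authors' data or feature set; the feature-selection method and classifier hyperparameters are assumptions.

```python
# Sketch: multi-class BI-RADS (categories 2-5) severity classification with
# feature selection, then an SVM and a Random Forest, as in the pipeline above.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# 200 lesions x 10 hypothetical lesion features (shape, margin, echo pattern, ...)
X = rng.normal(size=(200, 10))
y = rng.integers(2, 6, size=200)      # BI-RADS categories 2-5
X[:, :3] += y[:, None] * 0.5          # make the first three features informative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# feature selection before classification, as in the described pipeline
selector = SelectKBest(f_classif, k=5).fit(X_tr, y_tr)
X_tr_s, X_te_s = selector.transform(X_tr), selector.transform(X_te)

svm = SVC(kernel="rbf").fit(X_tr_s, y_tr)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr_s, y_tr)

acc_svm = accuracy_score(y_te, svm.predict(X_te_s))
acc_rf = accuracy_score(y_te, rf.predict(X_te_s))
```

A CNN branch, as in the paper, would replace the hand-crafted features with learned ones; it is omitted here for brevity.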


Author(s):  
Strivathsav Ashwin Ramamoorthy ◽  
Varun P. Gopi

Breast cancer is a serious disease among women, and its early detection is crucial for treatment. To assist radiologists, who manually delineate the tumour from the ultrasound image, an automatic computerized method of detection called CAD (computer-aided diagnosis) has been developed to provide them with valuable inputs. A CAD system is divided into several stages, such as pre-processing, segmentation, feature extraction, and classification. This chapter focuses solely on the first two stages of the CAD system: pre-processing and segmentation. The quality of an acquired ultrasound image depends on operator expertise, and such images tend to be of low contrast and fuzzy in nature. For the pre-processing stage, a contrast enhancement algorithm based on fuzzy logic is implemented, which aids the efficient delineation of the tumour from the ultrasound image.
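The chapter does not reproduce its exact membership function, so as an illustration, a classical fuzzy contrast-enhancement scheme (fuzzification, the intensification "INT" operator, defuzzification) might look like the sketch below; the parameters `fd` and `fe` are assumed values, not the chapter's.

```python
# Sketch of fuzzy-logic contrast enhancement via the classical INT operator:
# map gray levels to fuzzy memberships, push memberships away from 0.5,
# then map back to gray levels. Parameters are illustrative assumptions.
import numpy as np

def fuzzy_enhance(img, fd=128.0, fe=2.0, passes=1):
    """Enhance contrast of a grayscale uint8 image with the fuzzy INT operator."""
    img = img.astype(np.float64)
    x_max = img.max()
    # fuzzification: gray level -> membership value in (0, 1]
    mu = (1.0 + (x_max - img) / fd) ** (-fe)
    # intensification: push memberships below 0.5 down, above 0.5 up
    for _ in range(passes):
        mu = np.where(mu <= 0.5, 2.0 * mu**2, 1.0 - 2.0 * (1.0 - mu) ** 2)
    # defuzzification: membership -> gray level (inverse of fuzzification)
    out = x_max - fd * (mu ** (-1.0 / fe) - 1.0)
    return np.clip(out, 0, 255).astype(np.uint8)
```

The effect is to darken already-dark regions and brighten already-bright ones, sharpening the boundary between a hypoechoic tumour and its surroundings.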


2020 ◽  
Vol 43 (1) ◽  
pp. 29-45
Author(s):  
Alex Noel Joseph Raj ◽  
Ruban Nersisson ◽  
Vijayalakshmi G. V. Mahesh ◽  
Zhemin Zhuang

The nipple is a vital landmark in breast lesion diagnosis. Although there are advanced computer-aided detection (CADe) systems for nipple detection in breast mediolateral oblique (MLO) views of mammogram images, few academic works address the coronal views of breast ultrasound (BUS) images. This paper presents a novel CADe system to locate the Nipple Shadow Area (NSA) in ultrasound images. Hu moments and Gray-Level Co-occurrence Matrix (GLCM) features were calculated over an iterative sliding window to extract shape and texture features. These features are then concatenated and fed into an Artificial Neural Network (ANN) to obtain probable NSAs. Contour features, such as shape complexity via fractal dimension, edge distance from the periphery, and contour area, were then computed and passed to a Support Vector Machine (SVM) to identify the accurate NSA in each case. The coronal-plane BUS dataset was built in-house and consists of 64 images from 13 patients. Test results show that the proposed CADe system achieves 91.99% accuracy, 97.55% specificity, 82.46% sensitivity, and an 88% F-score on our dataset.
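To make the sliding-window feature extraction concrete, a simplified numpy-only sketch is given below: a single-offset GLCM yielding contrast and homogeneity, plus the first two of the seven Hu moment invariants per window. The window size, step, gray-level count, and the reduced feature set are all assumptions; the paper's full GLCM configuration and all seven Hu moments are not reproduced here.

```python
# Sketch: per-window texture (GLCM) and shape (Hu moment) features over a
# sliding window, as the NSA detector's first stage does. Simplified settings.
import numpy as np

def glcm_features(win, levels=8):
    """Contrast and homogeneity from a GLCM with a single (0, 1) offset."""
    q = (win.astype(np.float64) / 256.0 * levels).astype(int)  # quantize uint8
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # horizontal pairs
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    homogeneity = (glcm / (1.0 + np.abs(i - j))).sum()
    return contrast, homogeneity

def hu_first_two(win):
    """First two Hu invariants from normalized central moments."""
    win = win.astype(np.float64)
    y, x = np.indices(win.shape)
    m00 = win.sum()
    if m00 == 0:
        return 0.0, 0.0
    xc, yc = (x * win).sum() / m00, (y * win).sum() / m00
    def eta(p, q):  # normalized central moment
        mu_pq = (((x - xc) ** p) * ((y - yc) ** q) * win).sum()
        return mu_pq / m00 ** (1 + (p + q) / 2.0)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return h1, h2

def sliding_features(img, win=32, step=16):
    """Concatenated texture + shape features at each window position."""
    feats = []
    for r in range(0, img.shape[0] - win + 1, step):
        for c in range(0, img.shape[1] - win + 1, step):
            w = img[r:r + win, c:c + win]
            feats.append([*glcm_features(w), *hu_first_two(w)])
    return np.array(feats)
```

In the described system these per-window vectors would then be classified by the ANN, with the SVM stage operating on contour features of the candidate regions.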


2019 ◽  
Vol 121 ◽  
pp. 78-96 ◽  
Author(s):  
Mohammad I. Daoud ◽  
Ayman A. Atallah ◽  
Falah Awwad ◽  
Mahasen Al-Najjar ◽  
Rami Alazrai

Data in Brief ◽  
2020 ◽  
Vol 28 ◽  
pp. 104863 ◽  
Author(s):  
Walid Al-Dhabyani ◽  
Mohammed Gomaa ◽  
Hussien Khaled ◽  
Aly Fahmy

Diagnostics ◽  
2019 ◽  
Vol 9 (4) ◽  
pp. 176 ◽  
Author(s):  
Tomoyuki Fujioka ◽  
Mio Mori ◽  
Kazunori Kubota ◽  
Yuka Kikuchi ◽  
Leona Katsuta ◽  
...  

Deep convolutional generative adversarial networks (DCGANs) are recently developed tools for generating synthesized images. To determine the clinical utility of such synthesized images, we generated breast ultrasound images and assessed their quality and clinical value. After retrospectively collecting 528 images of 144 benign masses and 529 images of 216 malignant masses in the breast, synthesized images were generated using a DCGAN trained for 50, 100, 200, 500, and 1000 epochs. The synthesized (n = 20) and original (n = 40) images were evaluated by two radiologists, who scored them for overall quality, definition of anatomic structures, and visualization of the masses on a five-point scale. They also scored the possibility of each image being an original. Although there was no significant difference between the images synthesized with 1000 and 500 epochs, the latter were evaluated as being of higher quality than all other images. Moreover, 2.5%, 0%, 12.5%, 37.5%, and 22.5% of the images synthesized with 50, 100, 200, 500, and 1000 epochs, respectively, and 14% of the original images were indistinguishable from one another. Interobserver agreement was very good (|r| = 0.708–0.825, p < 0.001). Therefore, DCGANs can generate high-quality, realistic synthesized breast ultrasound images that are indistinguishable from the originals.
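The abstract does not specify the network configuration, so as a point of reference, the standard DCGAN generator (per Radford et al.) for 64×64 single-channel images can be sketched in PyTorch as below; the layer widths, image size, and noise dimension are assumptions, not the authors' settings.

```python
# Sketch: standard DCGAN generator mapping a 100-dim noise vector to a
# 64x64 grayscale image in [-1, 1] via strided transposed convolutions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, nz=100, ngf=64):
        super().__init__()
        self.net = nn.Sequential(
            # nz x 1 x 1 -> (ngf*8) x 4 x 4
            nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(ngf * 8), nn.ReLU(True),
            # -> (ngf*4) x 8 x 8
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 4), nn.ReLU(True),
            # -> (ngf*2) x 16 x 16
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf * 2), nn.ReLU(True),
            # -> ngf x 32 x 32
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ngf), nn.ReLU(True),
            # -> 1 x 64 x 64, Tanh bounds output to [-1, 1]
            nn.ConvTranspose2d(ngf, 1, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

g = Generator().eval()
with torch.no_grad():
    fake = g(torch.randn(2, 100, 1, 1))  # two synthesized 64x64 images
```

Training such a generator adversarially against a mirrored convolutional discriminator, for the epoch counts listed above, is what produces the synthesized images the radiologists scored.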

