Incorporating the Breast Imaging Reporting and Data System Lexicon with a Fully Convolutional Network for Malignancy Detection on Breast Ultrasound

Diagnostics ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 66
Author(s):  
Yung-Hsien Hsieh ◽  
Fang-Rong Hsu ◽  
Seng-Tong Dai ◽  
Hsin-Ya Huang ◽  
Dar-Ren Chen ◽  
...  

In this study, we applied semantic segmentation using a fully convolutional deep learning network to identify characteristics of the Breast Imaging Reporting and Data System (BI-RADS) lexicon from breast ultrasound images, to facilitate the clinical classification of malignant tumors. Among 378 images (204 benign and 174 malignant images) from 189 patients (102 patients with benign breast tumors and 87 with malignant tumors), we identified seven malignant characteristics related to the BI-RADS lexicon in breast ultrasound. The mean accuracy and mean IU of the semantic segmentation were 32.82% and 28.88%, respectively. The weighted intersection over union was 85.35%, and the area under the curve was 89.47%, showing better performance than similar semantic segmentation networks, SegNet and U-Net, on the same dataset. Our results suggest that the utilization of a deep learning network in combination with the BI-RADS lexicon can be an important supplemental tool when using ultrasound to diagnose breast malignancy.
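The metrics reported above (mean accuracy, mean IU, weighted IoU) follow the conventions commonly used to score semantic segmentation from a class confusion matrix. A minimal sketch of those standard definitions, assuming the paper computes them the usual way (its exact evaluation code is not given):

```python
import numpy as np

def segmentation_metrics(conf):
    """Semantic-segmentation metrics from a confusion matrix.

    conf[i, j] = number of pixels of true class i predicted as class j.
    Standard definitions; the paper's exact computation may differ.
    """
    conf = conf.astype(float)
    tp = np.diag(conf)                    # correctly labelled pixels per class
    gt = conf.sum(axis=1)                 # ground-truth pixels per class
    pred = conf.sum(axis=0)               # predicted pixels per class
    iou = tp / (gt + pred - tp)           # per-class intersection over union
    mean_accuracy = np.mean(tp / gt)      # mean per-class pixel accuracy
    mean_iu = np.mean(iou)                # unweighted mean IoU ("mean IU")
    weighted_iu = np.sum((gt / gt.sum()) * iou)  # frequency-weighted IoU
    return mean_accuracy, mean_iu, weighted_iu
```

Because the weighted IoU scales each class by its pixel frequency, a dominant background class can make it far higher than the mean IU, which matches the gap between 28.88% and 85.35% reported here.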

Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2501 ◽  
Author(s):  
Yanan Song ◽  
Liang Gao ◽  
Xinyu Li ◽  
Weiming Shen

Deep learning is robust to the perturbation of a point cloud, which is an important data form in the Internet of Things. However, it cannot effectively capture the local information of the point cloud or recognize the fine-grained features of an object. Different levels of features in the deep learning network can be integrated to obtain local information, but this strategy increases network complexity. This paper proposes an effective point cloud encoding method that helps the deep learning network utilize local information. An axis-aligned cube is used to search for a local region that represents the local information. All of the points in the local region are used to construct the feature representation of each point. These feature representations are then input to a deep learning network. Two well-known datasets, the ModelNet40 shape classification benchmark and the Stanford 3D Indoor Semantics Dataset, are used to test the performance of the proposed method. Compared with other methods with complicated structures, the proposed method, with only a simple deep learning network, achieves higher accuracy in 3D object classification and semantic segmentation.
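The axis-aligned cube search described above can be sketched as a cheap per-axis comparison: a point belongs to the local region if each of its coordinates lies within a half-size of the query point. The concrete pooled feature below (centroid offset plus point count) is a hypothetical illustration, not the paper's exact encoding:

```python
import numpy as np

def cube_neighbors(points, center, half_size):
    """Indices of points inside an axis-aligned cube around `center`.

    Unlike a ball (radius) query, cube membership needs only a
    per-axis absolute-difference comparison.
    """
    mask = np.all(np.abs(points - center) <= half_size, axis=1)
    return np.nonzero(mask)[0]

def encode_local_features(points, half_size=0.1):
    """Pool each point's cube neighbourhood into a fixed-length feature.

    Hypothetical feature: centroid offset (3 values) + local point
    count, which could then be fed to a simple network.
    """
    feats = np.empty((len(points), 4))
    for i, p in enumerate(points):
        idx = cube_neighbors(points, p, half_size)
        local = points[idx]
        feats[i, :3] = local.mean(axis=0) - p  # offset to local centroid
        feats[i, 3] = len(idx)                 # local density
    return feats
```

Since every point carries its own local-region summary, the downstream network can stay simple (e.g. shared per-point layers) instead of integrating features across multiple levels.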


2020 ◽  
Vol 1 (2) ◽  
Author(s):  
Sakshi Srivastava ◽  
Prince Kumar ◽  
Vaishali Chaudhry ◽  
Anuj Singh

2021 ◽  
Author(s):  
Silvia Seoni ◽  
Giulia Matrone ◽  
Nicola Casali ◽  
Edoardo Spairani ◽  
Kristen M. Meiburger

PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0253202
Author(s):  
Yanjun Guo ◽  
Xingguang Duan ◽  
Chengyi Wang ◽  
Huiqin Guo

This paper establishes a fully automatic real-time image segmentation and recognition system for breast ultrasound intervention robots. It adopts the basic architecture of a U-shaped convolutional network (U-Net), analyses the actual application scenarios of semantic segmentation of breast ultrasound images, and adds dropout layers to the U-Net architecture to reduce the redundancy in texture details and prevent overfitting. The main innovation of this paper is an expanded training approach that yields an expanded U-Net. The output map of the expanded U-Net retains the texture details and edge features of breast tumours. Using grey-level probability labels to train the U-Net is faster than using ordinary labels. The average Dice coefficient (standard deviation) and the average IOU coefficient (standard deviation) are 90.5% (±0.02) and 82.7% (±0.02), respectively, when using the expanded training approach. The Dice coefficient of the expanded U-Net is 7.6% larger than that of a general U-Net, and the IOU coefficient of the expanded U-Net is 11% larger than that of the general U-Net. The context of breast ultrasound images can be extracted, and the texture details and edge features of tumours can be retained, by the expanded U-Net. Using an expanded U-Net can quickly and automatically achieve precise segmentation and multi-class recognition of breast ultrasound images.
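The Dice and IOU scores quoted above are the standard overlap measures for binary segmentation masks. A minimal sketch of how such scores could be computed from a predicted mask and a ground-truth mask (not the authors' exact evaluation code):

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice and IoU for binary segmentation masks (0/1 arrays).

    Dice = 2|A∩B| / (|A| + |B|);  IoU = |A∩B| / |A∪B|.
    `eps` guards against division by zero on empty masks.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum() + eps)
    iou = inter / (np.logical_or(pred, target).sum() + eps)
    return dice, iou
```

Note that Dice is always at least as large as IoU for the same masks (Dice = 2·IoU/(1+IoU)), consistent with the 90.5% Dice versus 82.7% IOU reported here.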
