BGRA-Net: Boundary-Guided and Region-Aware Convolutional Neural Network for the Segmentation of Breast Ultrasound Images

Author(s):  
Xiang Zhang ◽  
Xuanya Li ◽  
Kai Hu ◽  
Xieping Gao

Author(s):  
Masayuki Kikuchi ◽  
Tetsu Hayashida ◽  
Rurina Watanuki ◽  
Ayako Nakashoji ◽  
Yuko Kawai ◽  
...  

Author(s):  
Victoria Wu

Introduction: Scoliosis, an excessive curvature of the spine, affects approximately 1 in 1,000 individuals. Mandatory scoliosis screening programs were formerly implemented as a result, but they are no longer widely used because the harms often outweigh the benefits: screening subjects many adolescents to frequent diagnostic X-ray procedures and the associated radiation exposure. This makes spinal ultrasound an ideal substitute for scoliosis screening, as it does not expose patients to that level of radiation. Spinal curvature can be accurately computed from the locations of the spinal transverse processes by measuring the vertebral angle from a reference line [1]. However, ultrasound images are less clear than X-ray images, making it difficult to identify the transverse processes. To overcome this, we employ deep learning using a convolutional neural network, a powerful tool for computer vision and image classification [2].

Method: A total of 2,752 ultrasound images were recorded from a spine phantom to train a convolutional neural network, and a further recording of 747 images was acquired for testing. All ultrasound images from the scans were segmented manually using the 3D Slicer (www.slicer.org) software. The dataset was then fed through a convolutional neural network: a modified version of GoogLeNet (Inception v1) with 2 linearly stacked inception modules. This network was chosen because it balances accurate performance with time-efficient computation.

Results: Deep learning classification using the Inception model achieved an accuracy of 84% on the phantom scan.

Conclusion: The classification model performs with considerable accuracy. Better accuracy still needs to be achieved, possibly with more available data and improvements to the classification model.

Acknowledgements: G. Fichtinger is supported as a Canada Research Chair in Computer-Integrated Surgery. This work was funded, in part, by NIH/NIBIB and NIH/NIGMS (via grant 1R01EB021396-01A1 - Slicer+PLUS: Point-of-Care Ultrasound) and by CANARIE's Research Software Program.

Figure 1: Ultrasound scan containing a transverse process (left), and ultrasound scan containing no transverse process (right).

Figure 2: Accuracy of classification for training (red) and validation (blue).

References:
[1] Ungi T, King F, Kempston M, Keri Z, Lasso A, Mousavi P, Rudan J, Borschneck DP, Fichtinger G. Spinal Curvature Measurement by Tracked Ultrasound Snapshots. Ultrasound in Medicine and Biology, 40(2):447-54, Feb 2014.
[2] Krizhevsky A, Sutskever I, Hinton GE. ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems 25:1097-1105, 2012.
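The abstract does not give the exact layer configuration of the modified GoogLeNet, so the following is only a minimal sketch of what a two-module Inception-style binary classifier (transverse process present vs. absent) could look like in PyTorch. The channel sizes, input resolution, and the names InceptionModule and SmallInceptionNet are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: a small Inception-style binary classifier, assuming
# single-channel ultrasound frames resized to 224x224. Layer widths are
# illustrative, not the configuration used in the study.
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """One Inception (v1-style) module: parallel 1x1, 3x3, 5x5 and pooled branches."""
    def __init__(self, in_ch, c1, c3, c5, cp):
        super().__init__()
        self.b1 = nn.Sequential(nn.Conv2d(in_ch, c1, 1), nn.ReLU(inplace=True))
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c3, 3, padding=1), nn.ReLU(inplace=True))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, c5, 5, padding=2), nn.ReLU(inplace=True))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, cp, 1), nn.ReLU(inplace=True))

    def forward(self, x):
        # Concatenate the four branches along the channel dimension.
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

class SmallInceptionNet(nn.Module):
    """Two linearly stacked Inception modules followed by a 2-class head."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, 32, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1))
        self.inc1 = InceptionModule(32, 16, 32, 8, 8)    # -> 64 channels
        self.inc2 = InceptionModule(64, 32, 64, 16, 16)  # -> 128 channels
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(128, num_classes))

    def forward(self, x):
        return self.head(self.inc2(self.inc1(self.stem(x))))

if __name__ == "__main__":
    model = SmallInceptionNet()
    dummy = torch.randn(4, 1, 224, 224)   # batch of 4 grayscale ultrasound frames
    print(model(dummy).shape)              # torch.Size([4, 2])
```

The parallel 1x1/3x3/5x5 branches are what give an Inception module its multi-scale receptive fields at modest computational cost, which is consistent with the abstract's stated trade-off between accuracy and time-efficient computation.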


Author(s):  
Arun Asokan Nair ◽  
Mardava Rajugopal Gubbi ◽  
Trac Duy Tran ◽  
Austin Reiter ◽  
Muyinatu A. Lediju Bell

2020 ◽  
Vol 79 (9) ◽  
pp. 1189-1193
Author(s):  
Anders Bossel Holst Christensen ◽  
Søren Andreas Just ◽  
Jakob Kristian Holm Andersen ◽  
Thiusius Rajeeth Savarimuthu

Objectives: We have previously shown that neural network technology can be used for scoring arthritis disease activity in ultrasound images from rheumatoid arthritis (RA) patients, giving scores according to the EULAR-OMERACT grading system. We have now further developed the architecture of this neural network and here present a new idea applying a cascaded convolutional neural network (CNN) design, with even better results. We evaluate the generalisability of this method on unseen data, comparing the CNN with an expert rheumatologist.

Methods: The images were graded by an expert rheumatologist according to the EULAR-OMERACT synovitis scoring system. CNNs were systematically trained to find the best configuration. The algorithms were evaluated on a separate test data set and compared with the gradings of an expert rheumatologist on a per-joint basis using a kappa statistic, and on a per-patient basis using a Wilcoxon signed-rank test.

Results: With 1678 images available for training and 322 images for testing the model, it achieved an overall four-class accuracy of 83.9%. On a per-patient level, there was no significant difference between the classifications of the model and of a human expert (p=0.85). Our original CNN had a four-class accuracy of 75.0%.

Conclusions: Using a new network architecture we have further enhanced the algorithm and have shown strong agreement with an expert rheumatologist on a per-joint basis and on a per-patient basis. This emphasises the potential of using CNNs with this architecture as a strong assistive tool for the objective assessment of disease activity of RA patients.
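As an illustration of the evaluation described in the Methods above (not the authors' code), the sketch below computes per-joint agreement with Cohen's kappa and a per-patient Wilcoxon signed-rank comparison using scikit-learn and SciPy. The grades are synthetic placeholders generated only to demonstrate the calls; the per-patient grouping of 7 joints per patient is an assumption for the example.

```python
# Illustrative sketch only: per-joint kappa and per-patient Wilcoxon signed-rank
# comparison between model-predicted and expert EULAR-OMERACT grades (0-3).
# All scores below are synthetic placeholders, not study data.
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Synthetic grades for 322 test images; the model agrees on ~84% of images,
# otherwise it is off by one grade (clipped to the 0-3 range).
expert = rng.integers(0, 4, size=322)
disagree = rng.random(322) > 0.839
model = np.clip(expert + disagree * rng.choice([-1, 1], size=322), 0, 3)

# Per-joint agreement; quadratic weights respect the ordinal nature of the grades.
kappa = cohen_kappa_score(expert, model, weights="quadratic")
print(f"per-joint weighted kappa: {kappa:.2f}")

# Per-patient comparison: mean grade over each patient's joints, then a paired
# Wilcoxon signed-rank test (a high p-value means no systematic difference).
patient = np.repeat(np.arange(46), 7)          # hypothetical 46 patients x 7 joints
expert_pp = np.array([expert[patient == p].mean() for p in range(46)])
model_pp = np.array([model[patient == p].mean() for p in range(46)])
stat, p_value = wilcoxon(expert_pp, model_pp)
print(f"per-patient Wilcoxon p-value: {p_value:.3f}")
```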

