Faster Region Convolutional Neural Networks Applied to Ultrasonic Images for Breast Lesion Detection and Classification

Author(s): Kaizhen Wei, Boyang Wang, Jafar Saniie
2019, Vol 79, pp. 106449
Author(s): Guorong Cai, Jinshan Chen, Zebiao Wu, Haoming Tang, Yujun Liu, ...

Author(s): Danillo Roberto Pereira, Pedro P. Reboucas Filho, Gustavo Henrique de Rosa, Joao Paulo Papa, Victor Hugo C. de Albuquerque
2021, Vol 11 (1)
Author(s): Kazutoshi Ukai, Rashedur Rahman, Naomi Yagi, Keigo Hayashi, Akihiro Maruo, ...

Abstract: Pelvic fracture is one of the leading causes of death in the elderly, carrying a high risk of death within 1 year of fracture. This study proposes an automated method to detect pelvic fractures on 3-dimensional computed tomography (3D-CT). Deep convolutional neural networks (DCNNs) have been used for lesion detection on 2D and 3D medical images. However, training a DCNN directly on 3D images is complicated, computationally costly, and requires large amounts of training data. We propose a method that evaluates multiple 2D real-time object detection systems (YOLOv3 models) in parallel, in which each YOLOv3 model is trained on differently oriented 2D slab images reconstructed from 3D-CT. We assume that an appropriate reconstruction orientation exists that optimally characterizes the image features of bone fractures on 3D-CT. The multiple YOLOv3 models detect 2D fracture candidates in different orientations simultaneously, and the 3D fracture region is then obtained by integrating these 2D candidates. The proposed method was validated on 93 subjects with bone fractures, achieving an area under the curve (AUC) of 0.824, with 0.805 recall and 0.907 precision; the AUC with a single orientation was 0.652. The method was then applied to 112 subjects without bone fractures to evaluate over-detection: it correctly reported no fractures in all but 4 of the non-fracture subjects (96.4%).
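The pipeline in this abstract can be sketched in a few lines: reconstruct thin 2D slab images from the 3D-CT volume along several orientations, run a 2D detector on each slab, and integrate the per-orientation candidate boxes into a 3D region. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the slab reconstruction uses maximum-intensity projection (the paper does not specify the projection), `detect_2d` is a trivial intensity-threshold stand-in for a trained YOLOv3 model, and the integration rule (a voxel is kept if at least two orientations vote for it) is hypothetical.

```python
import numpy as np

def slab_images(volume, axis, slab_thickness=8):
    # Reconstruct 2D slab images along one axis via maximum-intensity
    # projection over thin slabs (MIP is an assumption, not from the paper).
    vol = np.moveaxis(volume, axis, 0)
    return [(start, vol[start:start + slab_thickness].max(axis=0))
            for start in range(0, vol.shape[0], slab_thickness)]

def detect_2d(slab):
    # Stand-in for a per-orientation YOLOv3 model (hypothetical):
    # returns candidate boxes (r0, c0, r1, c1) around unusually bright pixels.
    mask = slab > slab.mean() + 2 * slab.std()
    if not mask.any():
        return []
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return [(rows[0], cols[0], rows[-1] + 1, cols[-1] + 1)]

def integrate_candidates(volume, slab_thickness=8):
    # Run the 2D detector in all three axis-aligned orientations and
    # integrate candidates into a 3D mask by voting (>= 2 orientations).
    votes = np.zeros(volume.shape, dtype=int)
    for axis in range(3):
        support = np.zeros(volume.shape, dtype=bool)
        sup = np.moveaxis(support, axis, 0)  # view: writes hit `support`
        for start, slab in slab_images(volume, axis, slab_thickness):
            for r0, c0, r1, c1 in detect_2d(slab):
                # Back-project the 2D box through the slab's thickness.
                sup[start:start + slab_thickness, r0:r1, c0:c1] = True
        votes += support
    return votes >= 2
```

For example, a synthetic volume containing one bright cube yields a 3D mask covering that cube, because all three orientations produce a 2D candidate whose back-projections intersect there. The paper's actual orientations are arbitrary reconstruction angles, not just the three axis-aligned ones used here for brevity.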


2020, Vol 36 (6), pp. 428-438
Author(s):  
Thomas Wittenberg ◽  
Martin Raithel

<b><i>Background:</i></b> In the past, image-based computer-assisted diagnosis and detection systems were driven mainly by the field of radiology, and more specifically mammography. Nevertheless, with the availability of large image data collections (the "Big Data" phenomenon), in combination with developments from the domain of artificial intelligence (AI) and particularly so-called deep convolutional neural networks, computer-assisted detection of adenomas and polyps in real time during screening colonoscopy has become feasible. <b><i>Summary:</i></b> Against this background, the scope of this contribution is to provide a brief overview of the evolution of AI-based detection of adenomas and polyps during colonoscopy over the past 35 years: starting with the age of "handcrafted geometrical features" combined with simple classification schemes, continuing through the development and use of "texture-based features" and machine learning approaches, and ending with current developments in the field of deep learning using convolutional neural networks. In parallel, the necessity of large-scale clinical data for developing such methods is discussed, up to commercially available AI products for automated detection of polyps (adenomas and benign neoplastic lesions). Finally, a short outlook is given on further possibilities of AI methods within colonoscopy. <b><i>Key Messages:</i></b> Research on image-based lesion detection in colonoscopy data has a 35-year history. Milestones such as the Paris nomenclature, texture features, big data, and deep learning were essential for the development and availability of commercial AI-based systems for polyp detection.

