Full 3D Microwave Breast Imaging Using a Deep-Learning Technique

2020, Vol 6 (8), pp. 80
Author(s): Vahab Khoshdel, Mohammad Asefi, Ahmed Ashraf, Joe LoVetri

A deep learning technique to enhance 3D images of the complex-valued permittivity of the breast obtained via microwave imaging is investigated. The developed technique is an extension of one created to enhance 2D images. We employ a 3D Convolutional Neural Network, based on the U-Net architecture, that takes in 3D images obtained using the Contrast-Source Inversion (CSI) method and attempts to produce the true 3D image of the permittivity. The training set consists of 3D CSI images, along with the true numerical phantom images from which the microwave scattered field utilized to create the CSI reconstructions was synthetically generated. Each numerical phantom varies with respect to the size, number, and location of tumors within the fibroglandular region. The reconstructed permittivity images produced by the proposed 3D U-Net show that the network is not only able to remove the artifacts that are typical of CSI reconstructions, but it also enhances the detectability of the tumors. We test the trained U-Net with 3D images obtained from experimentally collected microwave data as well as with images obtained synthetically. Significantly, the results illustrate that although the network was trained using only images obtained from synthetic data, it performed well with images obtained from both synthetic and experimental data. Quantitative evaluations are reported using Receiver Operating Characteristic (ROC) curves for tumor detectability and RMS error for the enhancement of the reconstructions.
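
As a rough illustration of the kind of encoder-decoder described above, the following is a minimal 3D U-Net sketch in PyTorch. The two-channel input/output (real and imaginary parts of the permittivity), the two resolution levels, and the filter counts are illustrative assumptions, not the authors' configuration.

```python
# Minimal 3D U-Net sketch for enhancing CSI permittivity reconstructions.
# Assumptions: 2 channels (real/imag permittivity), 2 resolution levels, PyTorch.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class UNet3D(nn.Module):
    def __init__(self, channels=2, base=16):
        super().__init__()
        self.enc1 = conv_block(channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.out = nn.Conv3d(base, channels, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                                      # full resolution
        e2 = self.enc2(self.pool(e1))                          # 1/2 resolution
        b = self.bottleneck(self.pool(e2))                     # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))    # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))   # skip connection
        return self.out(d1)                                    # enhanced image

# Example: one 32^3 CSI reconstruction with real/imag channels.
net = UNet3D()
enhanced = net(torch.randn(1, 2, 32, 32, 32))
print(enhanced.shape)  # torch.Size([1, 2, 32, 32, 32])
```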

Sensors
2019, Vol 19 (18), pp. 4050
Author(s): Vahab Khoshdel, Ahmed Ashraf, Joe LoVetri

We present a deep learning method used in conjunction with dual-modal microwave-ultrasound imaging to produce tomographic reconstructions of the complex-valued permittivity of numerical breast phantoms. We also assess tumor segmentation performance using the reconstructed permittivity as a feature. The contrast source inversion (CSI) technique is used to create the complex-permittivity images of the breast, with ultrasound-derived tissue regions utilized as prior information. However, imaging artifacts make the detection of tumors difficult. To overcome this issue, we train a convolutional neural network (CNN) that takes in, as input, the dual-modal CSI reconstruction and attempts to produce the true image of the complex tissue permittivity. The neural network consists of successive convolutional and downsampling layers, followed by successive deconvolutional and upsampling layers, based on the U-Net architecture. To train the neural network, the input-output pairs consist of CSI's dual-modal reconstructions, along with the true numerical phantom images from which the microwave scattered field was synthetically generated. The reconstructed permittivity images produced by the CNN show that the network is not only able to remove the artifacts that are typical of CSI reconstructions, but can also improve the detectability of tumors. The performance of the CNN is assessed using four-fold cross-validation on our dataset, which shows improvement over CSI in terms of both reconstruction error and tumor segmentation performance.
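
The training setup described above pairs CSI reconstructions with true phantom images. A minimal sketch of such an image-to-image training loop is given below; the mean-squared-error loss, the Adam optimizer, the tensor shapes, and the helper names (`train_enhancer`, `rms_error`) are assumptions for illustration, and `net` stands for any encoder-decoder of the kind described.

```python
import torch
import torch.nn as nn

def train_enhancer(net, csi_images, true_images, epochs=50, lr=1e-3):
    """Fit an image-to-image CNN on (CSI reconstruction, true phantom) pairs.

    csi_images, true_images: tensors of shape (N, 2, H, W) holding the real and
    imaginary parts of the complex permittivity as two channels (an assumption).
    """
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        prediction = net(csi_images)              # enhanced permittivity estimate
        loss = loss_fn(prediction, true_images)   # compare against the true phantom
        loss.backward()
        optimizer.step()
    return net

def rms_error(prediction, target):
    """Root-mean-square reconstruction error, used to compare CSI and CNN outputs."""
    return torch.sqrt(torch.mean((prediction - target) ** 2)).item()
```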


2021, pp. 20200513
Author(s): Su-Jin Jeon, Jong-Pil Yun, Han-Gyeol Yeom, Woo-Sang Shin, Jong-Hyun Lee, ...

Objective: The aim of this study was to evaluate the use of a convolutional neural network (CNN) system for predicting C-shaped canals in mandibular second molars on panoramic radiographs. Methods: Panoramic and cone beam CT (CBCT) images obtained from June 2018 to May 2020 were screened, and 1020 patients were selected. Our dataset of 2040 sound mandibular second molars comprised 887 C-shaped canals and 1153 non-C-shaped canals. To confirm the presence of a C-shaped canal, CBCT images were analyzed by a radiologist and set as the gold standard. A CNN-based deep-learning model for predicting C-shaped canals was built using Xception. The training and test sets comprised 80% and 20% of the data, respectively. Diagnostic performance was evaluated using accuracy, sensitivity, specificity, and precision. Receiver operating characteristic (ROC) curves were drawn, and the area under the curve (AUC) values were calculated. Further, gradient-weighted class activation maps (Grad-CAM) were generated to localize the anatomy that contributed to the predictions. Results: The accuracy, sensitivity, specificity, and precision of the CNN model were 95.1%, 92.7%, 97.0%, and 95.9%, respectively. Grad-CAM analysis showed that the CNN model mainly identified root canal shapes converging toward the apex to predict C-shaped canals, while the root furcation was predominantly used to predict non-C-shaped canals. Conclusions: The deep-learning system showed high accuracy in predicting C-shaped canals of mandibular second molars on panoramic radiographs.
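
The evaluation described above reduces to computing standard diagnostic metrics from the CNN's predictions against the CBCT-derived gold standard. A minimal sketch using scikit-learn is shown below; the variable names, the 0.5 decision threshold, and the toy data in the example are assumptions, not values from the study.

```python
# Diagnostic-performance evaluation sketch: accuracy, sensitivity, specificity,
# precision, and AUC from binary labels and predicted probabilities.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def diagnostic_metrics(y_true, y_prob, threshold=0.5):
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # recall on C-shaped canals (label 1)
        "specificity": tn / (tn + fp),   # recall on non-C-shaped canals (label 0)
        "precision":   tp / (tp + fp),
        "auc":         roc_auc_score(y_true, y_prob),  # area under the ROC curve
    }

# Example with invented toy predictions:
print(diagnostic_metrics([1, 0, 1, 1, 0], [0.9, 0.2, 0.7, 0.4, 0.1]))
```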


2021, Vol 7 (1)
Author(s): Asma Baccouche, Begonya Garcia-Zapirain, Cristian Castillo Olea, Adel S. Elmaghraby

Breast cancer analysis requires radiologists to inspect mammograms to detect suspicious breast lesions and identify mass tumors. Artificial intelligence techniques offer automatic systems for breast mass segmentation to assist radiologists in their diagnosis. With the rapid development of deep learning and its application to medical imaging challenges, UNet and its variants are among the state-of-the-art models for medical image segmentation and have shown promising performance on mammography. In this paper, we propose an architecture, called Connected-UNets, which connects two UNets using additional modified skip connections. We integrate Atrous Spatial Pyramid Pooling (ASPP) in the two standard UNets to emphasize the contextual information within the encoder–decoder network architecture. We also apply the proposed architecture to the Attention UNet (AUNet) and the Residual UNet (ResUNet). We evaluated the proposed architectures on two publicly available datasets, the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) and INbreast, and additionally on a private dataset. Experiments were also conducted with additional synthetic data generated by the cycle-consistent Generative Adversarial Network (CycleGAN) model between two unpaired datasets to augment and enhance the images. Qualitative and quantitative results show that the proposed architecture achieves better automatic mass segmentation, with high Dice scores of 89.52%, 95.28%, and 95.88% and Intersection over Union (IoU) scores of 80.02%, 91.03%, and 92.27% on CBIS-DDSM, INbreast, and the private dataset, respectively.
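
Since the abstract highlights Atrous Spatial Pyramid Pooling as the component integrated into the two UNets, here is a minimal PyTorch sketch of a generic ASPP block; the dilation rates and channel counts are illustrative assumptions and do not reproduce the Connected-UNets design.

```python
# Generic Atrous Spatial Pyramid Pooling (ASPP) block sketch.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        # One 3x3 convolution per dilation rate captures context at several scales.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # A 1x1 convolution fuses the concatenated multi-scale features.
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([branch(x) for branch in self.branches], dim=1))

# Example: an ASPP block applied to a 64-channel feature map from a UNet encoder.
features = torch.randn(1, 64, 32, 32)
print(ASPP(64, 64)(features).shape)  # torch.Size([1, 64, 32, 32])
```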


Author(s): Reijer Leijsen, Cornelis van den Berg, Andrew Webb, Rob Remis, Stefano Mandija

Conscious access to fear-relevant information is mediated by threshold

2011, Vol 42 (2), pp. 56-64
Author(s): Remigiusz Szczepanowski

The present report proposed a model of access consciousness to fear-relevant information, according to which there is a threshold for emotional perception beyond which the subject makes hits with no false alarms. The model was examined by having participants perform a confidence-rating masking task with fearful faces. Thresholds for conscious access were measured by examining the receiver operating characteristic (ROC) curves generated from Krantz's three-state low- and high-threshold (3-LHT) model. Indeed, analysis of the masking data revealed that the ROCs had a threshold-like nature (a two-limb shape) rather than a continuous one (a curvilinear shape), thereby challenging the classical signal-detection view of perceptual processing. Moreover, the threshold ROC curves exhibited specific y-intercepts relevant to conscious-access performance. The study suggests that the threshold can be an intrinsic property of conscious access, mediating emotional contents between perceptual states and consciousness.
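
For readers unfamiliar with confidence-rating ROC analysis, the sketch below shows how empirical (false-alarm rate, hit rate) points are obtained by sweeping a criterion over the ratings; the labels and ratings are invented toy data, not the study's results.

```python
# Empirical ROC points from confidence ratings in a masking task.
import numpy as np
from sklearn.metrics import roc_curve

# 1 = fearful face present, 0 = absent; ratings on a 1-6 confidence scale (toy data).
labels = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])
ratings = np.array([6, 5, 4, 3, 2, 6, 1, 2, 3, 1])

# Sweeping the rating criterion yields one (false-alarm rate, hit rate) pair per
# threshold; a two-limb (threshold-like) ROC has a nonzero hit-rate y-intercept,
# while a signal-detection ROC is smoothly curvilinear.
fpr, tpr, thresholds = roc_curve(labels, ratings)
for f, t in zip(fpr, tpr):
    print(f"false-alarm rate {f:.2f} -> hit rate {t:.2f}")
```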

