Application of super-resolution convolutional neural network technique to improve the quality of soft-tissue window cone-beam CT images

Author(s): Motoki Fukuda, Yoshiko Ariji, Munetaka Nitoh, Michihito Nozawa, Chiaki Kuwada, et al.

Abstract
Objectives: To assess the feasibility of using a super-resolution convolutional neural network to improve the quality of cone-beam computed tomography (CBCT) images for visualizing soft-tissue structures.
Methods: Multidetector computed tomography (CT) images of 200 subjects who were assessed for the status of an impacted third molar were collected as the training dataset. CBCT images of 10 subjects who were also examined with CT were collected as the testing dataset. The training process used a modified U-Net with bone and soft-tissue window CT images. After a model was created to convert bone-window images to soft-tissue-window images, CBCT images were provided as input and the model produced estimated CBCT images. These estimated CBCT images were then compared with soft-tissue window CBCT and CT images, using slices through approximately the same anatomical regions. Image evaluation was performed with subjective observations and histogram descriptions.
Results: The visibility of soft-tissue structures was improved by the technique, with high visibility attained in the submandibular region, although visibility remained somewhat obscured in the maxillary region.
Conclusions: The feasibility of a deep learning-based super-resolution technique to improve the visibility of soft-tissue structures on estimated CBCT images was verified.
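The abstract does not include an implementation, but the described bone-to-soft-tissue conversion is an image-to-image translation task. The following is a minimal sketch of a small U-Net-style encoder-decoder for that mapping; the depth, channel widths, single-channel input, and L1 reconstruction loss are assumptions for illustration, not the authors' configuration.

# Minimal sketch (not the authors' implementation): a small U-Net-style
# encoder-decoder that maps a bone-window slice (1 channel) to an estimated
# soft-tissue-window slice. Depth, channel widths, and the loss are assumed.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.bott = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.out = nn.Conv2d(32, 1, 1)  # estimated soft-tissue-window slice

    def forward(self, x):
        e1 = self.enc1(x)                                   # skip connection 1
        e2 = self.enc2(self.pool(e1))                       # skip connection 2
        b = self.bott(self.pool(e2))                        # bottleneck
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)

# Training pairs: bone-window CT slices as input, soft-tissue-window CT slices
# as targets; at inference, CBCT slices are fed to the trained model.
model = SmallUNet()
loss_fn = nn.L1Loss()  # assumed reconstruction loss

In this setup the paired CT windows supply supervision at training time, and the trained model is applied unchanged to CBCT slices at test time, which matches the workflow the abstract describes.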

2020, Vol. 65 (3), pp. 035003
Author(s): Nimu Yuan, Brandon Dyer, Shyam Rao, Quan Chen, Stanley Benedict, et al.

2021, Vol. 11 (4), pp. 1505
Author(s): Keisuke Manabe, Yusuke Asami, Tomonari Yamada, Hiroyuki Sugimori

Background and Purpose: This study evaluated a modified, specialized convolutional neural network (CNN) to improve the accuracy of medical image classification.
Materials and Methods: We defined computed tomography (CT) images as belonging to one of the following 10 classes: head, neck, chest, abdomen, and pelvis, each with and without contrast media, with 10,000 images per class. We modified a CNN based on AlexNet to accept an input size of 512 × 512 and adjusted the filter sizes of the convolution and max-pooling layers. Using these modified CNNs, various models were created and evaluated. The improved CNN was then evaluated on classifying the presence or absence of the pancreas in CT images. We compared the overall accuracy, calculated from images not used for training, with that of ResNet.
Results: The overall accuracies of the most improved CNN and ResNet in the 10-class task were 94.8% and 89.3%, respectively. The convolution filter sizes of the improved CNN were (13, 13), (7, 7), (5, 5), (5, 5), and (5, 5), in order from the first layer, and the max-pooling filter size was (7, 7). The calculation times of the most improved CNN and ResNet were 56 and 120 min, respectively. For the pancreas classification task, the overall accuracies of the most improved CNN and ResNet were 75.75% and 58.25%, respectively, with calculation times of 36 and 55 min.
Conclusion: By optimizing the convolution and max-pooling filter sizes for 512 × 512 images, we quickly obtained a highly accurate medical image classification model. This improved CNN may be useful for classifying lesions and anatomies in related diagnostic aid applications.
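For orientation, the reported filter sizes can be dropped into an AlexNet-like layout as in the sketch below. The channel widths, strides, padding, adaptive pooling, and classifier head are assumptions made to keep the example runnable; the abstract only reports the convolution filter sizes (13, 7, 5, 5, 5), the 7 × 7 max pooling, the 512 × 512 input, and the 10 classes.

# Sketch of an AlexNet-like CNN for 512 x 512 CT slices using the reported
# filter sizes; channel widths, strides, and the classifier head are assumed.
import torch.nn as nn

class ModifiedAlexNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 96, kernel_size=13, stride=4, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=7, stride=2),
            nn.Conv2d(96, 256, kernel_size=7, padding=3), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=7, stride=2),
            nn.Conv2d(256, 384, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(384, 384, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(384, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=7, stride=2),
        )
        self.avgpool = nn.AdaptiveAvgPool2d((6, 6))  # assumed, to fix the FC input size
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True), nn.Dropout(0.5),
            nn.Linear(4096, num_classes),  # 10 body-region/contrast classes
        )

    def forward(self, x):  # x: (batch, 1, 512, 512) CT slice
        return self.classifier(self.avgpool(self.features(x)))

With these assumed strides, the 512 × 512 input shrinks to roughly 11 × 11 feature maps before the adaptive pooling, so the larger early kernels keep the parameter count and run time modest, which is consistent with the shorter training times reported for the improved CNN.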


2020, Vol. 62 (10), pp. 1257-1263
Author(s): Johanna Pitkänen, Juha Koikkalainen, Tuomas Nieminen, Ivan Marinkovic, Sami Curtze, et al.

Abstract
Purpose: The severity of white matter lesions (WMLs) is typically evaluated on magnetic resonance imaging (MRI), yet computed tomography (CT) is more accessible, faster, and less expensive. Our objective was to study whether WMLs can be automatically segmented from CT images using a convolutional neural network (CNN) and to compare the CT segmentations with MRI segmentations.
Methods: Brain images from the Helsinki University Hospital clinical image archive were systematically screened to form CT-MRI image pairs. The selection criterion was that the CT and MRI images were acquired within 6 weeks of each other. In total, 147 image pairs were included. We used a CNN to segment WMLs from the CT images. Training and testing of the CNN on CT was performed using 10-fold cross-validation, and the segmentation results were compared with the corresponding MRI segmentations.
Results: A Pearson correlation of 0.94 was obtained between the automatic WML volumes of the MRI and CT segmentations. The average Dice similarity index, quantifying the overlap between the CT and FLAIR segmentations, was 0.68 for the Fazekas grade 3 group.
Conclusion: CNN-based segmentation of CT images may provide a means to evaluate the severity of WMLs and establish a link between CT WML patterns and the current standard MRI-based visual rating scale.
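The two reported metrics are standard and straightforward to reproduce; the short sketch below shows how the Dice similarity index between two binary WML masks and the Pearson correlation between per-subject WML volumes can be computed. The toy arrays are purely illustrative and are not the study's data.

# Illustrative sketch of the two reported metrics: Dice similarity index
# between binary WML masks (CT vs. FLAIR/MRI segmentation) and Pearson
# correlation between per-subject WML volumes. Inputs are hypothetical.
import numpy as np

def dice_index(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return float(2.0 * np.logical_and(a, b).sum() / denom) if denom else 1.0

def pearson_r(x, y) -> float:
    """Pearson correlation between two volume series (e.g., ml per subject)."""
    return float(np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1])

# Toy example (random masks and made-up volumes, not the study's data):
ct_mask = np.random.rand(64, 64, 32) > 0.8
mr_mask = np.random.rand(64, 64, 32) > 0.8
print(dice_index(ct_mask, mr_mask))
print(pearson_r([1.2, 5.4, 10.1], [1.0, 5.9, 9.7]))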

