A Multi-Cluster Random Forests-Based Approach to Super-Resolution of Abdominal CT Images Using Deep Neural Networks

Author(s): Mahdieh Akbari, Amir Hossein Foruzan, Yen-Wei Chen

2021, Vol 104, pp. 107185
Author(s): Ying Da Wang, Mehdi Shabaninejad, Ryan T. Armstrong, Peyman Mostaghimi

2018, Vol 28 (4), pp. 735-744
Author(s): Michał Koziarski, Bogusław Cyganek

Abstract: Due to the advances made in recent years, methods based on deep neural networks have been able to achieve state-of-the-art performance in various computer vision problems. In some tasks, such as image recognition, neural approaches have even surpassed human performance. However, the benchmarks on which neural networks achieve these impressive results usually consist of fairly high-quality data. In practical applications, on the other hand, we are often faced with images of low quality, affected by factors such as low resolution, the presence of noise, or a small dynamic range. It is unclear how resilient deep neural networks are to such factors. In this paper we experimentally evaluate the impact of low resolution on the classification accuracy of several notable neural architectures of recent years. Furthermore, we examine the possibility of improving neural networks' performance on low-resolution image recognition by applying super-resolution prior to classification. The results of our experiments indicate that contemporary neural architectures remain significantly affected by low image resolution. By applying super-resolution prior to classification we were able to alleviate this issue to a large extent, as long as the resolution of the images did not decrease too severely. However, for very low-resolution images the classification accuracy remained considerably affected.
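The evaluation pipeline described in this abstract lends itself to a short illustration. The sketch below, assuming PyTorch and torchvision are available, degrades an input image to a lower resolution, restores it with a super-resolution step, and compares the classifier's predictions before and after. The bicubic super_resolve function and the file name example.jpg are placeholders for a trained super-resolution network and a real test image; they are not the architectures evaluated in the paper.

```python
# Minimal sketch: classify a full-resolution image, then the same image after
# degradation and a (placeholder) super-resolution step, and compare predictions.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pretrained ImageNet classifier used as the downstream recognition model.
classifier = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def degrade(x: torch.Tensor, factor: int) -> torch.Tensor:
    """Simulate a low-resolution capture by downsampling the input batch."""
    return F.interpolate(x, scale_factor=1 / factor, mode="bicubic",
                         align_corners=False)

def super_resolve(x: torch.Tensor, factor: int) -> torch.Tensor:
    """Placeholder SR step; swap in a trained SR model for a real experiment."""
    return F.interpolate(x, scale_factor=factor, mode="bicubic",
                         align_corners=False)

@torch.no_grad()
def classify(x: torch.Tensor) -> int:
    """Return the predicted ImageNet class index for a single-image batch."""
    return classifier(x).argmax(dim=1).item()

# "example.jpg" is a placeholder path for any RGB test image.
img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)
low_res = degrade(img, factor=4)
restored = super_resolve(low_res, factor=4)

print("full resolution prediction:", classify(img))
print("after 4x degradation + SR:", classify(restored))
```

Repeating this comparison over a labelled test set, and over several degradation factors, gives the kind of accuracy-versus-resolution curves the paper reports.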


Author(s): Woojin Jeong, Hyeon Seok Yang, Bok Gyu Han, Jae Jun Sim, Sejin Park, ...

2021, pp. 14-23
Author(s): Jianing Wang, Dingjie Su, Yubo Fan, Srijata Chakravorti, Jack H. Noble, ...

2022, Vol 8 (1), pp. 11
Author(s): Gakuto Aoyama, Longfei Zhao, Shun Zhao, Xiao Xue, Yunxin Zhong, ...

Accurate morphological information on the aortic valve cusps is critical in treatment planning. Image segmentation is necessary to acquire this information, but manual segmentation is tedious and time-consuming. In this paper, we propose a fully automatic method for segmenting aortic valve cusps in CT images by combining two deep neural networks: SpatialConfiguration-Net for detecting anatomical landmarks and U-Net for segmenting the aortic valve components. A total of 258 CT volumes of end-systolic and end-diastolic phases, including cases with and without severe calcifications, were collected and manually annotated for each aortic valve component. The collected CT volumes were split 6:2:2 into training, validation, and test sets, and our method was evaluated by five-fold cross-validation. Segmentation was successful for all CT volumes, with a mean processing time of 69.26 s. For the aortic root, the right-coronary cusp, the left-coronary cusp, and the non-coronary cusp, the mean Dice coefficients were 0.95, 0.70, 0.69, and 0.67, respectively. Measurement values calculated automatically from the segmentation results correlated strongly with those calculated from the manual annotations. These results suggest that our method can be used to automatically obtain measurement values describing aortic valve morphology.
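As an illustration of how the reported segmentation scores can be computed, the sketch below implements the standard per-structure Dice coefficient, Dice = 2|A ∩ B| / (|A| + |B|), over labelled volumes. The label encoding (1 = aortic root, 2-4 = the three cusps) and the random example volumes are assumptions for demonstration only, not the paper's data or code.

```python
# Minimal sketch: per-structure Dice coefficient between a predicted and a
# reference label volume, evaluated for each aortic valve component.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, label: int) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for one label value in two label volumes."""
    a = pred == label
    b = truth == label
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # structure absent from both volumes: perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Random volumes stand in for real CT-sized label maps (assumed encoding:
# 0 = background, 1 = aortic root, 2-4 = the three cusps).
rng = np.random.default_rng(0)
pred = rng.integers(0, 5, size=(64, 64, 64))
truth = rng.integers(0, 5, size=(64, 64, 64))

structures = ["aortic root", "right-coronary cusp",
              "left-coronary cusp", "non-coronary cusp"]
for label, name in enumerate(structures, start=1):
    print(f"{name}: Dice = {dice_coefficient(pred, truth, label):.3f}")
```

Averaging these per-volume scores over the test split of each cross-validation fold yields per-structure means comparable to the 0.95/0.70/0.69/0.67 values reported above.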

