Locally adaptive contrast enhancement using convolutional neural network

Author(s):  
Bok Gyu Han ◽  
Hyeon Seok Yang ◽  
Young Shik Moon
Author(s):  
Girindra Wardhana ◽  
Hamid Naghibi ◽  
Beril Sirmacek ◽  
Momen Abayazid

Abstract Purpose We investigated the parameter configuration of automatic liver and tumor segmentation using a convolutional neural network based on a 2.5D model. The 2.5D model shows promising results since it allows the network to have a deeper and wider architecture while still accommodating 3D information. However, there has been no detailed investigation of the parameter configurations for this type of network model. Methods Several parameters, such as the number of stacked layers, image contrast, and the number of network layers, were studied and implemented in neural networks based on the 2.5D model. Networks were trained and tested using the dataset from the Liver Tumor Segmentation challenge (LiTS). Network performance was further evaluated by comparing the network segmentation with manual segmentations from nine technical physicians and an experienced radiologist. Results The slice arrangement test shows that multiple stacked layers perform better than a single-layer network; however, the Dice scores start decreasing when more than three layers are stacked, as adding more layers causes overfitting on the training set. In the contrast enhancement test, applying a contrast enhancement method did not make a statistically significant difference to network performance. In the network layer test, adding more layers to the network architecture did not always correspond to an increase in the network's Dice score. Conclusions This paper compares the performance of a network based on the 2.5D model under different parameter configurations. The results show the effect of each parameter and allow selection of the best configuration to improve network performance for automatic liver and tumor segmentation.
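As a minimal sketch of the 2.5D input construction described above, the snippet below stacks adjacent axial slices as input channels for a 2D network; the NumPy layout, stack size, and edge-repeat policy at the volume borders are illustrative assumptions, not the authors' exact settings.

```python
# Minimal 2.5D input sketch: a 2D CNN sees limited 3D context by taking
# a few neighboring slices as channels (assumed volume layout: [slices, H, W]).
import numpy as np

def make_25d_input(volume, center, n_stack=3):
    """Stack n_stack adjacent axial slices around `center` as channels."""
    half = n_stack // 2
    idx = np.clip(np.arange(center - half, center + half + 1),
                  0, volume.shape[0] - 1)   # repeat edge slices at volume borders
    return volume[idx]                      # shape: [n_stack, H, W]

volume = np.random.rand(64, 256, 256).astype(np.float32)  # toy CT volume
x = make_25d_input(volume, center=0, n_stack=3)
print(x.shape)  # (3, 256, 256) -> channel stack for a 2D network
```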


2021 ◽  
Vol 11 ◽  
Author(s):  
Ge Ren ◽  
Sai-kit Lam ◽  
Jiang Zhang ◽  
Haonan Xiao ◽  
Andy Lai-yin Cheung ◽  
...  

Functional lung avoidance radiation therapy aims to minimize dose delivery to normal lung tissue while favoring dose deposition in defective lung tissue, based on regional function information. However, clinical acquisition of pulmonary functional images is resource-demanding, inconvenient, and technically challenging. This study investigates deep learning-based synthesis of lung functional images from the CT domain. Forty-two pulmonary macro-aggregated albumin SPECT/CT perfusion scans were retrospectively collected from the hospital. A deep learning-based framework (comprising image preparation, image processing, and the proposed convolutional neural network) was adopted to extract features from 3D CT images and synthesize perfusion as an estimate of regional lung function. Ablation experiments assessed the effect of each framework component by removing it and analyzing testing performance. Removing the CT contrast enhancement component from the image processing caused the largest drop in framework performance relative to the optimal configuration (~12%). In the CNN, all three components (residual module, ROI attention, and skip attention) were approximately equally important to framework performance; removing any one of them reduced performance by 3–5%. The proposed CNN improved overall performance by ~4% and computational efficiency by ~350% compared to the U-Net model. A deep convolutional neural network, in conjunction with image processing for feature enhancement, is capable of extracting features from CT images for pulmonary perfusion synthesis. In the proposed framework, image processing, especially CT contrast enhancement, plays a crucial role in perfusion synthesis. This CTPM framework provides insights for relevant future research and enables other researchers to develop optimized CNN models for functional lung avoidance radiation therapy.
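CT contrast enhancement of the kind the ablation singles out is often implemented as intensity windowing; a hedged sketch is shown below. The window level and width are common lung-window defaults assumed here for illustration, not values reported in the study.

```python
# Sketch of CT contrast enhancement via intensity windowing: clip Hounsfield
# units to a lung window and rescale to [0, 1], spreading parenchymal contrast
# before feature extraction. Level/width are assumed defaults, not the study's.
import numpy as np

def window_ct(hu, level=-600.0, width=1500.0):
    lo, hi = level - width / 2.0, level + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

ct = np.random.uniform(-1024, 400, size=(128, 128)).astype(np.float32)  # toy slice
enhanced = window_ct(ct)   # values in [0, 1], lung tissue contrast stretched
```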


2019 ◽  
Vol 9 (2) ◽  
pp. 141-150
Author(s):  
Hartanto Ignatius ◽  
Ricky Chandra ◽  
Nicholas Bohdan ◽  
Abdi Dharma

Untreated diabetes mellitus causes complications, one of which is Diabetic Retinopathy (DR). Machine learning is one of the methods that can be used to classify DR, and the Convolutional Neural Network (CNN) is a branch of machine learning that can classify images with reasonable accuracy. The Messidor dataset, which contains 1,200 images, is often used for DR classification. Before training the models, we carried out several data preprocessing steps, such as labeling, resizing, cropping, separating the green channel of the images, contrast enhancement, and changing image extensions. In this paper, we propose three DR classification models: a Simple CNN, Le-Net, and DRnet. The test-set accuracies of these models were 46.7%, 51.1%, and 58.3%, respectively. Based on this research, we can see that DR classification requires a deep architecture so that the features of DR can be recognized. In this DR classification task, DRnet achieved accuracy higher by an average of 9.4% than the Simple CNN and Le-Net models.
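A hedged sketch of preprocessing in the spirit of the pipeline above (green-channel separation plus local contrast enhancement, then resizing); CLAHE stands in for the contrast enhancement step, and its parameters and the target size are assumptions, not the authors' exact settings.

```python
# Illustrative fundus preprocessing: the green channel typically shows the
# best vessel contrast, CLAHE enhances it locally, and the result is resized
# to a fixed CNN input size. Parameter values are assumed, not the paper's.
import cv2
import numpy as np

def preprocess_fundus(img_bgr, size=(256, 256)):
    green = img_bgr[:, :, 1]                                  # green channel
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)                             # local contrast boost
    return cv2.resize(enhanced, size)

img = (np.random.rand(512, 512, 3) * 255).astype(np.uint8)   # stand-in fundus image
x = preprocess_fundus(img)
print(x.shape)  # (256, 256), ready as single-channel CNN input
```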


Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1318
Author(s):  
Pengpeng Yang

Contrast enhancement forensics techniques have always been of great interest to the image forensics community, as they can be an effective tool for recovering image history and identifying tampered images. Although several contrast enhancement forensic algorithms have been proposed, their accuracy and robustness against some kinds of processing remain unsatisfactory. To address this deficiency, in this paper we propose a new framework based on a dual-domain fusion convolutional neural network that fuses features from the pixel and histogram domains for contrast enhancement forensics. Specifically, we first present a pixel-domain convolutional neural network to automatically capture the patterns of contrast-enhanced images in the pixel domain. Then, we present a histogram-domain convolutional neural network to extract features in the histogram domain. The feature representations of the pixel and histogram domains are fused and fed into two fully connected layers for the classification of contrast-enhanced images. Experimental results show that the proposed method achieves better performance and is robust against pre-JPEG compression and antiforensic attacks, obtaining over 99% detection accuracy for JPEG-compressed images with different quality factors and under antiforensic attack. In addition, a strategy for improving the performance of CNN-based forensics is explored, which could provide guidance for the design of CNN-based forensic tools.
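The dual-branch fusion idea can be sketched in PyTorch as below: one branch convolves the pixels, one processes the gray-level histogram, and the two feature vectors are concatenated before two fully connected layers. All layer sizes are illustrative assumptions, not the paper's architecture.

```python
# Minimal dual-domain fusion sketch: pixel-domain CNN branch + histogram-domain
# branch, concatenated and classified as enhanced vs. original. Sizes are assumed.
import torch
import torch.nn as nn

class DualDomainNet(nn.Module):
    def __init__(self, bins=256):
        super().__init__()
        self.pixel = nn.Sequential(                # pixel-domain branch
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.hist = nn.Sequential(                 # histogram-domain branch
            nn.Linear(bins, 64), nn.ReLU())
        self.head = nn.Sequential(                 # fusion + two FC layers
            nn.Linear(32 + 64, 32), nn.ReLU(),
            nn.Linear(32, 2))

    def forward(self, img, hist):
        return self.head(torch.cat([self.pixel(img), self.hist(hist)], dim=1))

net = DualDomainNet()
img = torch.rand(4, 1, 128, 128)                   # grayscale image batch
hist = torch.rand(4, 256)                          # normalized gray-level histograms
print(net(img, hist).shape)                        # torch.Size([4, 2])
```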


2020 ◽  
Author(s):  
S Kashin ◽  
D Zavyalov ◽  
A Rusakov ◽  
V Khryashchev ◽  
A Lebedev

2020 ◽  
Vol 2020 (10) ◽  
pp. 181-1-181-7
Author(s):  
Takahiro Kudo ◽  
Takanori Fujisawa ◽  
Takuro Yamaguchi ◽  
Masaaki Ikehara

Image deconvolution has recently been an important issue. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is the classic problem of image deblurring, which assumes that the PSF is known and spatially invariant. Recently, Convolutional Neural Networks (CNNs) have been used for non-blind deconvolution. Though CNNs can deal with complex changes in unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs encountered in the real world. In this paper we propose a non-blind deconvolution framework based on a CNN that can remove large-scale ringing from a deblurred image. Our method has three key points. The first is that our network architecture preserves both large and small features in the image. The second is that the training dataset is created so as to preserve details. The third is that we extend the images to minimize the effects of large ringing at the image borders. In our experiments, we used three kinds of large PSFs and observed high-precision results from our method both quantitatively and qualitatively.
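The border-extension idea can be illustrated independently of the CNN: pad the image past its borders so that ringing from the periodic-convolution assumption falls in the margin, then crop the margin off. The sketch below uses plain Wiener deconvolution as a stand-in for the paper's network; the pad size and noise-to-signal ratio are assumptions.

```python
# Border extension before frequency-domain deconvolution: edge-pad the input,
# deconvolve with a Wiener filter (a stand-in for the paper's CNN), then crop
# the ringing-prone margin. Pad size and nsr are assumed values.
import numpy as np

def wiener_deconv_padded(blurred, psf, pad=32, nsr=1e-2):
    ext = np.pad(blurred, pad, mode="edge")        # extend image past its borders
    big = np.zeros_like(ext)                       # embed small PSF kernel,
    kh, kw = psf.shape                             # centered at the origin
    big[:kh, :kw] = psf
    big = np.roll(big, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H, G = np.fft.fft2(big), np.fft.fft2(ext)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G    # Wiener filter
    return np.real(np.fft.ifft2(F))[pad:-pad, pad:-pad]

img = np.random.rand(128, 128)                     # toy input
psf = np.full((1, 9), 1.0 / 9.0)                   # toy horizontal motion blur
out = wiener_deconv_padded(img, psf)
print(out.shape)                                   # (128, 128)
```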

