Enhancing Multi Exposure Images Using Convolution Neural Network

Author(s):  
Sunitha Nandhini A ◽  
Anjani A L ◽  
Indhuja R ◽  
Jeevitha D

Due to poor lighting conditions and the limited dynamic range of digital imaging devices, recorded images are often under-/over-exposed and of low contrast. Most previous single image contrast enhancement (SICE) methods adjust the tone curve to correct the contrast of an input image. Those methods, however, often fail to reveal image details because of the limited information in a single image. On the other hand, the SICE task can be better accomplished if we can learn extra information from appropriately collected training data. In this paper, we propose to use a convolutional neural network (CNN) to train a SICE enhancer. One key issue is how to construct a training data set of low-contrast and high-contrast image pairs for end-to-end CNN learning. To this end, we build a large-scale multi-exposure image data set, which contains 589 elaborately selected high-resolution multi-exposure sequences with 4,413 images. Thirteen representative multi-exposure image fusion and stack-based high dynamic range imaging algorithms are employed to generate the contrast-enhanced images for each sequence, and subjective experiments are conducted to screen the best-quality result as the reference image of each scene. With the constructed data set, a CNN can be easily trained as the SICE enhancer to improve the contrast of an under-/over-exposed image. Experimental results demonstrate the advantages of our method over existing SICE methods by a significant margin.
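Conceptually, the end-to-end training described above pairs each poorly exposed input with its subjectively selected reference image and lets a CNN learn the mapping between them. The minimal Keras sketch below illustrates that idea only; the layer sizes, the MSE loss, and the placeholder names low_imgs/ref_imgs are assumptions, not the authors' architecture.

```python
# Hypothetical minimal sketch of an end-to-end SICE enhancer:
# map an under-/over-exposed RGB image to its well-exposed reference.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sice_enhancer():
    inp = layers.Input(shape=(None, None, 3))                     # RGB image, any size
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)  # enhanced image
    return models.Model(inp, out)

model = build_sice_enhancer()
model.compile(optimizer="adam", loss="mse")   # pixel-wise loss against the reference image

# low_imgs / ref_imgs: float32 arrays in [0, 1] built from the multi-exposure
# sequences and their selected reference images (placeholder names).
# model.fit(low_imgs, ref_imgs, batch_size=8, epochs=50)
```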

IJOSTHE ◽  
2020 ◽  
Vol 7 (1) ◽  
pp. 8
Author(s):  
Puspad Kumar Sharma ◽  
Nitesh Gupta ◽  
Anurag Shrivastava

Due to camera resolution or lighting conditions, captured images are generally over-exposed or under-exposed. Enhancement techniques are therefore needed to remove these artifacts from recorded images. The objective of image enhancement and adjustment techniques is to improve the quality and characteristics of an image. In general, enhancement distorts the original numerical values of an image, so enhancement techniques must be designed that do not compromise image quality. Optimization of the image extracts the characteristics of the image instead of restoring the degraded image; improvement of the image involves processing the degraded image and improving its visual appearance. A great deal of research has been done on image enhancement, and one prominent direction is deep learning. Most existing contrast enhancement methods adjust the tone curve to correct the contrast of an input image, but they do not work efficiently due to the limited amount of information contained in a single image. In this research, a CNN with edge adjustment is proposed. By applying the CNN with edge adjustment technique, low-contrast input images can be adapted to produce high-quality enhancement. The result analysis shows that the developed technique offers significant advantages over existing methods.
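The abstract does not detail the edge-adjustment step. As a hedged illustration only, one simple way to reinforce edges after CNN-based enhancement is to blend a Sobel edge map of the original input back into the enhanced output; the Sobel operator and the blend weight below are assumptions, not the authors' method.

```python
# Illustrative edge-adjustment step: add edge detail from the original image
# back onto the CNN-enhanced result (both assumed to be float images in [0, 1]).
import cv2
import numpy as np

def edge_adjust(enhanced, original, alpha=0.15):
    """enhanced, original: float32 BGR images in [0, 1]; alpha: assumed blend weight."""
    gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    edges = cv2.magnitude(gx, gy)
    edges = cv2.normalize(edges, None, 0, 1, cv2.NORM_MINMAX)
    return np.clip(enhanced + alpha * edges[..., None], 0.0, 1.0)
```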


Author(s):  
Michael Schatz ◽  
Joachim Jäger ◽  
Marin van Heel

Lumbricus terrestris erythrocruorin is a giant oxygen-transporting macromolecule in the blood of the common earthworm (worm "hemoglobin"). In our current study, we use specimens (kindly provided by Drs W.E. Royer and W.A. Hendrickson) embedded in vitreous ice (1) to avoid artefacts encountered with the negative stain preparation technique used in previous studies (2-4). Although the molecular structure is well preserved in vitreous ice, the low contrast and high noise level in the micrographs represent a serious problem for image interpretation. Moreover, in this type of preparation the molecules can exhibit many different orientations relative to the object plane of the microscope. Existing techniques of analysis, which require alignment of the molecular views relative to one or more reference images, thus often yield unsatisfactory results. We use a new method in which rotation-, translation- and mirror-invariant functions (5) are first derived from the large set of input images; these functions are subsequently classified automatically using multivariate statistical techniques (6). The different molecular views in the data set can thereby be found without bias (5). Within each class, all images are aligned relative to the member of the class that contributes least to the class's internal variance (6); this reference image is thus the most typical member of the class. Finally, the aligned images from each class are averaged, resulting in molecular views with enhanced statistical resolution.
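As a rough numerical analogue of this reference-free workflow, the sketch below derives a translation-invariant representation of each particle image (simply its Fourier amplitude spectrum), clusters those representations, and averages each class. The real method additionally handles rotation and mirror invariance and uses multivariate statistical classification rather than k-means, so this is illustrative only.

```python
# Crude software analogue: invariant features -> unsupervised classes -> class averages.
import numpy as np
from sklearn.cluster import KMeans

def invariant_features(images):
    # |FFT| of each particle window is invariant to in-plane translation
    return np.abs(np.fft.fft2(images)).reshape(len(images), -1)

def class_averages(images, n_classes=10):
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(invariant_features(images))
    # average the raw images within each class to boost the signal-to-noise ratio
    return [images[labels == k].mean(axis=0) for k in range(n_classes)]

# images: (N, H, W) float array of particle windows extracted from the micrographs
```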


Author(s):  
Chenggang Yan ◽  
Tong Teng ◽  
Yutao Liu ◽  
Yongbing Zhang ◽  
Haoqian Wang ◽  
...  

The difficulty of no-reference image quality assessment (NR IQA) often lies in the lack of knowledge about the distortion in the image, which makes quality assessment blind and thus inefficient. To tackle this issue, in this article we propose a novel scheme for precise NR IQA, which includes two successive steps, i.e., distortion identification and targeted quality evaluation. In the first step, we employ the well-known Inception-ResNet-v2 neural network to train a classifier that assigns the possible distortion in the image to one of the four most common distortion types, i.e., Gaussian white noise (WN), Gaussian blur (GB), JPEG compression (JPEG), and JPEG2000 compression (JP2K). Specifically, the deep neural network is trained on the large-scale Waterloo Exploration database, which ensures the robustness and high performance of distortion classification. In the second step, after determining the distortion type of the image, we design a specific approach to quantify the image distortion level, which estimates the image quality more precisely. Extensive experiments performed on the LIVE, TID2013, CSIQ, and Waterloo Exploration databases demonstrate that (1) the accuracy of our distortion classification is higher than that of the state-of-the-art distortion classification methods, and (2) the proposed NR IQA method outperforms the state-of-the-art NR IQA methods in quantifying image quality.
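The two-step structure amounts to classify-then-dispatch. The placeholder sketch below assumes a classifier callable (for example, the trained distortion classifier) and a dictionary of distortion-specific quality estimators; both names are stand-ins for illustration, not the authors' code.

```python
# Sketch of the two-step NR IQA scheme: identify the distortion type, then
# call the estimator specialized for that distortion.
DISTORTIONS = ["WN", "GB", "JPEG", "JP2K"]

def assess_quality(image, classifier, estimators):
    """classifier: maps image -> one label in DISTORTIONS (e.g. a trained CNN).
    estimators: dict mapping distortion label -> callable(image) -> quality score."""
    dist_type = classifier(image)          # step 1: distortion identification
    score = estimators[dist_type](image)   # step 2: targeted quality evaluation
    return dist_type, score
```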


2020 ◽  
Vol 12 (11) ◽  
pp. 1743
Author(s):  
Artur M. Gafurov ◽  
Oleg P. Yermolayev

Transition from manual (visual) interpretation to fully automated gully detection is an important task for quantitative assessment of modern gully erosion, especially when it comes to large mapping areas. Existing approaches to semi-automated gully detection are based either on object-oriented selection from multispectral images or on gully selection with a probabilistic model obtained from digital elevation models (DEMs). These approaches cannot be used to assess gully erosion in the part of European Russia most affected by it, due to the lack of a national large-scale DEM and the limited resolution of open-source multispectral satellite images. An approach based on convolutional neural networks for automated gully detection on the RGB synthesis of publicly available ultra-high-resolution satellite images has been proposed and developed for a test region in the east of the Russian Plain with intensive basin erosion. The Keras library and the U-Net convolutional neural network architecture were used for training. Preliminary results of applying the trained gully erosion convolutional neural network (GECNN) show that the algorithm performs well in detecting active gullies and differentiates gullies well from other linear forms of slope erosion, such as rills and balkas, but still makes errors in detecting complex gully systems. In addition, GECNN misses a gully in 10% of cases and, in another 10% of cases, identifies a non-gully feature as a gully. To solve these problems, it is necessary to additionally train the neural network on an enlarged training data set.
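For readers unfamiliar with the setup, a compact U-Net-style segmentation model in Keras might look like the sketch below. The depth, filter counts, tile size, and training settings are assumptions, since the paper's exact configuration is not reproduced here.

```python
# Minimal U-Net-style sketch in Keras for per-pixel gully segmentation on RGB tiles.
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(size=256):
    inp = layers.Input(shape=(size, size, 3))            # RGB tile of the satellite mosaic
    c1 = conv_block(inp, 32); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64);  p2 = layers.MaxPooling2D()(c2)
    b  = conv_block(p2, 128)                              # bottleneck
    u2 = layers.Concatenate()([layers.UpSampling2D()(b), c2]);  c3 = conv_block(u2, 64)
    u1 = layers.Concatenate()([layers.UpSampling2D()(c3), c1]); c4 = conv_block(u1, 32)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c4)   # per-pixel gully probability
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```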


2021 ◽  
Vol 15 ◽  
Author(s):  
Lixing Huang ◽  
Jietao Diao ◽  
Hongshan Nie ◽  
Wei Wang ◽  
Zhiwei Li ◽  
...  

The memristor-based convolutional neural network (CNN) gives full play to the advantages of memristive devices, such as low power consumption, high integration density, and strong network recognition capability. Consequently, it is very suitable for building wearable embedded application systems and has broad application prospects in image classification, speech recognition, and other fields. However, limited by the manufacturing process of memristive devices, high-precision weight devices are currently difficult to apply at large scale. At the same time, high-precision neuron activation functions further increase the complexity of hardware implementation. In response, this paper proposes a configurable full-binary convolutional neural network (CFB-CNN) architecture, whose inputs, weights, and neurons are all binary values. The neurons are proportionally configured in two modes for different non-ideal situations. The architecture's performance is verified on the MNIST data set, and the influence of device yield and resistance fluctuations under different neuron configurations on network performance is also analyzed. The results show that the recognition accuracy of the 2-layer network is about 98.2%. When the yield rate is about 64% and the hidden neuron mode is configured as −1 and +1, namely ±1 MD, the CFB-CNN architecture achieves about 91.28% recognition accuracy, whereas when the resistance variation is about 26% and the hidden neuron mode is configured as 0 and 1, namely 01 MD, the CFB-CNN architecture achieves about 93.43% recognition accuracy. Furthermore, memristors have been demonstrated as one of the most promising devices in neuromorphic computing due to their synaptic plasticity. Therefore, the memristor-based CFB-CNN architecture is SNN-compatible, which is verified in this paper by using the number of pulses to encode pixel values.
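A software-level analogue of the two neuron configurations is sketched below: activations quantized either to {−1, +1} ("±1 MD") or to {0, 1} ("01 MD"). This only illustrates the arithmetic; the paper's contribution is the memristive hardware architecture itself, which is not modeled here.

```python
# Toy illustration of the two binary neuron modes described above.
import numpy as np

def binarize(x, mode="pm1"):
    if mode == "pm1":            # ±1 MD: sign-like activation
        return np.where(x >= 0, 1.0, -1.0)
    elif mode == "01":           # 01 MD: step activation
        return np.where(x >= 0, 1.0, 0.0)
    raise ValueError(f"unknown mode: {mode}")

# With binary inputs and binary weights, each multiply-accumulate reduces to a
# cheap XNOR/count-style operation, which is what low-precision memristive
# crossbars can implement without high-precision weight devices.
```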


Author(s):  
M. Nazmuzzaman Khan ◽  
Sohel Anwar

Current image classification techniques for weed detection (classic vision techniques and deep neural networks) provide encouraging results under controlled environments, but most of the algorithms are not robust enough for real-world application. Different lighting conditions and shadows directly impact vegetation color; varying outdoor lighting conditions create different colors, noise levels, contrast, and brightness, and a high illumination component causes sensor (industrial camera) saturation. As a result, threshold-based classification algorithms usually fail. To overcome this shortfall, we used visible spectral-index-based segmentation to separate the weeds from the background. Mean, variance, kurtosis, and skewness are calculated for each input image to determine image quality (good or bad). Bad-quality images are converted to good-quality images using contrast-limited adaptive histogram equalization (CLAHE) before segmentation. A convolutional neural network (CNN) based classifier is then trained to classify three different types of weed (ragweed, pigweed, and cocklebur) common in a corn field. The main objective of this work is to construct a robust classifier capable of distinguishing the three weed species in the presence of occlusion, noise, illumination variation, and motion blurring. The proposed histogram-statistics-based image enhancement process resolves weed mis-segmentation under extreme lighting conditions, and the CNN-based classifier shows accurate, robust classification under low-to-mid-level motion blurring and various levels of noise.
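The histogram-statistics gate followed by CLAHE correction could, for instance, be prototyped as in the sketch below. The quality thresholds and the choice to apply CLAHE on the LAB lightness channel are illustrative assumptions, not the authors' parameters.

```python
# Sketch: judge exposure quality from simple histogram statistics and apply
# CLAHE only to images flagged as bad before segmentation.
import cv2
import numpy as np
from scipy.stats import kurtosis, skew

def enhance_if_needed(bgr):
    """bgr: uint8 BGR image as read by cv2.imread."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    mean, var = gray.mean(), gray.var()
    kurt, skw = kurtosis(gray.ravel()), skew(gray.ravel())
    bad = mean < 60 or mean > 195 or var < 400            # placeholder decision rule
    if not bad:
        return bgr
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)
```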


2016 ◽  
Vol 78 (12-2) ◽  
Author(s):  
Norma Alias ◽  
Husna Mohamad Mohsin ◽  
Maizatul Nadirah Mustaffa ◽  
Siti Hafilah Mohd Saimi ◽  
Ridhwan Reyaz

Eye movement behaviour is related to human brain activation, whether asleep or awake. The aim of this paper is to measure three types of eye movement using classification of electroencephalogram (EEG) signal data. The classification is trained using the artificial neural network (ANN) method, in which eye movement is measured as eye blinks (close and open), movement to the left and right, and movement upwards and downwards. The ANN is integrated with the EEG digital signal data in order to train on the large-scale digital data and thus predict eye movement behaviour under stress activity. Since this study uses large-scale digital data, the integrated ANN with EEG signals has been parallelized on the Compute Unified Device Architecture (CUDA), supported by heterogeneous CPU-GPU systems. A real data set from the eye therapy industry, IC Herbz Sdn Bhd, was used to validate and simulate eye movement behaviour. Parallel performance analyses are captured in terms of execution time, speedup, efficiency, and computational complexity.
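As an illustration only, a small dense network for the three-class eye-movement problem could be set up as below. The input feature size and layer widths are assumptions, and GPU (CUDA) execution is delegated to the framework's backend rather than the hand-tuned CPU-GPU parallelization described in the paper.

```python
# Hypothetical sketch: classify windowed EEG feature vectors into three
# eye-movement classes (blink open/close, left/right, up/down).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_eeg_classifier(n_features=128, n_classes=3):
    model = models.Sequential([
        layers.Input(shape=(n_features,)),      # assumed per-window feature vector
        layers.Dense(64, activation="relu"),
        layers.Dense(32, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```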


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4462
Author(s):  
Malik Summair Asghar ◽  
Saad Arslan ◽  
HyungWon Kim

To realize a large-scale Spiking Neural Network (SNN) on hardware for mobile applications, area- and power-optimized electronic circuit design is critical. In this work, an area- and power-optimized hardware implementation of a large-scale SNN for real-time IoT applications is presented. The analog Complementary Metal Oxide Semiconductor (CMOS) implementation incorporates neuron and synaptic circuits optimized for area and power consumption. The asynchronous neuronal circuits benefit from higher energy efficiency and higher sensitivity. The proposed synapse circuit, based on a Binary Exponential Charge Injector (BECI), saves area and power and provides design scalability for higher resolutions. The implemented SNN model is optimized for a 9 × 9 pixel input image and the minimum bit-width weights that satisfy the target accuracy, thereby occupying less area and consuming less power. Moreover, the spiking neural network is replicated in a fully digital implementation for area and power comparisons. The SNN chip, integrated from the neuron and synapse circuits, is capable of pattern recognition. The proposed SNN chip is fabricated using a 180 nm CMOS process, occupies a 3.6 mm2 chip core area, and achieves a classification accuracy of 94.66% on the MNIST dataset. The proposed SNN chip consumes an average power of 1.06 mW, 20 times lower than the digital implementation.
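The pixel-to-spike conversion that SNN front ends typically rely on can be mimicked in software by simple rate coding, as in the hedged sketch below; the window length and the Bernoulli sampling scheme are assumptions made for illustration, since the chip's actual encoding circuit is not described here.

```python
# Software analogue of rate coding: a pixel's intensity determines how many
# spikes it emits within a fixed time window.
import numpy as np

def rate_encode(image, n_steps=16, rng=None):
    """image: float array in [0, 1]; returns (n_steps, H, W) binary spike trains."""
    rng = rng or np.random.default_rng(0)
    return (rng.random((n_steps,) + image.shape) < image).astype(np.uint8)

# The expected spike count over the window is n_steps * pixel_value,
# so brighter pixels produce more pulses.
```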


Author(s):  
Sergey Stankevich ◽  
Oleh Maslenko ◽  
Vitalii Andronov

A novel flowchart for small-size object identification in satellite images of insufficient resolution, against a database of graphic reference images, using neural network technology is proposed. It is built on a compromise between contradictory requirements: simultaneously enhancing the resolution of the object segment of the input image and reducing the resolution of the reference image to a joint resolution through simulation of the imaging system. This is necessary due to a significant discrepancy between the resolutions of the input image and the graphic reference images used for identification. The required level of resolution enhancement for satellite images is, as a rule, unattainable, and a significant coarsening of reference images is undesirable because of identification errors. Therefore, a certain intermediate spatial resolution is used for identification, which, on the one hand, can be obtained and, on the other hand, keeps the loss of information contained in the reference image acceptable. The intermediate resolution is determined by simulating the process of image acquisition with the satellite imaging system. To facilitate such simulation, it is advisable to perform it in the frequency domain, where advanced Fourier analysis is available and, as a rule, all the necessary transfer properties of the links of the image formation chain are known. Three main functional elements are engaged for identification: an artificial neural network for the resolution enhancement of input images, a module for frequency-domain simulation of the graphical reference satellite imaging, and an artificial neural network for comparing the enhanced object segment with the reference model images. The feasibility of the described approach is demonstrated by the successful identification of a sea vessel in a SPOT-7 satellite image. Currently, work is under way to compare the performance of a variety of neural network platforms for small-size object identification in satellite images, as well as to assess the achievable accuracy.
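The frequency-domain coarsening of the reference image can be approximated in software by filtering with a model of the imaging system's transfer function, as in the sketch below; the Gaussian MTF model and the normalized cutoff parameter are placeholder assumptions rather than the actual simulation module described above.

```python
# Sketch: bring a high-resolution reference image down to an intermediate
# resolution by low-pass filtering in the frequency domain with a Gaussian
# approximation of the imaging system's MTF.
import numpy as np

def degrade_to_resolution(ref_img, cutoff_frac=0.25):
    """ref_img: 2-D grayscale array; cutoff_frac: fraction of Nyquist kept (assumed)."""
    F = np.fft.fftshift(np.fft.fft2(ref_img))
    h, w = ref_img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    r = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)   # normalized spatial frequency
    mtf = np.exp(-(r / cutoff_frac) ** 2)                    # Gaussian transfer-function model
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mtf)))
```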

