A Review on Deep Image Contrast Enhancement

2020 ◽  
Vol 6 (1) ◽  
pp. 4
Author(s):  
Puspad Kumar Sharma ◽  
Nitesh Gupta ◽  
Anurag Shrivastava

In image processing applications, one of the main preprocessing phases is image enhancement, which produces a higher-quality, enhanced image from the original input image. These enhanced images can be used in many applications, such as remote sensing and geo-satellite imagery. The quality of an image is degraded by several conditions, such as poor illumination, atmospheric conditions, an incorrect lens aperture setting on the camera, noise, etc. [2]. Such degraded, low-exposure images therefore need to be enhanced by increasing their brightness and contrast, which is what image enhancement provides. In this research work, different image enhancement techniques are discussed and reviewed along with their results. The aim of this study is to determine how deep learning approaches have been applied to image enhancement. Deep learning is a machine learning approach that is currently revolutionizing a number of disciplines, including image processing and computer vision. This paper attempts to apply deep learning to image filtering, specifically low-light image enhancement. The review given in this paper should help future researchers overcome these problems and design efficient algorithms that enhance image quality.

The proposed scheme mainly emphasizes performing histogram equalization of images in a more efficient way. Histogram equalization is a standard tool for image enhancement: it spreads, or flattens, the histogram of an image, so that pixels with lower intensity values appear darker and pixels with higher intensity values appear lighter, which improves the contrast of the input image. For human interpretation, various image enhancement techniques have been widely used in different application areas of image processing, since the subjective quality of images is what matters most.
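The histogram equalization procedure described above can be sketched in a few lines of numpy. This is a minimal, generic implementation of the classic CDF-remapping method, not the authors' more efficient variant:

```python
import numpy as np

def equalize_histogram(image, levels=256):
    """Classic histogram equalization of a 2-D uint8 grayscale image.

    The cumulative histogram (CDF) is rescaled to the full intensity
    range and used as a lookup table, spreading the histogram out.
    """
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                    # first non-zero CDF value
    total = image.size
    # Rescale the CDF to [0, levels-1]; clip guards unused low indices.
    mapping = np.clip(
        np.round((cdf - cdf_min) / (total - cdf_min) * (levels - 1)),
        0, levels - 1).astype(np.uint8)
    return mapping[image]

# A low-contrast image whose intensities sit in a narrow band (100-139).
img = np.tile(np.arange(100, 140, dtype=np.uint8), (40, 1))
out = equalize_histogram(img)
print(out.min(), out.max())  # → 0 255: the output spans the full range
```

Note that a constant image (single intensity) would divide by zero here; a production version would need to handle that edge case.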


2018 ◽  
Vol 10 (11) ◽  
pp. 1746 ◽  
Author(s):  
Raffaele Gaetano ◽  
Dino Ienco ◽  
Kenji Ose ◽  
Remi Cresson

The use of Very High Spatial Resolution (VHSR) imagery in remote sensing applications is nowadays a current practice whenever fine-scale monitoring of the earth’s surface is concerned. VHSR land cover classification, in particular, is currently a well-established tool to support decisions in several domains, including urban monitoring, agriculture, biodiversity, and environmental assessment. Additionally, land cover classification can be employed to annotate VHSR imagery with the aim of retrieving spatial statistics or areas with similar land cover. Modern VHSR sensors provide data at multiple spatial and spectral resolutions, most commonly as a pair consisting of a higher-resolution single-band panchromatic (PAN) image and a coarser multispectral (MS) image. In the typical land cover classification workflow, the multi-resolution input is preprocessed to generate a single multispectral image at the highest resolution available by means of a pan-sharpening process. Recently, deep learning approaches have shown the advantages of avoiding data preprocessing by letting machine learning algorithms automatically transform input data to best fit the classification task. Following this rationale, we here propose a new deep learning architecture to jointly use PAN and MS imagery for direct classification without any prior image sharpening or resampling process. Our method, namely MultiResoLCC, consists of a two-branch end-to-end network which extracts features from each source at its native resolution and then combines them to perform land cover classification at the PAN resolution. Experiments are carried out on two real-world scenarios over large areas with contrasting land cover characteristics. The experimental results underline the quality of our method, while the characteristics of the proposed scenarios underline the applicability and the generality of our strategy in operational settings.
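The data flow of such a two-branch design can be illustrated with a numpy sketch: each source is processed at its native resolution, and only the extracted features are brought to a common (PAN) resolution before fusion. The mean-filter "feature extractor" below is a hypothetical stand-in for the learned CNN branches of MultiResoLCC:

```python
import numpy as np

def conv_avg(x, k=3):
    """Toy per-band k x k mean filter, standing in for a learned conv branch."""
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(x.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1], :]
    return out / (k * k)

def upsample_nearest(x, factor):
    """Nearest-neighbour upsampling of an (H, W, C) feature map."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

# PAN: 8x8 single band; MS: 4x4 with 4 bands (half the PAN resolution).
pan = np.random.rand(8, 8, 1)
ms = np.random.rand(4, 4, 4)

pan_feat = conv_avg(pan)                      # features at native PAN resolution
ms_feat = upsample_nearest(conv_avg(ms), 2)   # MS features lifted to PAN resolution
fused = np.concatenate([pan_feat, ms_feat], axis=2)
print(fused.shape)  # → (8, 8, 5): joint features for per-pixel classification
```

The key point mirrored here is that no pan-sharpening of the raw imagery happens; only features (not pixels) are resampled and fused.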


2018 ◽  
Vol 7 (3.3) ◽  
pp. 466 ◽  
Author(s):  
V S. Padmavathy ◽  
Dr R. Priya

Image enhancement plays an essential role in a wide range of vision applications. It is used to improve the quality of an image so that it can be easily interpreted by both humans and machines. Contrast creates the visual difference that makes an object distinguishable from the background and from other objects. The major goal of image contrast enhancement is to increase the visual quality of the image. In this research study, various image contrast enhancement techniques are reviewed. This work also presents a comparative study of contrast enhancement techniques in order to identify an effective one.
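One of the simplest contrast enhancement techniques that such comparative studies commonly include is percentile-based linear contrast stretching. A minimal numpy sketch (not tied to any specific method from the paper):

```python
import numpy as np

def stretch_contrast(image, low_pct=2, high_pct=98):
    """Percentile-based linear contrast stretch to the full 0-255 range.

    Intensities below the low percentile map to 0, above the high
    percentile to 255; everything in between is rescaled linearly.
    """
    lo, hi = np.percentile(image, [low_pct, high_pct])
    stretched = np.clip((image.astype(float) - lo) / (hi - lo), 0.0, 1.0)
    return (stretched * 255).astype(np.uint8)

# A low-contrast image confined to the 90-160 intensity band.
img = np.linspace(90, 160, 64, dtype=np.uint8).reshape(8, 8)
out = stretch_contrast(img)
print(out.min(), out.max())  # → 0 255
```

Clipping a small percentile at each end makes the stretch robust to a few outlier pixels, which is why it usually outperforms a plain min-max stretch.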


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3068
Author(s):  
Soumaya Dghim ◽  
Carlos M. Travieso-González ◽  
Radim Burget

The use of image processing tools, machine learning, and deep learning approaches has become very useful and robust in recent years. This paper introduces the detection of Nosema disease, which is considered to be one of the most economically significant diseases today. This work shows a solution for recognizing and identifying Nosema cells among the other objects present in the microscopic image. Two main strategies are examined. The first strategy uses image processing tools to extract the most valuable information and features from the dataset of microscopic images; then, machine learning methods such as an artificial neural network (ANN) and a support vector machine (SVM) are applied to detect and classify the Nosema disease cells. The second strategy explores deep learning and transfer learning. Several approaches were examined, including a convolutional neural network (CNN) classifier and several transfer learning methods (AlexNet, VGG-16 and VGG-19), which were fine-tuned and applied to the object sub-images in order to distinguish Nosema images from the other object images. The best accuracy, 96.25%, was reached by the pre-trained VGG-16 network.
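The first strategy (hand-crafted features plus a classifier) can be sketched as follows. The features and the nearest-centroid classifier below are deliberately simplified hypothetical stand-ins for the paper's actual descriptors and its ANN/SVM models, just to show the pipeline's shape:

```python
import numpy as np

def cell_features(img):
    """Toy descriptors for a candidate cell sub-image (values in [0, 1]):
    mean intensity, intensity spread, and fraction of bright pixels."""
    return np.array([img.mean(), img.std(), (img > 0.5).mean()])

class NearestCentroid:
    """Minimal stand-in for the ANN/SVM classifiers used in the paper."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.centroids = np.array([X[y == c].mean(axis=0) for c in self.classes])
        return self
    def predict(self, X):
        # assign each sample to the class with the nearest feature centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids[None, :, :], axis=2)
        return self.classes[d.argmin(axis=1)]

rng = np.random.default_rng(0)
# Hypothetical training set: "Nosema" sub-images are brighter on average.
nosema = [rng.uniform(0.5, 1.0, (8, 8)) for _ in range(10)]
other = [rng.uniform(0.0, 0.5, (8, 8)) for _ in range(10)]
X = np.array([cell_features(i) for i in nosema + other])
y = np.array([1] * 10 + [0] * 10)

clf = NearestCentroid().fit(X, y)
bright_cell = np.full((8, 8), 0.8)
print(clf.predict(cell_features(bright_cell)[None, :]))  # → [1] (Nosema class)
```

In the paper the decision stage is an ANN or SVM trained on richer features; the point here is only the extract-features-then-classify structure of strategy one.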


Drones ◽  
2021 ◽  
Vol 5 (2) ◽  
pp. 52
Author(s):  
Thomas Lee ◽  
Susan Mckeever ◽  
Jane Courtney

With the rise of Deep Learning approaches in computer vision applications, significant strides have been made towards vehicular autonomy. Research activity in autonomous drone navigation has increased rapidly in the past five years, and drones are moving fast towards the ultimate goal of near-complete autonomy. However, while much work in the area focuses on specific tasks in drone navigation, the contribution to the overall goal of autonomy is often not assessed, and a comprehensive overview is needed. In this work, a taxonomy of drone navigation autonomy is established by mapping the definitions of vehicular autonomy levels, as defined by the Society of Automotive Engineers, to specific drone tasks in order to create a clear definition of autonomy when applied to drones. A top–down examination of research work in the area is conducted, focusing on drone navigation tasks, in order to understand the extent of research activity in each area. Autonomy levels are cross-checked against the drone navigation tasks addressed in each work to provide a framework for understanding the trajectory of current research. This work serves as a guide to research in drone autonomy with a particular focus on Deep Learning-based solutions, indicating key works and areas of opportunity for development of this area in the future.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Venkata Dasu Marri ◽  
Veera Narayana Reddy P. ◽  
Chandra Mohan Reddy S.

Purpose Image classification is a fundamental form of digital image processing in which pixels are labeled as one of the object classes present in the image. Multispectral image classification is a challenging task due to the complexities of images captured by satellites. Accurate image classification is highly essential in remote sensing applications; however, existing machine learning and deep learning-based classification methods have not provided the desired accuracy. The purpose of this paper is to classify the objects in satellite images with greater accuracy. Design/methodology/approach This paper proposes a deep learning-based automated method for classifying multispectral images. In this approach, data sets collected from public databases are first divided into a number of patches and their features are extracted. The features extracted from the patches are then concatenated before a classification method is used to classify the objects in the image. Findings The performance of the proposed modified velocity-based colliding bodies optimization method is compared with existing methods in terms of type-1 measures such as sensitivity, specificity, accuracy, negative predictive value, F1 score and Matthews correlation coefficient, and type-2 measures such as false discovery rate and false positive rate. The statistical results obtained with the proposed method show better performance than existing methods. Originality/value In this work, multispectral image classification accuracy is improved with an optimization algorithm called modified velocity-based colliding bodies optimization.
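The patch-then-concatenate step described in the methodology can be sketched in numpy. The per-band mean/std descriptors are a hypothetical simplification; the paper's actual features and optimizer are not reproduced here:

```python
import numpy as np

def extract_patches(image, patch=4):
    """Split an (H, W, bands) multispectral image into non-overlapping patches."""
    h, w, _ = image.shape
    patches = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            patches.append(image[y:y + patch, x:x + patch, :])
    return patches

def patch_features(p):
    """Per-band mean and standard deviation as simple patch descriptors."""
    return np.concatenate([p.mean(axis=(0, 1)), p.std(axis=(0, 1))])

img = np.random.rand(8, 8, 4)   # toy 4-band multispectral tile
feats = np.concatenate([patch_features(p) for p in extract_patches(img)])
print(feats.shape)  # → (32,): 4 patches x (4 means + 4 stds) each
```

The concatenated vector would then be fed to the classifier, whose parameters the paper tunes with the modified velocity-based colliding bodies optimization algorithm.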


Author(s):  
Darakhshan R. Khan

Region filling, also known as inpainting, is an approach for estimating the values of missing pixels from the data available in the remaining portion of the image. The missing information must be reconstructed in a visually convincing manner, so that the image looks seamless. This research work builds a methodology for completely automating the patch-priority-based region filling process. To reduce computational time, a low-resolution image is constructed from the input image. The patch size is determined from the texels of the image. Several low-resolution images with the missing region filled are generated using a region filling algorithm, and the pixel information from these low-resolution images is consolidated to produce a single low-resolution region-filled image. Finally, a super-resolution algorithm is applied to enhance the quality of the image and regain all of its details. Determining the patch size from the input itself gives this methodology an advantage over filling algorithms that do not truly automate the region filling process; different parameter settings are used to deal with the sensitivity of region filling, and working with a coarse version of the image notably reduces the computational time.
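The core fill step can be illustrated with a deliberately naive numpy sketch: masked pixels are repeatedly replaced by the mean of their known 4-neighbours. This crude diffusion fill stands in for the paper's patch-priority algorithm, and the low-resolution/super-resolution stages are omitted:

```python
import numpy as np

def naive_fill(img, mask):
    """Fill masked pixels by diffusing values inward from known neighbours.

    `mask` is True where pixels are missing. A stand-in for proper
    patch-priority inpainting: fine for smooth regions, poor for texture.
    """
    out, known = img.copy(), ~mask.copy()
    while not known.all():
        for y, x in zip(*np.where(~known)):
            neigh = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            vals = [out[i, j] for i, j in neigh
                    if 0 <= i < out.shape[0] and 0 <= j < out.shape[1]
                    and known[i, j]]
            if vals:                      # fill once a known neighbour exists
                out[y, x] = np.mean(vals)
                known[y, x] = True
    return out

img = np.tile(np.linspace(0, 1, 8), (8, 1))   # smooth gradient image
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 3:5] = True                          # 2x2 missing region
filled = naive_fill(img, mask)
print(np.allclose(filled, img, atol=0.1))      # → True: gradient recovered closely
```

In the paper this filling would run on several low-resolution copies of the image (making the loop cheap), after which the consolidated result is upscaled by a super-resolution step.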


2021 ◽  
Vol 13 (19) ◽  
pp. 3859
Author(s):  
Joby M. Prince Czarnecki ◽  
Sathishkumar Samiappan ◽  
Meilun Zhou ◽  
Cary Daniel McCraine ◽  
Louis L. Wasson

The radiometric quality of remotely sensed imagery is crucial for precision agriculture applications because estimations of plant health rely on the underlying quality. Sky conditions, and specifically shadowing from clouds, are critical determinants in the quality of images that can be obtained from low-altitude sensing platforms. In this work, we first compare common deep learning approaches to classify sky conditions with regard to cloud shadows in agricultural fields using a visible spectrum camera. We then develop an artificial-intelligence-based edge computing system to fully automate the classification process. Training data consisting of 100 oblique angle images of the sky were provided to a convolutional neural network and two deep residual neural networks (ResNet18 and ResNet34) to facilitate learning two classes, namely (1) good image quality expected, and (2) degraded image quality expected. The expectation of quality stemmed from the sky condition (i.e., density, coverage, and thickness of clouds) present at the time of the image capture. These networks were tested using a set of 13,000 images. Our results demonstrated that ResNet18 and ResNet34 classifiers produced better classification accuracy when compared to a convolutional neural network classifier. The best overall accuracy was obtained by ResNet34, which was 92% accurate, with a Kappa statistic of 0.77. These results demonstrate a low-cost solution to quality control for future autonomous farming systems that will operate without human intervention and supervision.
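The Kappa statistic reported above measures agreement beyond what chance alone would produce, which matters here because the two sky-condition classes may be imbalanced. A self-contained numpy implementation (the labels below are hypothetical, not the paper's data):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (observed - chance agreement) / (1 - chance agreement)."""
    classes = np.unique(np.concatenate([y_true, y_pred]))
    po = np.mean(y_true == y_pred)                    # observed agreement
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c)
             for c in classes)                        # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical 'good' (1) vs 'degraded' (0) sky-condition labels:
# 70 truly good (65 classified correctly), 30 degraded (22 correct).
y_true = np.array([1] * 70 + [0] * 30)
y_pred = np.array([1] * 65 + [0] * 5 + [1] * 8 + [0] * 22)
print(round(cohens_kappa(y_true, y_pred), 2))  # → 0.68 (accuracy alone is 0.87)
```

The gap between raw accuracy (0.87 here) and kappa (0.68) shows why the paper reports both: kappa discounts the agreement a trivial majority-class classifier would get for free.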


2019 ◽  
Vol 9 (7) ◽  
pp. 1385 ◽  
Author(s):  
Luca Donati ◽  
Eleonora Iotti ◽  
Giulio Mordonini ◽  
Andrea Prati

Visual classification of commercial products is a branch of the wider fields of object detection and feature extraction in computer vision, and, in particular, it is an important step in the creative workflow in fashion industries. Automatically classifying garment features makes both designers and data experts aware of their overall production, which is fundamental in order to organize marketing campaigns, avoid duplicates, categorize apparel products for e-commerce purposes, and so on. There are many different techniques for visual classification, ranging from standard image processing to machine learning approaches: this work, made by using and testing the aforementioned approaches in collaboration with Adidas AG™, describes a real-world study aimed at automatically recognizing and classifying logos, stripes, colors, and other features of clothing, solely from final rendering images of their products. Specifically, both deep learning and image processing techniques, such as template matching, were used. The result is a novel system for image recognition and feature extraction that has a high classification accuracy and which is reliable and robust enough to be used by a company like Adidas. This paper shows the main problems and proposed solutions in the development of this system, and the experimental results on the Adidas AG™ dataset.
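Template matching, one of the image processing techniques the paper mentions for logo detection, can be sketched with a brute-force normalized cross-correlation in numpy. This is a generic textbook formulation, not the Adidas system's implementation:

```python
import numpy as np

def match_template(image, template):
    """Return the (row, col) where normalized cross-correlation with the
    template peaks. NCC is invariant to brightness offset and scale."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            win = image[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = np.sqrt(np.sum(w ** 2) * np.sum(t ** 2))
            if denom > 0 and np.sum(w * t) / denom > best:
                best, best_pos = np.sum(w * t) / denom, (y, x)
    return best_pos

rng = np.random.default_rng(1)
logo = rng.random((4, 4))              # stand-in for a logo template
scene = rng.random((16, 16)) * 0.1     # low-contrast background
scene[5:9, 7:11] = logo                # embed the logo at (5, 7)
print(match_template(scene, logo))     # → (5, 7)
```

Real systems use FFT-based correlation (or, as in this paper, combine template matching with deep learning) since the brute-force double loop above scales poorly with image size.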

