Single Color Image De-Raining in Outdoor Vision Systems: A 2-Step Approach

Author(s):  
C.B Aiswarya ◽  
K. S. Angel Viji
2021 ◽  
Vol 11 (15) ◽  
pp. 7034
Author(s):  
Hee-Deok Yang

Artificial intelligence technologies and vision systems are used in various devices, such as automotive navigation systems, object-tracking systems, and intelligent closed-circuit televisions. In particular, outdoor vision systems have been applied across numerous fields of analysis. Despite their widespread use, current systems work well only under good weather conditions; they cannot account for inclement conditions, such as rain, fog, mist, and snow. Images captured under inclement conditions degrade the performance of vision systems. Vision systems need to detect, recognize, and remove noise caused by rain, snow, and mist to boost the performance of image-processing algorithms. Several studies have targeted the removal of noise resulting from inclement conditions. We focused on eliminating the effects of raindrops on images captured with outdoor vision systems in which the camera was exposed to rain. An attentive generative adversarial network (ATTGAN) was used to remove raindrops from the images. This network was composed of two parts: an attentive-recurrent network and a contextual autoencoder. The ATTGAN generated an attention map to detect raindrops, and a de-rained image was generated by increasing the number of attentive-recurrent network layers. Increasing these layers prevented gradient sparsity, so that generation was more stable without preventing the network from converging. The experimental results confirmed that the extended ATTGAN could effectively remove various types of raindrops from images.
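For concreteness, the two components can be sketched as follows. This is a minimal PyTorch sketch of the architecture described above; the layer widths, the number of recurrent steps, and the plain convolution standing in for a convolutional LSTM cell are illustrative assumptions, not the paper's exact ATTGAN configuration.

```python
# Minimal sketch of an attentive-recurrent generator: an attention network
# that locates raindrops over several recurrent steps, plus a contextual
# autoencoder that reconstructs the rain-free image. Sizes are assumptions.
import torch
import torch.nn as nn

class AttentiveRecurrentNet(nn.Module):
    """Produces an attention map highlighting raindrop regions over several steps."""
    def __init__(self, steps: int = 4, hidden: int = 32):
        super().__init__()
        self.steps = steps
        self.conv_in = nn.Conv2d(3 + 1, hidden, 3, padding=1)   # image + previous attention map
        self.recur = nn.Conv2d(hidden, hidden, 3, padding=1)    # stand-in for a conv-LSTM cell
        self.conv_out = nn.Conv2d(hidden, 1, 3, padding=1)

    def forward(self, x):
        b, _, h, w = x.shape
        attn = torch.zeros(b, 1, h, w, device=x.device)         # initial (empty) attention map
        maps = []
        for _ in range(self.steps):                              # more steps refine the map
            feat = torch.relu(self.conv_in(torch.cat([x, attn], dim=1)))
            feat = torch.relu(self.recur(feat))
            attn = torch.sigmoid(self.conv_out(feat))
            maps.append(attn)
        return maps                                              # one map per recurrent step

class ContextualAutoencoder(nn.Module):
    """Reconstructs the de-rained image from the input plus the final attention map."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, hidden, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(hidden, hidden, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(hidden, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x, attn):
        return self.decoder(self.encoder(torch.cat([x, attn], dim=1)))

rainy = torch.rand(1, 3, 128, 128)                # dummy rainy image
attn_maps = AttentiveRecurrentNet()(rainy)        # attention maps locating raindrops
derained = ContextualAutoencoder()(rainy, attn_maps[-1])
print(derained.shape)                             # torch.Size([1, 3, 128, 128])
```

In a full GAN setup, a discriminator would additionally be trained to distinguish de-rained outputs from clean images, guided by the same attention maps.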


It is well known that bad weather, e.g., haze, rain, or snow, severely affects the quality of captured images or videos. Raindrops adhering to a glass window or camera lens can also severely reduce the visibility of the background scene and degrade image quality, which consequently degrades the performance of many image processing and computer vision algorithms. These algorithms are used in various applications such as object detection, tracking, recognition, surveillance, and navigation. Rain removal from a video or a single image has been an active research topic over the past decade, and it continues to draw attention in outdoor vision systems (e.g., surveillance) where the ultimate goal is to produce a clear and clean image or video. The most critical task is to separate the rain component from the rest of the scene. For that purpose, we propose an efficient algorithm to remove rain from a color image.
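The abstract does not spell out the two steps, so the sketch below illustrates one common decomposition-based de-raining pipeline (edge-preserving base/detail split, then streak suppression in the detail layer). The filter parameters and the gradient-based streak mask are assumptions for illustration, not necessarily the algorithm proposed here.

```python
# Sketch of a generic two-step single-image de-raining pipeline:
# step 1 decomposes the image into base + detail layers,
# step 2 suppresses streak-like structure in the detail layer.
import cv2
import numpy as np

def derain_two_step(path: str) -> np.ndarray:
    img = cv2.imread(path).astype(np.float32) / 255.0

    # Step 1: edge-preserving smoothing keeps scene structure in the base
    # layer, so most thin rain streaks end up in the detail layer.
    base = cv2.bilateralFilter(img, d=9, sigmaColor=0.1, sigmaSpace=15)
    detail = img - base

    # Step 2: attenuate detail where horizontal change dominates vertical
    # change, i.e. where thin, mostly vertical streaks are likely.
    gray_detail = cv2.cvtColor(detail, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray_detail, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(gray_detail, cv2.CV_32F, 0, 1, ksize=3)   # vertical gradient
    streak_mask = (np.abs(gx) > 2.0 * np.abs(gy)).astype(np.float32)
    streak_mask = cv2.GaussianBlur(streak_mask, (5, 5), 0)[..., None]
    cleaned_detail = detail * (1.0 - streak_mask)

    result = np.clip(base + cleaned_detail, 0.0, 1.0)
    return (result * 255).astype(np.uint8)

# Example usage (hypothetical file names):
# cv2.imwrite("derained.png", derain_two_step("rainy.png"))
```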


Metrologiya ◽  
2020 ◽  
pp. 15-37
Author(s):  
L. P. Bass ◽  
Yu. A. Plastinin ◽  
I. Yu. Skryabysheva

The use of technical (computer) vision systems for Earth remote sensing is considered. An overview of the software and hardware used in computer vision systems for processing satellite images is presented. Algorithmic methods of data processing using trained neural networks are described. Examples of the algorithmic processing of satellite images by means of artificial convolutional neural networks are given. Ways of increasing the accuracy of satellite image recognition are defined. Practical applications of convolutional neural networks onboard microsatellites for Earth remote sensing are presented.
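As an illustration of the kind of network the overview refers to, the following minimal PyTorch sketch defines a small convolutional classifier for satellite image patches; the layer sizes, patch size, and number of classes are assumed for the example and are not taken from the paper.

```python
# Minimal convolutional classifier for satellite image patches (illustrative).
import torch
import torch.nn as nn

class SatellitePatchCNN(nn.Module):
    def __init__(self, num_classes: int = 10, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))          # small head, friendly to on-board hardware
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SatellitePatchCNN()
patch = torch.rand(1, 3, 64, 64)              # one 64x64 RGB satellite patch
print(model(patch).shape)                     # torch.Size([1, 10]) class scores
```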


2015 ◽  
Vol 10 (11) ◽  
pp. 1127
Author(s):  
Nidaa Hasan Abbas ◽  
Sharifah Mumtazah Syed Ahmad ◽  
Wan Azizun Wan Adnan ◽  
Abed Rahman Bin Ramli ◽  
Sajida Parveen

2019 ◽  
Vol 2019 (1) ◽  
pp. 95-98
Author(s):  
Hans Jakob Rivertz

In this paper we give a new method to find a grayscale image from a color image. The idea is that the structure tensors of the grayscale image and the color image should be as equal as possible. This is measured by the energy of the tensor differences. We deduce an Euler-Lagrange equation and a second variational inequality. The second variational inequality is remarkably simple in its form. Our equation does not involve several steps, such as finding a gradient first and then integrating it. We show that if a color image is at least two times continuously differentiable, the resulting grayscale image is not necessarily two times continuously differentiable.
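One plausible formalization of this idea, using standard structure-tensor notation (the exact functional in the paper may differ), is the following:

```latex
% Assumed formalization of the tensor-difference energy described above.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For a color image $I = (I_1, I_2, I_3)$ on a domain $\Omega$ and a candidate
grayscale image $u$, the structure tensors are
\[
  S(I) = \sum_{k=1}^{3} \nabla I_k \, \nabla I_k^{\top},
  \qquad
  S(u) = \nabla u \, \nabla u^{\top},
\]
and $u$ is chosen to minimize the energy of their difference,
\[
  E(u) = \int_{\Omega} \bigl\| S(u) - S(I) \bigr\|_{F}^{2} \, dx ,
\]
whose Euler--Lagrange equation characterizes the optimal grayscale image.
\end{document}
```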


2018 ◽  
Vol 2018 (16) ◽  
pp. 296-1-296-5
Author(s):  
Megan M. Fuller ◽  
Jae S. Lim

2017 ◽  
Vol 2 (1) ◽  
pp. 80-87
Author(s):  
Puyda V. ◽  
Stoian A.

Detecting objects in a video stream is a typical problem in modern computer vision systems that are used in multiple areas. Object detection can be done both on static images and on frames of a video stream. Essentially, object detection means finding color and intensity non-uniformities that can be treated as physical objects. Besides that, operations for finding the coordinates, size, and other characteristics of these non-uniformities can be executed and used to solve other computer vision problems, such as object identification. In this paper, we study three algorithms that can detect objects of different nature and are based on different approaches: detection of color non-uniformities, frame difference, and feature detection. As the input data, we use a video stream obtained from a video camera or from an mp4 video file. Simulations and testing of the algorithms were done on a universal computer based on open-source hardware, built on the Broadcom BCM2711, a quad-core Cortex-A72 (ARM v8) 64-bit SoC with a frequency of 1.5 GHz. The software was created in Visual Studio 2019 using OpenCV 4 on Windows 10, and on a universal computer running Linux (Raspbian Buster OS) for the open-source hardware. In the paper, the methods under consideration are compared. The results can be used in the research and development of modern computer vision systems for different purposes. Keywords: object detection, feature points, keypoints, ORB detector, computer vision, motion detection, HSV color model
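As a concrete illustration of the frame-difference approach mentioned above, the sketch below uses OpenCV from Python rather than the C++/Visual Studio setup described in the paper; the threshold value, dilation, and minimum contour area are illustrative assumptions.

```python
# Frame-difference motion detection sketch with OpenCV (illustrative parameters).
import cv2

cap = cv2.VideoCapture("input.mp4")            # video file, or a camera index such as 0
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)                    # per-pixel change between frames
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)            # close small gaps in moving regions
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                       # ignore tiny non-uniformities
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("moving objects", frame)
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```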

