Video Desnowing and Deraining via Saliency and Dual Adaptive Spatiotemporal Filtering

Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7610
Author(s):  
Yongji Li ◽  
Rui Wu ◽  
Zhenhong Jia ◽  
Jie Yang ◽  
Nikola Kasabov

Outdoor vision sensing systems often struggle with poor weather conditions such as snow and rain, which pose a great challenge to existing video desnowing and deraining methods. In this paper, we propose a novel video desnowing and deraining model that utilizes the saliency information of moving objects to address this problem. First, we remove the snow and rain from the video by low-rank tensor decomposition, which makes full use of the spatial location information and the correlation between the three channels of the color video. Second, because existing algorithms often mistake sparse snowflakes and rain streaks for moving objects, this paper injects saliency information into moving object detection, which reduces both false alarms and missed detections of moving objects. At the same time, feature point matching is used to mine the redundant information of moving objects in consecutive frames, and we propose a dual adaptive minimum filtering algorithm in the spatiotemporal domain to remove snow and rain in front of moving objects. Both qualitative and quantitative experimental results show that the proposed algorithm is more competitive than other state-of-the-art snow and rain removal methods.
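The idea that transient bright streaks can be suppressed along the time axis can be illustrated with a plain temporal minimum filter — a deliberately simplified, hypothetical stand-in for the paper's dual adaptive spatiotemporal filtering (the function name and the fixed window radius are assumptions, not the authors' design):

```python
import numpy as np

def temporal_min_filter(frames, radius=2):
    """frames: (T, H, W) array. Snow and rain brighten pixels briefly,
    so a minimum over a short temporal window suppresses the streaks."""
    T = frames.shape[0]
    out = np.empty_like(frames)
    for t in range(T):
        lo, hi = max(0, t - radius), min(T, t + radius + 1)
        out[t] = frames[lo:hi].min(axis=0)  # per-pixel temporal minimum
    return out
```

A real desnowing pipeline would adapt the window per pixel and restrict filtering around moving regions, which is exactly where the saliency cue enters.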

2021 ◽  
Author(s):  
Bo Shen ◽  
Rakesh R Kamath ◽  
Hahn Choo ◽  
Zhenyu Kong

Background/foreground separation is one of the most fundamental tasks in computer vision, especially for video data. Robust PCA (RPCA) and its tensor extension, Robust Tensor PCA (RTPCA), provide an effective framework for background/foreground separation by decomposing the data into low-rank and sparse components, which contain the background and the foreground (moving objects), respectively. However, in real-world applications, video data is contaminated with noise. For example, in metal additive manufacturing (AM), the processed X-ray video used to study melt pool dynamics is very noisy. RPCA and RTPCA are not able to separate the background, foreground, and noise simultaneously; as a result, the noise contaminates the background, the foreground, or both, and needs to be removed from each. To achieve this three-term decomposition, a smooth sparse Robust Tensor Decomposition (SS-RTD) model is proposed to decompose the data into static background, smooth foreground, and noise, respectively. Specifically, the static background is modeled by a low-rank Tucker decomposition; the smooth foreground (moving objects) is modeled by spatiotemporal continuity, enforced by total variation regularization; and the noise is modeled by sparsity, enforced by the L1 norm. An efficient algorithm based on the alternating direction method of multipliers (ADMM) is implemented to solve the proposed model. Extensive experiments on both simulated and real data demonstrate that the proposed method significantly outperforms state-of-the-art approaches for background/foreground separation in noisy cases.
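The proximal updates at the heart of such ADMM solvers reduce to two closed-form operators: singular-value thresholding for the low-rank term and elementwise soft-thresholding for the L1 term. The following toy alternating scheme is a sketch only — a matrix (not tensor) model that omits the total-variation term and the ADMM dual variables of SS-RTD — showing how data can be split into low-rank, sparse, and residual-noise parts:

```python
import numpy as np

def svt(X, tau):
    # singular-value thresholding: prox of the nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, lam):
    # elementwise soft-thresholding: prox of the L1 norm
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def three_term_split(D, tau=1.0, lam=0.1, iters=100):
    """Alternate L <- svt(D - S), S <- soft(D - L); the residual
    E = D - L - S collects what neither prior explains (noise)."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S, tau)
        S = soft(D - L, lam)
    return L, S, D - L - S
```

The decomposition is exact by construction (L + S + E = D); the priors decide how the data is apportioned among the three terms.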


2013 ◽  
Vol 284-287 ◽  
pp. 3184-3188
Author(s):  
Yea Shuan Huang ◽  
Zhi Hong Ou ◽  
Hung Hsiu Yu ◽  
Hsiang Wen Hsieh

This paper presents a method for detecting feature points in an image and locating their matching correspondence points across images. The proposed method leverages a novel rapid LBP feature point detector to filter out texture-less SURF feature points. The detected feature points, called Non-Uniform SURF feature points, are matched against corresponding feature points from other frames to reliably locate the positions of moving objects. The proposed method consists of two processing modules: Feature Point Extraction (FPE) and Feature Point Mapping (FPM). First, FPE extracts salient feature points with Feature Transform and Feature Point Detection. FPM is then applied to generate motion vectors for each feature point with Feature Descriptor and Feature Point Matching. Experiments are conducted on both artificial template patterns and real scenes captured by a moving camera at different speed settings. Experimental results show that the proposed method outperforms the commonly used SURF feature point detection and matching approach.
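The core step of an FPM-style module — pairing descriptors between frames to obtain motion vectors — can be sketched with nearest-neighbour search plus a Lowe-style ratio test. This is a generic illustration using Euclidean distance; the function name and the ratio value are assumptions, not the paper's descriptors or thresholds:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Return (i, j) index pairs where descriptor i in desc_a matches
    descriptor j in desc_b, keeping only unambiguous matches whose
    nearest neighbour beats the second-nearest by the given ratio."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dist)[:2]          # two closest candidates
        if dist[j] < ratio * dist[k]:        # ratio test rejects ambiguity
            matches.append((i, int(j)))
    return matches
```

Motion vectors then follow as `points_b[j] - points_a[i]` for each matched pair `(i, j)`.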


2014 ◽  
Vol 1048 ◽  
pp. 173-177 ◽  
Author(s):  
Ying Mei Wang ◽  
Yan Mei Li ◽  
Wan Yue Hu

Fabric shape style is one of the most important criteria in textile appearance evaluation and a main factor influencing customer purchasing psychology. First, previous fabric shape style evaluation methods are classified and summarized, and measurement and evaluation methods are discussed from static and dynamic aspects. Then, drawing on computer vision principles, a non-contact method for measuring fabric shape style is put forward. In this method, two high-speed CCD cameras capture fabric movement dynamically, producing sequences of fabric images. Image processing techniques, including pretreatment and feature point matching, are used to obtain 3D motion parameters, providing data support for shape style evaluation.
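With two calibrated cameras, matched feature points can be lifted to 3D motion parameters by triangulation. Below is a minimal linear (DLT) triangulation sketch, assuming known 3x4 projection matrices for the two views; the interface is hypothetical, not the paper's pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched point pair.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image points.
    Each view contributes two rows of the homogeneous system A X = 0."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector = homogeneous 3D point
    return X[:3] / X[3]        # dehomogenize
```

Triangulating the same fabric point in consecutive frame pairs yields its 3D trajectory, from which motion parameters can be derived.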


Proceedings ◽  
2018 ◽  
Vol 2 (18) ◽  
pp. 1193
Author(s):  
Roi Santos ◽  
Xose Pardo ◽  
Xose Fdez-Vidal

The increasing use of autonomous UAVs inside buildings and around human-made structures demands new, accurate, and comprehensive representations of their operating environments. Most 3D scene abstraction methods use invariant feature point matching; nevertheless, sparse 3D point clouds often do not concisely represent the structure of the environment. Likewise, line clouds built from short, redundant segments with inaccurate directions limit the understanding of scenes such as environments with poor texture, or whose texture resembles a repetitive pattern. The presented approach is based on observation and representation models using straight line segments, which resemble the boundaries of an urban indoor or outdoor environment. The goal of this work is a full method based on line matching that provides a complementary approach to state-of-the-art methods when facing 3D scene representation of poorly textured environments for future autonomous UAVs.
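One way to attack the short-redundant-segment problem the abstract criticizes is to merge nearly collinear segments into longer ones before matching. Below is a toy 2D merging step; the tolerances and interface are illustrative assumptions (the paper works with 3D observation models), but the geometric test — same direction, same supporting line — is the standard one:

```python
import numpy as np

def _cross2(a, b):
    # scalar 2D cross product; zero iff a and b are parallel
    return a[0] * b[1] - a[1] * b[0]

def merge_if_collinear(s1, s2, ang_tol=1e-3, dist_tol=1e-3):
    """s1, s2: ((x1, y1), (x2, y2)) segments. Return the single
    spanning segment if they lie on the same line, else None."""
    p1, p2 = np.asarray(s1[0], float), np.asarray(s1[1], float)
    q1, q2 = np.asarray(s2[0], float), np.asarray(s2[1], float)
    d1 = (p2 - p1) / np.linalg.norm(p2 - p1)
    d2 = (q2 - q1) / np.linalg.norm(q2 - q1)
    if abs(_cross2(d1, d2)) > ang_tol:        # directions disagree
        return None
    if abs(_cross2(d1, q1 - p1)) > dist_tol:  # parallel but offset lines
        return None
    pts = np.array([p1, p2, q1, q2])
    t = pts @ d1                               # project endpoints on the line
    return tuple(pts[np.argmin(t)]), tuple(pts[np.argmax(t)])
```

Folding redundant fragments into one well-supported segment both shrinks the line cloud and stabilizes the direction estimate used for matching.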


2021 ◽  
Author(s):  
Junchong Huang ◽  
Wei Tian ◽  
Yongkun Wen ◽  
Zhan Chen ◽  
Yuyao Huang
