Multiresolution Wavelet Transform Based Anisotropic Diffusion for Removing Speckle Noise in a Real-Time Vision-Based Database

Author(s):  
Rohini Mahajan ◽  
Devanand Padha

In this research article, a novel algorithm is introduced to identify noisy pixels in video frames and correct them to enhance video quality. The technique consists of three stages: fragmentation of the video sequences into their respective 2D frames, identification of noisy pixels in the 2D frames, and denoising of those pixels to recover the original pixel values. Noise variation arises from background complexity and from changes in the appearance of bodies in motion. Several researchers have noted that denoising video sequences requires spatio-temporal filtering that identifies noise while preserving edges. In the first stage, the video sequences are analyzed to remove redundant frames, using the video fragmentation process in the MATLAB toolbox. In the next stage, color smoothing is applied to the target frames to process the flat regions and identify all the noisy pixels. In the final stage, an improved multiresolution wavelet transform based anisotropic diffusion filter is applied, which enhances the denoising process in the horizontal, vertical, and diagonal sub-bands of the video frame signal. The proposed technique can remove speckle noise and estimate motion while preserving the minute details of the processed video frames.
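
As a rough illustration of the final stage only, the sketch below applies a single-level 2D discrete wavelet transform to a frame, runs a few Perona-Malik anisotropic diffusion iterations on the horizontal, vertical, and diagonal detail sub-bands, and reconstructs the frame. It is a minimal approximation of the idea, not the authors' implementation; the wavelet (`db2`), the diffusion parameters, and the periodic border handling are assumptions.

```python
import numpy as np
import pywt

def anisotropic_diffusion(band, n_iter=10, kappa=30.0, lam=0.2):
    """Perona-Malik diffusion on one sub-band (exponential conductance, periodic borders)."""
    band = band.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences toward the four neighbours.
        dn = np.roll(band, -1, axis=0) - band
        ds = np.roll(band, 1, axis=0) - band
        de = np.roll(band, -1, axis=1) - band
        dw = np.roll(band, 1, axis=1) - band
        # Conductance favours diffusion in flat regions, preserving edges.
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        band += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return band

def denoise_frame(frame):
    """Single-level wavelet decomposition, diffusion on detail sub-bands, reconstruction."""
    ll, (lh, hl, hh) = pywt.dwt2(frame.astype(np.float64), 'db2')
    lh, hl, hh = (anisotropic_diffusion(b) for b in (lh, hl, hh))
    return pywt.idwt2((ll, (lh, hl, hh)), 'db2')

if __name__ == '__main__':
    noisy = np.random.rand(240, 320) * 255   # stand-in for a grayscale video frame
    print(denoise_frame(noisy).shape)
```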

2012 ◽  
Vol 263-266 ◽  
pp. 2364-2368
Author(s):  
Dong Lin Ma ◽  
Xi Jun Zhang ◽  
Qian Mi

In this paper, a video summarization representation algorithm in the compressed domain is proposed. In particular, Rough Set (RS) theory is introduced into the video analysis to improve efficiency. First, DCT coefficients and DC coefficients are extracted from the video image sequences, and an Information System is constructed from the DC coefficients. The Information System is then reduced using the reduction theory of RS, and the representation of each video frame is obtained from the reduced DC coefficients. Finally, the reduced Information System, i.e., the Core of the Information System, is obtained. Since the Core retains all the information in the video sequences while discarding redundant video frames, it can be viewed as an effective summarization representation. Experimental results indicate that the algorithm can efficiently generate a set of frames representative of the video sequences and offers the following advantages: only a subset of video frames is considered during video analysis, which avoids high computational complexity, and the resulting video summarization representation is more scientific than that of previous methods.
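
The sketch below imitates the pipeline in a much-simplified form: DC coefficients are taken from the 8x8 block DCT of each frame, quantized to build an information system (frames as objects, blocks as attributes), and frames indiscernible from an already-kept frame are discarded, leaving a core set of representative frames. The block size, quantization step, and the equivalence-class pruning are illustrative assumptions, not the authors' exact rough-set reduct computation.

```python
import numpy as np
from scipy.fft import dctn

def dc_coefficients(frame, block=8):
    """DC coefficient of the 2D DCT of each non-overlapping block."""
    h, w = (frame.shape[0] // block) * block, (frame.shape[1] // block) * block
    dcs = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            dcs.append(dctn(frame[y:y + block, x:x + block], norm='ortho')[0, 0])
    return np.array(dcs)

def summarize(frames, q_step=16.0):
    """Keep one representative frame per indiscernibility class of quantized DC vectors."""
    seen, core = set(), []
    for idx, frame in enumerate(frames):
        attributes = tuple(np.round(dc_coefficients(frame) / q_step).astype(int))
        if attributes not in seen:      # frame is discernible from all kept frames
            seen.add(attributes)
            core.append(idx)
    return core

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    clip = [rng.random((64, 64)) * 255 for _ in range(5)]
    clip.insert(2, clip[0].copy())      # inject a redundant frame
    print(summarize(clip))              # the duplicate frame is dropped
```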


Author(s):  
S. ARIVAZHAGAN ◽  
W. SYLVIA LILLY JEBARANI ◽  
G. KUMARAN

Automatic target tracking is a challenging task in video surveillance applications. Here, an offline target-tracking system for video sequences using the Discrete Wavelet Transform is presented. The proposed algorithm uses co-occurrence features, derived from the sub-bands of discrete wavelet transformed sub-blocks obtained from individual video frames, to identify a seed in the frame. The region-growing algorithm is then applied to detect and track the target. The results of the proposed target detection and tracking system on video sequences are found to be satisfactory. The effectiveness of the target-tracking algorithm is demonstrated by the fact that the target is detected irrespective of its size, the perspective view, and clutter in the environment.
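
A minimal sketch of the detection idea, under stated assumptions: each frame is split into blocks, co-occurrence contrast is computed on the approximation sub-band of each block's DWT, the block with the highest contrast supplies a seed point, and region growing (here skimage's flood fill with a tolerance) segments the target from that seed. The block size, wavelet, feature choice, and tolerance are all illustrative, not taken from the paper.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from skimage.segmentation import flood

def block_contrast(block, wavelet='haar'):
    """Co-occurrence contrast of the approximation sub-band of a block's DWT."""
    ll, _ = pywt.dwt2(block.astype(np.float64), wavelet)
    scaled = np.uint8(255 * (ll - ll.min()) / (np.ptp(ll) + 1e-9))
    glcm = graycomatrix(scaled, distances=[1], angles=[0], levels=256, symmetric=True)
    return graycoprops(glcm, 'contrast')[0, 0]

def detect_target(frame, block=32, tolerance=60):
    """Pick the block with the strongest texture response and grow a region from its centre."""
    h, w = frame.shape
    best, seed = -1.0, (0, 0)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            c = block_contrast(frame[y:y + block, x:x + block])
            if c > best:
                best, seed = c, (y + block // 2, x + block // 2)
    # Region growing: boolean mask of pixels reachable from the seed within tolerance.
    return flood(frame.astype(np.int32), seed, tolerance=tolerance)

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    frame = np.full((128, 128), 30, dtype=np.uint8)              # flat background
    frame[32:96, 32:96] = rng.integers(150, 200, size=(64, 64))  # textured synthetic target
    mask = detect_target(frame)
    print(int(mask.sum()))                                       # size of the grown region
```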


2013 ◽  
Vol 397-400 ◽  
pp. 2167-2170
Author(s):  
Ming Ming Gu ◽  
Qi Jing

Compressed sensing with the Generalized Hebbian Algorithm (GHA) for video frame prediction is proposed in this paper. After analyzing the inter-frame correlation among the images of video sequences, GHA, a neural-network algorithm for PCA, is adopted to remove the transform coefficients with lower values in order to implement video compressed sensing. Furthermore, because the statistics of adjacent frames are sufficiently similar, the algorithm performs well in video frame prediction. Simulation results show that the proposed algorithm can not only improve the reconstruction quality and the visual appearance of the video sequence, but also save sampling resources. Moreover, video frames can be predicted effectively through the application of the algorithm.
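
The core building block, Sanger's Generalized Hebbian Algorithm, can be sketched as follows: the weight rows converge toward the leading principal components of the frame vectors, so a frame can be represented by, and reconstructed from, a handful of coefficients, with the low-value components simply dropped. The vector length, learning rate, and the reconstruction step are assumptions for illustration; this is not the paper's exact scheme.

```python
import numpy as np

def gha_train(X, n_components=8, lr=1e-3, epochs=20, seed=0):
    """Sanger's rule: dW = lr * (y x^T - lower_tri(y y^T) W); rows approach principal components."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(n_components, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

def compress_and_reconstruct(W, x, keep=4):
    """Project onto the learned components, zero the low-value ones, reconstruct."""
    y = W @ x
    y[keep:] = 0.0                      # discard lower-order coefficients
    return W.T @ y

if __name__ == '__main__':
    rng = np.random.default_rng(1)
    # Correlated "frames": a slowly drifting pattern plus noise, flattened to vectors.
    base = rng.normal(size=64)
    frames = np.array([base * (1 + 0.01 * t) + 0.05 * rng.normal(size=64) for t in range(200)])
    frames -= frames.mean(axis=0)       # GHA assumes zero-mean data
    W = gha_train(frames)
    x_hat = compress_and_reconstruct(W, frames[-1])
    print(float(np.linalg.norm(frames[-1] - x_hat) / np.linalg.norm(frames[-1])))
```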


Author(s):  
Ali Al-Naji ◽  
Javaan Chahl

Vital parameter monitoring systems based on video camera imagery are a growing field of interest in clinical and biomedical applications. Heart rate (HR) is one of the most important vital parameters of interest in a clinical diagnostic and monitoring system. This study proposed a noncontact HR and beat-length measurement system based on both motion magnification and motion detection at four different regions of interest (ROIs): the wrist, arm, neck, and leg. Motion magnification based on a Chebyshev filter was utilized to magnify heart pulses in the different ROIs that are difficult to see with the naked eye. A new measuring system based on motion detection was used to measure HR and beat length by detecting rapid-motion areas in the video frame sequences that represent the heart pulses and converting the video frames into a corresponding logical matrix. Video quality metrics were also used to compare the proposed magnification system with standard Eulerian video magnification and determine which yields better magnification of the heart pulse. The 99.3% limits of agreement between the proposed system and the reference measurement fall within [Formula: see text] beats/min based on the Bland-Altman test. The proposed system is expected to open new options for further noncontact information extraction.
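
A rough sketch of the two ingredients, assuming a grayscale frame stack, a heart-rate band of 0.75-3 Hz, and an amplification factor of 20 (none of these values are taken from the paper): each pixel's intensity over time is band-pass filtered with a Chebyshev type I filter and amplified, and a logical matrix marks pixels whose filtered variation exceeds a threshold so that periodic pulse motion can be located.

```python
import numpy as np
from scipy.signal import cheby1, filtfilt

def magnify_pulse(frames, fs, low=0.75, high=3.0, alpha=20.0, order=4, ripple=0.5):
    """Temporal Chebyshev band-pass per pixel, amplified and added back (Eulerian-style)."""
    b, a = cheby1(order, ripple, [low, high], btype='bandpass', fs=fs)
    pulse = filtfilt(b, a, frames.astype(np.float64), axis=0)   # time is axis 0
    return frames + alpha * pulse, pulse

def motion_mask(pulse, threshold=0.1):
    """Logical matrix: True where the filtered temporal variation is strong."""
    return np.abs(pulse) > threshold

if __name__ == '__main__':
    fs, seconds = 30, 10
    t = np.arange(fs * seconds) / fs
    frames = np.zeros((len(t), 32, 32))
    frames[:, 10:20, 10:20] = 0.2 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]  # 72 bpm pulse
    frames += 0.01 * np.random.default_rng(0).normal(size=frames.shape)
    magnified, pulse = magnify_pulse(frames, fs)
    mask = motion_mask(pulse)
    print(int(mask.any(axis=0).sum()))   # number of pixels flagged as pulsatile
```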


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2872
Author(s):  
Miroslav Uhrina ◽  
Anna Holesova ◽  
Juraj Bienik ◽  
Lukas Sevcik

This paper deals with the impact of content on perceived video quality evaluated using the subjective Absolute Category Rating (ACR) method. The assessment was conducted on eight types of video sequences with diverse content obtained from the SJTU dataset. The sequences were encoded at 5 different constant bitrates in two widely used video compression standards, H.264/AVC and H.265/HEVC, at Full HD and Ultra HD resolutions, which means 160 annotated video sequences were created. The length of the Group of Pictures (GOP) was set to half the framerate value, as is typical for video intended for transmission over a noisy communication channel. The evaluation was performed in two laboratories: one situated at the University of Zilina, and the second at the VSB-Technical University of Ostrava. The results acquired in the two laboratories showed a high correlation. Although the sequences with low Spatial Information (SI) and Temporal Information (TI) values reached better Mean Opinion Score (MOS) values than the sequences with higher SI and TI values, these two parameters are not sufficient for scene description, and this domain should be the subject of further research. The evaluation results led us to the conclusion that it is unnecessary to use the H.265/HEVC codec for compression of Full HD sequences, and that the compression efficiency of the H.265 codec at Ultra HD resolution matches the compression efficiency of both codecs at Full HD resolution. This paper also includes recommendations for minimum bitrate thresholds at which the video sequences at both resolutions retain good and fair subjectively perceived quality.
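
For reference, ACR scores are conventionally aggregated into a Mean Opinion Score with a confidence interval; the short sketch below shows that standard computation on made-up ratings (the values are not from the paper).

```python
import numpy as np
from scipy import stats

def mos_with_ci(ratings, confidence=0.95):
    """Mean Opinion Score of ACR ratings (1-5) with a Student-t confidence half-width."""
    r = np.asarray(ratings, dtype=float)
    mos = r.mean()
    half_width = stats.t.ppf((1 + confidence) / 2, df=len(r) - 1) * r.std(ddof=1) / np.sqrt(len(r))
    return mos, half_width

if __name__ == '__main__':
    # Hypothetical ratings for one sequence/bitrate combination.
    ratings = [4, 5, 4, 3, 4, 4, 5, 3, 4, 4]
    mos, ci = mos_with_ci(ratings)
    print(f'MOS = {mos:.2f} +/- {ci:.2f}')
```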


2020 ◽  
Vol 34 (07) ◽  
pp. 10607-10614 ◽  
Author(s):  
Xianhang Cheng ◽  
Zhenzhong Chen

Learning to synthesize non-existing frames from the original consecutive video frames is a challenging task. Recent kernel-based interpolation methods predict pixels with a single convolution process to replace the dependency on optical flow. However, when scene motion is larger than the pre-defined kernel size, these methods yield poor results even though they take thousands of neighboring pixels into account. To solve this problem, in this paper we propose deformable separable convolution (DSepConv), which adaptively estimates kernels, offsets, and masks to allow the network to obtain information from far fewer but more relevant pixels. In addition, we show that kernel-based methods and conventional flow-based methods are specific instances of the proposed DSepConv. Experimental results demonstrate that our method significantly outperforms the other kernel-based interpolation methods and performs on par with, or even better than, the state-of-the-art algorithms both qualitatively and quantitatively.
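
To make the kernel-based baseline concrete, the sketch below applies per-pixel separable kernels to two input frames and blends the results; this is the fixed-grid special case that DSepConv generalizes with learned offsets and masks. The kernel size and the random "predicted" kernels are placeholders, and the deformable sampling itself is omitted, so this is not the paper's network.

```python
import torch
import torch.nn.functional as F

def separable_adaptive_conv(frame, k_vert, k_horiz):
    """Apply a per-pixel separable KxK kernel (outer product of 1-D kernels) to a frame.

    frame:   (B, C, H, W)
    k_vert:  (B, K, H, W) vertical 1-D kernels, one per output pixel
    k_horiz: (B, K, H, W) horizontal 1-D kernels, one per output pixel
    """
    b, c, h, w = frame.shape
    k = k_vert.shape[1]
    patches = F.unfold(frame, kernel_size=k, padding=k // 2)   # (B, C*K*K, H*W)
    patches = patches.view(b, c, k, k, h, w)
    # Weighted sum over the KxK neighbourhood with weights k_vert[i] * k_horiz[j].
    return torch.einsum('bcijhw,bihw,bjhw->bchw', patches, k_vert, k_horiz)

def interpolate_middle(frame0, frame1, kernels0, kernels1):
    """Blend the two kernel-filtered frames into the intermediate frame."""
    out0 = separable_adaptive_conv(frame0, *kernels0)
    out1 = separable_adaptive_conv(frame1, *kernels1)
    return 0.5 * (out0 + out1)

if __name__ == '__main__':
    B, C, H, W, K = 1, 3, 64, 64, 5
    f0, f1 = torch.rand(B, C, H, W), torch.rand(B, C, H, W)
    # Stand-in for kernels a network would predict; softmax keeps each 1-D kernel normalized.
    kv0, kh0, kv1, kh1 = (torch.softmax(torch.rand(B, K, H, W), dim=1) for _ in range(4))
    mid = interpolate_middle(f0, f1, (kv0, kh0), (kv1, kh1))
    print(mid.shape)   # torch.Size([1, 3, 64, 64])
```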


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1949
Author(s):  
Lukas Sevcik ◽  
Miroslav Voznak

Video quality evaluation needs a combined approach that includes subjective and objective metrics, testing, and monitoring of the network. This paper deals with a novel approach to mapping quality of service (QoS) to quality of experience (QoE): QoE metrics are used to determine user satisfaction limits, and QoS tools are applied to provide the minimum QoE expected by users. Our aim was to connect objective estimations of video quality with subjective estimations. A comprehensive tool for the estimation of the subjective evaluation is proposed. The idea is based on evaluating and marking video sequences with a sentinel flag derived from the spatial information (SI) and temporal information (TI) of individual video frames. The authors of this paper created a video database for quality evaluation and derived SI and TI from each video sequence to classify the scenes. Video scenes from the database were evaluated by objective and subjective assessment. Based on the results, a new model for the prediction of subjective quality is defined and presented in this paper. This quality is predicted using an artificial neural network based on the objective evaluation and the type of video sequence, defined by qualitative parameters such as resolution, compression standard, and bitstream. Furthermore, the authors created an optimum mapping function to define the threshold for the variable bitrate setting based on the flag in the video, which determines the type of scene in the proposed model. This function allows one to allocate a bitrate dynamically for a particular segment of the scene while maintaining the desired quality. Our proposed model can help video service providers increase the comfort of end users. The variable bitstream ensures consistent video quality and customer satisfaction, while network resources are used effectively. The proposed model can also predict the appropriate bitrate based on the required quality of video sequences, defined using either objective or subjective assessment.
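
The SI and TI values that drive the sentinel flag follow the standard ITU-T P.910 definitions, which a sketch like the one below can reproduce; the Sobel implementation and the synthetic clip are assumptions, not the authors' exact tooling.

```python
import numpy as np
from scipy import ndimage

def spatial_information(frames):
    """P.910 SI: maximum over frames of the spatial std-dev of the Sobel-filtered frame."""
    si_per_frame = []
    for f in frames.astype(np.float64):
        gx, gy = ndimage.sobel(f, axis=1), ndimage.sobel(f, axis=0)
        si_per_frame.append(np.hypot(gx, gy).std())
    return max(si_per_frame)

def temporal_information(frames):
    """P.910 TI: maximum over frame pairs of the std-dev of the frame difference."""
    diffs = np.diff(frames.astype(np.float64), axis=0)
    return max(d.std() for d in diffs)

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    clip = rng.integers(0, 256, size=(30, 72, 128)).astype(np.uint8)   # synthetic luma frames
    print(f'SI = {spatial_information(clip):.1f}, TI = {temporal_information(clip):.1f}')
```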

