A Hybrid Based Approach for Object Tracking in Video

Author(s):  
Amith. R ◽  
V.N. Manjunath Aradhya

Tracking of moving objects in video sequences is essential for many computer vision applications and is considered a challenging research issue due to dynamic changes in object shape, complex backgrounds, illumination changes, and occlusion. Many traditional tracking algorithms fail to track moving objects in real time. This paper proposes a robust method to overcome this issue, based on the combination of a particle filter and Principal Component Analysis (PCA), which predicts the position of the object in the image sequence using stable wavelet features extracted from a multi-scale 2-D discrete wavelet transform. PCA is then used to construct an effective subspace. The similarity between the object model and the prediction obtained from the particle filter is used to update the feature vector, allowing the tracker to handle occlusion and complex backgrounds in video frames. Experimental results obtained from the proposed method are encouraging.
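
As a rough illustration of the prediction-and-weighting step described above, the sketch below samples candidate positions with a particle filter, extracts Haar wavelet features from each candidate patch, and scores them by their residual against a PCA subspace of the object model. The patch size, motion noise, and likelihood bandwidth are illustrative assumptions, and `pf_step` / `wavelet_feature` are hypothetical helper names, not the authors' implementation.

```python
# Hypothetical sketch of one tracking update, assuming an object template
# subspace (mean, basis) already learned by PCA from earlier frames.
import numpy as np
import pywt

def wavelet_feature(patch):
    """Flattened approximation coefficients of a 2-D Haar DWT."""
    cA, _ = pywt.dwt2(patch.astype(float), "haar")
    return cA.ravel()

def pf_step(frame, particles, mean, basis, patch=32, sigma=4.0):
    """Propagate particles, weight them by PCA reconstruction error of the
    wavelet features, and return updated particles plus a position estimate.
    frame: 2-D grayscale image; particles: (N, 2) array of (x, y)."""
    particles = particles + np.random.normal(0.0, sigma, particles.shape)
    h, w = frame.shape
    weights = np.empty(len(particles))
    for i, (x, y) in enumerate(particles):
        x = int(np.clip(x, 0, w - patch))
        y = int(np.clip(y, 0, h - patch))
        f = wavelet_feature(frame[y:y + patch, x:x + patch]) - mean
        resid = f - basis @ (basis.T @ f)        # distance to the subspace
        weights[i] = np.exp(-np.linalg.norm(resid) ** 2 / 1e4)
    weights /= weights.sum() + 1e-12
    return particles, (weights[:, None] * particles).sum(axis=0)
```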

2011 ◽  
Vol 225-226 ◽  
pp. 403-406
Author(s):  
Xin Zhang ◽  
Xiao Tao Wang ◽  
Bing Wang ◽  
Yue Hua Gao

Human Skin Color (HSC) features have been widely used for locating moving people in video. However, in video sequences with complex backgrounds, illumination changes or other moving objects with similar HSC regions make positioning unsatisfactory. This paper presents a new method for locating moving people in complex-background video sequences. First, brightness information in the video frames is detected and analyzed using the HSV color model. Second, a multi-frame subtraction method extracts moving-object regions from the motionless background. Then, regions with distinctive HSC features are separated from other moving objects using a data-fusion model of HSC and brightness information. Finally, the human object is identified among the HSC regions according to prior knowledge of the human body. Experimental results show that the method is effective for locating moving people in complex-background video and is robust to illumination changes and interference.
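
A minimal OpenCV sketch of this pipeline is given below: multi-frame subtraction isolates moving regions, and an HSV skin mask (gated by the brightness channel) keeps only candidate human regions. The skin-hue range and difference threshold are illustrative assumptions, not the paper's values.

```python
# A minimal OpenCV sketch, assuming BGR input frames; the skin-hue range
# and the difference threshold are illustrative, not the paper's values.
import cv2
import numpy as np

def skin_motion_mask(prev, curr, nxt, diff_thresh=25):
    """Combine multi-frame subtraction with an HSV skin/brightness mask."""
    gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in (prev, curr, nxt)]
    # Multi-frame subtraction: keep pixels moving in both adjacent differences.
    d1 = cv2.threshold(cv2.absdiff(gray[1], gray[0]), diff_thresh, 255,
                       cv2.THRESH_BINARY)[1]
    d2 = cv2.threshold(cv2.absdiff(gray[2], gray[1]), diff_thresh, 255,
                       cv2.THRESH_BINARY)[1]
    motion = cv2.bitwise_and(d1, d2)
    # HSV analysis: an assumed skin-hue range gated by brightness (V channel).
    hsv = cv2.cvtColor(curr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    return cv2.bitwise_and(motion, skin)
```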


2017 ◽  
Vol 17 (3) ◽  
pp. 117-124 ◽  
Author(s):  
Yi Ji ◽  
Shanlin Sun ◽  
Hong-Bo Xie

Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels are usually flattened into a one-dimensional array, causing issues such as the curse-of-dimensionality dilemma and the small-sample-size problem. In addition, the lack of time-shift invariance of WT coefficients can be modeled as noise and degrades classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. Two-directional two-dimensional PCA then operates on these multi-scale matrices to reduce dimensionality, rather than on vectors as in conventional PCA. Results are presented from an experiment classifying eight hand motions using 4-channel electromyographic (EMG) signals recorded from healthy subjects and amputees, illustrating the efficiency and effectiveness of the proposed method for biomedical signal analysis.
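
The two main steps can be sketched under stated assumptions: a stationary wavelet transform builds a time-invariant multi-scale matrix per EMG channel, and two-directional 2-D PCA learns row- and column-direction bases so each segment is represented by a small matrix rather than a long vector. The wavelet family, decomposition level, and numbers of retained components below are assumptions for the example, not the paper's settings.

```python
# Illustrative sketch: stationary wavelet transform builds a time-invariant
# multi-scale matrix, then two-directional 2-D PCA learns projections on both
# sides. Wavelet, level, and component counts are assumptions for the example.
import numpy as np
import pywt

def multiscale_matrix(signal, wavelet="db4", level=3):
    """Stack SWT coefficients into a (2*level, N) matrix.
    Note: len(signal) must be a multiple of 2**level for pywt.swt."""
    coeffs = pywt.swt(signal, wavelet, level=level)      # [(cA, cD), ...]
    return np.vstack([c for pair in coeffs for c in pair])

def fit_2d2pca(mats, k_row=4, k_col=8):
    """Learn left (row-direction) and right (column-direction) bases."""
    mean = np.mean(mats, axis=0)
    col_cov = sum((m - mean).T @ (m - mean) for m in mats) / len(mats)
    row_cov = sum((m - mean) @ (m - mean).T for m in mats) / len(mats)
    Z = np.linalg.eigh(row_cov)[1][:, -k_row:]   # left projection
    X = np.linalg.eigh(col_cov)[1][:, -k_col:]   # right projection
    return Z, X

# Feature for one EMG segment M: Z.T @ M @ X, a small (k_row, k_col) matrix.
```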


2011 ◽  
Vol 110-116 ◽  
pp. 3343-3350
Author(s):  
Qi Yang ◽  
Jia Fu Jiang

The complexity of video backgrounds is an important reason why moving-target tracking algorithms lack robustness. To address the limitations of existing algorithms, a tracking algorithm based on a particle-filter framework is proposed. To reduce the impact of occlusion, the algorithm makes full use of the color and motion characteristics of the moving target for detection, and to avoid interference from complex backgrounds, the object's color histogram is analyzed within the particle-filter framework. Finally, an effective comparison of the computational cost is given. Experimental results show that the particle-filter-based target tracking algorithm can effectively remove interference from complex backgrounds and track the target robustly in a variety of contexts.
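
As a concrete sketch of the color-histogram analysis inside the particle-filter framework, the snippet below weights each candidate particle by the Bhattacharyya similarity between its hue histogram and the reference object model. The window size and likelihood bandwidth are illustrative assumptions.

```python
# Sketch of color-histogram weighting inside a particle-filter framework;
# the window size and likelihood bandwidth are illustrative assumptions.
import cv2
import numpy as np

def hue_hist(patch):
    """Normalized 32-bin hue histogram of a BGR patch."""
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    h = cv2.calcHist([hsv], [0], None, [32], [0, 180])
    return cv2.normalize(h, h).flatten()

def weight_particles(frame, particles, ref_hist, box=40, lam=20.0):
    """Weight each particle by similarity of its hue histogram to the model."""
    H, W = frame.shape[:2]
    weights = []
    for x, y in particles.astype(int):
        x = int(np.clip(x, 0, W - box))
        y = int(np.clip(y, 0, H - box))
        d = cv2.compareHist(ref_hist, hue_hist(frame[y:y + box, x:x + box]),
                            cv2.HISTCMP_BHATTACHARYYA)
        weights.append(np.exp(-lam * d * d))   # smaller distance, larger weight
    w = np.asarray(weights)
    return w / (w.sum() + 1e-12)
```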


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Wenyi Wang ◽  
Jun Hu ◽  
Xiaohong Liu ◽  
Jiying Zhao ◽  
Jianwen Chen

In this paper, we propose a hybrid super-resolution method that combines global and local dictionary training in the sparse domain. In order to represent and differentiate the feature mapping at different scales, a global dictionary set is trained at multiple structural scales, and a non-linear function is used to choose the appropriate dictionary for an initial reconstruction of the high-resolution (HR) image. In addition, we introduce Gaussian blur to the low-resolution (LR) images to eliminate the widely used but inappropriate assumption that LR images are generated by bicubic interpolation from HR images. To deal with the Gaussian blur, a local dictionary is generated and iteratively updated by K-means principal component analysis (K-PCA) and gradient descent (GD) to model the blur effect during down-sampling. Compared with state-of-the-art SR algorithms, the experimental results reveal that the proposed method produces sharper boundaries and suppresses undesired artifacts in the presence of Gaussian blur. This implies that our method could be more effective in real applications and that the HR-LR mapping relation is more complicated than bicubic interpolation.
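
To make the degradation assumption concrete, the short sketch below simulates the LR acquisition the method targets: a Gaussian blur followed by down-sampling, rather than plain bicubic interpolation from the HR image. Kernel size, sigma, and scale factor are illustrative assumptions.

```python
# Sketch of the assumed degradation model: Gaussian blur followed by
# down-sampling, instead of plain bicubic interpolation from the HR image.
# Kernel size, sigma, and scale factor are illustrative assumptions.
import cv2

def degrade(hr, scale=3, ksize=7, sigma=1.6):
    """Simulate LR acquisition from an HR image."""
    blurred = cv2.GaussianBlur(hr, (ksize, ksize), sigma)
    h, w = hr.shape[:2]
    return cv2.resize(blurred, (w // scale, h // scale),
                      interpolation=cv2.INTER_CUBIC)
```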


2020 ◽  
Vol 13 (1) ◽  
pp. 60
Author(s):  
Chenjie Wang ◽  
Chengyuan Li ◽  
Jun Liu ◽  
Bin Luo ◽  
Xin Su ◽  
...  

Most scenes in practical applications are dynamic scenes containing moving objects, so accurately segmenting moving objects is crucial for many computer vision applications. In order to efficiently segment all the moving objects in the scene, regardless of whether the object has a predefined semantic label, we propose a two-level nested octave U-structure network with a multi-scale attention mechanism, called U2-ONet. U2-ONet takes two RGB frames, the optical flow between these frames, and the instance segmentation of the frames as inputs. Each stage of U2-ONet is filled with the newly designed octave residual U-block (ORSU block) to enhance the ability to obtain more contextual information at different scales while reducing the spatial redundancy of the feature maps. In order to efficiently train the multi-scale deep network, we introduce a hierarchical training supervision strategy that calculates the loss at each level while adding knowledge-matching loss to keep the optimization consistent. The experimental results show that the proposed U2-ONet method can achieve a state-of-the-art performance in several general moving object segmentation datasets.
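
A hedged PyTorch-style sketch of the hierarchical training supervision is given below: each decoder level emits a side output, a per-level loss is summed, and a simple consistency term stands in for the knowledge-matching loss. The specific losses and weighting are assumptions for the example, not the published training recipe.

```python
# Hedged PyTorch-style sketch of hierarchical supervision: per-level losses
# plus a simple consistency term standing in for the knowledge-matching loss.
import torch
import torch.nn.functional as F

def hierarchical_loss(side_outputs, target, km_weight=0.1):
    """side_outputs: list of (B, 1, h, w) logits at decreasing resolutions;
    target: (B, 1, H, W) binary moving-object mask."""
    total = 0.0
    for logits in side_outputs:
        t = F.interpolate(target, size=logits.shape[-2:], mode="nearest")
        total = total + F.binary_cross_entropy_with_logits(logits, t)
    # Consistency between adjacent levels (assumed knowledge-matching term).
    for a, b in zip(side_outputs[:-1], side_outputs[1:]):
        b_up = F.interpolate(b, size=a.shape[-2:], mode="bilinear",
                             align_corners=False)
        total = total + km_weight * F.mse_loss(torch.sigmoid(a),
                                               torch.sigmoid(b_up))
    return total
```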


2021 ◽  
Vol 13 (3) ◽  
pp. 335
Author(s):  
Yuhao Qing ◽  
Wenyi Liu

In recent years, image classification on hyperspectral imagery using deep learning algorithms has attained good results. Spurred by that finding, and to further improve deep learning classification accuracy, we propose a multi-scale residual convolutional neural network model fused with an efficient channel attention network (MRA-NET) that is appropriate for hyperspectral image classification. The suggested technique comprises a multi-stage architecture: first, the spectral information of the hyperspectral image is reduced to a two-dimensional tensor using a principal component analysis (PCA) scheme; then, the constructed low-dimensional image is input to our proposed ECA-NET deep network, which exploits the advantages of its core components, i.e., the multi-scale residual structure and the attention mechanism. We evaluate the performance of the proposed MRA-NET on three publicly available hyperspectral datasets and demonstrate that, overall, the classification accuracy of our method is 99.82%, 99.81%, and 99.37%, respectively, which is higher than that of current networks such as the 3D convolutional neural network (CNN), the three-dimensional residual convolution structure (RES-3D-CNN), and the space–spectrum joint deep network (SSRN).
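
The PCA preprocessing step can be sketched as follows: the spectral axis is flattened, a few principal components are retained, and the result is reshaped into a low-band image for the network to consume. The number of components kept is an illustrative assumption.

```python
# Minimal sketch of the PCA preprocessing: flatten the spectral axis, keep a
# few principal components, and reshape back to a low-band image. The number
# of retained components is an illustrative assumption.
import numpy as np
from sklearn.decomposition import PCA

def reduce_spectral(cube, n_components=3):
    """cube: (H, W, B) hyperspectral image -> (H, W, n_components) tensor."""
    H, W, B = cube.shape
    flat = cube.reshape(-1, B).astype(float)
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(H, W, n_components)
```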


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 2848 ◽  
Author(s):  
Leonel Rosas-Arias ◽  
Jose Portillo-Portillo ◽  
Aldo Hernandez-Suarez ◽  
Jesus Olivares-Mercado ◽  
Gabriel Sanchez-Perez ◽  
...  

The counting of vehicles plays an important role in measuring traffic-flow behavior patterns in cities, as streets and avenues can get crowded easily. To address this problem, some Intelligent Transport Systems (ITSs) have been implemented to count vehicles using already established video surveillance infrastructure. With this in mind, in this paper we present an on-line learning methodology for counting vehicles in video sequences based on Incremental Principal Component Analysis (Incremental PCA). This incremental learning method allows us to identify the maximum variability (i.e., motion detection) between a previous block of frames and the current one by using only the first projected eigenvector. Once the projected image is obtained, we apply dynamic thresholding to perform image binarization. Then, a series of post-processing steps are applied to enhance the binary image containing the objects in motion. Finally, we count the number of vehicles by implementing a virtual detection line in each of the road lanes. These lines determine the instants at which vehicles pass completely through them. Results show that the proposed methodology counts vehicles with 96.6% accuracy at 26 frames per second on average, while dealing with both camera jitter and sudden illumination changes caused by the environment and the camera auto-exposure.
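
The sketch below illustrates one plausible reading of the Incremental PCA motion-detection core: the model is updated block by block, the first eigenvector (reshaped to image size) serves as the motion map, and dynamic (Otsu) thresholding plus a morphological opening produce the binary mask used for counting. Block handling, thresholding details, and the omitted virtual-line logic are assumptions for the example.

```python
# One plausible reading of the Incremental PCA motion-detection core; the
# block handling, thresholding, and post-processing details are assumptions.
import cv2
import numpy as np
from sklearn.decomposition import IncrementalPCA

ipca = IncrementalPCA(n_components=1)

def motion_mask(frame_block):
    """frame_block: (N, H, W) grayscale frames -> binary motion mask."""
    N, H, W = frame_block.shape
    X = frame_block.reshape(N, -1).astype(np.float64)
    ipca.partial_fit(X)                          # incremental update per block
    # Interpret the first eigenvector, reshaped to image size, as a motion map.
    proj = np.abs(ipca.components_[0]).reshape(H, W)
    proj = cv2.normalize(proj, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Dynamic thresholding (Otsu) followed by a small morphological opening.
    _, binary = cv2.threshold(proj, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```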

