Detecting and Tracking Segmentation of Moving Objects Using Graph Cut Algorithm

Author(s):  
Raviraj Pandian ◽  
Ramya A.

Real-time moving object detection, classification, and tracking capabilities are presented for a system that operates on both color and gray-scale video imagery from a stationary camera. It can handle object detection in indoor and outdoor environments and under changing illumination conditions. Object detection in a video is usually performed by object detectors or background subtraction techniques. The proposed method determines the threshold automatically and dynamically, depending on the intensities of the pixels in the current frame, and updates the background model with a learning rate that depends on the differences between the pixels and the background model of the previous frame. The graph-cut-segmentation-based region-merging approach achieves both accurate segmentation and accurate optical flow computation, and it works in the presence of large camera motion. The algorithm uses the shape of the detected objects and temporal tracking results to categorize objects into pre-defined classes such as human, human group, and vehicle.
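The adaptive thresholding and learning-rate update described above can be sketched as follows; the specific threshold rule (mean plus a multiple of the standard deviation of the difference image) and the two learning-rate values are illustrative assumptions, not the authors' exact formulas.

```python
import numpy as np

def subtract_background(frame, background, k=1.5, alpha_fast=0.1, alpha_slow=0.01):
    """Classify pixels as foreground and update the background model.

    The threshold is derived automatically from the statistics of the
    current difference image; pixels judged background are blended into
    the model quickly, foreground pixels only slowly.
    """
    diff = np.abs(frame.astype(float) - background.astype(float))
    # Automatic, per-frame threshold (assumed rule: mean + k * std).
    threshold = diff.mean() + k * diff.std()
    foreground = diff > threshold
    # Per-pixel learning rate depending on the difference to the model.
    alpha = np.where(foreground, alpha_slow, alpha_fast)
    new_background = (1 - alpha) * background + alpha * frame
    return foreground, new_background
```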

2019 ◽  
Vol 14 (1) ◽  
pp. 21-30
Author(s):  
A. Shyamala ◽  
S. Selvaperumal ◽  
G. Prabhakar

Background: Moving object detection in dynamic-environment video is more complex than in static-environment video. In this paper, moving objects in video sequences are detected and segmented using a feature-extraction-based Adaptive Neuro-Fuzzy Inference System (ANFIS) classifier. The proposed methodology is tested on video sequences from both indoor and outdoor environments. Methods: The methodology consists of background subtraction and classification modules. The absolute difference image is constructed in the background subtraction module; features are extracted from this difference image and then trained and classified by the ANFIS classification module. Results: The methodology is analyzed in terms of Accuracy, Recall, Average Accuracy, Precision, and F-measure. It is executed on 1.8 GHz and 2.4 GHz CPUs to evaluate segmentation performance: some existing moving object detection systems use a 1.8 GHz processor, while many recent systems use a 2.4 GHz processor, so both are used in this paper. Table 1 shows the performance of the proposed detection on the 1.8 GHz CPU (100 sequences), and Table 2 on the 2.4 GHz CPU (100 sequences). On the 1.8 GHz CPU, the average detection time is 62.5 seconds for the fountain sequence, 64.7 seconds for the airport sequence, 71.6 seconds for the meeting-room sequence, and 73.5 seconds for the lobby sequence, as depicted in Table 3; the average elapsed time over 100 sequences is 68.07 seconds.
On the 2.4 GHz CPU, the corresponding times are 56.5, 54.7, 65.8, and 67.5 seconds, as depicted in Table 4, with an average of 61.12 seconds over 100 sequences. Tables 3 and 4 show that detection time decreases as CPU frequency increases. Conclusion: Moving objects are detected and segmented using an ANFIS classifier. The method first segments the background image; features are then extracted from the thresholded image, trained, and classified by ANFIS. The method is tested on video sequences from different indoor and outdoor environments, and its performance is analyzed in terms of Accuracy, Recall, Average Accuracy, Precision, and F-measure.
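The background subtraction and feature extraction stages of this pipeline might look like the sketch below; the ANFIS training and classification step itself is omitted, and the particular feature set is an assumption for illustration.

```python
import numpy as np

def difference_features(frame, background, threshold=30):
    """Build the absolute difference image and extract simple features
    from it (the feature set is an assumed example, not the paper's)."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    mask = diff > threshold   # thresholded change mask
    feats = np.array([
        diff.mean(),   # average change intensity
        diff.std(),    # spread of the change
        mask.mean(),   # fraction of changed pixels
    ])
    return mask, feats
```

The feature vector would then be fed to the trained ANFIS classifier to decide whether the changed region corresponds to a moving object.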


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Yizhong Yang ◽  
Qiang Zhang ◽  
Pengfei Wang ◽  
Xionglou Hu ◽  
Nengju Wu

Moving object detection in video streams is the first step of many computer vision applications. Background modeling and subtraction is the most common technique for moving object detection, yet detecting moving objects correctly remains a challenge. Some methods initialize the background model at each pixel from the first N frames; however, these cannot perform well in dynamic background scenes because the background model contains only temporal features. Herein, a novel pixelwise, nonparametric moving object detection method is proposed that incorporates both spatial and temporal features, so it can accurately handle dynamic backgrounds. Several new mechanisms are also proposed to maintain and update the background model. Experimental results on image sequences from public datasets show that the proposed method is more robust and effective in dynamic background scenes than existing methods.
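A sample-based, nonparametric per-pixel model of this kind can be sketched roughly as follows: drawing the initial samples from each pixel's 8-neighborhood supplies the spatial feature, while the match count against stored samples is the temporal test. Parameter values and the maintenance mechanisms here are assumptions, not the paper's exact design.

```python
import numpy as np

def init_samples(first_frame, n_samples=20, rng=None):
    """Fill each pixel's sample set from its 8-neighborhood in the first
    frame, so the model carries spatial as well as temporal information."""
    rng = rng or np.random.default_rng(0)
    h, w = first_frame.shape
    padded = np.pad(first_frame, 1, mode="edge")
    samples = np.empty((h, w, n_samples))
    for i in range(h):
        for j in range(w):
            neigh = padded[i:i + 3, j:j + 3].ravel()
            samples[i, j] = rng.choice(neigh, n_samples)
    return samples

def is_background(pixel, pixel_samples, radius=20, min_matches=2):
    """A pixel is background if it is close to enough stored samples."""
    matches = np.sum(np.abs(pixel_samples - pixel) < radius)
    return matches >= min_matches
```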


2020 ◽  
Vol 17 (4) ◽  
pp. 172988142094727
Author(s):  
Wenlong Zhang ◽  
Xiaoliang Sun ◽  
Qifeng Yu

Due to cluttered background motion, accurate moving object segmentation in unconstrained videos remains a significant open problem, especially for slow-moving objects. This article proposes an accurate moving object segmentation method based on robust seed selection. The seed pixels of the object and background are selected robustly using optical flow cues. First, the method detects the moving object's rough contour from local differences in the weighted orientation cues of the optical flow. The detected rough contour then guides object and background seed-pixel selection. To improve the robustness of the seed selection, object seed pixels in the previous frame are propagated to the current frame along the optical flow. Finally, the random walker algorithm segments the moving object accurately from the selected seed pixels. Experiments on publicly available datasets indicate that the proposed method shows excellent performance in segmenting moving objects accurately in unconstrained videos.
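The seed-propagation step can be illustrated as below; the function name and the flow layout (a per-pixel (dy, dx) vector field) are assumptions, and the rough-contour detection and random walker stages are omitted.

```python
import numpy as np

def propagate_seeds(seeds, flow):
    """Move previous-frame object seed pixels along the optical flow into
    the current frame; seeds pushed out of bounds are discarded."""
    h, w = flow.shape[:2]
    out = []
    for y, x in seeds:
        dy, dx = flow[y, x]
        ny, nx = int(round(y + dy)), int(round(x + dx))
        if 0 <= ny < h and 0 <= nx < w:
            out.append((ny, nx))
    return out
```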


2014 ◽  
Vol 556-562 ◽  
pp. 3549-3552
Author(s):  
Lian Fen Huang ◽  
Qing Yue Chen ◽  
Jin Feng Lin ◽  
He Zhi Lin

The key to background subtraction, which is widely used for moving object detection, is setting up and updating the background model. This paper presents a block background subtraction method based on ViBe that exploits the spatial correlation and temporal continuity of the video sequence. First, the background model of the video sequence is set up; then, the background model is updated through block processing; finally, the difference between the current frame and the background model is used to extract moving objects.
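The block-wise model update might look like the sketch below, assuming a simple mean-absolute-difference test per block; the actual ViBe-based sample update is more involved.

```python
import numpy as np

def block_update(background, frame, block=8, tol=15, alpha=0.05):
    """Update the background model block by block: blocks whose mean
    absolute difference from the model is small are treated as background
    and blended in; blocks containing motion are left untouched."""
    out = background.astype(float).copy()
    h, w = frame.shape
    for i in range(0, h, block):
        for j in range(0, w, block):
            fb = frame[i:i + block, j:j + block].astype(float)
            bb = out[i:i + block, j:j + block]
            if np.abs(fb - bb).mean() < tol:
                out[i:i + block, j:j + block] = (1 - alpha) * bb + alpha * fb
    return out
```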


Author(s):  
Minh

This paper presents an effective method for detecting multiple moving objects in a video sequence captured by a moving surveillance camera. Moving object detection from a moving camera is difficult because camera motion and object motion are mixed. In the proposed method, we create a panoramic picture from the moving camera. For each frame captured by the camera, we then use template matching to find its location in the panoramic picture. Finally, we extract moving objects with the image differencing method. Experimental results show that the proposed method performs well, with a true detection rate above 80% on average.
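A minimal sketch of the localization-then-differencing idea, using brute-force SSD matching as a stand-in for the paper's template matching step:

```python
import numpy as np

def locate_in_panorama(panorama, frame):
    """Brute-force SSD search for the frame's position in the panorama."""
    ph, pw = panorama.shape
    fh, fw = frame.shape
    best, pos = np.inf, (0, 0)
    for i in range(ph - fh + 1):
        for j in range(pw - fw + 1):
            ssd = np.sum((panorama[i:i + fh, j:j + fw] - frame) ** 2)
            if ssd < best:
                best, pos = ssd, (i, j)
    return pos

def moving_mask(panorama, frame, threshold=30):
    """Image differencing against the matched panorama region."""
    i, j = locate_in_panorama(panorama, frame)
    ref = panorama[i:i + frame.shape[0], j:j + frame.shape[1]]
    return np.abs(frame - ref) > threshold
```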


2009 ◽  
Vol 06 (01) ◽  
pp. 13-21 ◽  
Author(s):  
TAEHO KIM ◽  
KANG-HYUN JO

In this paper, we propose a novel approach to detect moving objects using two background models, a multiple background model (MBM) and a temporal median background (TMB), from hand-taken image sequences. For this purpose, we record image sequences with a hand-held camera without a tripod, so every frame varies relative to its neighbors. A pixel-based background model is fragile when the image sequence has such variation, so we calculate the camera movement using the correlation between two consecutive images, which lets us generate the MBM under camera shake. The computational cost of correlation increases quickly with image resolution, so we use edge segments to reduce it. These edge segments are obtained with the Sobel operator and serve as distinctive spatial features for computing the similarity between two regions, belonging to the current and previous images, organized around the neighborhoods of the edge segments. From the similarity results we obtain a set of best-matched regions, the centroids of the matched regions, and displacement vectors for each pair of previous and current images. Each displacement vector describes the transition of one matched region between the image pair. Using the highest density in the displacement-vector histogram, we choose the camera motion vector, which indicates the camera movement between consecutive frames. According to the camera motion vector, every pixel in the current image is related to a differently positioned pixel in the previous image. This pixel relation is used to generate the MBM in this paper, unlike the original MBM [Xiao, M., Han, C. and Kang, K. [2006]. Proc. Int. Conf. Information Fusion, pp. 1–7.]. The MBM algorithm classifies the variation of pixel values over the frame sequence into several clusters, which is similar to a mixture of Gaussians (MOG); nevertheless, MBM is cheaper to compute because it does not need parameter estimation. However, MBM is not sensitive to short-period changes.
Therefore, we use the TMB to support the MBM. The experimental results show that the proposed algorithm successfully detects moving objects by background subtraction in less than 25 ms per frame when the camera undergoes 2D translation.
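The TMB part is straightforward to sketch; the camera-motion compensation via edge-segment correlation described above is omitted here.

```python
import numpy as np

def temporal_median_background(frames):
    """Temporal median background (TMB): per-pixel median over a buffer."""
    return np.median(np.stack(frames), axis=0)

def detect_moving(frames, current, threshold=25):
    """Background subtraction against the TMB."""
    bg = temporal_median_background(frames)
    return np.abs(np.asarray(current, dtype=float) - bg) > threshold
```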


Author(s):  
I Shieh ◽  
K F Gill

The aim of this paper is to present a novel method for processing a digitized image that allows pertinent information to be extracted on object movement in a scene. A frame-difference method locates moving candidates in regions that are evaluated by a hypothesis-testing procedure to identify accretion and deletion regions. Accretion regions are selected and used as seeds to search for moving objects in the current frame. Contour tracing establishes the boundary of an accretion region, which is then used to help recognize the moving object. The results of this work reveal that motion can be used as an effective cue for object detection from an image sequence.
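The accretion/deletion split can be illustrated with a simple sign-based proxy for the hypothesis test, assuming a bright object on a darker background; the paper's actual test is statistical.

```python
import numpy as np

def accretion_deletion(prev, curr, threshold=25):
    """Split the frame-difference change mask into accretion (pixels the
    object moved into) and deletion (pixels it vacated), using the sign
    of the intensity change as a crude proxy for the hypothesis test."""
    d = curr.astype(float) - prev.astype(float)
    changed = np.abs(d) > threshold
    accretion = changed & (d > 0)   # got brighter: object arrived
    deletion = changed & (d < 0)    # got darker: object left
    return accretion, deletion
```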


2020 ◽  
Vol 34 (07) ◽  
pp. 11791-11798
Author(s):  
Qian Ning ◽  
Weisheng Dong ◽  
Fangfang Wu ◽  
Jinjian Wu ◽  
Jie Lin ◽  
...  

Subtracting the background from video frames is an important step in many video analysis applications. Assuming that the background is low-rank and the foreground is sparse, robust principal component analysis (RPCA)-based methods have shown promising results. However, RPCA-based methods suffer from a scale issue: the ℓ1-sparsity regularizer fails to model the varying sparsity of moving objects. While several efforts have been made to address this issue with advanced sparse models, previous methods cannot fully exploit the spatial-temporal correlations among the foregrounds. In this paper, we propose a novel spatial-temporal Gaussian scale mixture (STGSM) model for foreground estimation. In the proposed STGSM model, a temporal consistency constraint is imposed on the estimated foregrounds through nonzero-mean Gaussian models. Specifically, the foreground estimates obtained in the previous frame are used as the prior for those of the current frame, and nonzero-mean Gaussian scale mixture (GSM) models are developed. To better characterize the temporal correlations, optical flow is used to model the correspondences between foreground pixels in adjacent frames. Spatial correlations are also exploited by requiring locally correlated pixels to share the same STGSM model, leading to further performance improvements. Experimental results on real video datasets show that the proposed method performs comparably to, or even better than, current state-of-the-art background subtraction methods.
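The ℓ1-sparsity regularizer that the scale issue refers to corresponds to a soft-thresholding proximal step. Below is a crude sketch of that step, with the temporal mean standing in for the low-rank background; this is not the STGSM model, only the baseline idea it improves on.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm: shrink magnitudes by lam."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_foreground(frames, lam=10.0):
    """Crude RPCA stand-in: the temporal mean acts as a rank-1 background
    and the residual is soft-thresholded into a sparse foreground."""
    X = np.stack([np.asarray(f, dtype=float).ravel() for f in frames])
    background = X.mean(axis=0, keepdims=True)
    return soft_threshold(X - background, lam)
```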


2019 ◽  
Vol 8 (2) ◽  
pp. 59-65
Author(s):  
Didit Andri Jatmiko ◽  
Salita Ulitia Prini

One important element of computer vision is the image feature. Features are used as the basis for detecting objects, whether things, people, or animals. Image features commonly used in research include edges, corners, shapes, and gradient histograms. This study examines the performance of the background subtraction algorithm, one of the computer vision algorithms, on a low-power processing unit. The algorithm has low complexity and can be used to detect objects, making it a candidate for deployment in security cameras. It works by subtracting the background model from the pixel values of the current frame. This study successfully implemented a basic image processing algorithm, background subtraction, on the ESP32 module. Testing used input images of 80x60 pixels in 8-bit grayscale format; the 80x60 frame size was chosen because the ESP32's DRAM is limited to 328 KB (kilobytes). The implementation on the ESP32 module, which is equipped with a 32-bit Xtensa LX6 microprocessor running at 240 MHz, can process the background subtraction algorithm 10,000 times in about 2000 ms with this test image. Keywords – Background Subtraction; ESP32; Image Processing; Microcontroller; Object Detection.
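On the ESP32 the routine would be written in C over a flat 8-bit buffer, but the per-pixel operation itself can be sketched in Python as follows (the threshold value is an assumption):

```python
def background_subtraction(frame, background, threshold=25):
    """Per-pixel absolute difference over a flat 8-bit grayscale buffer
    (80x60 = 4800 bytes in the article's setup)."""
    return bytearray(
        255 if abs(f - b) > threshold else 0
        for f, b in zip(frame, background)
    )
```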

