Design of an FPGA hardware architecture to detect real-time moving objects using the background subtraction algorithm

Author(s):  
Xin Ren ◽  
Yu Wang
2017 ◽  
Vol 11 (3) ◽  
pp. 98
Author(s):  
Ahmed Mustafa Taha Alzbier ◽  
Hang Cheng

As computer vision technology matures, tracking of multiple RGB-colored objects has become an important task, with applications such as surveillance of factory production lines, event organization, flow control, and analysis and sorting by color. In video processing applications, variants of the background subtraction method are widely used to detect moving objects in video sequences, and background subtraction remains the most popular approach to motion detection. The first objective of the algorithm chain investigated in this paper is to locate RGB colors within a video: the idea is to look for specific features of image patches that allow red, green, and blue objects to be distinguished. This paper proposes an algorithm to track moving RGB-colored objects in real time using a Kinect camera. The Kinect captures the live video, image frames are extracted from it, and the red, green, and blue components are isolated; image processing and color recognition for each color are carried out in MATLAB. The method tracks accurately at 95% in real time.
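A minimal sketch of the color-isolation step described above is given below, using Python and OpenCV rather than MATLAB, and an ordinary webcam rather than a Kinect; the threshold and minimum blob area are assumed values, not parameters from the paper.

```python
import cv2

# Sketch of per-channel color isolation for tracking red, green and blue objects.
# Webcam source, threshold (50) and minimum blob area (500) are assumptions.
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    b, g, r = cv2.split(frame)

    for name, channel in (("red", r), ("green", g), ("blue", b)):
        # Subtract the gray image so only pixels dominated by this channel remain.
        diff = cv2.subtract(channel, gray)
        _, mask = cv2.threshold(diff, 50, 255, cv2.THRESH_BINARY)
        mask = cv2.medianBlur(mask, 5)

        # Draw a bounding box around each sufficiently large blob.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 500:
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 255, 255), 2)
                cv2.putText(frame, name, (x, y - 5),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)

    cv2.imshow("rgb tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```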


2020 ◽  
Vol 6 (6) ◽  
pp. 50
Author(s):  
Anthony Cioppa ◽  
Marc Braham ◽  
Marc Van Droogenbroeck

The method of Semantic Background Subtraction (SBS), which combines semantic segmentation and background subtraction, has recently emerged for the task of segmenting moving objects in video sequences. While SBS has been shown to improve background subtraction, a major difficulty is that it combines two streams generated at different frame rates. As a result, SBS operates at the slower of the two frame rates, usually that of the semantic segmentation algorithm. We present a method, referred to as “Asynchronous Semantic Background Subtraction” (ASBS), able to combine a semantic segmentation algorithm with any background subtraction algorithm asynchronously. It achieves performance close to that of SBS while operating at the fastest possible frame rate, namely that of the background subtraction algorithm. Our method consists of analyzing the temporal evolution of pixel features so as to replicate, when no new semantic information is computed, the decisions previously enforced by semantics. We showcase ASBS with several background subtraction algorithms and also add a feedback mechanism that feeds information back into the background model of the background subtraction algorithm to refine its updating strategy and, consequently, improve its decisions. Experiments show that we systematically improve performance, even when the semantic stream runs at a much slower frame rate than the background subtraction algorithm. In addition, we establish that, with the help of ASBS, a real-time background subtraction algorithm such as ViBe stays real time and competes with some of the best non-real-time unsupervised background subtraction algorithms, such as SuBSENSE.
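The sketch below illustrates the asynchronous idea in Python: on frames without a fresh semantic mask, pixels whose color has barely changed since the last semantically labelled frame keep the previous semantic decision, while all other pixels fall back to the background-subtraction result. The choice of BGR color as the feature, the distance threshold, and the use of OpenCV's MOG2 as the background subtractor are assumptions for illustration, not the authors' exact design.

```python
import cv2
import numpy as np

# Illustrative sketch of the asynchronous combination: semantics arrive only on
# some frames; in between, stable pixels keep the last semantic decision.
TAU = 20.0                      # assumed color-distance threshold

bgs = cv2.createBackgroundSubtractorMOG2()
last_sem_mask = None            # last semantic foreground mask (uint8, 0/255)
last_sem_frame = None           # frame on which that mask was computed


def asbs_step(frame, semantic_mask=None):
    """Return a foreground mask; semantic_mask is None on frames for which the
    slower semantic segmentation produced no output."""
    global last_sem_mask, last_sem_frame

    bgs_mask = bgs.apply(frame)

    if semantic_mask is not None:
        last_sem_mask, last_sem_frame = semantic_mask, frame.copy()
        return semantic_mask

    if last_sem_mask is None:
        return bgs_mask

    # Per-pixel color distance to the frame that carried the last semantic mask.
    dist = np.linalg.norm(frame.astype(np.float32) - last_sem_frame.astype(np.float32),
                          axis=2)
    stable = dist < TAU

    out = bgs_mask.copy()
    out[stable] = last_sem_mask[stable]   # replicate the previous semantic decision
    return out
```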


2020 ◽  
pp. 1811-1822
Author(s):  
Mustafa Najm ◽  
Yossra Hussein Ali

Vehicle detection (VD) plays an essential role in Intelligent Transportation Systems (ITS) and has been intensively studied in recent years. The need for intelligent facilities has grown as the number of vehicles in urban zones increases rapidly. Traffic monitoring is an important element of an intelligent transportation system and involves the detection, classification, tracking, and counting of vehicles. One of the key advantages of traffic video detection is that it provides traffic supervisors with the means to reduce congestion and improve highway planning. Vehicle detection in video combines real-time image processing with computerized pattern recognition in flexible stages, and real-time processing is critical to maintaining the functionality of automated or continuously operating systems. VD in road traffic has numerous applications in the transportation engineering field. In this review, different automated VD systems are surveyed, with a focus on systems in which a rectilinear stationary camera is positioned above a road intersection rather than mounted on the vehicle. Generally, three steps are used to acquire traffic-condition information: background subtraction (BS), vehicle detection, and vehicle counting. First, we illustrate the concept of vehicle detection and discuss background subtraction for extracting only the moving objects. Then a variety of algorithms and techniques developed to detect vehicles are discussed, along with their advantages and limitations. Finally, some limitations shared by these systems are demonstrated, such as the definition of the region of interest (ROI), the focus on only one aspect of detection, and the variation of accuracy with video quality. Once vehicles can be detected and classified, it becomes possible to further improve traffic flow and to provide rich information that can be valuable for many future applications.
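A compact sketch of the generic three-step pipeline (background subtraction, vehicle detection, vehicle counting) is shown below using OpenCV's MOG2. The video path, minimum blob area, and counting-line position are placeholders, and a real system would track blobs across frames to avoid double counting.

```python
import cv2
import numpy as np

# Sketch of the generic pipeline: (1) background subtraction, (2) blob-based
# vehicle detection, (3) counting at a virtual line. "traffic.mp4", the minimum
# blob area and the line position are placeholders, not values from the review.
cap = cv2.VideoCapture("traffic.mp4")
bgs = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
kernel = np.ones((3, 3), np.uint8)

count = 0
line_y = 300                                     # assumed position of the counting line

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # 1) Background subtraction: keep confident foreground, drop shadow pixels (127).
    mask = bgs.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)

    # 2) Vehicle detection: sufficiently large connected blobs are treated as vehicles.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 1500:
            continue
        x, y, w, h = cv2.boundingRect(c)
        cy = y + h // 2
        # 3) Counting: a blob whose centre crosses the line is counted. A real system
        #    would track blobs between frames to avoid counting the same vehicle twice.
        if abs(cy - line_y) < 5:
            count += 1

print("vehicles counted:", count)
cap.release()
```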


Author(s):  
Imane Benraya ◽  
Nadjia Benblidia ◽  
Yasmine Amara

Background subtraction is the first and most basic stage of video analysis and smart surveillance, used to extract moving objects. The background subtraction library (BGSLibrary), created by Andrews Sobral in 2012, currently gathers 43 of the most popular and widely used background subtraction algorithms in the field of video analysis. Each algorithm has its own characteristics, strengths, and weaknesses in extracting moving objects; evaluation identifies these characteristics and helps researchers design better methods. Unfortunately, the literature lacks a comprehensive evaluation of the algorithms included in the library. Accordingly, the present work evaluates the algorithms in the BGSLibrary in terms of segmentation performance, execution time, and processor usage, so as to achieve a comprehensive, real-time evaluation. The Background Models Challenge (BMC) dataset was selected, using its synthetic videos with added noise. Results are presented as tables, charts, and foreground masks.
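The kind of per-algorithm measurement described above can be sketched as follows: segmentation quality as an F-measure against ground-truth masks, plus per-frame execution time. OpenCV's MOG2 stands in here for a BGSLibrary algorithm, and the file names are placeholders; the evaluation logic, not the particular algorithm, is the point.

```python
import time
import cv2
import numpy as np

# Sketch of a per-algorithm evaluation: F-measure against ground-truth masks and
# per-frame timing. MOG2 is a stand-in for a BGSLibrary algorithm; the video and
# ground-truth paths are placeholders.
def f_measure(pred, gt):
    tp = np.count_nonzero((pred == 255) & (gt == 255))
    fp = np.count_nonzero((pred == 255) & (gt == 0))
    fn = np.count_nonzero((pred == 0) & (gt == 255))
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    return 2 * precision * recall / (precision + recall + 1e-9)

algo = cv2.createBackgroundSubtractorMOG2()
cap = cv2.VideoCapture("bmc_synthetic.avi")      # placeholder video name
scores, times = [], []

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    t0 = time.perf_counter()
    mask = algo.apply(frame)
    times.append(time.perf_counter() - t0)

    gt = cv2.imread(f"gt/{frame_idx:06d}.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
    if gt is not None:
        scores.append(f_measure(mask, gt))
    frame_idx += 1

print(f"mean F-measure: {np.mean(scores):.3f}, "
      f"mean time/frame: {np.mean(times) * 1000:.1f} ms")
```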


2016 ◽  
Vol 11 (4) ◽  
pp. 324
Author(s):  
Nor Nadirah Abdul Aziz ◽  
Yasir Mohd Mustafah ◽  
Amelia Wong Azman ◽  
Amir Akramin Shafie ◽  
Muhammad Izad Yusoff ◽  
...  

Author(s):  
Parastoo Soleimani ◽  
David W. Capson ◽  
Kin Fun Li

The first step in a scale-invariant image matching system is scale space generation. Nonlinear scale space generation algorithms such as AKAZE reduce noise and distortion across scales while preserving the borders and key-points of the image. An FPGA-based hardware architecture for AKAZE nonlinear scale space generation is proposed to speed up this algorithm for real-time applications. The three contributions of this work are (1) mapping the two passes of the AKAZE algorithm onto a hardware architecture that realizes parallel processing of multiple sections, (2) multi-scale line buffers that can be reused across scales, and (3) a time-sharing mechanism in the memory management unit to process multiple sections of the image in parallel. The time-sharing memory management mechanism prevents artifacts that would otherwise result from partitioning the image into separately processed sections. Approximations are also used in the algorithm to make the hardware implementation more efficient while maintaining the repeatability of the detection. A frame rate of 304 frames per second at a resolution of 1280 × 768 is achieved, which compares favorably with other work.
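As a software reference point for the algorithm the hardware accelerates, OpenCV's AKAZE implementation builds the nonlinear scale space internally and returns key-points and binary descriptors; the snippet below is not the proposed FPGA design, and the image paths are placeholders.

```python
import cv2

# Software reference for AKAZE: the nonlinear scale space is built internally and
# key-points plus binary descriptors are returned. Image paths are placeholders.
img1 = cv2.imread("scene_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene_b.png", cv2.IMREAD_GRAYSCALE)

akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(img1, None)
kp2, des2 = akaze.detectAndCompute(img2, None)

# Brute-force Hamming matching of the binary descriptors, as a simple matching demo.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(kp1)} / {len(kp2)} key-points, {len(matches)} cross-checked matches")
```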


Author(s):  
Jyh Chen ◽  
Jin-Tu Huang ◽  
Hsing-Chin Yeh ◽  
Chean-Mean Chen ◽  
Yen-Tseng Hsu

2014 ◽  
Vol 533 ◽  
pp. 218-225 ◽  
Author(s):  
Rapee Krerngkamjornkit ◽  
Milan Simic

This paper describes computer vision algorithms for the detection, identification, and tracking of moving objects in a video file. The problem of multiple object tracking can be divided into two parts: detecting moving objects in each frame, and associating the detections that correspond to the same object over time. Moving objects are detected with a background subtraction algorithm based on Gaussian mixture models, and the motion of each track is estimated by a Kalman filter. The video tracking algorithm was successfully tested using the BIWI walking pedestrians dataset. The experimental results show that the system can operate in real time and successfully detect, track, and identify multiple targets in the presence of partial occlusion.
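A minimal single-target sketch of this pipeline in Python/OpenCV is given below: a Gaussian-mixture background subtractor detects the moving blob and a constant-velocity Kalman filter smooths its centroid. The video path and the single-object simplification are assumptions; the paper itself tracks multiple targets with data association.

```python
import cv2
import numpy as np

# Single-target sketch: GMM background subtraction for detection, a constant-velocity
# Kalman filter for smoothing the detected centroid. "pedestrians.avi" is a placeholder.
cap = cv2.VideoCapture("pedestrians.avi")
bgs = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

kf = cv2.KalmanFilter(4, 2)                      # state: x, y, vx, vy; measurement: x, y
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

while True:
    ok, frame = cap.read()
    if not ok:
        break

    mask = bgs.apply(frame)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    prediction = kf.predict()                    # predicted centroid before the detection
    if contours:
        c = max(contours, key=cv2.contourArea)   # largest blob only, for simplicity
        x, y, w, h = cv2.boundingRect(c)
        measurement = np.array([[x + w / 2], [y + h / 2]], np.float32)
        estimate = kf.correct(measurement)       # fused estimate after the detection

cap.release()
```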

