background subtraction
Recently Published Documents

Total documents: 1759 (326 in the last five years)
H-index: 63 (6 in the last five years)

Author(s): Badri Narayan Subudhi, Manoj Kumar Panda, T. Veerakumar, Vinit Jakhetiya, S. Esakkirajan

Sensors, 2021, Vol 21 (24), pp. 8374
Author(s): Yupei Zhang, Kwok-Leung Chan

Detecting saliency in videos is a fundamental step in many computer vision systems. Saliency refers to the significant target(s) in the video; the object of interest is further analyzed for high-level applications. Saliency can be segregated from the background when the two exhibit different visual cues, so saliency detection is often formulated as background subtraction. However, saliency detection is challenging. For instance, a dynamic background can produce false positive errors, while camouflage produces false negative errors. With moving cameras, the captured scenes are even more complicated to handle. We propose a new framework, called saliency detection via background model completion (SD-BMC), that comprises a background modeler and a deep learning background/foreground segmentation network. The background modeler generates an initial clean background image from a short image sequence. Based on the idea of video completion, a good background frame can be synthesized even when a changing background and moving objects co-exist. We adopt a background/foreground segmenter that was pre-trained with a specific video dataset and can also detect saliency in unseen videos. The background modeler can adjust the background image dynamically when the output of the background/foreground segmenter deteriorates while processing a long video. To the best of our knowledge, our framework is the first to adopt video completion for background modeling and saliency detection in videos captured by moving cameras. The F-measure results obtained from the pan-tilt-zoom (PTZ) videos show that our proposed framework outperforms some deep learning-based background subtraction models by 11% or more. With more challenging videos, our framework also outperforms many high-ranking background subtraction methods by more than 3%.
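For readers unfamiliar with the baseline formulation, the sketch below shows a conventional background-subtraction pipeline using OpenCV's MOG2 subtractor. It is only an illustration of the classical approach the paper builds on, not the authors' SD-BMC pipeline; the input file name and parameter values are hypothetical.

```python
# Minimal background-subtraction baseline for video saliency (illustrative only;
# SD-BMC instead pairs a video-completion background modeler with a deep
# background/foreground segmenter).
import cv2

cap = cv2.VideoCapture("ptz_sequence.mp4")  # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)        # 0 = background, 255 = foreground
    fg_mask = cv2.medianBlur(fg_mask, 5)     # suppress isolated noise pixels
    cv2.imshow("saliency mask", fg_mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Such pixel-wise statistical models are exactly what struggle on PTZ videos, which motivates replacing them with a completed background frame as described above.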


2021, Vol 2021, pp. 1-9
Author(s): Yan Hu, Yong Xu

Traditional moving target detection theory has many drawbacks, such as clustering issues, background updating, inaccurate test results, and low anti-interference performance. In our study, a background subtraction method for automatically capturing the basketball shooting trajectory was used to eliminate drawbacks of the fixed-point shooting system, such as cumbersome installation and the consumption of time and manpower, and to improve the accuracy and efficiency of moving target detection. We also systematically compared it with common methods, including the optical flow method and the interframe difference method. Results showed that the background subtraction method achieves better accuracy (about 90%) than the interframe subtraction method (88%) and the optical flow method (85%) and remains robust when variable speed and nonrigid objects are considered. Meanwhile, an automatic detection system for basketball shooting was built by coupling background subtraction with the detection characteristics. The detection speed of the system is further accelerated, and image denoising is improved. The trajectory error rate is about 0.3, 0.4, and 0.5 for the background subtraction method, interframe subtraction method, and optical flow method, respectively.
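As an illustration of how a trajectory can be recovered from foreground masks, the sketch below tracks the centroid of the largest moving blob across frames. It is a generic OpenCV example under the assumption of a fixed camera and a single dominant moving object, not the authors' detection system; the file name is hypothetical.

```python
# Illustrative sketch (not the authors' system): recovering a ball trajectory
# from a fixed camera by background subtraction and centroid tracking (OpenCV 4).
import cv2

cap = cv2.VideoCapture("shot.mp4")          # hypothetical recording of a shot
subtractor = cv2.createBackgroundSubtractorKNN()
trajectory = []                             # (x, y) centre of the moving ball per frame

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)      # remove small noise blobs
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        ball = max(contours, key=cv2.contourArea)              # assume largest blob is the ball
        m = cv2.moments(ball)
        if m["m00"] > 0:
            trajectory.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

cap.release()
print(trajectory)
```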


2021, pp. 074873042110628
Author(s): Blanca Martin-Burgos, Wanqi Wang, Ivana William, Selma Tir, Innus Mohammad, ...

Circadian rhythms are driven by daily oscillations of gene expression. An important tool for studying cellular and tissue circadian rhythms is the use of a gene reporter, such as bioluminescence from the reporter gene luciferase controlled by a rhythmically expressed gene of interest. Here we describe methods that allow measurement of circadian bioluminescence from a freely moving mouse housed in a standard cage. Using a LumiCycle In Vivo (Actimetrics), we determined conditions that allow detection of circadian rhythms of bioluminescence from the PER2 reporter, PER2::LUC, in freely behaving mice. The LumiCycle In Vivo applies a background subtraction that corrects for effects of room temperature on photomultiplier tube (PMT) output. We tested delivery of d-luciferin via a subcutaneous minipump and in the drinking water. We demonstrate spikes in bioluminescence associated with drinking bouts. Further, we demonstrate that a synthetic luciferase substrate, CycLuc1, can support circadian rhythms of bioluminescence, even when delivered at a lower concentration than d-luciferin, and can support longer-term studies. A small difference in phase of the PER2::LUC bioluminescence rhythms, with females phase leading males, can be detected with this technique. We share our analysis scripts and suggestions for further improvements in this method. This approach will be straightforward to apply to mice with tissue-specific reporters, allowing insights into responses of specific peripheral clocks to perturbations such as environmental or pharmacological manipulations.
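As a generic illustration of the kind of rhythm analysis such recordings support (not the authors' released scripts), the sketch below estimates the acrophase of a bioluminescence time series with a simple least-squares cosinor fit; the synthetic data and the assumed 24-h period are only for the example.

```python
# Generic cosinor-fit illustration for a circadian bioluminescence trace
# (hypothetical data; not the analysis scripts shared by the authors).
import numpy as np

def cosinor_phase(t_hours, counts, period=24.0):
    """Acrophase (hours after t=0) of counts ~ mesor + a*cos(w*t) + b*sin(w*t)."""
    w = 2.0 * np.pi / period
    X = np.column_stack([np.ones_like(t_hours), np.cos(w * t_hours), np.sin(w * t_hours)])
    mesor, a, b = np.linalg.lstsq(X, counts, rcond=None)[0]
    return (np.arctan2(b, a) / w) % period          # time of the fitted peak

# Synthetic PMT counts sampled every 6 min for 3 days, peaking at ~10 h
t = np.arange(0, 72, 0.1)
y = 200 + 50 * np.cos(2 * np.pi * (t - 10) / 24) + np.random.normal(0, 5, t.size)
print(cosinor_phase(t, y))                          # ~10 h
```

Comparing such fitted acrophases between groups is one simple way a phase difference like the female-versus-male lead reported above could be quantified.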


2021, Vol 10 (6), pp. 3211-3219
Author(s): Awang Hendrianto Pratomo, Wilis Kaswidjanti, Alek Setiyo Nugroho, Shoffan Saifullah

With a manual vehicle parking system, finding a vacant parking space is difficult because the lot must be checked directly. When many vehicles are parked, this takes considerable time or requires many people to handle it. This research develops a real-time parking system to detect vacant spaces. The system uses the HSV color segmentation method to determine the background image, and the detection process uses the background subtraction method. Applying these two methods requires image preprocessing with several steps, such as grayscaling and blurring (low-pass filtering), followed by thresholding and filtering to obtain the best image for the detection process. A region of interest (ROI) is defined to focus on the area identified as an empty parking space. The parking detection process achieves a best average accuracy of 95.76%, with a minimum threshold of 0.4 on the 255-level intensity scale. This value is the best obtained from 33 test data covering several conditions, such as the time of capture, the composition and color of the vehicles, shadows cast by surrounding objects, and the intensity of light. This parking detection system can be implemented in real time to determine the position of an empty space.
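A minimal sketch of the underlying idea, assuming a reference image of the empty lot, one hand-drawn ROI per space, and the 0.4 occupancy threshold reported above (file names, ROI coordinates, and the absolute-difference threshold are hypothetical), could look as follows:

```python
# Illustrative ROI-based occupancy check by background subtraction
# (not the authors' exact pipeline; parameter values are hypothetical).
import cv2
import numpy as np

background = cv2.imread("empty_lot.png", cv2.IMREAD_GRAYSCALE)   # reference empty-lot image
frame = cv2.imread("current_frame.png", cv2.IMREAD_GRAYSCALE)    # current camera frame

background = cv2.GaussianBlur(background, (5, 5), 0)   # low-pass filter to reduce noise
frame = cv2.GaussianBlur(frame, (5, 5), 0)

diff = cv2.absdiff(frame, background)                  # background subtraction
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

x, y, w, h = 120, 80, 60, 100                          # hypothetical ROI of one parking space
roi = mask[y:y + h, x:x + w]
occupancy = np.count_nonzero(roi) / roi.size           # fraction of changed pixels in the ROI
print("occupied" if occupancy > 0.4 else "vacant")     # 0.4 threshold as reported above
```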


Author(s): Imane Benraya, Nadjia Benblidia, Yasmine Amara

Background subtraction is the first and basic stage in video analysis and smart surveillance, used to extract moving objects. The background subtraction library (BGSLibrary), created by Andrews Sobral in 2012, currently combines 43 background subtraction algorithms among the most popular and widely used in the field of video analysis. Each algorithm has its own characteristics, strengths, and weaknesses in extracting moving objects. Evaluation allows these characteristics to be identified and helps researchers design better methods. Unfortunately, the literature lacks a comprehensive evaluation of the algorithms included in the library. Accordingly, the present work evaluates the algorithms in the BGSLibrary in terms of segmentation performance, execution time, and processor load, so as to achieve a comprehensive, real-time evaluation of the system. The background modeling challenge (BMC) dataset was selected, using the synthetic videos with added noise. Results are presented as tables, column charts, and foreground masks.
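A minimal sketch of such an evaluation loop is given below: it measures the F-measure against ground-truth masks and the mean execution time per frame. OpenCV subtractors are used here as stand-ins; BGSLibrary algorithms could be substituted through its Python bindings, whose exact class names depend on the installed version.

```python
# Per-algorithm evaluation sketch in the spirit of the study above:
# F-measure against ground-truth masks plus mean execution time per frame.
import time
import cv2
import numpy as np

def evaluate(subtractor, frames, gt_masks):
    tp = fp = fn = 0
    elapsed = 0.0
    for frame, gt in zip(frames, gt_masks):
        start = time.perf_counter()
        fg = subtractor.apply(frame)
        elapsed += time.perf_counter() - start
        pred = fg > 127                       # binarise (MOG2 marks shadows as 127)
        truth = gt > 127
        tp += np.count_nonzero(pred & truth)
        fp += np.count_nonzero(pred & ~truth)
        fn += np.count_nonzero(~pred & truth)
    precision = tp / max(tp + fp, 1)
    recall = tp / max(tp + fn, 1)
    f_measure = 2 * precision * recall / max(precision + recall, 1e-9)
    return f_measure, elapsed / max(len(frames), 1)

# Usage: frames and gt_masks are lists of images loaded from a BMC-style sequence.
# f, t = evaluate(cv2.createBackgroundSubtractorMOG2(), frames, gt_masks)
```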

