Anomaly Detection for Video Surveillance

Author(s):  
Jagruti Tatiya ◽  
Riya Makhija ◽  
Mrunmay Pathe ◽  
Sarika Late ◽  
Prof. Mrunal Pathak

Anomaly detection systems identify inappropriate human behavior, which is one of the major problems in computer vision. It is crucial because activity detection can benefit many applications, such as image monitoring, sign language recognition, object tracking, and more. Alternatives such as low-cost depth sensors exist, but they have drawbacks: they are limited to indoor use, and their low resolution and noisy depth information make it nearly impossible to estimate human poses from depth images. To resolve these issues, the proposed system uses neural networks. Recognizing suspicious human behavior in video surveillance is a major research area in computer vision. Surveillance cameras are installed at places such as airports, banks, bus stations, malls, railway stations, colleges, and schools to detect suspicious activities such as murder, heist, and accidents. Detecting and monitoring these activities in crowded places is a tedious job; to trace human behavior in real time and classify it into ordinary and unexpected scenarios, the system needs smart video surveillance. The experimental results show that the proposed methodology can reliably detect unexpected events in video.
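The abstract leaves the network design unspecified; as a hedged illustration of one common approach to this task, the sketch below shows how a convolutional autoencoder's per-frame reconstruction error can flag unexpected events. The architecture, input size, and threshold are all illustrative assumptions, not the authors' model.

```python
# A minimal sketch, assuming a convolutional autoencoder trained on normal
# footage; frames that reconstruct poorly are flagged as unexpected events.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2,
                               padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2,
                               padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, frame):
    """Mean reconstruction error; high values suggest an unexpected event."""
    with torch.no_grad():
        return torch.mean((frame - model(frame)) ** 2).item()

model = FrameAutoencoder()        # would be trained on ordinary footage only
frame = torch.rand(1, 1, 64, 64)  # stand-in for a grayscale video frame
THRESHOLD = 0.01                  # illustrative; tuned on validation data
if anomaly_score(model, frame) > THRESHOLD:
    print("unexpected event flagged")
```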

Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 2958
Author(s):  
Antonio Carlos Cob-Parro ◽  
Cristina Losada-Gutiérrez ◽  
Marta Marrón-Romera ◽  
Alfredo Gardel-Vicente ◽  
Ignacio Bravo-Muñoz

New processing methods based on artificial intelligence (AI) and deep learning are replacing traditional computer vision algorithms. The most advanced systems can process huge amounts of data in large computing facilities. In contrast, this paper presents a smart video surveillance system that executes AI algorithms on low-power embedded devices. The computer vision algorithm, typical of surveillance applications, aims to detect, count, and track people’s movements in the area. This application requires a distributed smart camera system. The proposed AI application detects people in the surveillance area using a MobileNet-SSD architecture. In addition, using a robust Kalman filter bank, the algorithm keeps track of people in the video and provides people-counting information. The detection results are excellent considering the constraints imposed on the process. The selected architecture for the edge node is based on an UpSquared2 device that includes a vision processing unit (VPU) capable of accelerating AI CNN inference. The results section provides information about the image processing time when multiple video cameras are connected to the same edge node, people detection precision and recall curves, and the energy consumption of the system. The discussion of the results shows the usefulness of deploying this smart camera node throughout a distributed surveillance system.
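As an illustration of the detect-then-track pipeline described above, the following minimal OpenCV sketch feeds MobileNet-SSD person detections into a constant-velocity Kalman filter. The model file names, input video, and confidence threshold are placeholders, and the single filter shown stands in for the paper's full filter bank (one filter per tracked person).

```python
# Sketch of the detect-then-track pipeline: MobileNet-SSD person detection
# feeding a constant-velocity Kalman filter. Model paths are placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
PERSON_CLASS_ID = 15  # 'person' in the standard MobileNet-SSD class list

kf = cv2.KalmanFilter(4, 2)  # state: (x, y, vx, vy); measurement: (x, y)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)

cap = cv2.VideoCapture("surveillance.mp4")  # placeholder input
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()
    for i in range(detections.shape[2]):
        conf = detections[0, 0, i, 2]
        if conf > 0.5 and int(detections[0, 0, i, 1]) == PERSON_CLASS_ID:
            box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
            cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
            kf.predict()  # propagate the track, then correct with detection
            kf.correct(np.array([[cx], [cy]], np.float32))
cap.release()
```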


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 343
Author(s):  
Kim Bjerge ◽  
Jakob Bonde Nielsen ◽  
Martin Videbæk Sepstrup ◽  
Flemming Helsing-Nielsen ◽  
Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often only serviced weekly, resulting in low temporal resolution of the monitoring data, which hampers the ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths represented by eight different classes, achieving a high validation F1-score of 0.93. The algorithm measured an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
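The paper's customized CNN is not reproduced here; the sketch below only illustrates the shape of the task, an eight-class image classifier evaluated with an F1-score. The architecture, input size, and data are illustrative stand-ins.

```python
# Illustrative eight-class classifier and macro F1 evaluation; the paper's
# customized CNN architecture and real trap images are not reproduced here.
import torch
import torch.nn as nn
from sklearn.metrics import f1_score

NUM_CLASSES = 8  # eight moth classes, as in the paper

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, NUM_CLASSES),  # assumes 64x64 input crops
)

# Stand-in validation batch: random images and labels for illustration.
images = torch.rand(16, 3, 64, 64)
labels = torch.randint(0, NUM_CLASSES, (16,))
with torch.no_grad():
    preds = model(images).argmax(dim=1)
print("macro F1:", f1_score(labels.numpy(), preds.numpy(), average="macro"))
```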


Electronics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 517
Author(s):  
Seong-heum Kim ◽  
Youngbae Hwang

Owing to recent advancements in deep learning methods and relevant databases, it is becoming increasingly easier to recognize 3D objects using only RGB images from single viewpoints. This study investigates the major breakthroughs and current progress in deep learning-based monocular 3D object detection. For relatively low-cost data acquisition systems without depth sensors or cameras at multiple viewpoints, we first consider existing databases with 2D RGB photos and their relevant attributes. Based on this simple sensor modality for practical applications, deep learning-based monocular 3D object detection methods that overcome significant research challenges are categorized and summarized. We present the key concepts and detailed descriptions of representative single-stage and multiple-stage detection solutions. In addition, we discuss the effectiveness of the detection models on their baseline benchmarks. Finally, we explore several directions for future research on monocular 3D object detection.


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 291
Author(s):  
Hamdi Sahloul ◽  
Shouhei Shirafuji ◽  
Jun Ota

Local image features are invariant to in-plane rotations and robust to minor viewpoint changes. However, the current detectors and descriptors for local image features fail to accommodate out-of-plane rotations larger than 25°–30°. Invariance to such viewpoint changes is essential for numerous applications, including wide baseline matching, 6D pose estimation, and object reconstruction. In this study, we present a general embedding that wraps a detector/descriptor pair in order to increase viewpoint invariance by exploiting input depth maps. The proposed embedding locates smooth surfaces within the input RGB-D images and projects them into a viewpoint invariant representation, enabling the detection and description of more viewpoint invariant features. Our embedding can be utilized with different combinations of descriptor/detector pairs, according to the desired application. Using synthetic and real-world objects, we evaluated the viewpoint invariance of various detectors and descriptors, for both standalone and embedded approaches. While standalone local image features fail to accommodate average viewpoint changes beyond 33.3°, our proposed embedding boosted the viewpoint invariance to different levels, depending on the scene geometry. Objects with distinct surface discontinuities were on average invariant up to 52.8°, and the overall average for all evaluated datasets was 45.4°. Similarly, out of a total of 140 combinations involving 20 local image features and various objects with distinct surface discontinuities, only a single standalone local image feature exceeded the goal of 60° viewpoint difference in just two combinations, as compared with 19 different local image features succeeding in 73 combinations when wrapped in the proposed embedding. Furthermore, the proposed approach operates robustly in the presence of input depth noise, even that of low-cost commodity depth sensors, and well beyond.
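The embedding itself is more involved, but its core idea, rectifying a smooth surface to a fronto-parallel view using depth before running an ordinary detector/descriptor, can be sketched as follows. Fitting a single dominant plane and using ORB are simplifying assumptions for illustration, not the authors' method.

```python
# Simplified sketch of the wrapping idea: fit one dominant plane to the
# depth map, synthesize a rotated (fronto-parallel) view of that plane via
# the pure-rotation homography H = K R K^-1, and run an off-the-shelf
# detector/descriptor (ORB here) on the rectified image.
import cv2
import numpy as np

def rectify_and_detect(rgb, depth, K, detector=None):
    """rgb: HxWx3 image, depth: HxW in meters, K: 3x3 camera intrinsics."""
    detector = detector or cv2.ORB_create()
    h, w = depth.shape
    # Back-project a sparse grid of pixels to 3D and fit a plane by SVD.
    us, vs = np.meshgrid(np.arange(0, w, 8), np.arange(0, h, 8))
    zs = depth[vs, us]
    valid = zs > 0
    xs = (us[valid] - K[0, 2]) * zs[valid] / K[0, 0]
    ys = (vs[valid] - K[1, 2]) * zs[valid] / K[1, 1]
    pts = np.stack([xs, ys, zs[valid]], axis=1)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[2]  # plane normal = smallest right singular vector
    # Rotation that aligns the plane normal with the optical axis.
    z = np.array([0.0, 0.0, 1.0])
    if normal @ z < 0:
        normal = -normal
    axis = np.cross(normal, z)
    angle = np.arccos(np.clip(normal @ z, -1.0, 1.0))
    R, _ = cv2.Rodrigues(axis / (np.linalg.norm(axis) + 1e-9) * angle)
    H = K @ R @ np.linalg.inv(K)  # homography induced by a pure rotation
    warped = cv2.warpPerspective(rgb, H, (w, h))
    gray = cv2.cvtColor(warped, cv2.COLOR_BGR2GRAY)
    kps, descs = detector.detectAndCompute(gray, None)
    return kps, descs, H  # H maps original pixels to rectified pixels
```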


Author(s):  
Maxwell K. Micali ◽  
Hayley M. Cashdollar ◽  
Zachary T. Gima ◽  
Mitchell T. Westwood

While CNC programmers have powerful tools to develop optimized toolpaths and machining plans, these efforts can be wholly undermined by something as simple as human operator error during fixturing. This project addresses that potential operator error with a computer vision approach that provides coarse, closed-loop control between the fixturing and machining processes. Prior to starting the machining cycle, a sensor suite detects the currently fixtured geometry using computer vision algorithms and compares this geometry to a CAD reference. If the detected and reference geometries are not similar, the machining cycle will not start, and an alarm will be raised. The outcome of this project is a proof of concept of a low-cost, machine/controller-agnostic solution applied to CNC milling machines. The Workpiece Verification System (WVS) prototype implemented in this work cost a total of $100 to build, and all of the processing is performed on the self-contained platform. This solution has additional applications beyond milling that the authors are exploring.
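The paper does not publish its implementation, so the following is only a plausible sketch of such a check: compare the silhouette of the fixtured part against a rendered CAD reference using Hu-moment shape matching, and refuse to start the cycle when they differ. The file names and threshold are assumptions.

```python
# Hypothetical sketch of the WVS check: compare the fixtured workpiece's
# silhouette with a CAD-rendered reference silhouette; if the shapes differ,
# raise an alarm instead of starting the machining cycle.
import cv2

def largest_contour(image_path):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

detected = largest_contour("camera_capture.png")       # placeholder paths
reference = largest_contour("cad_reference_render.png")

# Hu-moment shape distance: 0 for identical shapes, larger = less similar.
dissimilarity = cv2.matchShapes(detected, reference,
                                cv2.CONTOURS_MATCH_I1, 0.0)

SIMILARITY_THRESHOLD = 0.1  # illustrative; would be calibrated per setup
if dissimilarity > SIMILARITY_THRESHOLD:
    print("ALARM: fixtured geometry does not match CAD reference")
else:
    print("verification passed; machining cycle may start")
```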


2013 ◽  
Vol 464 ◽  
pp. 387-390
Author(s):  
Wei Hua Wang

The analysis and understanding of human behavior has broad application in the computer vision domain, and modeling the human pose is one of its key technologies. To simplify the pose model and describe poses conveniently, current research often appends conditions that constrain either the pose-modeling process or the application environment. In this paper, a new method for modeling the human pose is proposed. The pose is modeled by structural relations derived from the body's physiological structure. The advantages of the model are its independence from movement, from the scale of the human image, and from the viewing angle, so it can be used to model human behavior in video.
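The paper does not formally define its structural relations; as a hedged illustration of pose features that are independent of position and scale, the sketch below computes joint angles and limb-length ratios from 2D keypoints, both of which are unchanged under translation and uniform scaling. All names here are assumptions, not the paper's notation.

```python
# Illustrative only: position- and scale-independent structural features
# from 2D body keypoints (joint angles and limb-length ratios). The paper's
# own relation model is not specified in detail.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def limb_ratio(p1, p2, q1, q2):
    """Length ratio of limb p1-p2 to limb q1-q2; scale-invariant."""
    return (np.linalg.norm(np.asarray(p1) - np.asarray(p2)) /
            (np.linalg.norm(np.asarray(q1) - np.asarray(q2)) + 1e-9))

# Example keypoints (x, y): shoulder, elbow, wrist, hip.
shoulder, elbow, wrist, hip = (0, 0), (1, 0), (1, 1), (0, 2)
features = [
    joint_angle(shoulder, elbow, wrist),        # elbow angle (90 deg here)
    limb_ratio(shoulder, elbow, shoulder, hip), # upper arm vs. torso length
]
print(features)  # unchanged if all keypoints are shifted or scaled together
```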


2017 ◽  
Vol 107 (09) ◽  
pp. 572-577
Author(s):  
Prof. B. Lorenz ◽
I. Kaltenmark

In modern production, lean manufacturing is one of the most important drivers of productivity gains. New developments in the Industrie 4.0 field can give fresh impulses to lean manufacturing. Experiments at OTH Regensburg are testing how low-cost camera systems can help make waste visible and minimize it. The results show that even with low investment costs, new potential for waste reduction can be uncovered.

