Automatic Helmet Detection in Real-Time and Surveillance Video

Author(s): Shubham Kumar, Nisha Neware, Atul Jain, Debabrata Swain, Puran Singh

2021
Author(s): Daniele Berardini, Adriano Mancini, Primo Zingaretti, Sara Moccia

Abstract Nowadays, video surveillance plays a crucial role. Analyzing surveillance videos is, however, a time-consuming and tiresome procedure. In recent years, artificial intelligence has paved the way for automatic and accurate surveillance-video analysis. In parallel with the development of artificial-intelligence methodologies, edge computing has become an active field of research, with the final goal of providing cost-effective, real-time deployment of the developed methodologies. In this work, we present an edge artificial-intelligence application for video surveillance. Our approach relies on a set of four IP cameras, which acquire video frames that are processed on the edge using the NVIDIA® Jetson Nano. A state-of-the-art deep-learning model, the Single Shot multibox Detector (SSD) MobileNetV2 network, is used to perform object and people detection in real time. The proposed infrastructure achieved an inference speed of ∼10.0 Frames per Second (FPS) for each parallel video stream. These results suggest that our work could be translated to a real-world scenario. Integrating the presented application into a wider monitoring system with a central unit could benefit the overall infrastructure: our application could send only high-level, video-related information to the central unit, allowing it to combine that information with data coming from other sensing devices without needless data overload. This would ensure a fast response in case of emergencies or detected anomalies. We hope this work will help stimulate research in the field of edge artificial intelligence for video surveillance.
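The per-stream processing described above can be sketched as a round-robin loop that interleaves frames from the four camera streams through a single on-device detector. This is a minimal illustration, not the authors' implementation: `detect` is a hypothetical stub standing in for SSD MobileNetV2 inference, and the frame sources are plain lists rather than real IP-camera streams.

```python
from collections import deque

NUM_STREAMS = 4  # the paper uses four IP cameras feeding one Jetson Nano

def detect(frame):
    """Stub standing in for SSD MobileNetV2 inference on one frame.

    A real detector would return (label, score, bounding box) tuples;
    the values here are placeholders for illustration only.
    """
    return [("person", 0.9, (10, 10, 50, 80))]

def process_round_robin(frames_per_stream):
    """Interleave frames from each stream so all streams advance together.

    `frames_per_stream` maps a stream index to its queued frames; the
    returned dict maps each stream index to its per-frame detections.
    """
    queues = {s: deque(frames_per_stream[s]) for s in range(NUM_STREAMS)}
    detections = {s: [] for s in range(NUM_STREAMS)}
    while any(queues.values()):
        for s in range(NUM_STREAMS):
            if queues[s]:
                frame = queues[s].popleft()
                detections[s].append(detect(frame))
    return detections
```

Round-robin scheduling keeps the latency of each stream bounded by the detector's single-frame inference time times the number of streams, which is consistent with the reported ∼10 FPS per parallel stream.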



2020
Author(s): Jingjing Li, Jie Zeng, Keyu Hou, Jin Zhou, Rui Wang

Due to the importance of offline consumer behavior, more and more researchers have begun to study consumer behavior in stores. In offline consumer-behavior research, video analysis is the most direct and convenient technology, and recognizing human posture is one of its key components. The OpenPose algorithm can accurately recognize multi-person poses in different environments in real time, so we applied it, in a novel way, to the study of in-store consumer behavior. We aim to develop the potential of this application for consumer-behavior research in the footwear retail industry by exploiting the technical advantages of the OpenPose algorithm. In our study, we first used the OpenPose algorithm to estimate multi-person poses and detect behavior, and then processed and analyzed the videos collected in the store. We collected one week of surveillance video, from July 10 to July 16, 2020, from a Red Dragonfly offline store in China. The specific process was to calibrate the area in the selected camera view, run the algorithm to perform identification and detection, and finally output in-store consumer-behavior data. Our results not only verify the feasibility of this application in offline retail stores; the data also indicate that consumers tend to enter the store from the right and stay concentrated in the middle and back of the store. These results may be affected by the store layout, product display, and staff guidance and reception.
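The "calibrate the area in the camera view" step above amounts to deciding whether a detected person falls inside a marked region of the frame. A hedged sketch of that check, assuming a person is located by a single reference keypoint (e.g. an OpenPose mid-hip point) and the calibrated area is a polygon in pixel coordinates; the coordinates below are illustrative, not from the study:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is the point (x, y) inside the polygon?

    `polygon` is a list of (x, y) vertices in order. A horizontal ray is
    cast from the point; an odd number of edge crossings means inside.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical calibrated region of the store in pixel coordinates.
store_area = [(100, 200), (500, 200), (500, 450), (100, 450)]
print(point_in_polygon(300, 300, store_area))  # keypoint inside -> True
print(point_in_polygon(50, 300, store_area))   # keypoint outside -> False
```

Counting how often keypoints land in each calibrated zone over time yields the kind of occupancy data (e.g. concentration in the middle and back of the store) the study reports.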


2015, Vol 15 (2), pp. 321
Author(s): Chen Yong, Shuai Feng, Zhan Di

<p><em>In low-light environments, surveillance video images have lower contrast, less information, and uneven brightness. To address this problem, this paper proposes a contrast resolution compensation algorithm based on a human visual perception model. It extracts the Y component from the YUV video image originally acquired by the camera to obtain contrast feature parameters, then applies a proportional-integral contrast resolution compensation to low-light pixels in the Y component and an adaptive exponential contrast resolution compensation to highlight pixels, enhancing the brightness of the video image while keeping the U and V components unchanged. The video images are then compressed and transmitted over the Internet, and finally decoded and displayed on the intelligent surveillance system device. The experimental results show that the algorithm can effectively improve the contrast resolution of the video image while preserving its color, and that it meets the real-time requirements of video monitoring.</em></p>
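The Y-only compensation described in the abstract can be illustrated with a simplified piecewise mapping. This is a sketch under assumed parameters, not the paper's exact algorithm: dark pixels get a linear (proportional) gain, bright pixels a power-law ("index"-style) curve, and the threshold, gain, and exponent below are hypothetical values chosen for demonstration. U and V are left untouched, which is what preserves color.

```python
import numpy as np

def compensate_y(y, threshold=0.4, gain=1.5, gamma=0.8):
    """Piecewise contrast compensation on a normalized Y channel in [0, 1].

    Pixels below `threshold` are brightened by a proportional gain;
    pixels at or above it follow an exponential (power-law) curve.
    All parameters are illustrative assumptions, not the paper's values.
    """
    y = np.asarray(y, dtype=np.float64)
    low = y < threshold
    out = np.empty_like(y)
    out[low] = np.clip(y[low] * gain, 0.0, 1.0)      # boost dark pixels
    out[~low] = np.clip(y[~low] ** gamma, 0.0, 1.0)  # compress highlights
    return out

# Toy YUV frame, channels last: only the Y plane is modified.
yuv = np.random.rand(4, 4, 3)
yuv[..., 0] = compensate_y(yuv[..., 0])  # U and V planes are kept as-is
```

Operating on Y alone before compression fits the pipeline in the abstract: the enhanced frame is then encoded, transmitted, and decoded for display without any color-channel processing.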

