Deep learning of group activities from partially observable surveillance video streams (Conference Presentation)

Author(s):  
Amir Shirkhodaie
2021, Vol 11 (11), pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a model of a task assistant based on a deep learning neural network. A YOLOv5 network is used to recognize some of the constituent parts of an automobile. A dataset of car engine images was created, and eight car parts were annotated in the images. The neural network was then trained to detect each part. The results show that YOLOv5s successfully detects the parts in real-time video streams with high accuracy, making it useful as an aid for training professionals to deal with new equipment using augmented reality. The architecture of an object recognition system using augmented reality glasses is also designed.
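As a rough illustration of the kind of inference loop such an assistant could run, the sketch below loads a custom-trained YOLOv5s model through the public ultralytics/yolov5 Torch Hub entry point and annotates a live video stream. The weights file name, class set, and camera source are illustrative assumptions, not artifacts from the paper.

```python
# Minimal sketch: real-time part detection with a custom-trained YOLOv5s model.
# "car_parts_yolov5s.pt" is a hypothetical weights file assumed to be trained
# on the eight annotated engine-part classes described in the abstract.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="car_parts_yolov5s.pt")
model.conf = 0.5  # confidence threshold for reported detections

cap = cv2.VideoCapture(0)  # live camera feed, e.g. the AR-glasses camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # YOLOv5 expects RGB input
    results = model(rgb)                           # run detection on the frame
    annotated = results.render()[0]                # draw boxes and class labels
    cv2.imshow("car parts", cv2.cvtColor(annotated, cv2.COLOR_RGB2BGR))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```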


Optik, 2020, Vol 202, pp. 163675
Author(s):  
Ya-Wen Hsu ◽  
Ting-Yen Wang ◽  
Jau-Woei Perng

A framework for performing video analysis is proposed using a dynamically tuned convolutional network. Videos are obtained from cloud storage, preprocessed, and a model supporting classification is built on these video streams using cloud-based infrastructure. A key focus of this paper is tuning the hyper-parameters associated with the deep learning algorithm used to build the model. We further propose an automatic video object classification pipeline to validate the framework. The mathematical model used to assist hyper-parameter tuning improves the performance of the proposed pipeline, and the effect of different parameters on the system's performance is analyzed. Accordingly, the parameters that contribute to the best performance are selected for the video object classification pipeline. Our experiment-based validation reveals an accuracy and precision of 97% and 96%, respectively. The framework proved to be scalable, robust, and adaptable to a wide range of applications.
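As an illustration of what the hyper-parameter tuning step could look like in practice, the sketch below runs a simple grid search over learning rate and convolutional filter count for a small frame-classification CNN in TensorFlow/Keras. The search space, input size, and class count are illustrative assumptions and stand in for, rather than reproduce, the paper's mathematical tuning model.

```python
# Minimal sketch: grid search over two hyper-parameters of a frame classifier.
import itertools
import tensorflow as tf

def build_model(learning_rate, base_filters, num_classes=10):
    # Small CNN over 64x64 RGB frames; the architecture is illustrative only.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 3)),
        tf.keras.layers.Conv2D(base_filters, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(base_filters * 2, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def grid_search(train_ds, val_ds):
    # Train briefly for each (learning rate, filter count) pair and keep the
    # combination with the best validation accuracy.
    best_params, best_acc = None, 0.0
    for lr, filters in itertools.product([1e-2, 1e-3, 1e-4], [16, 32]):
        model = build_model(lr, filters)
        model.fit(train_ds, epochs=3, verbose=0)
        _, acc = model.evaluate(val_ds, verbose=0)
        if acc > best_acc:
            best_params, best_acc = (lr, filters), acc
    return best_params, best_acc
```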


2021, Vol 2 (4)
Author(s):  
S. Vasavi ◽  
P. Vineela ◽  
S. Venkat Raman

2021, Vol 6 (22), pp. 60-70
Author(s):  
Bushra Yasmeen ◽  
Haslina Arshad ◽  
Hameedur Rahman

Security has recently been given the highest priority, with a rise in the number of antisocial activities taking place. CCTV cameras have been installed in many places to continuously track individuals and their interactions. In the developed world, with a population of about 1.6 billion, every person is captured on camera roughly 30 times a day. Video recorded at a resolution of 710x570 amounts to approximately 20 GB per day. With constant monitoring of this volume of data, it is hard to judge whether an incident is abnormal, and doing so manually is a nearly impossible task. In this paper, we build a system for detecting suspicious activity in CCTV surveillance video. The system also needs to indicate in which frame, and in which region of that frame, the behavior occurs, so that a faster judgment can be made about whether the activity is unusual. This is done by converting the video into frames and analyzing the persons and their activities in the processed frames. We rely on machine learning and deep learning algorithms to make this possible. To automate the process, we first build a training model from a large number of images (covering all relevant features of suspicious activities) and a convolutional neural network implemented with the TensorFlow Python module. Any video can then be uploaded into the application; frames are extracted from the uploaded video and passed to the trained model, which predicts the class of each frame as suspicious or normal.
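A minimal sketch of the frame-extraction and classification step described above is given below, assuming a CNN has already been trained and saved with Keras. The model file name, input size, frame stride, and class labels are illustrative assumptions.

```python
# Minimal sketch: classify sampled video frames as "normal" or "suspicious".
import cv2
import numpy as np
import tensorflow as tf

CLASSES = ["normal", "suspicious"]
# "suspicious_activity_cnn.h5" is a hypothetical file holding the trained CNN.
model = tf.keras.models.load_model("suspicious_activity_cnn.h5")

def classify_video(path, frame_stride=30, input_size=(128, 128)):
    # Extract every Nth frame from the uploaded video and label it with the CNN,
    # recording the frame index so suspicious segments can be located quickly.
    cap = cv2.VideoCapture(path)
    labels = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % frame_stride == 0:
            resized = cv2.resize(frame, input_size).astype("float32") / 255.0
            probs = model.predict(resized[np.newaxis, ...], verbose=0)[0]
            labels.append((index, CLASSES[int(np.argmax(probs))]))
        index += 1
    cap.release()
    return labels  # e.g. [(0, "normal"), (30, "suspicious"), ...]
```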


Author(s):  
Xuan Zhao ◽  
Yu Chen ◽  
Erik Blasch ◽  
Liwen Zhang ◽  
Genshe Chen

2020, Vol 38 (5), pp. 6291-6298
Author(s):  
Sourav Ravikumar ◽  
Dayanand Vinod ◽  
Gowtham Ramesh ◽  
Sini Raj Pulari ◽  
Senthilkumar Mathi
