Real-Time Social Distance Maintaining Using Image Processing and Deep Learning

2021 ◽  
Vol 1916 (1) ◽  
pp. 012190
Author(s):  
K R Senthil Murugan ◽  
G Kavinraj ◽  
K Mohanaprasanth ◽  
Krishnan B Ragul
2021 ◽  
pp. 297-315
Author(s):  
Balaji Muthazhagan ◽  
Aparnasri Panchapakesan ◽  
Suriya Sundaramoorthy

2021 ◽  
Vol 5 ◽  
pp. 182-196
Author(s):  
Muhammad Haris Kaka Khel ◽  
Kushsairy Kadir ◽  
Waleed Albattah ◽  
Sheroz Khan ◽  
MNMM Noor ◽  
...  

Crowd management has attracted serious attention under the prevailing COVID-19 pandemic conditions, with the aim of ensuring that sick persons do not become a source of virus transmission. World Health Organization (WHO) guidelines include maintaining a safe distance and wearing a mask in gatherings as part of standard operating procedures (SOPs), considered thus far the most effective preventive measures against COVID-19. Several methods and strategies have been used to construct face detection and social distance detection models. In this paper, a deep learning model is presented that detects people without masks and those not keeping a safe distance, and counts the individuals who violate the SOPs. The proposed model employs the Single Shot Multi-box Detector (SSD) as a feature extractor, followed by Spatial Pyramid Pooling (SPP) to integrate the extracted features and improve the model's detection capability. Using the MobileNetV2 architecture as the classifier backbone makes the model lightweight, fast, and computationally efficient, allowing it to be deployed on embedded devices for real-time mask and social distance detection, which is the sole objective of this research. The proposed technique yields an accuracy of 99% and reduces the loss to 0.04%. Doi: 10.28991/esj-2021-SPER-14
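The abstract does not give implementation details, but the distance-violation counting it describes can be sketched as a pairwise check over detected person boxes. The function name, box format, and the pixel threshold below are illustrative assumptions, not the paper's actual code; a deployed system would calibrate the threshold to real-world metres.

```python
import math

def count_distance_violations(boxes, min_distance_px=100):
    """Count pairs of detected people closer than min_distance_px.

    boxes: list of (x1, y1, x2, y2) person bounding boxes in pixels.
    Returns (number of violating pairs, sorted indices of violators).
    """
    centroids = [((x1 + x2) / 2, (y1 + y2) / 2) for x1, y1, x2, y2 in boxes]
    violators = set()
    pairs = 0
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            dx = centroids[i][0] - centroids[j][0]
            dy = centroids[i][1] - centroids[j][1]
            if math.hypot(dx, dy) < min_distance_px:
                pairs += 1
                violators.update((i, j))
    return pairs, sorted(violators)
```

The O(n²) pairwise loop is acceptable for the handful of people typically visible in one frame; larger crowds would call for a spatial index.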


Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1932
Author(s):  
Malik Haris ◽  
Adam Glowacz

Automated driving and vehicle safety systems need object detection. It is important that object detection be accurate overall, robust to weather and environmental conditions, and run in real-time. Consequently, these systems require image processing algorithms to inspect the contents of images. This article compares the accuracy of five major image processing algorithms: Region-based Fully Convolutional Network (R-FCN), Mask Region-based Convolutional Neural Network (Mask R-CNN), Single Shot Multi-Box Detector (SSD), RetinaNet, and You Only Look Once v4 (YOLOv4). For this comparative analysis, we used the large-scale Berkeley Deep Drive (BDD100K) dataset. The strengths and limitations of each algorithm are analyzed based on parameters such as accuracy (with/without occlusion and truncation), computation time, and the precision-recall curve. The comparison given in this article is helpful for understanding the pros and cons of standard deep learning-based algorithms operating under real-time deployment restrictions. We conclude that YOLOv4 performs best at detecting difficult road target objects under complex road scenarios and weather conditions in an identical testing environment.
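The precision-recall comparison mentioned above rests on matching detections to ground-truth boxes by intersection-over-union (IoU). A minimal sketch of that evaluation step follows; the greedy one-to-one matching and the 0.5 IoU threshold are common conventions assumed here, not details taken from the article.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def precision_recall(detections, ground_truth, iou_thresh=0.5):
    """Greedily match each detection to an unmatched ground-truth box."""
    matched = set()
    tp = 0
    for det in detections:
        best, best_iou = None, iou_thresh
        for gi, gt in enumerate(ground_truth):
            if gi in matched:
                continue
            v = iou(det, gt)
            if v >= best_iou:
                best, best_iou = gi, v
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```

Sweeping the detector's confidence threshold and recording (precision, recall) at each point yields the precision-recall curve the article compares.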


Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1664
Author(s):  
Yoon-Ki Kim ◽  
Yongsung Kim

Recently, as the amount of real-time video streaming data has increased, distributed parallel processing systems have rapidly evolved to process large-scale data. In addition, with the increasing scale of the computing resources constituting such systems, orchestration technology has become crucial for proper management of computing resources, in terms of allocating them, setting up a programming environment, and deploying user applications. In this paper, we present DiPLIP, a new distributed parallel processing platform for real-time large-scale image processing based on deep learning model inference. It provides a scheme for large-scale real-time image inference using a buffer layer, and a parallel processing environment that scales with the size of the image stream. Through the distribution of virtual machine containers, it allows users to easily run trained deep learning models on real-time images in a distributed parallel processing environment at high speed.
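The buffer-layer idea can be illustrated in miniature with a bounded queue feeding a pool of inference workers: producers block when the buffer is full, which smooths bursts in the incoming stream. This thread-based sketch is only an analogy for the platform's distributed design (DiPLIP itself distributes virtual machine containers); all names below are invented for illustration.

```python
import queue
import threading

def run_buffered_inference(frames, infer, num_workers=4, buffer_size=32):
    """Feed frames through a bounded buffer to a pool of inference workers.

    frames: iterable of input frames; infer: callable applied to each frame.
    Results are returned in the original frame order.
    """
    buf = queue.Queue(maxsize=buffer_size)   # the "buffer layer"
    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            item = buf.get()
            if item is None:                 # poison pill: shut down
                return
            idx, frame = item
            out = infer(frame)
            with lock:
                results[idx] = out

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for idx, frame in enumerate(frames):
        buf.put((idx, frame))                # blocks when buffer is full
    for _ in threads:
        buf.put(None)
    for t in threads:
        t.join()
    return [results[i] for i in range(len(frames))]
```

Scaling the worker count with the stream size mirrors, at a small scale, the platform's elastic allocation of processing resources.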


Author(s):  
P. J. Baeck ◽  
N. Lewyckyj ◽  
B. Beusen ◽  
W. Horsten ◽  
K. Pauly

<p><strong>Abstract.</strong> Detection of humans, e.g. for search and rescue operations, has been enabled by the availability of compact, easy-to-use cameras and drones. At the same time, aerial photogrammetry techniques for inspection applications allow for precise geographic localization and the generation of an overview orthomosaic and a 3D terrain model. The proposed solution is based on nadir drone imagery and combines deep learning and photogrammetric algorithms to detect people and position them with geographical coordinates on an overview orthomosaic and 3D terrain map. The drone image processing chain is fully automated and near real-time, and therefore allows search and rescue teams to operate more efficiently in difficult-to-reach areas.</p>
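The geolocation step the abstract describes, placing a detection on the orthomosaic with geographic coordinates, typically reduces to applying the mosaic's affine geotransform to the detection's pixel position. The sketch below assumes the six-element GDAL-style geotransform convention; it is a generic illustration, not the authors' processing chain.

```python
def pixel_to_geo(col, row, geotransform):
    """Map a pixel (col, row) in an orthomosaic to map coordinates.

    geotransform: six-element GDAL-style affine transform
    (origin_x, pixel_width, row_rotation, origin_y, col_rotation, pixel_height).
    pixel_height is normally negative because row indices grow downward.
    """
    gx, pw, rr, gy, cr, ph = geotransform
    x = gx + col * pw + row * rr
    y = gy + col * cr + row * ph
    return x, y
```

Feeding a detected person's bounding-box centre through this mapping yields the coordinates shown to the rescue team on the overview map.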


2021 ◽  
Vol 8 (2) ◽  
pp. 15-19
Author(s):  
Julkar Nine ◽  
Rahul Mathavan

Traffic light detection and back-light recognition are essential research topics in the area of intelligent vehicles, because they help avoid vehicle collisions and provide driver safety. Improved detection and semantic clarity may aid self-driving cars in preventing traffic accidents at crowded junctions, thus improving overall driving safety. Complex traffic situations, on the other hand, make it more difficult for algorithms to identify and recognize objects. The latest state-of-the-art algorithms based on deep learning and computer vision successfully address the majority of real-time problems for autonomous driving, such as detecting traffic signals, traffic signs, and pedestrians. We propose a combination of deep learning and image processing methods, using the MobileNetSSD deep neural network architecture with transfer learning, for real-time detection and identification of traffic lights and back-lights. The inference model is obtained from frameworks such as TensorFlow and TensorFlow Lite and is trained on the COCO dataset. This study investigates the feasibility of executing object detection on the Raspberry Pi 3B+, a widely used embedded computing board. The algorithm's performance is measured in terms of frames per second (FPS), accuracy, and inference time.
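The FPS and inference-time measurements mentioned in the last sentence can be taken with a simple timing harness around the detector callable. The sketch below is framework-agnostic and illustrative only; the warm-up convention (excluding initial calls while caches and interpreters warm up) is an assumption, not a detail from the study.

```python
import time

def benchmark_detector(detect, frames, warmup=2):
    """Measure mean inference time (ms) and FPS for a detector callable.

    detect: function taking one frame and returning detections.
    warmup: initial calls excluded from timing (interpreter/cache warm-up).
    """
    for frame in frames[:warmup]:
        detect(frame)
    timed = frames[warmup:]
    start = time.perf_counter()
    for frame in timed:
        detect(frame)
    elapsed = time.perf_counter() - start
    mean_ms = 1000.0 * elapsed / len(timed)
    fps = len(timed) / elapsed
    return mean_ms, fps
```

On a board like the Raspberry Pi 3B+, such per-frame timings make the trade-off between model size and achievable frame rate directly visible.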


Author(s):  
Komala K. V. ◽  
Deepa V. P.

With the advance of technology and the implementation of the Internet of Things (IoT), the realization of the smart city has become much needed. One of the key parts of a cyber-physical system of urban life is transportation. Such mission-critical applications have sparked interest among researchers in academia and industry in developing autonomous robots. In the domain of autonomous robots, intelligent video analytics is crucial, and with the advent of deep learning, many neural-network-based learning approaches have been considered. Here, the advanced Single Shot Multibox Detector (SSD) method is exploited for real-time video/image analysis on an IoT device, and avoidance of vehicles or other barriers on the road is performed using image processing. The proposed work uses the SSD algorithm for object detection and image processing to control the car based on its current input. Thus, this work aims to develop real-time barrier detection and avoidance for autonomous robots using a camera and a barrier avoidance sensor in an unstructured environment.
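The control step, turning SSD detections into a steering decision, is not specified in the abstract, but one common scheme checks whether any detected box blocks a centre corridor of the frame and steers toward the freer side. Everything below (corridor width, thresholds, command names) is a hypothetical sketch, not the authors' controller.

```python
def avoidance_command(boxes, frame_width, danger_ratio=0.25):
    """Choose a steering command from detected obstacle boxes.

    boxes: (x1, y1, x2, y2) obstacle boxes in pixels. A box is 'blocking'
    when it overlaps the centre corridor of the frame. Steer toward the
    side with more free space; stop if neither side has enough room.
    """
    corridor_left = frame_width * (0.5 - danger_ratio / 2)
    corridor_right = frame_width * (0.5 + danger_ratio / 2)
    blocking = [b for b in boxes
                if b[0] < corridor_right and b[2] > corridor_left]
    if not blocking:
        return "forward"
    # Treat the widest blocking box as the nearest obstacle (a rough
    # proxy: closer objects appear larger in the image).
    nearest = max(blocking, key=lambda b: b[2] - b[0])
    left_space = nearest[0]
    right_space = frame_width - nearest[2]
    if max(left_space, right_space) < frame_width * 0.2:
        return "stop"
    return "left" if left_space > right_space else "right"
```

In practice the camera-based decision would be fused with the barrier avoidance sensor mentioned above, which provides range information the image alone lacks.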

