Real-Time Monitoring of COVID-19 SOP in Public Gathering Using Deep Learning Technique

2021 ◽  
Vol 5 ◽  
pp. 182-196
Author(s):  
Muhammad Haris Kaka Khel ◽  
Kushsairy Kadir ◽  
Waleed Albattah ◽  
Sheroz Khan ◽  
MNMM Noor ◽  
...  

Crowd management has attracted serious attention under the prevailing pandemic conditions of COVID-19, with the aim of ensuring that sick persons do not become a source of virus transmission. World Health Organization (WHO) guidelines include maintaining a safe distance and wearing a mask in gatherings as part of standard operating procedures (SOP), considered thus far the most effective preventive measures against COVID-19. Several methods and strategies have been used to construct face detection and social distance detection models. In this paper, a deep learning model is presented to detect people without masks and those not keeping a safe distance, and to count the individuals who violate the SOP. The proposed model employs the Single Shot Multi-box Detector as a feature extractor, followed by Spatial Pyramid Pooling (SPP) to integrate the extracted features and improve the model's detection capability. Using the MobileNetV2 architecture as the classifier framework makes the model lightweight, fast, and computationally efficient, allowing it to be deployed on embedded devices for real-time mask and social distance detection, which is the sole objective of this research. The proposed technique yields an accuracy of 99% and reduces the loss to 0.04%. DOI: 10.28991/esj-2021-SPER-14
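As a rough illustration of the SOP check (not the authors' implementation), the sketch below counts violators from detections that an SSD-style model is assumed to have already produced: one bounding box per person plus a mask/no-mask flag. The pixel distance threshold is an assumption; a deployed system would calibrate it against the camera geometry.

```python
import itertools
import math

SAFE_DISTANCE_PX = 150  # illustrative pixel threshold, not a value from the paper

def sop_violations(person_boxes, has_mask):
    """Count individuals violating the SOP, given (x, y, w, h) boxes and
    a parallel list of booleans indicating whether each person wears a mask."""
    violators = set()
    # Mask violations
    for i, masked in enumerate(has_mask):
        if not masked:
            violators.add(i)
    # Distance violations: compare centroids of every pair of detected persons
    centers = [(x + w / 2.0, y + h / 2.0) for (x, y, w, h) in person_boxes]
    for i, j in itertools.combinations(range(len(centers)), 2):
        if math.dist(centers[i], centers[j]) < SAFE_DISTANCE_PX:
            violators.update((i, j))
    return len(violators)
```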

Author(s):  
Ni Nyoman Ayu Marlina ◽  
Denden Mohammad Ariffin ◽  
Arief Suryadi Satyawan ◽  
Mohammed Ikrom Asysyakuur ◽  
Muhammad Farhan Utamajaya ◽  
...  

As technology advances, every car manufacturer keeps making its newest products more sophisticated. This idea has given rise to the concept of the autonomous electric vehicle (KLO). The intent is to keep offering vehicles that satisfy ever-evolving consumer tastes while also being environmentally friendly. The arrival of autonomous electric vehicles will certainly be experienced by Indonesia, whose society has come to depend on cars for transportation. This situation therefore requires us to prepare for the era of Mobility in Society 5.0, in which we must master the supporting technologies. Autonomous electric vehicles can only be realized if their systems can detect objects reliably. For that reason, this study develops a pedestrian detection system based on deep learning that exploits 360° images. The object detection software was built on the Single Shot Multibox Detector (SSD) MobileNetV1, while the hardware used for development was the Jetson AGX Xavier. Development began with capturing normalized 360° images containing pedestrian information around the Universitas Nurtanio campus, used as the training dataset and test data; training SSD MobileNetV1 on that dataset (19,038 images); and testing the trained software model both in real time and offline. Offline testing of 735 daytime 360° images showed that 55.5% of the images were detected perfectly, while of 595 late-afternoon 360° images, 51.2% were detected perfectly. Real-time testing confirmed that 98% of pedestrians were detected during the day, compared with only 95% in the late afternoon. The average processing time for a daytime image was 32.81283 ms on the CPU and 32.79766 ms on the GPU. For an image with the same information in late-afternoon conditions, the processing time was 37.42598 ms on the CPU and 37.45174 ms on the GPU.
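Since the abstract reports per-image CPU and GPU processing times, a small timing harness like the one below is enough to reproduce that kind of measurement for any detector callable (for example, an SSD-MobileNetV1 model loaded on the Jetson AGX Xavier). The harness itself is an assumption for illustration, not the authors' code.

```python
import time

def mean_inference_time_ms(detector, frames, warmup=5):
    """Average per-frame inference time in milliseconds for a callable detector(frame).
    A few warm-up runs are excluded so GPU initialization does not skew the average."""
    for frame in frames[:warmup]:
        detector(frame)
    start = time.perf_counter()
    for frame in frames[warmup:]:
        detector(frame)
    elapsed = time.perf_counter() - start
    return 1000.0 * elapsed / max(len(frames) - warmup, 1)
```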


Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1932
Author(s):  
Malik Haris ◽  
Adam Glowacz

Automated driving and vehicle safety systems need object detection. Object detection must be accurate overall, robust to weather and environmental conditions, and able to run in real time. Such systems therefore rely on image processing algorithms to inspect the contents of images. This article compares the accuracy of five major image processing algorithms: Region-based Fully Convolutional Network (R-FCN), Mask Region-based Convolutional Neural Networks (Mask R-CNN), Single Shot Multi-Box Detector (SSD), RetinaNet, and You Only Look Once v4 (YOLOv4). In this comparative analysis, we used the large-scale Berkeley Deep Drive (BDD100K) dataset. The strengths and limitations of the algorithms are analyzed based on parameters such as accuracy (with/without occlusion and truncation), computation time, and the precision-recall curve. The comparison given in this article is helpful for understanding the pros and cons of standard deep learning-based algorithms operating under real-time deployment restrictions. We conclude that YOLOv4 outperforms the other algorithms in accurately detecting difficult road targets under complex road scenarios and weather conditions in an identical testing environment.
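For readers who want to reproduce this kind of accuracy comparison, a minimal sketch of IoU-based matching of predictions to ground truth is given below; the (x1, y1, x2, y2) box format, the 0.5 IoU threshold, and greedy matching in score order are assumptions, not the article's exact evaluation protocol.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def precision_recall(pred_boxes, gt_boxes, iou_thresh=0.5):
    """Greedy one-to-one matching; pred_boxes are assumed sorted by confidence."""
    matched, tp = set(), 0
    for p in pred_boxes:
        best, best_iou = None, iou_thresh
        for i, g in enumerate(gt_boxes):
            if i not in matched and iou(p, g) >= best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(pred_boxes) - tp
    fn = len(gt_boxes) - tp
    return tp / max(tp + fp, 1), tp / max(tp + fn, 1)
```

Sweeping the confidence threshold used to filter pred_boxes and recording the (precision, recall) pair at each step yields the precision-recall curve discussed in the article.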


2019 ◽  
Vol 11 (7) ◽  
pp. 786 ◽  
Author(s):  
Yang-Lang Chang ◽  
Amare Anagaw ◽  
Lena Chang ◽  
Yi Wang ◽  
Chih-Yu Hsiao ◽  
...  

Synthetic aperture radar (SAR) imagery has been used as a promising data source for monitoring maritime activities, and its application to oil and ship detection has been the focus of many previous research studies. Many object detection methods, ranging from traditional to deep learning approaches, have been proposed. However, the majority of them are computationally intensive and have accuracy problems. The huge volume of remote sensing data also poses a challenge for real-time object detection. To mitigate this problem, a high-performance computing (HPC) approach has been proposed to accelerate SAR imagery analysis using GPU-based computing. In this paper, we propose an enhanced GPU-based deep learning method to detect ships in SAR images. The You Only Look Once version 2 (YOLOv2) deep learning framework is used to build the architecture and train the model. YOLOv2 is a state-of-the-art real-time object detection system, which outperforms the Faster Region-Based Convolutional Network (Faster R-CNN) and Single Shot Multibox Detector (SSD) methods. Additionally, in order to reduce computational time while keeping detection accuracy competitive, we develop a new architecture with fewer layers, called YOLOv2-reduced. In the experiments, we use two datasets: the SAR Ship Detection Dataset (SSDD) and the Diversified SAR Ship Detection Dataset (DSSDD). These two datasets were used for training and testing purposes. YOLOv2 test results showed an increase in ship detection accuracy as well as a noticeable reduction in computational time compared to Faster R-CNN. From the experimental results, the proposed YOLOv2 architecture achieves an accuracy of 90.05% and 89.13% on the SSDD and DSSDD datasets, respectively. The proposed YOLOv2-reduced architecture has detection performance comparable to YOLOv2, but with less computational time on an NVIDIA TITAN X GPU. The experimental results show that deep learning can make a big leap forward in improving the performance of SAR image ship detection.
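The paper's exact YOLOv2-reduced layer configuration is not reproduced here, so the Keras sketch below only illustrates the general idea of a YOLOv2-style backbone with fewer convolutional blocks that still ends in a 13x13 grid head; the layer counts, filter widths, anchor count, and single ship class are assumptions for illustration.

```python
import tensorflow as tf

def yolo_reduced_backbone(boxes=5, num_classes=1):
    """Hypothetical reduced backbone: five conv blocks instead of the full
    Darknet-19 used by YOLOv2, ending in a 13x13 grid prediction head."""
    inputs = tf.keras.Input(shape=(416, 416, 3))
    x = inputs
    for filters in (32, 64, 128, 256, 512):
        x = tf.keras.layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.LeakyReLU(0.1)(x)
        x = tf.keras.layers.MaxPooling2D(2)(x)  # 416 / 2^5 = 13
    # Each grid cell predicts `boxes` anchors with (tx, ty, tw, th, objectness) plus class scores
    outputs = tf.keras.layers.Conv2D(boxes * (5 + num_classes), 1)(x)
    return tf.keras.Model(inputs, outputs)

model = yolo_reduced_backbone()
model.summary()  # inspect the parameter count saved by the shallower design
```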


2019 ◽  
Vol 11 (4) ◽  
Author(s):  
Afandi Nur Aziz Thohari ◽  
Rifki Adhitama

Indonesia is a country with a variety of cultures, one of which is wayang kulit. This traditional Javanese performance art must continue to be preserved so that it is known by future generations. There are many wayang figures in Indonesia, and the most famous are the punakawan. Wayang punakawan consists of four characters, namely Semar, Gareng, Petruk, and Bagong. To preserve wayang punakawan for the next generation, this study creates a system that can identify punakawan objects in real time using deep learning technology. The method used is the Single Shot Multibox Detector (SSD), a deep learning model well suited to classifying data with three-dimensional structure, such as real-time video. An SSD model with a MobileNet backbone requires little computation, so it can run in a real-time system. Classification involves two stages, training and testing. The training process took 28 hours over 100,000 iterations and produced the model used to identify objects. Based on the test results, the object detection accuracy was 98.86%. This proves that the system can detect objects in real time accurately.
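A hedged sketch of such a real-time detection loop is shown below using OpenCV's dnn module; the exported graph and config file names, the 300x300 input size, the webcam index, and the 1-based mapping of class IDs to the four punakawan labels are assumptions, not the authors' released artifacts.

```python
import cv2

# Hypothetical file names; a real run needs an exported frozen SSD-MobileNet graph and its .pbtxt.
net = cv2.dnn_DetectionModel("frozen_inference_graph.pb", "ssd_mobilenet.pbtxt")
net.setInputSize(300, 300)
net.setInputScale(1.0 / 127.5)
net.setInputMean((127.5, 127.5, 127.5))
net.setInputSwapRB(True)

labels = ["semar", "gareng", "petruk", "bagong"]  # assumed 1-based class IDs

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, confidences, boxes = net.detect(frame, confThreshold=0.5)
    if len(boxes):
        for cid, conf, (x, y, w, h) in zip(class_ids.flatten(), confidences.flatten(), boxes):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.putText(frame, f"{labels[int(cid) - 1]} {conf:.2f}", (x, y - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("punakawan", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```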


Author(s):  
Balaji G V

Object detection using the SSD (Single Shot Detector) and MobileNets is efficient because this technique detects objects quickly with fewer resources and without sacrificing performance. Every class of item on which the classification algorithm has been trained generates a bounding box and an annotation describing that class of object. Real-time detection and categorization of objects from video data provides the foundation for several types of analytical features, such as the volume of traffic in a certain area over time or the total population present in an area.
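As an example of the traffic-volume analytics mentioned above (an illustrative sketch, not the author's pipeline), the code below aggregates per-frame SSD/MobileNet detections into per-minute counts; note that counting raw detections double-counts objects that persist across frames, so an object tracker would be needed for exact volumes.

```python
from collections import Counter, defaultdict

volume_per_minute = defaultdict(Counter)  # minute index -> class name -> detection count

def accumulate(timestamp_s, detections, min_confidence=0.5):
    """detections: list of (class_name, confidence, box) tuples for one frame."""
    minute = int(timestamp_s // 60)
    for class_name, confidence, _box in detections:
        if confidence >= min_confidence:
            volume_per_minute[minute][class_name] += 1

# Example frame: two detections at t = 12.3 s
accumulate(12.3, [("car", 0.91, (10, 20, 80, 60)), ("person", 0.77, (120, 40, 30, 80))])
print(dict(volume_per_minute))
```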


2021 ◽  
Vol 12 (1) ◽  
pp. 25-31
Author(s):  
Pranad Munjal ◽  
Vikas Rattan ◽  
Rajat Dua ◽  
Varun Malik

The outbreak of COVID-19 has taught everyone the importance of face masks. SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2) is a communicable virus that is transmitted from person to person in the form of respiratory droplets while speaking or sneezing. It also spreads by touching an infected surface or by being in contact with an infected person. Healthcare officials from the World Health Organization and local authorities are urging people to wear face masks, as this is one of the comprehensive strategies to curb transmission. Amid the advancement of technology, deep learning and computer vision have proved to be effective for recognition through image processing. This system is a real-time application that detects whether people are wearing a mask or not. It has been trained on a dataset of around 4000 images, resized to 224x224 pixels, and achieves an accuracy of 98%. In this research, the model has been trained and compiled with two CNNs whose accuracies were compared to choose the best one for this type of task. It can be put into action in public areas such as airports, railway stations, schools, and offices to check whether COVID-19 guidelines are being adhered to.
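The paper's two CNNs are not given here, so the sketch below only shows a transfer-learning classifier in the same spirit, assuming a MobileNetV2 backbone, a binary mask/no-mask label, and a dataset/ directory with one subfolder per class; all of these choices are assumptions for illustration.

```python
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained backbone

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # mask vs. no-mask
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical directory layout: dataset/mask/*.jpg and dataset/no_mask/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/", image_size=(224, 224), batch_size=32, label_mode="binary",
    validation_split=0.2, subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/", image_size=(224, 224), batch_size=32, label_mode="binary",
    validation_split=0.2, subset="validation", seed=42)

model.fit(train_ds, validation_data=val_ds, epochs=10)
```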


2022 ◽  
Vol 149 ◽  
pp. 106819
Author(s):  
Huazheng Wu ◽  
Xiangfeng Meng ◽  
Xiulun Yang ◽  
Xianye Li ◽  
Yongkai Yin

2020 ◽  
Vol 17 (1) ◽  
pp. 68-73
Author(s):  
M. Hemaanand ◽  
V. Sanjay Kumar ◽  
R. Karthika

With the evolution of technology, ensuring people's safety and security around the clock is a big challenge. We propose an advanced technique based on a deep learning and artificial intelligence platform that can monitor people, their homes, and their surroundings, providing a quantifiable increase in security. We already have surveillance cameras in our homes for video capture as well as security purposes. Our proposed technique detects, classifies, and informs the user if there is any security breach involving a classified object, using the cameras together with deep learning techniques and Internet of Things technology. It can serve as a perimeter monitoring and intruder alert system in a smart surveillance environment. This paper provides a well-defined structure for live stream data analysis. It overcomes the limitations of static closed-circuit television cameras by serving as a motion-based tracking system that monitors events in real time to ensure activities are limited to specific persons within authorized areas. It has the advantage of creating multiple bounding boxes to track the objects, which could be any living or non-living thing, based on the trained models. A trespasser or intruder can be efficiently detected using CCTV camera surveillance supported by a real-time object classifier at the intermediate module. The proposed method relies mainly on real-time object detection and classification implemented using MobileNet and the Single Shot Detector.
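A minimal sketch of the authorized-area check is given below, assuming the MobileNet/SSD stage has already returned person boxes as (x, y, w, h) and that the authorized zone is a simple axis-aligned rectangle (a real deployment would likely use per-camera polygons and a tracker before raising the IoT alert).

```python
AUTHORIZED_AREA = (0, 0, 400, 300)  # x, y, width, height of the permitted zone (illustrative)

def inside(area, point):
    ax, ay, aw, ah = area
    px, py = point
    return ax <= px <= ax + aw and ay <= py <= ay + ah

def breach_detected(person_boxes):
    """Return True if any detected person stands outside the authorized area."""
    for (x, y, w, h) in person_boxes:
        foot_point = (x + w / 2.0, y + h)  # bottom-centre approximates ground position
        if not inside(AUTHORIZED_AREA, foot_point):
            return True                    # this is where the IoT alert would be triggered
    return False
```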


2021 ◽  
Vol 1 (2) ◽  
pp. 387-413
Author(s):  
Chowdhury Erfan Shourov ◽  
Mahasweta Sarkar ◽  
Arash Jahangiri ◽  
Christopher Paolini

Skateboarding as a method of transportation has become prevalent, which has increased the occurrence and likelihood of pedestrian–skateboarder collisions and near-collision scenarios in shared-use roadway areas. Collisions between pedestrians and skateboarders can result in significant injury. New approaches are needed to evaluate shared-use areas prone to hazardous pedestrian–skateboarder interactions, and to perform real-time, in situ (e.g., on-device) predictions of pedestrian–skateboarder collisions as road conditions vary due to changes in land usage and construction. A mechanism called Surrogate Safety Measures for skateboarder–pedestrian interaction can be computed to evaluate high-risk conditions on roads and sidewalks using deep learning object detection models. In this paper, we present the first ever skateboarder–pedestrian safety study leveraging deep learning architectures. We review and analyze state-of-the-art deep learning architectures, namely Faster R-CNN and two variants of the Single Shot Multi-box Detector (SSD) model, to select the model that best suits two different tasks: automated calculation of Post Encroachment Time (PET) and finding hazardous conflict zones in real time. We also contribute a new annotated data set of skateboarder–pedestrian interactions collected for this study. Both of our selected models can detect and classify pedestrians and skateboarders correctly and efficiently. However, due to differences in their architectures and based on the advantages and disadvantages of each model, the two models were used individually for two different sets of tasks. Due to its improved accuracy, the Faster R-CNN model was used to automate the calculation of Post Encroachment Time, whereas the Single Shot Multibox MobileNet V1 model, with its extremely fast inference rate, was used to determine hazardous regions in real time. An outcome of this work is a model that can be deployed on low-cost, small-footprint mobile and IoT devices at traffic intersections with existing cameras to perform on-device inferencing for in situ Surrogate Safety Measurement (SSM), such as Time-To-Collision (TTC) and Post Encroachment Time (PET). SSM values that exceed a hazard threshold can be published to a Message Queuing Telemetry Transport (MQTT) broker, where messages are received by an intersection traffic signal controller for real-time signal adjustment, thus contributing to state-of-the-art vehicle and pedestrian safety at hazard-prone intersections.
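To make the SSM publishing step concrete, here is a small sketch of computing PET from two conflict-zone timestamps and pushing a hazard message to an MQTT broker with paho-mqtt; the broker address, topic, payload schema, and 1.5 s threshold are assumptions for illustration, not values from the study.

```python
import json
import paho.mqtt.client as mqtt

PET_HAZARD_THRESHOLD_S = 1.5  # illustrative threshold, not from the paper

def post_encroachment_time(t_first_leaves, t_second_arrives):
    """PET: gap between the first road user leaving the conflict zone and
    the second road user arriving at it (both in seconds)."""
    return t_second_arrives - t_first_leaves

client = mqtt.Client()  # paho-mqtt 1.x style; 2.x additionally takes a CallbackAPIVersion argument
client.connect("broker.example.org", 1883)  # hypothetical broker address

pet = post_encroachment_time(t_first_leaves=12.40, t_second_arrives=13.25)
if pet < PET_HAZARD_THRESHOLD_S:  # a small PET indicates a near-miss
    client.publish("intersection/ssm", json.dumps({"measure": "PET", "value": pet}))
client.disconnect()
```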


Author(s):  
Komala K. V. ◽  
Deepa V. P.

With the advance of technology and the implementation of the Internet of Things (IoT), realizing the smart city has become a pressing need. One of the key parts of a cyber-physical system for urban life is transportation. Such mission-critical applications have prompted researchers from academia and industry to develop autonomous robots. In the domain of autonomous robots, intelligent video analytics is crucial. With the advent of deep learning, many neural-network-based learning approaches have been considered. The advanced Single Shot Multibox Detector (SSD) method is exploited for real-time video/image analysis on an IoT device, and avoidance of vehicles or any barrier on the road is handled using image processing. The proposed work makes use of the SSD algorithm, which is responsible for object detection, together with image processing to control the car based on its current input. Thus, this work aims to develop real-time barrier detection and barrier avoidance for autonomous robots using a camera and a barrier avoidance sensor in an unstructured environment.
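As a rough sketch of the barrier-avoidance decision (not the authors' controller), the code below turns SSD barrier boxes into a simple steering command, using box height as a crude proximity cue; the frame width, height threshold, and command names are assumptions for illustration.

```python
FRAME_WIDTH = 640   # assumed camera resolution
NEAR_HEIGHT = 200   # a box taller than this many pixels is treated as "close"

def steering_command(barrier_boxes):
    """barrier_boxes: list of (x, y, w, h) detections from the SSD stage."""
    near = [b for b in barrier_boxes if b[3] >= NEAR_HEIGHT]
    if not near:
        return "forward"
    # Steer away from the side where the closest (tallest) barrier sits
    x, y, w, h = max(near, key=lambda b: b[3])
    box_centre = x + w / 2.0
    return "turn_right" if box_centre < FRAME_WIDTH / 2 else "turn_left"
```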

