Automatic Detection of Traffic Accidents from Video Using Deep Learning Techniques

Computers ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 148
Author(s):  
Sergio Robles-Serrano ◽  
German Sanchez-Torres ◽  
John Branch-Bedoya

According to worldwide statistics, traffic accidents cause a high percentage of violent deaths. The time taken to send a medical response to the accident site is largely affected by the human factor and correlates with survival probability. Because of this, and given the wide use of video surveillance and intelligent traffic systems, an automated traffic accident detection approach is desirable to computer vision researchers. Deep Learning (DL)-based approaches have shown high performance in computer vision tasks that involve complex feature relationships. This work therefore develops an automated DL-based method capable of detecting traffic accidents in video. The proposed method assumes that traffic accident events are described by visual features occurring over time, so the model architecture comprises a visual feature extraction phase followed by a temporal pattern identification phase. The visual and temporal features are learned in the training phase through convolutional and recurrent layers, using both built-from-scratch and public datasets. An accuracy of 98% is achieved in detecting accidents in public traffic accident datasets, showing high detection capacity independent of road structure.
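The temporal stage described above can be illustrated with a minimal sketch: assume a per-frame accident score (as the convolutional stage would emit) and aggregate it over a sliding window so a single noisy frame does not trigger a detection. The window size, threshold, and scores below are invented for illustration; the paper's actual model uses recurrent layers rather than a windowed mean.

```python
# Sketch of temporal aggregation over per-frame accident scores.
# A detection fires only when the mean score over a window of
# consecutive frames crosses a threshold.

def detect_accident(frame_scores, window=5, threshold=0.6):
    """Return the first frame index where the windowed mean score
    exceeds the threshold, or None if no accident is detected."""
    for start in range(len(frame_scores) - window + 1):
        window_mean = sum(frame_scores[start:start + window]) / window
        if window_mean >= threshold:
            return start
    return None

scores = [0.1, 0.2, 0.9, 0.1, 0.2, 0.7, 0.8, 0.9, 0.8, 0.7]
print(detect_accident(scores))
```

A recurrent layer generalizes this idea by learning, rather than hand-coding, how evidence should accumulate across frames.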

2021 ◽  
Vol 5 (1) ◽  
pp. 164
Author(s):  
Ratna Salkiawati ◽  
Allan Desi Alexander ◽  
Hendarman Lubis

Based on traffic accident reports, there were 41,771 incidents caused by disorderly drivers (POLRI, 2018). One such violation is driving a motorized vehicle outside its traffic lane. In this study, the researchers developed a computer vision system using sensor and image processing methods. The stages in the computer vision pipeline are image acquisition, image segmentation, and image understanding. This study aims to develop a computer vision application that warns drivers of disorderly traffic, or increases the alertness of motorized vehicle drivers, by detecting the condition of the driver's path. It is hoped that this research will provide a sense of security for motorized vehicle drivers, as well as an application that increases driver awareness and helps avoid traffic accidents.
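The three stages named above can be sketched in miniature: acquisition (here a synthetic scan line of pixel intensities), segmentation (an intensity threshold isolating bright lane markings), and understanding (checking whether the vehicle center lies between the markings). All values and the threshold are illustrative assumptions, not from the study.

```python
# Toy lane-keeping check over a single 1-D scan line of a road image.

def segment_markings(row, threshold=200):
    """Return pixel indices whose intensity meets the threshold."""
    return [i for i, v in enumerate(row) if v >= threshold]

def inside_lane(row, vehicle_x, threshold=200):
    marks = segment_markings(row, threshold)
    if len(marks) < 2:
        return False          # cannot see both lane boundaries
    return marks[0] < vehicle_x < marks[-1]

# 20-pixel scan line with bright markings near indices 3 and 16
row = [50] * 20
row[3] = row[16] = 255
print(inside_lane(row, vehicle_x=10))   # vehicle between the markings
print(inside_lane(row, vehicle_x=18))   # vehicle past a marking: warn
```

A real pipeline would run such a check per frame over full 2-D images and smooth the decision over time before warning the driver.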


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1742
Author(s):  
Edoardo Vantaggiato ◽  
Emanuela Paladini ◽  
Fares Bougourzi ◽  
Cosimo Distante ◽  
Abdenour Hadid ◽  
...  

The recognition of COVID-19 infection from X-ray images is an emerging field in the machine learning and computer vision community. Despite the great efforts made in this field since the appearance of COVID-19 (2019), it still suffers from two drawbacks. First, the number of available X-ray scans labeled as COVID-19-infected is relatively small. Second, the works carried out in the field are separate; there are no unified data, classes, or evaluation protocols. In this work, based on public and newly collected data, we propose two X-ray COVID-19 databases: a three-class and a five-class COVID-19 dataset. For both databases, we evaluate different deep learning architectures. Moreover, we propose an Ensemble-CNNs approach which outperforms the individual deep learning architectures and shows promising results on both databases. Our proposed Ensemble-CNNs achieved high performance in the recognition of COVID-19 infection, with accuracies of 100% and 98.1% in the three-class and five-class scenarios, respectively. In addition, our approach achieved overall recognition accuracies of 75.23% and 81.0% for the three-class and five-class scenarios, respectively. We make our databases of COVID-19 X-ray scans publicly available to encourage other researchers to use them as benchmarks for their studies and comparisons.
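A common way to ensemble CNNs, and a plausible reading of the approach above, is to average the per-class probability vectors of several trained models and take the argmax. The class names and probabilities below are invented for illustration; the paper's actual networks and fusion rule may differ.

```python
# Probability-averaging ensemble over several classifiers.

CLASSES = ["normal", "covid", "pneumonia"]   # three-class scenario

def ensemble_predict(prob_lists):
    """prob_lists: one probability vector per model, same class order."""
    n_models = len(prob_lists)
    averaged = [sum(p[i] for p in prob_lists) / n_models
                for i in range(len(CLASSES))]
    return CLASSES[averaged.index(max(averaged))]

model_outputs = [
    [0.2, 0.7, 0.1],   # model 1 favours covid
    [0.1, 0.6, 0.3],   # model 2 favours covid
    [0.4, 0.3, 0.3],   # model 3 favours normal
]
print(ensemble_predict(model_outputs))
```

Averaging tends to cancel the uncorrelated errors of individual networks, which is why ensembles often beat their best single member.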


Vision plays an important part in helping us look at the world and perceive information about our surroundings. A human perceives information by looking at an object or the surroundings as a whole, maps visual features and attributes, and by summarizing these features can describe the surroundings. How the human brain does this is still a huge mystery; for a machine, this task is called Image Captioning. The computer is fed images from which it learns to extract features, i.e., pixel information, object position, geometry, etc. Using these features, the machine tries to map the image to a sentence, word by word or as a whole, that summarizes the information in the image. Due to recent advancements in computer vision methods and deep learning architectures, computers have been able to correctly summarize the images fed to them. In this paper, we present a survey of the new types of architectures and the datasets used to train them. Furthermore, we discuss future methods that can be implemented.
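The word-by-word mapping described above is usually realized by greedy (or beam-search) decoding: conditioned on the image features, the model repeatedly emits the most likely next word until an end token. The hand-made score table below stands in for a real model's learned next-word distribution, which would be computed from CNN features by an RNN or Transformer.

```python
# Toy greedy decoder producing a caption word by word.

NEXT_WORD_SCORES = {
    "<start>": {"a": 0.9, "the": 0.1},
    "a":       {"dog": 0.8, "cat": 0.2},
    "dog":     {"runs": 0.7, "<end>": 0.3},
    "runs":    {"<end>": 1.0},
}

def greedy_caption(max_len=10):
    word, caption = "<start>", []
    for _ in range(max_len):
        candidates = NEXT_WORD_SCORES.get(word, {"<end>": 1.0})
        word = max(candidates, key=candidates.get)  # most likely next word
        if word == "<end>":
            break
        caption.append(word)
    return " ".join(caption)

print(greedy_caption())
```

Real captioners condition each step on the full generated prefix and the image, not just the previous word, but the emit-until-end loop is the same.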


Author(s):  
Ramaprasad Poojary ◽  
Roma Raina ◽  
Amit Kumar Mondal

During the last few years, deep learning has achieved remarkable results in machine learning when used for computer vision tasks. Among its many architectures, the deep neural network architecture known as the convolutional neural network is now widely used for image detection and classification. Although it is a great tool for computer vision tasks, it demands a large amount of training data to yield high performance. In this paper, a data augmentation method is proposed to overcome the challenges caused by a lack of sufficient training data. To analyze the effect of data augmentation, the proposed method uses two convolutional neural network architectures. To minimize training time without compromising accuracy, models are built by fine-tuning the pre-trained networks VGG16 and ResNet50. Loss functions and accuracies are used to evaluate model performance. The proposed models are constructed using the Keras deep learning framework and trained on a custom dataset created from the Kaggle CAT vs DOG database. Experimental results showed that both models achieved better test accuracy when data augmentation was employed, and the ResNet50-based model outperformed the VGG16-based model, with a test accuracy of 90% with data augmentation and 82% without.
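The core idea of data augmentation is to spawn label-preserving variants of each training image, enlarging a small dataset. A minimal dependency-free sketch, with a flip and a brightness shift as the (assumed) transformations; a real Keras pipeline would also rotate, shift, and zoom:

```python
# Label-preserving augmentation of a tiny image dataset.
# Images are nested lists of 0-255 grayscale values.

def hflip(img):
    return [row[::-1] for row in img]

def brighten(img, delta=30):
    return [[min(255, px + delta) for px in row] for row in img]

def augment(dataset):
    """dataset: list of (image, label); returns originals + variants."""
    out = []
    for img, label in dataset:
        out.append((img, label))
        out.append((hflip(img), label))
        out.append((brighten(img), label))
    return out

data = [([[10, 20], [30, 40]], "cat")]
print(len(augment(data)))   # 3 samples from 1 original
```

Because the label is unchanged by each transform, the network sees more variation per class without any extra annotation effort, which is why test accuracy improves.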


Author(s):  
Jia Lu ◽  
Wei Qi Yan

With the decreasing cost of security monitoring facilities such as cameras, video surveillance has been widely applied to public security and safety in banks, transportation, shopping malls, etc., allowing police to monitor abnormal events. Through deep learning, high performance in human behavior detection and recognition can be achieved by model training and testing. This chapter uses the public Weizmann and KTH datasets to train deep learning models. Four deep learning models were investigated for human behavior recognition. Results show that the YOLOv3 model performs best, achieving 96.29% mAP on the Weizmann dataset and 84.58% mAP on the KTH dataset. The chapter conducts human behavior recognition using deep learning and evaluates the outcomes of the different approaches with the support of these datasets.
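The mAP figures quoted above are means over per-class average precision (AP). One common way to compute AP, sketched here with an invented hit/miss list: sort detections by confidence, then average the precision at each rank where a true positive occurs. (Benchmarks such as COCO interpolate the precision-recall curve instead; this is the simpler ranked-list form.)

```python
# Average precision from a confidence-ranked list of detections.

def average_precision(ranked_hits):
    """ranked_hits: detections sorted by confidence, True = correct."""
    hits, precisions = 0, []
    for rank, is_hit in enumerate(ranked_hits, start=1):
        if is_hit:
            hits += 1
            precisions.append(hits / rank)   # precision at this rank
    return sum(precisions) / hits if hits else 0.0

print(average_precision([True, True, False, True]))
```

mAP is then the unweighted mean of this quantity over all behavior classes in the dataset.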


Electronics ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 589 ◽  
Author(s):  
Luis Barba-Guaman ◽  
José Eugenio Naranjo ◽  
Anthony Ortiz

Object detection is one of the most fundamental and challenging problems in computer vision. Dedicated embedded systems, including the NVIDIA Jetson family, have emerged as a powerful way to deliver high processing capabilities. The aim of the present work is the recognition of objects in complex rural areas with an embedded system, along with the verification of accuracy and processing time. For this purpose, a low-power embedded Graphics Processing Unit (the Jetson Nano) was selected, which allows multiple neural networks to run simultaneously and a computer vision algorithm to be applied for image recognition. The performance of deep learning networks such as ssd-mobilenet v1 and v2, pednet, multiped, and ssd-inception v2 was tested. Accuracy and processing time were in some cases improved when all the models suggested in the research were applied. The pednet network model provides high performance in pedestrian recognition; however, the ssd-mobilenet v2 and ssd-inception v2 models are better at detecting other objects, such as vehicles, in complex scenarios.
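Comparing models on both accuracy and processing time requires a common timing harness: run each model over the same frames and report mean per-frame latency. A minimal sketch, where the stand-in lambda represents a loaded network (on a Jetson it would be one of the ssd-mobilenet / pednet models):

```python
# Per-frame latency measurement for an object-detection model.

import time

def benchmark(model, frames):
    start = time.perf_counter()
    results = [model(f) for f in frames]
    elapsed = time.perf_counter() - start
    return results, elapsed / len(frames)   # mean seconds per frame

# Stand-in "model": a real benchmark would call the network's inference.
fake_model = lambda frame: "pedestrian" if frame % 2 else "vehicle"
detections, per_frame = benchmark(fake_model, range(100))
print(len(detections), per_frame >= 0.0)
```

Averaging over many frames (and discarding a warm-up pass) matters on embedded GPUs, where the first inference typically pays one-off initialization costs.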


2018 ◽  
Vol 9 (08) ◽  
pp. 20531-20536
Author(s):  
Nusrat Shamima Nur ◽  
M. S. l. Mullick ◽  
Ahmed Hossain

Background: In Bangladesh, the fatality rate due to road traffic accidents is rising sharply day by day. At least 2,297 people were killed and 5,480 injured in road traffic accidents in the first six months of 2017, whereas in 2016 at least 1,941 people were killed and 4,794 injured in the same period. No survey has yet been reported in Bangladesh correlating ADHD with the impulsive driving that ends in road crashes.


2008 ◽  
Vol 59 (7) ◽  
Author(s):  
Corina Samoila ◽  
Alfa Xenia Lupea ◽  
Andrei Anghel ◽  
Marilena Motoc ◽  
Gabriela Otiman ◽  
...  

Denaturing High Performance Liquid Chromatography (DHPLC) is a relatively new method for screening DNA sequences, characterized by a high capacity to detect mutations/polymorphisms. This study focuses on Transgenomic WAVE™ DNA Fragment Analysis (based on the DHPLC separation method) of a 485 bp fragment from the human EC-SOD gene promoter, in order to detect single nucleotide polymorphisms (SNPs) associated with atherosclerosis and risk factors of cardiovascular disease. The fragment of interest was amplified by PCR and analyzed by DHPLC in 100 healthy subjects and 70 patients with atheroma. No different melting profiles were detected for the analyzed DNA samples. A combination of computational methods was used to predict putative transcription factors in the fragment of interest. Several putative transcription factor binding sites were identified in the evolutionarily conserved regions: from the Ets-1 oncogene family, ETS member Elk-1, polyomavirus enhancer activator-3 (PEA3), protein C-Ets-1 (Ets-1), GA binding protein (GABP), and Spi-1 and Spi-B/PU.1 related transcription factors; from the Krueppel-like family, gut-enriched Krueppel-like factor (GKLF), erythroid Krueppel-like factor (EKLF), and basic Krueppel-like factor (BKLF); as well as a GC box and the myeloid zinc finger protein MZF-1. The bioinformatics results need to be investigated further in other studies using experimental approaches.
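The simplest form of the computational prediction step described above is scanning the promoter fragment for known binding-site consensus motifs by string matching. The motifs below (the ETS-family core GGAA and the GC box GGGCGG) are textbook consensus cores used for illustration only; the study's tools use position-weight matrices, not exact matches.

```python
# Exact-match motif scan over a DNA sequence.

MOTIFS = {"Ets-core": "GGAA", "GC-box": "GGGCGG"}

def scan_motifs(sequence):
    """Return {motif_name: [start positions]} for exact matches."""
    hits = {}
    for name, motif in MOTIFS.items():
        positions, i = [], sequence.find(motif)
        while i != -1:
            positions.append(i)
            i = sequence.find(motif, i + 1)   # allow overlapping hits
        hits[name] = positions
    return hits

seq = "ATGGAATTGGGCGGTACGGAA"   # toy fragment, not the 485 bp promoter
print(scan_motifs(seq))
```

Matrix-based scanners score every window against a weight matrix and keep those above a threshold, which tolerates the single-base variation that exact matching misses.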

