Object Detection and Tracking using YOLO v3 Framework for Increased Resolution Video

The proposed system detects and tracks vehicles in high-resolution video. It detects vehicles and recognizes each object by comparing its features with those of the objects stored in the database; if the features match, the object is tracked. Implementation consists of two phases, offline and online. In the offline phase, training images are fed to a feature extractor and then to the YOLO v3 model, and a weight file is generated from the pre-trained YOLO v3 model. In the online phase, real-time video is passed through the feature extractor, and the extracted features, together with the weight file produced offline, are fed to the pre-trained YOLO v3 model, which processes each video frame and outputs a classified image. YOLO v3 uses the Darknet-53 backbone and is implemented with Keras together with OpenCV, TensorFlow, and NumPy. The system runs on a PC with an Intel Pentium G500 processor, 8 GB of RAM, and Windows 7. Tested on the PASCAL VOC dataset, it achieves 80% accuracy, 80% precision, 100% recall, an F1-score of 88%, a mAP of 76.7%, and 0.018%. The system is implemented in Python 3.6.0 and also tested on real-time video at 1280x720 and 1920x1080 resolutions. The execution time for one frame is 1.840 seconds at 1280x720 (HD) and 4.414808 seconds at 1920x1080 (FHD), with 80% accuracy.
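The abstract does not publish the system's code, but the post-processing step every YOLO-style detector performs, filtering raw detections by confidence and then suppressing overlapping boxes, can be sketched as follows. The thresholds and box format (`[x1, y1, x2, y2]`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two boxes given as [x1, y1, x2, y2].
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def filter_detections(boxes, scores, conf_thresh=0.5, iou_thresh=0.45):
    # Keep boxes above conf_thresh, then greedily suppress overlaps (NMS),
    # visiting boxes in descending score order. Returns kept indices.
    order = [i for i in np.argsort(scores)[::-1] if scores[i] >= conf_thresh]
    kept = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in kept):
            kept.append(int(i))
    return kept
```

In a full pipeline these indices would select the final bounding boxes drawn on the classified output frame.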

Energies ◽  
2021 ◽  
Vol 14 (11) ◽  
pp. 3322
Author(s):  
Sara Alonso ◽  
Jesús Lázaro ◽  
Jaime Jiménez ◽  
Unai Bidarte ◽  
Leire Muguira

Smart grid endpoints need to use two environments within a processing system (PS): one with a Linux-type operating system (OS) on the Arm Cortex-A53 cores for management tasks, and the other with a standalone execution or a real-time OS on the Arm Cortex-R5 cores. The Xen hypervisor and the OpenAMP framework allow this, but they may introduce a delay in the system, and some messages in the smart grid need a latency lower than 3 ms. In this paper, Linux thread latencies are characterized using the Cyclictest tool. It is shown that when the Xen hypervisor is used, this scenario does not meet the 3 ms timing constraint and is therefore unsuitable for the smart grid. Standalone execution as the real-time part is then evaluated by measuring the delay to handle an interrupt generated in programmable logic (PL). The standalone application was run on the A53 and R5 cores, with the Xen hypervisor and the OpenAMP framework, and all of these scenarios met the 3 ms constraint. The main contribution of the present work is the detailed characterization of each real-time execution, in order to facilitate selecting the most suitable one for each application.
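What Cyclictest measures is essentially wake-up latency: a thread asks to sleep for a fixed interval and records how late it actually wakes. A minimal Python sketch of that idea is shown below; it is an illustration of the measurement concept only, not a substitute for Cyclictest, which runs high-priority real-time threads and reports microsecond-resolution histograms.

```python
import time

def measure_wakeup_latency(interval_s=0.001, loops=200):
    # Sleep for a fixed interval and record how much later than the
    # requested deadline each wake-up occurs (in microseconds).
    latencies_us = []
    for _ in range(loops):
        target = time.monotonic() + interval_s
        time.sleep(interval_s)
        latencies_us.append((time.monotonic() - target) * 1e6)
    return max(latencies_us), sum(latencies_us) / len(latencies_us)
```

On a non-real-time OS the maximum latency reported by such a loop is typically far above what a 3 ms smart grid constraint would tolerate under load, which is the kind of conclusion the paper draws for the Xen-hosted Linux scenario.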


Author(s):  
Zhenyao Zhang ◽  
Jianying Zheng ◽  
Hao Xu ◽  
Xiang Wang

The problem of traffic safety has become increasingly prominent owing to the increase in the number of cars. Traffic accidents often occur in an instant, which makes it necessary to obtain traffic data with high resolution. High-resolution micro traffic data (HRMTD) means that the spatial resolution reaches the centimeter level and the temporal resolution reaches the millisecond level. The position, direction, speed, and acceleration of objects on the road can be extracted from HRMTD. In this paper, a LiDAR sensor was installed at the roadside for data collection. An adjacent-frame fusion method for vehicle detection and tracking in complex traffic circumstances is presented. Compared with previous research, objects can be detected and tracked without object model extraction or a bounding box description. In addition, problems caused by occlusion can be mitigated using adjacent-frame fusion in the vehicle detection and tracking algorithms in this paper. The data processing procedure is as follows: selection of the area of interest, ground point removal, vehicle clustering, and vehicle tracking. The algorithm has been tested at different sites (in Reno and Suzhou), and the results demonstrate that it performs well in both simple and complex application scenarios.


2020 ◽  
Vol 39 (3) ◽  
pp. 2693-2710 ◽  
Author(s):  
Wael Farag

In this paper, an advanced and reliable vehicle detection and tracking technique is proposed and implemented. The Real-Time Vehicle Detection-and-Tracking (RT_VDT) technique is well suited for Advanced Driving Assistance Systems (ADAS) applications and Self-Driving Cars (SDC). The RT_VDT is essentially a pipeline of reliable computer vision and machine learning algorithms that augment each other, taking in raw RGB images and producing the bounding boxes of the vehicles that appear in the front driving space of the car. The main contribution of this paper is the careful fusion of the employed algorithms, some of which work in parallel to strengthen each other, in order to produce a precise and sophisticated real-time output. In addition, the RT_VDT is computationally light enough to run on the CPUs currently employed in ADAS systems. The particulars of the employed algorithms and their implementation are described in detail. These algorithms and their various integration combinations are tested and evaluated on actual road images and videos captured by the car's front-mounted camera, as well as on the KITTI benchmark, where 87% average precision is achieved. The evaluation of the RT_VDT shows that it reliably detects and tracks vehicle boundaries under various conditions.
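The abstract describes parallel detectors that strengthen each other but does not specify the fusion rule. One simple, commonly used possibility, offered here purely as an illustrative assumption, is to keep a detection only when a second detector confirms it with an overlapping box:

```python
def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

def fuse_detections(boxes_a, boxes_b, iou_thresh=0.5):
    # Keep a box from detector A only if detector B reports an
    # overlapping box, so the two detectors cross-validate each other.
    return [a for a in boxes_a
            if any(iou(a, b) >= iou_thresh for b in boxes_b)]
```

Such a conjunctive rule trades some recall for precision; a real pipeline might instead take the union of detections and re-score them.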


Real-time detection and monitoring of road signs is an active topic of study in the autonomous car industry. The number of car users in Malaysia has risen every year, as has the rate of car crashes. The varied types, shapes, and colours of road signs lead drivers to neglect them, which contributes to a high rate of accidents. The purpose of this paper is to implement real-time video Road Sign Detection and Tracking (RSDT) for an autonomous car. The detection of road signs is carried out in Python using video and image processing techniques, applying a deep learning process to detect objects in the video's motion. Features extracted from each video frame are passed to a template-matching recognition stage based on the database. The experiment at a fixed distance shows an accuracy of 99.9943%, while the experiments at various distances show an inversely proportional relation between distance and accuracy. The system was also able to detect and recognize five types of road signs using a convolutional neural network. The experimental results prove the system's capability to detect and recognize road signs accurately.
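The template-matching recognition stage mentioned above typically scores candidate image regions against stored sign templates by normalized cross-correlation. The paper's implementation is not given, so the following is a minimal from-scratch sketch (production code would more likely use an optimized routine such as OpenCV's `cv2.matchTemplate`):

```python
import numpy as np

def match_template(image, template):
    # Slide the template over a grayscale image and return the top-left
    # position with the highest normalized cross-correlation score.
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y:y + th, x:x + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum() * (t ** 2).sum())
            score = (wz * t).sum() / denom if denom else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

A score near 1.0 indicates a near-perfect match, so recognition can compare the best score across all database templates and pick the sign class with the highest correlation.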

