A Lightweight Object Detection Network for Real-Time Detection of Driver Handheld Call on Embedded Devices

2020 ◽ Vol 2020 ◽ pp. 1-12
Author(s): Zuopeng Zhao, Zhongxin Zhang, Xinzheng Xu, Yi Xu, Hualin Yan, ...

It is necessary to improve the performance of object detection algorithms on resource-constrained embedded devices through lightweight design. To further improve recognition accuracy for small target objects, this paper integrates 5 × 5 depthwise separable convolution kernels into the MobileNetV2-SSDLite model, extracts features from two additional convolutional layers for detection, and designs a new lightweight object detection network, the Lightweight Microscopic Detection Network (LMS-DN). The network can be deployed on embedded devices such as the NVIDIA Jetson TX2. Experimental results show that LMS-DN achieves higher identification accuracy and stronger robustness to interference than other popular object detection models while requiring fewer parameters and less computation.
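As an illustration of the kind of building block involved, the following is a minimal sketch (not the authors' code) of a 5 × 5 depthwise separable convolution in the MobileNetV2 style, written in PyTorch; the channel sizes and input resolution are illustrative assumptions.

import torch
import torch.nn as nn

class DepthwiseSeparable5x5(nn.Module):
    """Illustrative 5x5 depthwise separable block (MobileNetV2 style)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            # Depthwise 5x5: one filter per input channel (groups=in_ch).
            nn.Conv2d(in_ch, in_ch, kernel_size=5, stride=stride,
                      padding=2, groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.ReLU6(inplace=True),
            # Pointwise 1x1: mixes channels cheaply.
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        return self.block(x)

# Example: a 32-channel feature map at an assumed 150x150 resolution.
x = torch.randn(1, 32, 150, 150)
y = DepthwiseSeparable5x5(32, 64)(x)
print(y.shape)  # torch.Size([1, 64, 150, 150])

The depthwise stage applies one 5 × 5 filter per input channel and the 1 × 1 pointwise stage mixes channels, which is what keeps the parameter count and multiply-accumulate cost low relative to a standard 5 × 5 convolution.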

2013 ◽ Vol 373-375 ◽ pp. 483-486
Author(s): Chen Zhang, Xu Qian

Object detection is an important foundation of visual tracking. In this paper, a real-time object detection algorithm based on back-projection is presented. Firstly, according to the principle of back-projection, the object's probability image is calculated from the object's color histogram model; the object is then located in that image using a contour-based strategy. Experimental results show that the proposed algorithm accurately detects the position of the object in real time, provided that the object's contour changes only within a certain range and the object's color is distinct.
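For readers unfamiliar with the technique, the following is a minimal sketch of histogram back-projection detection with OpenCV; the reference patch file name, the hue-only histogram, the threshold of 50, and the largest-contour rule are illustrative assumptions rather than the paper's exact strategy.

import cv2

# Build the object's hue histogram from an assumed reference patch.
patch = cv2.imread("target.png")
patch_hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([patch_hsv], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Probability image: how likely each pixel belongs to the object's colors.
    prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, mask = cv2.threshold(prob, 50, 255, cv2.THRESH_BINARY)
    # OpenCV 4.x return signature (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Simple contour strategy: keep the largest blob as the object.
        c = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()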


2020 ◽ Vol 57 (20) ◽ pp. 201009
Author(s): Xi Qi, Zhang Zhengdao, Peng Li

2021 ◽ Vol 8 (2) ◽ pp. 3-7
Author(s): Julkar Nine, Naeem Ahmed, Rahul Mathavan

A sleeping driver is potentially more likely to cause an accident than one who speeds, since the driver is the victim of sleepiness. Automobile industry researchers, including manufacturers, seek to solve this issue with various technical solutions that can avoid such situations. This paper proposes a lightweight method to detect driver sleepiness using facial landmarks and head pose estimation based on neural network methodologies on a mobile device. We try to improve accuracy by using face images that the camera detects and passes to a CNN to identify sleepiness. Firstly, a behavioral, landmark-based sleepiness detection process is applied. Then, an integrated head pose estimation technique strengthens the system's reliability. Preliminary test findings demonstrate that, with real-time capability, more than 86% identification accuracy can be reached in several real-world scenarios for all classes, including with glasses, without glasses, and light or dark backgrounds. This work aims to classify drowsiness and to warn and inform drivers, helping them avoid falling asleep at the wheel. The integrated CNN-based method is used to create a high-accuracy, simple-to-use, real-time driver drowsiness monitoring framework for embedded devices and Android phones.
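As a concrete example of the facial-landmark side of such a pipeline, the sketch below computes the eye aspect ratio (EAR), a common drowsiness cue derived from six eye landmarks; the landmark coordinates and the 0.25 threshold are illustrative assumptions, not values taken from the paper.

import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye contour."""
    eye = np.asarray(eye, dtype=float)
    # Two vertical eyelid distances over the horizontal eye width.
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

# An EAR that stays below a threshold for several consecutive frames
# suggests closed eyes, i.e. the driver may be drowsy.
open_eye = [(0, 2), (2, 4), (5, 4), (7, 2), (5, 0), (2, 0)]
closed_eye = [(0, 2), (2, 2.4), (5, 2.4), (7, 2), (5, 1.6), (2, 1.6)]
for name, eye in [("open", open_eye), ("closed", closed_eye)]:
    ear = eye_aspect_ratio(eye)
    print(f"{name}: EAR={ear:.2f}, drowsy={ear < 0.25}")

In a full system the landmarks would come from a face detector and landmark model, and the per-frame EAR would be combined with the head pose estimate and the CNN output before raising a warning.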


2019 ◽ Vol 11 (7) ◽ pp. 786
Author(s): Yang-Lang Chang, Amare Anagaw, Lena Chang, Yi Wang, Chih-Yu Hsiao, ...

Synthetic aperture radar (SAR) imagery has been used as a promising data source for monitoring maritime activities, and its application to oil and ship detection has been the focus of many previous research studies. Many object detection methods, ranging from traditional to deep learning approaches, have been proposed. However, the majority of them are computationally intensive and have accuracy problems. The huge volume of remote sensing data also poses a challenge for real-time object detection. To mitigate this problem, a high-performance computing (HPC) method has been proposed to accelerate SAR imagery analysis, utilizing GPU-based computing. In this paper, we propose an enhanced GPU-based deep learning method to detect ships in SAR images. The You Only Look Once version 2 (YOLOv2) deep learning framework is adopted to model the architecture and train the model. YOLOv2 is a state-of-the-art real-time object detection system that outperforms the Faster Region-Based Convolutional Network (Faster R-CNN) and Single Shot MultiBox Detector (SSD) methods. Additionally, in order to reduce computational time while maintaining competitive detection accuracy, we develop a new architecture with fewer layers called YOLOv2-reduced. In the experiments, we use two datasets: a SAR ship detection dataset (SSDD) and a Diversified SAR Ship Detection Dataset (DSSDD). These two datasets were used for training and testing. YOLOv2 test results showed an increase in ship detection accuracy as well as a noticeable reduction in computational time compared to Faster R-CNN. From the experimental results, the proposed YOLOv2 architecture achieves accuracies of 90.05% and 89.13% on the SSDD and DSSDD datasets, respectively. The proposed YOLOv2-reduced architecture has detection performance comparable to YOLOv2 but with less computational time on an NVIDIA TITAN X GPU. The experimental results show that deep learning can make a big leap forward in improving the performance of SAR image ship detection.
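To make the architecture idea concrete, the following is a minimal sketch of a YOLOv2-style single-class detector in PyTorch: a small convolutional backbone followed by a 1 × 1 head that predicts box offsets, objectness, and class scores per grid cell and anchor. The layer counts and channel widths are illustrative assumptions and do not reproduce the authors' YOLOv2-reduced design.

import torch
import torch.nn as nn

def conv_bn_leaky(in_ch, out_ch, k=3, s=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, s, padding=k // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1, inplace=True),
    )

class TinyYoloLike(nn.Module):
    def __init__(self, num_anchors=5, num_classes=1):
        super().__init__()
        chs = [3, 16, 32, 64, 128, 256]  # assumed widths
        layers = []
        for cin, cout in zip(chs[:-1], chs[1:]):
            layers += [conv_bn_leaky(cin, cout), nn.MaxPool2d(2, 2)]
        self.backbone = nn.Sequential(*layers)
        # Per anchor: 5 box terms (x, y, w, h, objectness) + class scores.
        self.head = nn.Conv2d(chs[-1], num_anchors * (5 + num_classes), 1)

    def forward(self, x):
        return self.head(self.backbone(x))

# A 416x416 SAR chip maps to a 13x13 grid of anchor predictions.
out = TinyYoloLike()(torch.randn(1, 3, 416, 416))
print(out.shape)  # torch.Size([1, 30, 13, 13])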


2019 ◽ Vol 77 ◽ pp. 398-408
Author(s): Shengyu Lu, Beizhan Wang, Hongji Wang, Lihao Chen, Ma Linjian, ...

Author(s): Garv Modwel, Anu Mehra, Nitin Rakesh, K K Mishra

Background: An object detection algorithm scans every frame in a video to detect the objects present, which is time consuming. This becomes undesirable when dealing with a real-time system, which needs to act within a predefined time constraint. For a quick response, we need reliable detection and recognition of objects. Methods: To deal with the above problem, a hybrid method is implemented. This hybrid method combines three algorithms to reduce the scanning workload per frame. A Recursive Density Estimation (RDE) algorithm decides which frames need to be scanned. The You Only Look Once (YOLO) algorithm performs detection and recognition in the selected frames. Detected objects are then tracked in subsequent frames using the Speeded-Up Robust Features (SURF) algorithm. Results: Through the experimental study, we demonstrate that the hybrid algorithm is more efficient than two comparable algorithms of the same level. The algorithm has high accuracy and low latency (which is necessary for real-time processing). Conclusion: The hybrid algorithm detects with a minimum accuracy of 97 percent across all conducted experiments, and the time lag experienced is negligible, which makes it considerably efficient for real-time applications.
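The control flow of such a hybrid pipeline can be sketched as below; the frame_is_novel, detect_objects, and track_objects functions are hypothetical stand-ins for the paper's RDE, YOLO, and SURF components, and the frame-difference novelty test is a simple placeholder rather than the RDE formula.

import cv2

def frame_is_novel(frame, prev_frame, threshold=12.0):
    """Crude stand-in for RDE: re-detect when the scene changes enough."""
    if prev_frame is None:
        return True
    diff = cv2.absdiff(frame, prev_frame)
    return float(diff.mean()) > threshold

def detect_objects(frame):
    """Stand-in for the YOLO detector; returns a list of (x, y, w, h) boxes."""
    return []

def track_objects(frame, boxes):
    """Stand-in for SURF-based tracking of previously detected boxes."""
    return boxes

cap = cv2.VideoCapture("traffic.mp4")  # illustrative input file
prev, boxes = None, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_is_novel(frame, prev):
        boxes = detect_objects(frame)        # expensive: only on selected frames
    else:
        boxes = track_objects(frame, boxes)  # cheap: all other frames
    prev = frame
cap.release()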

