Network virtualization for real-time processing of object detection using deep learning

Author(s):  
Dae-Young Kim ◽  
Ji-Hoon Park ◽  
Youngchan Lee ◽  
Seokhoon Kim


Sensors ◽
2020 ◽  
Vol 20 (12) ◽  
pp. 3591 ◽  
Author(s):  
Haidi Zhu ◽  
Haoran Wei ◽  
Baoqing Li ◽  
Xiaobing Yuan ◽  
Nasser Kehtarnavaz

This paper addresses real-time moving object detection with high accuracy in high-resolution video frames. A previously developed framework for moving object detection is modified to enable real-time processing of high-resolution images. First, a computationally efficient method is employed that detects moving regions on a resized image and maps their coordinates back onto the original image. Second, a light backbone deep neural network is used in place of a more complex one. Third, the focal loss function is employed to alleviate the imbalance between positive and negative samples. The results of the extensive experiments conducted indicate that the modified framework developed in this paper achieves a processing rate of 21 frames per second with 86.15% accuracy on the SimitMovingDataset, which contains high-resolution images of size 1920 × 1080.
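
As a rough illustration of two of the steps described above, the following Python sketch shows (a) mapping detection boxes found on a downscaled frame back to the original 1920 × 1080 resolution, and (b) a binary focal loss that down-weights easy negatives. The function names, the 270 × 480 working resolution, and the α/γ values are illustrative assumptions, not the paper's implementation.

```python
# Sketch of two ideas from the abstract: (1) running detection on a
# downscaled frame and mapping the resulting boxes back to the original
# 1920x1080 resolution, and (2) the focal loss used to counter the
# positive/negative sample imbalance. Names and parameters are
# illustrative, not the paper's actual code.
import numpy as np

def map_boxes_to_original(boxes, resized_hw, original_hw=(1080, 1920)):
    """Scale (x1, y1, x2, y2) boxes detected on the resized frame back
    to original-frame coordinates."""
    rh, rw = resized_hw
    oh, ow = original_hw
    sx, sy = ow / rw, oh / rh
    boxes = np.asarray(boxes, dtype=np.float32)
    return boxes * np.array([sx, sy, sx, sy], dtype=np.float32)

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss: down-weights easy (well-classified) examples so
    the many easy negatives do not dominate training."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)            # probability of the true class
    w = np.where(y == 1, alpha, 1.0 - alpha)     # class-balancing weight
    return float(np.mean(-w * (1.0 - pt) ** gamma * np.log(pt)))

# Example: a box found on a 270x480 downscaled frame, mapped back to 1080p.
print(map_boxes_to_original([[10, 20, 60, 80]], resized_hw=(270, 480)))
print(focal_loss(np.array([0.9, 0.2, 0.7]), np.array([1, 0, 0])))
```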


Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3217 ◽  
Author(s):  
Jaechan Cho ◽  
Yongchul Jung ◽  
Dong-Sun Kim ◽  
Seongjoo Lee ◽  
Yunho Jung

Most approaches for moving object detection (MOD) based on computer vision are limited to stationary camera environments. In advanced driver assistance systems (ADAS), however, ego-motion is added to image frames owing to the use of a moving camera. This results in mixed motion in the image frames and makes it difficult to separate target objects from the background. In this paper, we propose an efficient MOD algorithm that can cope with moving camera environments. In addition, we present a hardware design and implementation results for real-time processing of the proposed algorithm. The proposed moving object detector was designed using a hardware description language (HDL), and its real-time performance was evaluated using an FPGA-based test system. Experimental results demonstrate that our design achieves better detection performance than existing MOD systems. The proposed moving object detector was implemented with 13.2K logic slices, 104 DSP48s, and 163 BRAMs, and supports real-time processing at 30 fps at an operating frequency of 200 MHz.
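
For intuition only, the sketch below shows a generic software analogue of moving-camera MOD: ego-motion between consecutive frames is estimated with a RANSAC homography over tracked corners, the previous frame is warped to cancel the camera motion, and the residual difference is thresholded. This is a common baseline formulation, not the hardware algorithm the authors designed in HDL.

```python
# Generic software analogue of moving-camera MOD, given only for intuition:
# estimate ego-motion between consecutive frames with a homography, warp the
# previous frame to cancel camera motion, and threshold the residual
# difference. The paper's actual HDL design is not reproduced here.
import cv2
import numpy as np

def moving_object_mask(prev_gray, curr_gray, diff_thresh=25):
    # Track sparse corners to estimate the dominant (camera) motion.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]
    # Homography describing background motion; RANSAC rejects moving objects.
    H, _ = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    stabilized_prev = cv2.warpPerspective(prev_gray, H, (w, h))
    # After compensation, large residuals indicate independently moving objects.
    diff = cv2.absdiff(curr_gray, stabilized_prev)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```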


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8381 ◽
Author(s):  
Duarte Fernandes ◽  
Tiago Afonso ◽  
Pedro Girão ◽  
Dibet Gonzalez ◽  
António Silva ◽  
...  

Recently released research on deep learning applications for autonomous driving perception focuses heavily on LiDAR point cloud data as input to the neural networks, highlighting the importance of LiDAR technology in the field of Autonomous Driving (AD). Accordingly, a large share of the vehicle platforms used to create the datasets released for the development of these neural networks, as well as some commercial AD solutions available on the market, rely on extensive sensor arrays spanning many sensors and several sensor modalities. However, the cost of such arrays creates a barrier to entry for low-cost solutions that must still perform critical perception tasks such as Object Detection and SLAM. This paper surveys current vehicle platforms and proposes a low-cost, LiDAR-based test vehicle platform capable of running critical perception tasks (Object Detection and SLAM) in real time. Additionally, we propose a deep learning-based inference model for Object Detection deployed on a resource-constrained device, as well as a graph-based SLAM implementation. We discuss the design considerations imposed by the real-time processing requirement and present results demonstrating the usability of the developed work in the context of the proposed low-cost platform.
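
The snippet below sketches one kind of pre-processing relevant to running LiDAR-based detection on a resource-constrained device: voxel-grid downsampling, which bounds the number of points fed to the inference model. The voxel size, point range, and the use of per-voxel centroids are assumptions made for illustration and are not taken from the proposed platform.

```python
# Illustrative pre-processing for LiDAR-based detection on a
# resource-constrained device: voxel-grid downsampling of the point cloud
# to bound the amount of data fed to the inference model.
import numpy as np

def voxel_downsample(points, voxel_size=0.2):
    """Keep one representative point (the centroid) per occupied voxel.

    points: (N, 3) array of x, y, z coordinates in metres.
    """
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points that fall into the same voxel.
    _, inverse, counts = np.unique(voxel_idx, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3), dtype=np.float64)
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Example: 100k random points reduced to a smaller set of voxel centroids.
cloud = np.random.uniform(-10, 10, size=(100_000, 3))
print(voxel_downsample(cloud).shape)
```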


Author(s):  
Jeonghun Lee ◽  
Kwang-il Hwang

You only look once (YOLO) is among the most popular object detection software packages in intelligent video applications due to its ease of use and high object detection precision. In addition, in recent years, various intelligent vision systems based on high-performance embedded systems have been developed. Nevertheless, YOLO still requires high-end hardware for successful real-time object detection. In this paper, we first discuss the real-time object detection service of YOLO on AI embedded systems with resource constraints. In particular, we point out the problems related to real-time processing in YOLO object detection associated with network cameras, and then propose a novel YOLO architecture with adaptive frame control (AFC) that can efficiently cope with these problems. Through various experiments, we show that the proposed AFC can maintain the high precision and convenience of YOLO while providing a real-time object detection service by minimizing the total service delay, which remains a limitation of pure YOLO.
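
The following sketch illustrates the general idea behind adaptive frame control rather than the paper's AFC architecture: when a network camera delivers frames faster than the detector can process them, stale frames are dropped so that inference always runs on the most recent frame and queueing delay does not accumulate. The queue size, threading layout, and dummy camera/detector callables are assumptions.

```python
# Minimal sketch of latest-frame-only processing: stale frames are dropped
# so inference always runs on the newest frame and queueing delay does not
# build up. This illustrates the concept, not the AFC design in the paper.
import queue
import threading
import time

latest_frame = queue.Queue(maxsize=1)   # holds at most the newest frame

def camera_reader(read_frame, stop):
    """Producer: replace the queued frame instead of letting frames pile up."""
    while not stop.is_set():
        frame = read_frame()
        try:
            latest_frame.put_nowait(frame)
        except queue.Full:
            try:
                latest_frame.get_nowait()   # drop the stale frame
            except queue.Empty:
                pass
            latest_frame.put_nowait(frame)

def detector_loop(run_inference, stop):
    """Consumer: always works on the freshest available frame."""
    while not stop.is_set():
        frame = latest_frame.get()
        run_inference(frame)

def dummy_camera():
    time.sleep(1 / 30)           # simulate a 30 fps network camera
    return time.time()

def dummy_detector(frame):
    time.sleep(0.1)              # simulate ~10 fps inference

# Example run with the dummy camera and detector.
stop = threading.Event()
threading.Thread(target=camera_reader, args=(dummy_camera, stop), daemon=True).start()
threading.Thread(target=detector_loop, args=(dummy_detector, stop), daemon=True).start()
time.sleep(1)
stop.set()
```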


Due to advances in technology, the availability of resources, and the increased use of on-node sensors, enormous amounts of data are being collected. This physiological information must be analyzed and classified with efficient and effective approaches such as deep learning and artificial intelligence. Human Activity Recognition (HAR) plays a dominant role in sports, security, anti-crime, and healthcare, as well as in environmental applications such as wildlife observation. Most techniques work well for offline processing rather than real-time processing. Few approaches provide high accuracy for real-time processing of large-scale data; deep learning is one of the most promising. However, limited resources restrict the use of deep learning on low-power devices that can be worn on the body, even though deep learning implementations are known to produce precise results on other computing systems. In this paper we propose a deep learning approach that integrates features learned from inertial sensor data with complementary knowledge obtained from a set of shallow features, making accurate real-time activity classification feasible. The aim of this integrated design is to remove the obstacles that deep learning methods face in real-time analysis. Before passing the data into the deep learning framework, we perform spectral analysis to optimize the proposed methodology for on-node computation. The accuracy of the combined approach is tested on datasets collected in the laboratory and in controlled and uncontrolled real-world environments. Our results demonstrate the validity of the methodology on various human activity datasets, outperforming other techniques, including the two strategies used within our combined pipeline. We also show that the classification times of our integrated design are consistent with on-node real-time analysis requirements on smartphones and wearable devices.
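
As a hedged sketch of the kind of on-node pre-processing described above, the snippet below converts a window of tri-axial accelerometer samples into a compact spectral representation plus a few shallow time-domain statistics that could be concatenated with, or fed alongside, a deep model. The window length, sample rate, and feature choices are assumptions made for illustration, not the paper's pipeline.

```python
# Hedged sketch of on-node pre-processing for HAR: a window of tri-axial
# accelerometer samples becomes a compact spectral representation plus a
# handful of shallow statistical features. Parameters are illustrative.
import numpy as np

def spectral_features(window, n_bins=16):
    """window: (T, 3) accelerometer samples. Returns per-axis magnitude
    spectra truncated to the first n_bins frequency bins."""
    spec = np.abs(np.fft.rfft(window, axis=0))        # (T//2 + 1, 3)
    return spec[:n_bins].T.ravel()                    # (3 * n_bins,)

def shallow_features(window):
    """Simple time-domain statistics commonly used as 'shallow' features."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])

def combined_feature_vector(window):
    return np.concatenate([spectral_features(window), shallow_features(window)])

# Example: a 2.56 s window at 50 Hz (128 samples) of synthetic data.
window = np.random.randn(128, 3)
print(combined_feature_vector(window).shape)   # (3*16 + 9,) = (57,)
```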


Author(s):  
Daiki Matsumoto ◽  
Ryuji Hirayama ◽  
Naoto Hoshikawa ◽  
Hirotaka Nakayama ◽  
Tomoyoshi Shimobaba ◽  
...  

Author(s):  
David J. Lobina

The study of cognitive phenomena is best approached in an orderly manner. It must begin with an analysis of the function in intension at the heart of any cognitive domain (its knowledge base), then proceed to the manner in which such knowledge is put into use in real-time processing, concluding with a domain’s neural underpinnings, its development in ontogeny, etc. Such an approach to the study of cognition involves the adoption of different levels of explanation/description, as prescribed by David Marr and many others, each level requiring its own methodology and supplying its own data to be accounted for. The study of recursion in cognition is badly in need of a systematic and well-ordered approach, and this chapter lays out the blueprint to be followed in the book by focusing on a strict separation between how this notion applies in linguistic knowledge and how it manifests itself in language processing.

