YOLO with adaptive frame control for real-time object detection applications

Author(s):  
Jeonghun Lee ◽  
Kwang-il Hwang

Abstract
You only look once (YOLO) has become the most popular object detection software in many intelligent video applications owing to its ease of use and high object detection precision. In addition, in recent years, various intelligent vision systems based on high-performance embedded systems have been developed. Nevertheless, YOLO still requires high-end hardware for successful real-time object detection. In this paper, we first discuss the real-time object detection service of YOLO on AI embedded systems with resource constraints. In particular, we point out the problems related to real-time processing in YOLO object detection associated with network cameras, and then propose a novel YOLO architecture with adaptive frame control (AFC) that can efficiently cope with these problems. Through various experiments, we show that the proposed AFC can maintain the high precision and convenience of YOLO and provide real-time object detection service by minimizing the total service delay, which remains a limitation of the pure YOLO.
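The abstract does not detail the AFC design, so the following is only a minimal sketch of the core idea it points at: when a detector is slower than a network camera's frame rate, naively queueing every frame lets end-to-end delay grow without bound, whereas always consuming the newest frame keeps the service delay near one inference time. All names and timings here are illustrative assumptions, not the authors' implementation.

```python
import queue
import threading
import time

# A one-slot buffer: holding only the latest frame is what bounds the delay.
frame_buffer = queue.Queue(maxsize=1)

def camera_thread(n_frames, fps=30):
    """Simulated network camera producing frames at a fixed rate."""
    for i in range(n_frames):
        frame = {"id": i, "captured_at": time.monotonic()}
        try:
            frame_buffer.put_nowait(frame)
        except queue.Full:
            frame_buffer.get_nowait()       # drop the stale frame...
            frame_buffer.put_nowait(frame)  # ...and keep only the newest
        time.sleep(1.0 / fps)

def detector_thread(results, inference_time=0.1):
    """Simulated detector slower than the camera (10 fps vs. 30 fps)."""
    while True:
        try:
            frame = frame_buffer.get(timeout=0.3)
        except queue.Empty:
            break                           # camera stream has ended
        time.sleep(inference_time)          # stand-in for YOLO inference
        delay = time.monotonic() - frame["captured_at"]
        results.append((frame["id"], delay))

results = []
cam = threading.Thread(target=camera_thread, args=(30,))
det = threading.Thread(target=detector_thread, args=(results,))
cam.start(); det.start(); cam.join(); det.join()

# Some frames are skipped, but every processed frame is near-fresh.
print(len(results), "frames processed out of 30")
```

Because the buffer discards stale frames, the detector falls behind in frame count but never in wall-clock time, which is the tradeoff the abstract describes between frame coverage and total service delay.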

Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3591 ◽  
Author(s):  
Haidi Zhu ◽  
Haoran Wei ◽  
Baoqing Li ◽  
Xiaobing Yuan ◽  
Nasser Kehtarnavaz

This paper addresses real-time moving object detection with high accuracy in high-resolution video frames. A previously developed framework for moving object detection is modified to enable real-time processing of high-resolution images. First, a computationally efficient method is employed that detects moving regions on a resized image and maps them back to the original image via coordinate mapping. Second, a lightweight backbone deep neural network is utilized in place of a more complex one. Third, the focal loss function is employed to alleviate the imbalance between positive and negative samples. The results of extensive experimentation indicate that the modified framework achieves a processing rate of 21 frames per second with 86.15% accuracy on the SimitMovingDataset, which contains high-resolution images of size 1920 × 1080.
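Two of the ingredients above can be sketched compactly. The coordinate mapping is just a per-axis rescale of detected boxes back to full resolution, and the focal loss is the standard formulation of Lin et al. The detector input size and the (x1, y1, x2, y2) box format are assumptions for illustration, not taken from the paper.

```python
import numpy as np

ORIG_W, ORIG_H = 1920, 1080  # full-resolution frame, as in SimitMovingDataset
DET_W, DET_H = 640, 360      # assumed (hypothetical) detector input size

def map_boxes_to_original(boxes):
    """Scale (x1, y1, x2, y2) boxes from detector space to the original frame."""
    sx, sy = ORIG_W / DET_W, ORIG_H / DET_H
    boxes = np.asarray(boxes, dtype=float)
    return boxes * np.array([sx, sy, sx, sy])

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights easy examples, countering the
    positive/negative imbalance mentioned in the abstract."""
    p_t = np.where(y == 1, p, 1 - p)
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

# With a 3x downscale in each axis, a box scales back by the same factor.
print(map_boxes_to_original([[100, 50, 200, 150]]))  # -> [[300. 150. 600. 450.]]
```

Detecting on the small image and rescaling boxes is what makes the pipeline cheap: inference cost scales with detector input size, while box coordinates transfer back to full resolution exactly.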


Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3217 ◽  
Author(s):  
Jaechan Cho ◽  
Yongchul Jung ◽  
Dong-Sun Kim ◽  
Seongjoo Lee ◽  
Yunho Jung

Most approaches for moving object detection (MOD) based on computer vision are limited to stationary camera environments. In advanced driver assistance systems (ADAS), however, ego-motion is added to image frames owing to the use of a moving camera. This results in mixed motion in the image frames and makes it difficult to distinguish target objects from the background. In this paper, we propose an efficient MOD algorithm that can cope with moving camera environments. In addition, we present a hardware design and implementation results for real-time processing of the proposed algorithm. The proposed moving object detector was designed in a hardware description language (HDL), and its real-time performance was evaluated using an FPGA-based test system. Experimental results demonstrate that our design achieves better detection performance than existing MOD systems. The proposed moving object detector was implemented with 13.2K logic slices, 104 DSP48s, and 163 BRAMs, and can support real-time processing at 30 fps at an operating frequency of 200 MHz.


2019 ◽  
Author(s):  
Timothy R Brick ◽  
James Mundie ◽  
Jonathan Weaver ◽  
Robert Fraleigh ◽  
Zita Oravecz

Background: Mobile health (mHealth) methods often rely on active input from participants, for example, in the form of self-report questionnaires delivered via web or smartphone, to measure health and behavioral indicators and deliver interventions in everyday life settings. For short-term studies or interventions, these techniques are deployed intensively, causing nontrivial participant burden. For cases where the goal is long-term maintenance, limited infrastructure exists to balance information needs with participant constraints. Yet, the increasing precision of passive sensors such as wearable physiology monitors, smartphone-based location history, and internet-of-things devices, in combination with statistical feature selection and adaptive interventions, has begun to make such designs possible.
Objective: In this paper, we introduce Wear-IT, a smartphone app and cloud framework intended to begin addressing current limitations by allowing researchers to leverage commodity electronics and real-time decision making to optimize the amount of useful data collected while minimizing participant burden.
Methods: The Wear-IT framework uses real-time decision making to find better tradeoffs between the utility of the data collected and the burden placed on participants. Wear-IT integrates a variety of consumer-grade sensors and provides adaptive, personalized, and low-burden monitoring and intervention. Proof-of-concept examples are illustrated using artificial data, and the results of qualitative interviews with users are provided.
Results: Participants provided positive feedback about the ease of use of studies conducted with the Wear-IT framework. Users were positive about their overall experience with the framework and its utility for balancing burden, and expressed excitement about future studies that real-time processing will enable.
Conclusions: The Wear-IT framework uses a combination of passive monitoring, real-time processing, and adaptive assessment and intervention to balance high-quality data collection against low participant burden. The framework presents an opportunity to deploy adaptive assessment and intervention designs that use real-time processing, and provides a platform to study and overcome the challenges of long-term mHealth intervention.
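The abstract does not specify Wear-IT's decision rules, so the following is a hypothetical illustration of the tradeoff it describes: instead of prompting participants for self-reports on a fixed schedule, prompt only when the passive sensor stream deviates from its recent baseline. The class name, window size, and threshold are all illustrative assumptions.

```python
import statistics

class AdaptivePrompter:
    """Send a self-report prompt only when a passive reading is anomalous
    relative to a rolling baseline (hypothetical rule, not Wear-IT's)."""

    def __init__(self, window=20, z_threshold=2.0):
        self.window = window          # number of recent samples in the baseline
        self.z_threshold = z_threshold
        self.history = []

    def observe(self, value):
        """Return True if a self-report prompt should be sent now."""
        trigger = False
        if len(self.history) >= self.window:
            recent = self.history[-self.window:]
            mean = statistics.fmean(recent)
            sd = statistics.stdev(recent)
            if sd > 0 and abs(value - mean) / sd > self.z_threshold:
                trigger = True        # reading deviates from baseline: worth a prompt
        self.history.append(value)
        return trigger

# A steady heart-rate stream triggers nothing; a sudden spike triggers once.
prompter = AdaptivePrompter()
stream = [70, 71, 69, 70, 72, 70, 71, 69, 70, 71,
          70, 69, 71, 70, 72, 70, 69, 71, 70, 70, 110]
prompts = [t for t, v in enumerate(stream) if prompter.observe(v)]
print(prompts)  # -> [20]
```

The design choice is the point of the framework: data utility is highest exactly when the passive signal is surprising, so gating active prompts on surprise collects informative self-reports while leaving participants alone the rest of the time.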


2013 ◽  
Vol 774-776 ◽  
pp. 1481-1484 ◽
Author(s):  
Yan Lian Zhang

To satisfy the special requirements of real-time processing for the 1553B bus in large-scale ground experimentation of airborne weapons, the design and realization of a USB-interface 1553B-bus communication board and real-time processing software for the 1553B bus are proposed. The high-performance fixed-point MSC1210Y5 microcontroller and the Advanced Communication Engine (ACE) BU-61580 are used in the hardware design, and the logical control function is implemented in an FPGA. The system achieves data acquisition and real-time processing of the 1553B bus in ground experimentation; experiments show that the system meets the design requirements of experimental testing.
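The BU-61580's register interface is vendor-specific and not described here, so rather than invent its API, this sketch decodes a MIL-STD-1553B command word, the kind of per-message field extraction the board's real-time processing software must perform. The field layout (RT address, T/R bit, subaddress, word count) is fixed by the standard itself.

```python
def decode_command_word(word):
    """Split a 16-bit 1553B command word into its standard fields:
    bits 15-11 RT address, bit 10 transmit/receive, bits 9-5 subaddress,
    bits 4-0 word count (a count field of 0 encodes 32 data words)."""
    return {
        "rt_address": (word >> 11) & 0x1F,
        "transmit":   bool((word >> 10) & 0x1),
        "subaddress": (word >> 5) & 0x1F,
        "word_count": (word & 0x1F) or 32,
    }

# Example: RT address 5, receive, subaddress 1, 4 data words.
cmd = (5 << 11) | (0 << 10) | (1 << 5) | 4
print(decode_command_word(cmd))
# -> {'rt_address': 5, 'transmit': False, 'subaddress': 1, 'word_count': 4}
```

In a real deployment, words like `cmd` would be read from the ACE's shared RAM and forwarded over USB to the host for this kind of decoding.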


Author(s):  
Lucas da Silva Medeiros ◽  
Ricardo Emerson Julio ◽  
Rodrigo Maximiano Antunes de Almeida ◽  
Guilherme Sousa Bastos

10.2196/16072 ◽  
2020 ◽  
Vol 4 (6) ◽  
pp. e16072 ◽  
Author(s):  
Timothy R Brick ◽  
James Mundie ◽  
Jonathan Weaver ◽  
Robert Fraleigh ◽  
Zita Oravecz


