Light-Weight Siamese Attention Network Object Tracking for Unmanned Aerial Vehicle

2020 ◽  
Vol 40 (19) ◽  
pp. 1915001
Author(s):  
崔洲涓 Cui Zhoujuan ◽  
安军社 An Junshe ◽  
张羽丰 Zhang Yufeng ◽  
崔天舒 Cui Tianshu
2020 ◽  
Vol 12 (16) ◽  
pp. 2646
Author(s):  
Shiyu Zhang ◽  
Li Zhuo ◽  
Hui Zhang ◽  
Jiafeng Li

Visual object tracking in unmanned aerial vehicle (UAV) videos plays an important role in a variety of fields, such as traffic data collection, traffic monitoring, and film and television shooting. However, it remains challenging to track a target robustly in UAV vision tasks due to factors such as appearance variation, background clutter, and severe occlusion. In this paper, we propose a novel two-stage UAV tracking framework, which includes a target detection stage based on multifeature discrimination and a bounding-box estimation stage based on an instance-aware attention network. In the target detection stage, we explore a feature representation scheme for small targets that integrates handcrafted features, low-level deep features, and high-level deep features. The correlation filter is then used to roughly predict the target location. In the bounding-box estimation stage, an instance-aware intersection over union (IoU)-Net is integrated with an instance-aware attention network to estimate the target size from the bounding-box proposals generated in the target detection stage. Extensive experimental results on the UAV123 and UAVDT datasets show that our tracker, running at over 25 frames per second (FPS), outperforms state-of-the-art UAV visual tracking approaches.
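The bounding-box estimation stage scores candidate proposals by their overlap with the rough detection from the first stage. A minimal sketch of the underlying intersection-over-union computation, assuming axis-aligned (x, y, w, h) boxes; the proposal selection shown here is illustrative and stands in for, rather than reproduces, the paper's IoU-Net:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x, y, w, h) boxes."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ax2, ay2 = ax1 + aw, ay1 + ah
    bx2, by2 = bx1 + bw, by1 + bh
    # overlap rectangle (empty if the boxes are disjoint)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# choose the proposal that best overlaps the roughly detected location
rough_box = (12.0, 8.0, 40.0, 30.0)                       # illustrative values
proposals = [(10.0, 10.0, 40.0, 28.0), (60.0, 60.0, 20.0, 20.0)]
best = max(proposals, key=lambda b: iou(b, rough_box))
```

A refinement network such as IoU-Net would predict this overlap from features rather than compute it geometrically, but the geometric definition is what it is trained against.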


2015 ◽  
Vol 781 ◽  
pp. 491-494
Author(s):  
Channa Meng ◽  
John Morris ◽  
Chattraku Sombattheera

We use multiple tracking agents in parallel to autonomously track an arbitrary target from an unmanned aerial vehicle. An object initially selected by a user from a possibly cluttered scene containing other static and moving objects and occlusions (both partial and complete) is tracked as long as it remains in view, using a single light-weight camera readily installed in a UAV. We assume, for the present at least, that the UAV sends images to a ground station which controls it. We evaluated several individual tracking agents in terms of tracking success and their times for processing frames streamed from the UAV to the ground station at 25 fps, so the system should compute results within 40 ms. Histogram trackers were most successful, at ~10 ms per frame, and can be further optimized.
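At 25 fps the ground station has a 40 ms budget per frame (1000 / 25). A minimal sketch of the kind of histogram matching a histogram tracker relies on, assuming 8-bit grayscale pixels and 8-bin normalized histograms; the abstract does not specify the agents' actual features, so this is only illustrative:

```python
def histogram(pixels, bins=8):
    """Normalized bin counts for a list of 8-bit grayscale pixel values."""
    h = [0] * bins
    for p in pixels:
        h[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels) or 1
    return [c / total for c in h]

def intersection(h1, h2):
    """Histogram intersection similarity: 1.0 for identical histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def best_match(target_hist, candidate_windows):
    """Index of the candidate window whose histogram best matches the target."""
    return max(range(len(candidate_windows)),
               key=lambda i: intersection(target_hist, histogram(candidate_windows[i])))
```

Because the model is a fixed-size vector and the comparison is a single pass over the bins, per-frame cost scales only with the number of candidate windows, which is consistent with the ~10 ms timings reported.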


Electronics ◽  
2021 ◽  
Vol 10 (7) ◽  
pp. 795
Author(s):  
Happiness Ugochi Dike ◽  
Yimin Zhou

Multiple object tracking (MOT) from unmanned aerial vehicle (UAV) videos faces several challenges, such as motion capture and appearance, clustering, object variation, high altitudes, and abrupt motion. Consequently, the objects captured by the UAV are usually quite small, and the target object appearance information is not always reliable. To solve these issues, a new deep-learning-based tracking technique is presented that attains state-of-the-art performance on standard datasets, such as the Stanford Drone and Unmanned Aerial Vehicle Benchmark: Object Detection and Tracking (UAVDT) datasets. The proposed faster RCNN (region-based convolutional neural network) framework was enhanced by integrating a series of improvements, including proper calibration of key parameters, multi-scale training, hard negative mining, and feature collection, to improve the region-based CNN baseline. Furthermore, a deep quadruplet network (DQN) was applied to track the movement of the captured objects in the crowded environment, and it was modelled to utilize a new quadruplet loss function in order to learn the feature space. A deep six-layer rectified linear unit (ReLU) convolution was used in the faster RCNN to mine spatial-spectral features. The experimental results on the standard datasets demonstrated high accuracy. Thus, the proposed method can be used to detect multiple objects and track their trajectories with high accuracy.
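A quadruplet loss extends the triplet loss with a second negative sample, pushing negative pairs apart as well as pulling the anchor toward the positive. A minimal sketch of one common formulation, assuming precomputed embedding distances; the margin values are illustrative defaults, not taken from the paper:

```python
def quadruplet_loss(d_ap, d_an, d_nn, margin1=1.0, margin2=0.5):
    """One common quadruplet loss over scalar embedding distances.

    d_ap: anchor-positive distance
    d_an: anchor-negative distance
    d_nn: distance between the two negative samples
    """
    # triplet-style term: positive should be closer to the anchor than negative
    term1 = max(0.0, d_ap - d_an + margin1)
    # extra term: the anchor-positive pair should also beat negative-negative pairs
    term2 = max(0.0, d_ap - d_nn + margin2)
    return term1 + term2
```

When both margins are satisfied the loss is zero, so training pressure is applied only to quadruplets that violate the desired distance ordering.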


2020 ◽  
Vol 1528 ◽  
pp. 012018
Author(s):  
Ahmad Haris Indra Fadhillah ◽  
Ahsanu Taqwim Safrudin ◽  
Surya Darma

2018 ◽  
Vol 15 (1) ◽  
pp. 172988141875910 ◽  
Author(s):  
Sheng Liu ◽  
Yuan Feng

Object tracking for unmanned aerial vehicle applications in outdoor scenes is a very complex problem. In videos captured by unmanned aerial vehicles, frequent variation in illumination, motion blur, image noise, deformation, lack of image texture, occlusion, fast motion, and other degradations cause most tracking methods to fail. This article focuses on object tracking in severely degraded videos. To deal with these various degradations, a real-time object tracking method for highly dynamic backgrounds is developed. By integrating the histogram of oriented gradients, an RGB histogram, and a motion histogram into a novel statistical model, our method can robustly track the target in UAV-captured videos. Compared with existing methods, our proposed approach consumes fewer resources during tracking, significantly increases tracking speed, and runs faster than state-of-the-art methods. Our approach also achieved satisfactory results on the challenging Object Tracking Benchmark 2013 (OTB-2013); supplementary experiments demonstrate that our method is more effective and accurate than other methods.
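The abstract does not specify how the three feature histograms are combined; a generic weighted fusion of per-feature similarity scores is one plausible reading, sketched below with hypothetical feature names and weights:

```python
def fused_score(scores, weights):
    """Weighted average of per-feature similarities for one candidate window.

    scores and weights are dicts keyed by feature name, e.g.
    {"hog": 0.8, "rgb": 0.6, "motion": 0.7}; similarities lie in [0, 1].
    """
    total = sum(weights.values())
    return sum(weights[k] * scores[k] for k in weights) / total

# illustrative per-feature similarities for one candidate window
scores = {"hog": 0.8, "rgb": 0.6, "motion": 0.7}
weights = {"hog": 1.0, "rgb": 1.0, "motion": 1.0}   # equal weights, assumed
candidate_score = fused_score(scores, weights)
```

Combining complementary cues this way is what lets one degraded feature (say, a blurred gradient histogram) be compensated by the others, matching the paper's motivation for a multi-histogram statistical model.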


2021 ◽  
Vol 40 (9) ◽  
pp. 1550-1569
Author(s):  
Qinghua GUO ◽  
Tianyu HU ◽  
Jin LIU ◽  
Shichao JIN ◽  
Qing XIAO ◽  
...  

2020 ◽  
Vol 12 (2) ◽  
Author(s):  
Daniel R. McArthur ◽  
Arindam B. Chowdhury ◽  
David J. Cappelleri

Abstract This paper presents the design of a light-weight, compliant end-effector and an image processing strategy that together enable the Interacting-BoomCopter (I-BC) unmanned aerial vehicle (UAV) to perform an autonomous door-opening task. Autonomy is achieved through feedback from an onboard camera for door detection and localization, and from embedded force and distance sensors in the end-effector for detecting physical interaction with the door. The results of several experimental flight tests are presented in which the end-effector and image processing strategy were deployed on the I-BC to successfully open a small enclosure door autonomously.
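The abstract describes camera feedback driving the approach and the end-effector's force and distance sensors confirming physical contact; a hypothetical finite-state sketch of such a sequence is shown below. The state names, thresholds, and transitions are assumptions for illustration, not the I-BC's actual control logic:

```python
def next_state(state, door_visible, distance_cm, force_n,
               contact_dist=5.0, push_force=2.0):
    """One step of a hypothetical door-opening state machine.

    door_visible: camera-based door detection result
    distance_cm:  end-effector distance sensor reading
    force_n:      end-effector force sensor reading (newtons)
    """
    if state == "SEARCH":
        # camera feedback: advance once the door is detected and localized
        return "APPROACH" if door_visible else "SEARCH"
    if state == "APPROACH":
        # distance sensor: switch to pushing once near the door surface
        return "PUSH" if distance_cm <= contact_dist else "APPROACH"
    if state == "PUSH":
        # force sensor: sustained contact force indicates the door is moving
        return "DONE" if force_n >= push_force else "PUSH"
    return state
```

Separating vision-driven states from contact-driven states mirrors the paper's split between the image processing strategy and the embedded sensors.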


