Technique for automated target object search in video stream from UAV in post-processing mode

Author(s):  
Pylyp Oleksandrovych Prystavka ◽  
Dmytro Ihorovych His ◽  
Artem Valeriiovych Chyrkov
2018 ◽  
Vol 71 (6) ◽  
pp. 1457-1468
Author(s):  
Péter Pongrácz ◽  
András Péter ◽  
Ádám Miklósi

A central problem of behavioural studies providing artificial visual stimuli for non-human animals is to determine how subjects perceive and process these stimuli. Especially in the case of videos, it is important to ascertain that animals perceive the actual content of the images and are not just reacting to the motion cues in the presentation. In this study, we set out to investigate how dogs process life-sized videos. We aimed to find out whether dogs perceive the actual content of video images or whether they only react to the videos as a set of dynamic visual elements. For this purpose, dogs were presented with an object search task where a life-sized projected human was hiding a target object. The videos were either normally oriented or displayed upside down, and we analysed dogs’ reactions towards the projector screen after the video presentations, and their performance in the search task. Results indicated that in the case of the normally oriented videos, dogs spontaneously perceived the actual content of the images. However, the ‘Inverted’ videos were first processed as a set of unrelated visual elements, and only after some exposure to these videos did the dogs show signs of perceiving the unusual configuration of the depicted scene. Our most important conclusion was that dogs process the same type of artificial visual stimuli in different ways, depending on the familiarity of the depicted scene, and that the processing mode can change with exposure to unfamiliar stimuli.


2014 ◽  
Vol 602-605 ◽  
pp. 1689-1692
Author(s):  
Cong Lin ◽  
Chi Man Pun

A novel visual object tracking method for color video streams, based on the traditional particle filter, is proposed in this paper. Feature vectors are extracted from the coefficient matrices of a fast three-dimensional Discrete Cosine Transform (fast 3-D DCT). As the experiments showed, the feature is very robust to occlusion and rotation and is not sensitive to scale changes. The proposed method is efficient enough to be used in a real-time application. The experiments were carried out on several commonly used datasets from the literature. The results are satisfactory and show that the estimated trace follows the target object very closely.
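The feature-extraction step described above — taking the low-frequency coefficients of a fast 3-D DCT over a small stack of patches — can be sketched as follows. This is a minimal illustration using SciPy's `dctn`; the patch size, number of frames, and the `keep` cutoff are assumptions for the sketch, not values from the paper.

```python
import numpy as np
from scipy.fft import dctn

def dct3_feature(patch_stack, keep=4):
    """Feature vector from the 3-D DCT of a stack of grayscale
    patches with shape (T, H, W): keep only the low-frequency
    corner of the coefficient volume and flatten it."""
    coeffs = dctn(patch_stack.astype(np.float64), norm="ortho")
    return coeffs[:keep, :keep, :keep].ravel()

# toy usage: 4 frames of an 8x8 patch around a tracked particle
stack = np.random.default_rng(0).random((4, 8, 8))
feat = dct3_feature(stack)
print(feat.shape)  # (64,) — a 4x4x4 corner of coefficients
```

In a particle-filter loop, each particle's candidate patch stack would be reduced to such a vector and compared (e.g. by Euclidean distance) against the reference feature of the target.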


2018 ◽  
Vol 43 (1) ◽  
pp. 123-152 ◽  
Author(s):  
Mohsen Kaboli ◽  
Kunpeng Yao ◽  
Di Feng ◽  
Gordon Cheng

Author(s):  
R. A. Oliveira ◽  
E. Khoramshahi ◽  
J. Suomalainen ◽  
T. Hakala ◽  
N. Viljanen ◽  
...  

The use of drones and photogrammetric technologies is increasing rapidly in different applications. Currently, the drone processing workflow is in most cases based on sequential image acquisition and post-processing, but there is great interest in real-time solutions. Fast and reliable real-time drone data processing can benefit, for instance, environmental monitoring tasks in precision agriculture and forestry. Recent developments in miniaturized and low-cost inertial measurement systems and GNSS sensors, together with real-time kinematic (RTK) position data, are offering new perspectives for comprehensive remote sensing applications. The combination of these sensors with light-weight and low-cost multi- or hyperspectral frame sensors on drones provides the opportunity to create near real-time or real-time remote sensing data of the target object. We have developed an onboard direct-georeferencing system for drones, to be used in combination with hyperspectral frame cameras in real-time remote sensing applications. The objective of this study is to evaluate the real-time georeferencing by comparing it with post-processing solutions. Experimental data sets were captured at agricultural and forested test sites using the system. The accuracy of the onboard georeferencing data was better than 0.5 m. The results showed that real-time remote sensing is promising and feasible at both test sites.
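The core geometric operation in direct georeferencing of the kind described above can be illustrated with a toy computation: given an RTK-GNSS camera position and an IMU-derived attitude, the ray through a pixel is intersected with a flat ground plane. This is a simplified sketch — real workflows use full photogrammetric camera models, boresight calibration, and terrain elevation — and all names and numbers here are illustrative assumptions.

```python
import numpy as np

def georeference_pixel(px, py, cam_xyz, R, f_pix, cx, cy, ground_z=0.0):
    """Intersect the viewing ray through pixel (px, py) with the flat
    ground plane z = ground_z. cam_xyz is the RTK-GNSS camera position,
    R rotates camera-frame vectors into the mapping frame (from the IMU
    attitude), f_pix is the focal length in pixels."""
    ray_cam = np.array([px - cx, py - cy, f_pix])  # ray in the camera frame
    ray_map = R @ ray_cam                          # rotate into mapping frame
    t = (ground_z - cam_xyz[2]) / ray_map[2]       # scale to reach the plane
    return cam_xyz + t * ray_map                   # ground coordinates

# toy case: nadir-looking camera 100 m above flat ground; the principal
# point should map straight down to the camera's horizontal position
R_nadir = np.diag([1.0, 1.0, -1.0])                # camera z-axis points down
pt = georeference_pixel(512, 384, np.array([10.0, 20.0, 100.0]),
                        R_nadir, f_pix=1000.0, cx=512.0, cy=384.0)
print(pt)  # the principal point lands at ground coordinates (10, 20, 0)
```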


2022 ◽  
Vol 22 (1) ◽  
pp. 1-20
Author(s):  
Di Zhang ◽  
Feng Xu ◽  
Chi-Man Pun ◽  
Yang Yang ◽  
Rushi Lan ◽  
...  

Artificial intelligence, including deep learning and 3D reconstruction methods, is changing people's daily lives. Unmanned aerial vehicles, which can move freely in the air and avoid harsh ground conditions, are now commonly adopted as a suitable tool for 3D reconstruction. The traditional drone-based 3D reconstruction mission usually consists of two steps: image collection and offline post-processing. This raises two problems: one is the uncertainty of whether all parts of the target object are covered, and the other is the tedious post-processing time. Inspired by modern deep learning methods, we build a telexistence drone system with an onboard deep learning computation module and a wireless data transmission module that performs incremental real-time dense reconstruction of urban cities on its own. Two technical contributions are proposed to solve the preceding issues. First, we combine the popular depth fusion surface reconstruction framework with a visual-inertial odometry estimator that integrates the inertial measurement unit and allows for robust camera tracking as well as high-accuracy online 3D scanning. Second, the capability of real-time 3D reconstruction enables a new rendering technique that visualizes the reconstructed geometry of the target as navigation guidance in the head-mounted display (HMD). It thereby turns the traditional path-planning-based modeling process into an interactive one, leading to a higher level of scan completeness. Experiments in the simulation system and on our real prototype demonstrate an improved quality of the 3D model produced by our AI-leveraged drone system.
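The depth-fusion step at the core of such online reconstruction pipelines can be sketched as a KinectFusion-style truncated signed distance function (TSDF) update. This is a minimal single-frame sketch: the voxel-grid layout, camera model, and truncation distance are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_depth(tsdf, weights, depth, K, cam_T_world, voxel_origin,
               voxel_size, trunc=0.3):
    """One TSDF integration step: project every voxel centre into the
    depth image and blend the truncated signed distance into a running
    weighted average."""
    nx, ny, nz = tsdf.shape
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    pts = voxel_origin + voxel_size * np.stack([ii, jj, kk], -1).reshape(-1, 3)
    # world -> camera, then pinhole projection to pixel coordinates
    pts_cam = (cam_T_world[:3, :3] @ pts.T + cam_T_world[:3, 3:4]).T
    z = pts_cam[:, 2]
    uvw = (K @ pts_cam.T).T
    u = np.round(uvw[:, 0] / z).astype(int)
    v = np.round(uvw[:, 1] / z).astype(int)
    h, w = depth.shape
    ok = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.full(z.shape, np.inf)
    d[ok] = depth[v[ok], u[ok]]
    sdf = d - z                               # signed distance along the ray
    upd = ok & (sdf > -trunc) & np.isfinite(sdf)
    t_flat, w_flat = tsdf.reshape(-1), weights.reshape(-1)
    new = np.clip(sdf[upd] / trunc, -1.0, 1.0)
    t_flat[upd] = (w_flat[upd] * t_flat[upd] + new) / (w_flat[upd] + 1.0)
    w_flat[upd] += 1.0

# toy scene: a 1x1x5 voxel column in front of a camera at the origin,
# observing a flat wall at depth 1.0 m everywhere
K = np.array([[100.0, 0.0, 5.0], [0.0, 100.0, 5.0], [0.0, 0.0, 1.0]])
depth = np.full((11, 11), 1.0)
tsdf = np.zeros((1, 1, 5))
weights = np.zeros((1, 1, 5))
fuse_depth(tsdf, weights, depth, K, np.eye(4),
           voxel_origin=np.array([0.0, 0.0, 0.3]), voxel_size=0.3)
# voxels in front of the wall end up positive, voxels just behind it
# negative, and voxels far behind it are left untouched
```

Running this update once per incoming depth frame (with camera poses from the visual-inertial odometry) yields the incremental dense model; the surface is where the TSDF crosses zero.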


2013 ◽  
Vol 479-480 ◽  
pp. 897-900 ◽  
Author(s):  
Ji Hun Park

This paper presents a new outline-contour generation method for tracking a rigid body in a single video stream taken with a moving camera of varying focal length. We assume that feature points and background-eliminated images are provided, and that different views of the tracked object are available while the object is stationary. Using these different views, we volume-reconstruct a 3D body model after 3D scene analysis. To compute the camera parameters and the target-object movement for a scene with a moving target object, we use fixed background feature points and cast the computation as a parameter optimization problem. The performance index for the optimization minimizes feature-point errors as well as the outline-contour difference between the reconstructed 3D model and the background-eliminated tracked object. The proposed method is tested on an input image set.
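The parameter-optimization formulation can be illustrated with a toy version of one part of such a performance index — the feature-point reprojection error — solved with SciPy's `least_squares`. This is a hedged sketch: the identity-rotation camera, the four-parameter model (focal length plus translation), and all numbers are illustrative assumptions, not the paper's formulation, which additionally includes the outline-contour difference term.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, pts3d, obs):
    """Reprojection residuals for a camera with identity rotation:
    params = (f, tx, ty, tz); pts3d are fixed background feature
    points and obs their observed image coordinates."""
    f, tx, ty, tz = params
    p = pts3d + np.array([tx, ty, tz])
    proj = f * p[:, :2] / p[:, 2:3]     # pinhole projection
    return (proj - obs).ravel()

# synthetic scene: 20 fixed background points and a known ground truth
rng = np.random.default_rng(1)
pts3d = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], (20, 3))
true = np.array([800.0, 0.1, -0.2, 0.5])
obs = residuals(true, pts3d, np.zeros((20, 2))).reshape(20, 2)

# recover focal length and translation from the observations alone
fit = least_squares(residuals, x0=[500.0, 0.0, 0.0, 0.0],
                    args=(pts3d, obs))
print(np.round(fit.x, 3))   # should be close to the ground truth
```

In the paper's setting, the residual vector would be extended with a term measuring the mismatch between the projected outline of the reconstructed 3D model and the silhouette of the background-eliminated object.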

