Performance of a real-time sensor and processing system on a helicopter

Author(s):  
F. Kurz ◽  
D. Rosenbaum ◽  
O. Meynberg ◽  
G. Mattyus ◽  
P. Reinartz

A new optical real-time sensor system (4k system) on a helicopter is now ready to use for applications during disasters, mass events and traffic monitoring scenarios. The sensor was developed to be lightweight and small, using relatively cheap components, and is housed in a pylon mounted sideways on a helicopter. The sensor architecture is ultimately a compromise between the required functionality, the development costs, the weight and the sensor size. Onboard processors are integrated in the 4k sensor system for orthophoto generation, for automatic traffic parameter extraction and for data downlinks. It is planned to add real-time processors for person detection and tracking, for DSM generation and for water detection. Equipped with the newest and most powerful off-the-shelf cameras available, the system supports a wide variety of viewing configurations with frame rates of up to 12 Hz for the different applications. Based on three cameras with 50 mm lenses looking in different directions, a maximum FOV of 104° is reachable; with 100 mm lenses a ground sampling distance of 3.5 cm is possible at a flight height of 500 m above ground.

In this paper, we present the first data sets and describe the technical components of the sensor. The effect of helicopter vibrations on the GNSS/IMU accuracy and on the 4k video quality is analysed. It can be shown that when the helicopter hovers, the rolling shutter effect degrades the 4k video quality drastically. The GNSS/IMU error is higher than the specified limit, which is mainly caused by the vibrations on the helicopter and the insufficient vibration absorbers on the sensor board.
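As an illustration of the reported optics, the following sketch recomputes the ground sampling distance and single-camera field of view from focal length and flight height; the 36 mm sensor width and 7 µm pixel pitch are assumptions chosen for illustration, not specifications taken from the paper.

```python
import math

# Illustrative sensor assumptions (not from the paper): a full-frame sensor,
# 36 mm wide, with a pixel pitch of roughly 7 micrometres.
SENSOR_WIDTH_MM = 36.0
PIXEL_PITCH_M = 7e-6

def ground_sampling_distance(focal_length_mm: float, flight_height_m: float) -> float:
    """GSD (m/pixel) = pixel pitch * flight height / focal length."""
    return PIXEL_PITCH_M * flight_height_m / (focal_length_mm / 1000.0)

def horizontal_fov_deg(focal_length_mm: float) -> float:
    """Horizontal field of view of a single camera, in degrees."""
    return math.degrees(2.0 * math.atan(SENSOR_WIDTH_MM / (2.0 * focal_length_mm)))

if __name__ == "__main__":
    # 100 mm lens at 500 m above ground -> roughly 3.5 cm GSD
    print(f"GSD @ 100 mm, 500 m: {ground_sampling_distance(100, 500) * 100:.1f} cm")
    # Single-camera FOV with a 50 mm lens; three overlapping cameras extend
    # this towards the reported combined FOV of about 104 degrees.
    print(f"Single-camera FOV @ 50 mm: {horizontal_fov_deg(50):.1f} deg")
```

Under these assumed values the 100 mm configuration reproduces the reported 3.5 cm GSD at 500 m, which suggests the cameras use pixels of roughly that size.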

2021 ◽  
Vol 13 (4) ◽  
pp. 573
Author(s):  
Navaneeth Balamuralidhar ◽  
Sofia Tilon ◽  
Francesco Nex

We present MultEYE, a traffic monitoring system that can detect, track, and estimate the velocity of vehicles in a sequence of aerial images. The presented solution has been optimized to execute these tasks in real time on an embedded computer installed on an Unmanned Aerial Vehicle (UAV). In order to overcome the limitations of existing object detection architectures related to accuracy and computational overhead, a multi-task learning methodology was employed by adding a segmentation head to an object detector backbone, resulting in the MultEYE object detection architecture. On a custom dataset, it achieved a 4.8% higher mean Average Precision (mAP) score while being 91.4% faster than the state-of-the-art model and able to generalize to different real-world traffic scenes. Dedicated object tracking and speed estimation algorithms were then optimized to reliably track objects from a UAV with limited computational effort. Different strategies to combine object detection, tracking, and speed estimation are also discussed. In our experiments, the optimized detector runs at an average frame rate of up to 29 frames per second (FPS) at a frame resolution of 512 × 320 on an Nvidia Xavier NX board, while the optimally combined detector, tracker and speed estimator pipeline achieves speeds of up to 33 FPS on images of resolution 3072 × 1728. To our knowledge, the MultEYE system is one of the first traffic monitoring systems that was specifically designed and optimized for a UAV platform under real-world constraints.
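The multi-task idea of attaching a segmentation head to a shared detector backbone can be sketched as follows. This is a minimal PyTorch illustration with placeholder layers, head sizes, and loss weights; it is not the actual MultEYE architecture.

```python
import torch
import torch.nn as nn

class MultiTaskDetector(nn.Module):
    """Sketch: a shared backbone feeding a detection head and an auxiliary
    segmentation head. Layer sizes and heads are placeholders."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Tiny stand-in backbone (real systems use e.g. a YOLO-style backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Detection head: per-cell box offsets, objectness, and class scores.
        self.det_head = nn.Conv2d(64, 5 + num_classes, 1)
        # Auxiliary segmentation head: per-pixel vehicle/background logits.
        self.seg_head = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        features = self.backbone(x)            # shared features for both tasks
        return self.det_head(features), self.seg_head(features)

# Joint training would combine the two losses, e.g. (weights assumed):
# det_out, seg_out = model(images)
# loss = det_loss(det_out, targets) + 0.5 * nn.functional.cross_entropy(seg_out, masks)
```

The segmentation branch is typically used only as an auxiliary training signal, so it can be dropped at inference time to keep the embedded detector fast.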


2020 ◽  
Vol 13 (2) ◽  
pp. 32
Author(s):  
Hsu Myat Tin Swe ◽  
Hla Myo Tun ◽  
Maung Maung Latt

The paper focuses on the control design for attitude and position based on a real-time color tracking system using image processing techniques. The research problem in this study is to achieve high accuracy of the tracking system in image processing applications. The solution to this problem is to control the attitude and position of the object based on a real-time color tracking system. The objective of this study is to implement image processing algorithms for an autonomous tracking system. This objective was fulfilled through experimental studies on the contribution of real-time color tracking to a practical motion detection system. The system uses a high-performance camera to improve target tracking and motion estimation. The image processing system consists of a light source to illuminate the scene, a sensor system, and an interface between the sensor system and the computer. Color component analysis is then used for the color tracking system. MATLAB is used for tracking the ball and controlling its attitude and position.
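A color tracking pipeline of this kind (threshold on a color component, then take the blob centroid as the measured position) can be sketched as below. The sketch uses Python/OpenCV rather than the paper's MATLAB implementation, and the HSV thresholds and camera index are illustrative assumptions.

```python
import cv2
import numpy as np

# Illustrative HSV range for a red ball; the actual thresholds depend on the
# lighting and the tracked object's color and are not taken from the paper.
LOWER = np.array([0, 120, 80])
UPPER = np.array([10, 255, 255])

cap = cv2.VideoCapture(0)  # camera index is an assumption
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)          # keep only the target color
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    m = cv2.moments(mask)
    if m["m00"] > 0:                               # centroid of the colored blob
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(frame, (cx, cy), 8, (0, 255, 0), 2)
        # (cx, cy) would feed the attitude/position controller.
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```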


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Cheng-Jian Lin ◽  
Shiou-Yun Jeng ◽  
Hong-Wei Lioa

In recent years, vehicle detection and classification have become essential tasks of intelligent transportation systems, and real-time, accurate vehicle detection from image and video data for traffic monitoring remains challenging. The most noteworthy challenges are operating in real time to accurately locate and classify vehicles in traffic flows and working around total occlusions that hinder vehicle tracking. We present a traffic monitoring approach that overcomes these challenges by employing convolutional neural networks based on You Only Look Once (YOLO). Real-time traffic monitoring systems have attracted significant attention from traffic management departments, and digitally processing and analyzing traffic videos in real time is crucial for extracting reliable data on traffic flow. Therefore, this study presents a real-time traffic monitoring system based on a virtual detection zone, a Gaussian mixture model (GMM), and YOLO to increase the efficiency of vehicle counting and classification. The GMM and the virtual detection zone are used for vehicle counting, and YOLO is used to classify vehicles. Moreover, the distance and time traveled by a vehicle are used to estimate its speed. In this study, the Montevideo Audio and Video Dataset (MAVD), the GARM Road-Traffic Monitoring data set (GRAM-RTM), and our own collected data sets are used to verify the proposed method. Experimental results indicate that the proposed method with YOLOv4 achieved the highest classification accuracies of 98.91% and 99.5% on the MAVD and GRAM-RTM data sets, respectively. Moreover, the proposed method with YOLOv4 also achieves the highest classification accuracies of 99.1%, 98.6%, and 98% under daytime, nighttime, and rainy conditions, respectively. In addition, the average absolute percentage error of vehicle speed estimation with the proposed method is about 7.6%.
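The counting stage described above (GMM foreground extraction plus a virtual detection zone, with speed derived from distance and travel time) might look roughly like the following simplified sketch. The zone coordinates, area threshold, and file name are assumptions, and the crossing test is deliberately naive compared with the paper's method; in the full pipeline, YOLO would classify each counted vehicle crop.

```python
import cv2

# Minimal sketch of counting with a GMM background model and a virtual
# detection zone (a horizontal line); coordinates and thresholds are
# illustrative, not the parameters used in the paper.
LINE_Y = 400          # y-coordinate of the virtual detection line (pixels)
MIN_AREA = 800        # ignore blobs smaller than this (pixels)

gmm = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
cap = cv2.VideoCapture("traffic.mp4")   # hypothetical input video
count = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = gmm.apply(frame)                              # GMM foreground mask
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < MIN_AREA:
            continue
        x, y, w, h = cv2.boundingRect(c)
        cy = y + h // 2
        if abs(cy - LINE_Y) < 5:       # blob centre crosses the virtual zone
            count += 1                 # YOLO would classify frame[y:y+h, x:x+w] here

# Speed estimate: known ground distance between two zones divided by travel time.
def speed_kmh(distance_m: float, frames_elapsed: int, fps: float) -> float:
    return (distance_m / (frames_elapsed / fps)) * 3.6
```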


2011 ◽  
Vol 328-330 ◽  
pp. 2234-2237
Author(s):  
Dong Sheng Liang ◽  
Zhao Hui Liu ◽  
Wen Liu

Detection and tracking of moving targets is widely applied in many fields of today's society. Because of the shortcomings of traditional video tracking systems, this paper proposes a novel video processing system based on an FPGA and DSP hardware design, with which moving targets in video can be detected and tracked. In this system, the DSP serves as the core and mainly executes the video and image processing algorithms, while the FPGA acts as a coprocessor responsible for handling external data and logic. The hardware structure, link configuration, program code, and other aspects of the system are optimized. Finally, in experiments with an input video frame rate of 40 frames/s, an image resolution of 512 × 512 pixels, and 16-bit quantized image sequences, the system completes the relevant real-time detection and tracking algorithms and correctly extracts target positions from the image sequences. The results show that the system offers high processing speed, real-time operation, high accuracy, and stability.
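The abstract does not specify which detection algorithm runs on the DSP, so the following is only a generic frame-differencing sketch of moving-target detection on 512 × 512 sequences, intended to illustrate the kind of per-frame work that must fit in the 25 ms budget implied by 40 frames/s.

```python
import numpy as np

def detect_moving_target(prev_frame: np.ndarray, curr_frame: np.ndarray,
                         threshold: int = 30):
    """Frame-differencing sketch of moving-target detection on 512x512,
    16-bit image sequences; the threshold is illustrative, and the paper's
    actual DSP algorithm is not described in the abstract."""
    diff = np.abs(curr_frame.astype(np.int32) - prev_frame.astype(np.int32))
    mask = diff > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return int(xs.mean()), int(ys.mean())   # centroid of the changed pixels

# At 40 frames/s the per-frame budget is 25 ms, which is why the DSP/FPGA
# partitioning described in the paper matters for real-time operation.
```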


Transport ◽  
2008 ◽  
Vol 23 (2) ◽  
pp. 144-149 ◽  
Author(s):  
Jonas Daunoras ◽  
Vaclovas Bagdonas ◽  
Vytautas Gargasas

The article analyses the problem of further developing geographic information systems with a traffic monitoring channel (GIS-TMC) in order to present road users with effective information about the fastest (shortest in terms of time) routes and thus to improve the use of existing city transport infrastructure. To solve this task, it is suggested to create a dynamic (automatically updated in real time) database of street passing durations, supported by a real-time city transport monitoring system consisting of a network of sensors, a data collection communication system, and a data processing system. The article shows that to predict the street passing duration, it is enough to measure the speed of traffic at characteristic points of the street. Measurements of traffic density do not significantly improve the accuracy of the forecast. Analytical formulas for forecasting the street passing time are presented.
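The abstract does not reproduce the analytical formulas, but the underlying idea (passing time as the sum of segment lengths divided by speeds measured at characteristic points) can be illustrated with a simple sketch; the segment lengths and speeds below are hypothetical.

```python
from typing import Sequence

def street_passing_time(segment_lengths_m: Sequence[float],
                        measured_speeds_mps: Sequence[float]) -> float:
    """Illustrative travel-time estimate: the street is split into segments
    around the characteristic measurement points, and the passing time is the
    sum of each segment length divided by the speed measured for it. The
    paper's actual analytical formulas are not given in the abstract."""
    return sum(length / max(speed, 0.1)      # guard against a near-zero reading
               for length, speed in zip(segment_lengths_m, measured_speeds_mps))

# Example: three 300 m segments at 10, 6 and 12 m/s -> about 105 s.
print(street_passing_time([300, 300, 300], [10.0, 6.0, 12.0]))
```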

