Fixed Wing Aircraft Automatic Landing with the Use of a Dedicated Ground Sign System

Aerospace ◽  
2021 ◽  
Vol 8 (6) ◽  
pp. 167
Author(s):  
Bartłomiej Brukarczyk ◽  
Dariusz Nowak ◽  
Piotr Kot ◽  
Tomasz Rogalski ◽  
Paweł Rzucidło

The paper presents automatic control of an aircraft in the longitudinal channel during automatic landing. There are two crucial components of the system presented in the paper: a vision system and an automatic landing system. The vision system processes pictures of dedicated on-ground signs captured by an on-board video camera to determine a glide path. The image processing algorithms used by the system were implemented in an embedded system and tested under laboratory conditions using the hardware-in-the-loop method. The output of the vision system was used as one of the input signals to the automatic landing system, whose major components are control algorithms based on a fuzzy-logic expert system created to imitate pilot actions while landing the aircraft. Both systems were connected to cooperate and to control an aircraft model in a simulation environment. Selected test results demonstrating control efficiency and precision are shown in the final section of the paper.
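The fuzzy-logic idea behind such a landing controller can be sketched as a small Mamdani-style rule base: glide-path deviation and its rate are fuzzified, rules imitating pilot actions are combined, and a crisp pitch command is defuzzified. The membership shapes, rule weights, and output levels below are illustrative assumptions, not the paper's tuned expert system.

```python
# Minimal Mamdani-style fuzzy controller sketch: pitch-rate command from
# glide-path deviation (deg) and its rate of change. All numeric values
# are illustrative assumptions.

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def pitch_command(dev, dev_rate):
    """Crisp pitch-rate command via weighted-average defuzzification."""
    # Rule base: (deviation membership, rate membership, output level)
    rules = [
        (tri(dev, -4, -2, 0), tri(dev_rate, -2, -1, 0), +1.5),  # below path, sinking -> pitch up
        (tri(dev, -2,  0, 2), tri(dev_rate, -1,  0, 1),  0.0),  # on path, steady -> hold
        (tri(dev,  0,  2, 4), tri(dev_rate,  0,  1, 2), -1.5),  # above path, rising -> pitch down
    ]
    num = den = 0.0
    for mu_dev, mu_rate, out in rules:
        w = min(mu_dev, mu_rate)        # rule activation: AND via minimum
        num += w * out
        den += w
    return num / den if den > 0 else 0.0

print(pitch_command(-2.0, -1.0))  # below the glide path and sinking -> 1.5 (pitch up)
```

A real expert system would carry many more rules and tuned membership functions, but the fuzzify/combine/defuzzify structure is the same.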

2021 ◽  
Author(s):  
Srivatsan Krishnan ◽  
Behzad Boroujerdian ◽  
William Fu ◽  
Aleksandra Faust ◽  
Vijay Janapa Reddi

We introduce Air Learning, an open-source simulator and gym environment for deep reinforcement learning research on resource-constrained aerial robots. Equipped with domain randomization, Air Learning exposes a UAV agent to a diverse set of challenging scenarios. We seed the toolset with point-to-point obstacle avoidance tasks in three different environments and with Deep Q Network (DQN) and Proximal Policy Optimization (PPO) trainers. Air Learning assesses the policies’ performance under various quality-of-flight (QoF) metrics, such as the energy consumed, endurance, and the average trajectory length, on resource-constrained embedded platforms like a Raspberry Pi. We find that the trajectories on an embedded Raspberry Pi are vastly different from those predicted on a high-end desktop system, resulting in up to 40% longer trajectories in one of the environments. To understand the source of such discrepancies, we use Air Learning to artificially degrade high-end desktop performance to mimic what happens on a low-end embedded system. We then propose a mitigation technique that uses hardware-in-the-loop profiling to determine the latency distribution of running the policy on the target platform (the onboard compute of the aerial robot). A latency randomly sampled from this distribution is then added as an artificial delay within the training loop. Training the policy with artificial delays allows us to minimize the hardware gap (the discrepancy in the flight time metric is reduced from 37.73% to 0.5%). Thus, Air Learning with hardware-in-the-loop characterizes those differences and exposes how the choice of onboard compute affects the aerial robot’s performance. We also conduct reliability studies to assess the effect of sensor failures on the learned policies. All put together, Air Learning enables a broad class of deep RL research on UAVs. The source code is available at: https://github.com/harvard-edge/AirLearning.
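The latency-injection mitigation described above can be sketched in a few lines: a latency sampled from the target platform's measured distribution is inserted into every training step. The latency values and the toy environment below are made-up illustrations; a real setup would profile the actual onboard computer and wrap a real gym environment.

```python
# Sketch of hardware-in-the-loop latency injection during training.
# The latency distribution is a stand-in for real profiling data.
import random

random.seed(0)

# Hypothetical per-inference latencies (seconds) measured on the
# embedded target, e.g. a Raspberry Pi.
measured_latencies = [0.021, 0.024, 0.026, 0.031, 0.045]

def step_with_artificial_delay(env_step, action):
    """Wrap an environment step so the action takes effect only after a
    randomly sampled platform latency, mimicking onboard compute."""
    delay = random.choice(measured_latencies)
    return env_step(action, delay), delay

# Toy environment: reward shrinks as actuation delay grows, standing in
# for the longer trajectories observed on slower hardware.
def toy_env_step(action, delay):
    return action - 10.0 * delay

reward, delay = step_with_artificial_delay(toy_env_step, 1.0)
print(reward, delay)
```

Training against delays drawn from the target's distribution is what closes the gap between desktop-trained behavior and embedded deployment.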


2020 ◽  
Author(s):  
Pengcheng Wang ◽  
Zenghong Ma ◽  
Xiaoqiang Du ◽  
Wenwu Lu ◽  
Wensong Xing ◽  
...  

Author(s):  
P. P. Kazakevich ◽  
A. N. Yurin ◽  
G. A. Prokopovich

The most rational method for assessing fruit quality is the optical method using PPE, which offers accurate and stable measurement as well as remote, high-productivity operation. The paper presents a classification of fruit quality recognition systems and substantiates the design and technological scheme of a vision system (STZ) for sorting fruit, consisting of an optical module with structured illumination and a video camera, an electronic control unit with an interface, and actuators for the sorter and the fruit conveyor. In the course of the study, a single-stream fruit flow with forced rotation in the PPE was substantiated; a structural and technological scheme of the STZ with a feeding conveyor, an optical module and a control unit was developed; and an algorithm for the STZ software was designed on the basis of fruit color segmentation and tracking algorithms and deep-learning ANNs, which recognize the size and color of fruits as well as damage from mechanical stress, pests and diseases. The developed STZ has been introduced into the LSP-4 processing line for sorting and packing apples, which successfully passed preliminary and production tests at OJSC Ostromechevo. Preliminary tests of the LSP-4 line showed that it recognized fruit with a probability of at least 95% at a throughput of 2.5 t/h.
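The color-segmentation step such a sorting line relies on can be illustrated with a per-pixel classifier that builds coarse color masks. The thresholds and class names below are assumptions for illustration only; the deployed system uses trained deep networks rather than fixed rules.

```python
# Illustrative per-pixel color segmentation for fruit sorting.
# Threshold values are assumed, not taken from the paper.

def classify_pixel(r, g, b):
    """Very coarse per-pixel label used to build color masks."""
    if r > 150 and g < 100 and b < 100:
        return "red_fruit"
    if g > 120 and r < 180 and b < 100:
        return "green_fruit"
    if r < 80 and g < 80 and b < 80:
        return "dark_defect"      # bruise / mechanical-damage candidate
    return "background"

def fraction_defective(pixels):
    """Share of fruit-surface pixels flagged as dark defects."""
    labels = [classify_pixel(*p) for p in pixels]
    fruit = [lab for lab in labels if lab != "background"]
    if not fruit:
        return 0.0
    return fruit.count("dark_defect") / len(fruit)

sample = [(200, 40, 30)] * 9 + [(50, 50, 50)]   # 9 red pixels, 1 dark one
print(fraction_defective(sample))  # -> 0.1
```

A per-fruit defect fraction like this is the kind of scalar a sorter's control unit can threshold to route apples between grades.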


Author(s):  
Abouzahir Mohamed ◽  
Elouardi Abdelhafid ◽  
Bouaziz Samir ◽  
Latif Rachid ◽  
Tajer Abdelouahed

Improved particle-filter-based simultaneous localization and mapping (SLAM) has been developed for many robotic applications. The main purpose of this article is to demonstrate that recent heterogeneous architectures can be used to implement FastSLAM2.0 and can greatly help in designing embedded-system-based robot applications and autonomous navigation. The algorithm is studied, optimized and evaluated on a real dataset using different sensor data and a hardware-in-the-loop (HIL) method. The authors implemented the algorithm on an embedded system targeting such applications. Results demonstrate that the optimized FastSLAM2.0 algorithm provides localization consistent with a reference. Such systems are suitable for real-time SLAM applications.
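The particle-filter core that FastSLAM-style algorithms iterate can be sketched in one dimension: propagate particles with noisy motion, weight them by a range measurement to a landmark, and resample. The motion and measurement models, noise levels, and landmark position below are illustrative assumptions, not the paper's optimized FastSLAM2.0 implementation.

```python
# Minimal 1-D particle filter: the predict / weight / resample cycle
# underlying FastSLAM-style localization. All models are toy stand-ins.
import math
import random

random.seed(1)
LANDMARK = 10.0   # known 1-D landmark position (assumed map)

def update(particles, control, measured_range, motion_noise=0.1, meas_std=0.5):
    """One predict-weight-resample cycle."""
    # 1) Motion update: move each particle by the control plus noise.
    moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    # 2) Measurement update: Gaussian likelihood of the observed range.
    weights = [math.exp(-((LANDMARK - p) - measured_range) ** 2
                        / (2 * meas_std ** 2)) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3) Resampling: redraw particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

particles = [0.0] * 100
true_pose = 0.0
for _ in range(5):
    true_pose += 1.0
    measured = LANDMARK - true_pose          # noiseless range, for brevity
    particles = update(particles, 1.0, measured)

est = sum(particles) / len(particles)
print(round(est, 2))   # close to the true pose of 5.0
```

FastSLAM2.0 additionally carries a per-particle landmark map and refines the proposal with the latest measurement; the heterogeneous-architecture work above is about making exactly this loop run in real time on embedded hardware.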


Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3542 ◽  
Author(s):  
Eleftherios Lygouras ◽  
Nicholas Santavas ◽  
Anastasios Taitzoglou ◽  
Konstantinos Tarchanidis ◽  
Athanasios Mitropoulos ◽  
...  

Unmanned aerial vehicles (UAVs) play a primary role in a plethora of technical and scientific fields owing to their wide range of applications. In particular, the provision of emergency services during a crisis event is a vital application domain where such aerial robots can contribute, delivering valuable assistance to both distressed humans and rescue teams. Bearing in mind that time constraints constitute a crucial parameter in search and rescue (SAR) missions, the punctual and precise detection of humans in peril is of paramount importance. This paper deals with real-time human detection onboard a fully autonomous rescue UAV. Using deep learning techniques, the implemented embedded system was capable of detecting open water swimmers. This allowed the UAV to provide assistance accurately in a fully unsupervised manner, thus enhancing first responder operational capabilities. The novelty of the proposed system is the combination of global navigation satellite system (GNSS) techniques and computer vision algorithms for both precise human detection and rescue apparatus release. Details about the hardware configuration as well as the system’s performance evaluation are fully discussed.
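The geometry behind combining vision detection with GNSS can be sketched as follows: project a detected swimmer's pixel onto the sea surface using the UAV's altitude and a pinhole camera model, then offset the UAV's GNSS fix by the resulting metric displacement. The camera intrinsics and the nadir-looking assumption below are illustrative; the paper's actual detection and release logic is not reproduced here.

```python
# Pixel -> ground-offset -> lat/lon sketch for a nadir-looking camera.
# Intrinsics (fx, fy, cx, cy) are assumed example values.
import math

def pixel_to_ground_offset(u, v, altitude_m, fx, fy, cx, cy):
    """Pinhole model: pixel coordinates -> metres east/north of the UAV."""
    east = altitude_m * (u - cx) / fx
    north = altitude_m * (cy - v) / fy   # image rows grow downward
    return east, north

def offset_to_latlon(lat, lon, east, north, earth_r=6371000.0):
    """First-order flat-earth conversion of a small metric offset."""
    dlat = math.degrees(north / earth_r)
    dlon = math.degrees(east / (earth_r * math.cos(math.radians(lat))))
    return lat + dlat, lon + dlon

east, north = pixel_to_ground_offset(960, 340, 30.0, fx=800, fy=800, cx=640, cy=360)
print(east, north)                                 # -> 12.0 0.75 (metres)
print(offset_to_latlon(40.0, 22.0, east, north))   # swimmer fix near the UAV's GNSS fix
```

In practice the camera tilt and UAV attitude must also enter the projection, but this is the core of turning an image detection into a geolocated drop point.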


2019 ◽  
Vol 20 (8) ◽  
pp. 490-497
Author(s):  
V. P. Noskov ◽  
I. O. Kiselev

The paper addresses the topical tasks of 3D reconstruction of industrial-urban environment models and navigation by identifying textured planar objects during motion, using an onboard vision system consisting of a mutually calibrated 3D laser sensor and a video camera with a common field of view. For a complete solution of the navigation task (determining the three linear and three angular coordinates of the controlled object), at least three mutually non-parallel planar objects must be selected and identified during motion in the sequence of point clouds formed by the 3D laser sensor. When fewer than three planar objects can be extracted (for example, in environments subjected to destruction), the navigation problem is not fully solved: not all coordinates are determined unambiguously, and some coordinates are related by linear or non-linear dependencies. In these cases, it is proposed to additionally use the texture of the selected planar objects captured by the video camera. The paper analyzes how the solution of the navigation problem depends on the number of textured planar objects selected and identified in the current combined images, and evaluates algorithms for solving the navigation problem when one textured planar object, or two non-parallel textured planar objects, are selected and identified during motion. It is shown that using texture reduces the navigation problem to a three-dimensional optimization problem in the first case and to a one-dimensional one in the second (finding the global optimum of a functional of three and of one variable, respectively).
The proposed algorithms for processing combined images provide a complete solution to the navigation task even when fewer than three planar objects are selected, which significantly increases the reliability of solving the navigation task and of building an environment model even in industrial-urban environments that have suffered destruction, and hence the reliability and survivability of ground and airborne robotic tools in autonomous modes of movement. Results obtained with the corresponding software and hardware in real industrial-urban environments confirmed the accuracy and effectiveness of the proposed algorithms.
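Why three mutually non-parallel planes fix position can be shown directly: each identified plane contributes one linear constraint n·x = d on the sensor position x, and three independent normals make the 3×3 system uniquely solvable. The plane data below are illustrative.

```python
# Position from three identified planes: solve n_i . x = d_i, i = 1..3.
# Uses Cramer's rule on a 3x3 system; plane values are example data.

def solve3(a, b):
    """Solve a 3x3 linear system a @ x = b with Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(a)   # nonzero iff the three normals are linearly independent
    x = []
    for j in range(3):
        m = [row[:] for row in a]
        for i in range(3):
            m[i][j] = b[i]
        x.append(det(m) / d)
    return x

# Normals and offsets of three identified planes (floor plus two walls).
normals = [[0.0, 0.0, 1.0],   # floor:  z = 1.5
           [1.0, 0.0, 0.0],   # wall:   x = 2.0
           [0.0, 1.0, 0.0]]   # wall:   y = -3.0
offsets = [1.5, 2.0, -3.0]

print(solve3(normals, offsets))  # -> [2.0, -3.0, 1.5]
```

With only one or two planes the determinant degenerates and the remaining coordinates are exactly the under-determined degrees of freedom the paper recovers from texture.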


Author(s):  
Tomás Serrano-Ramírez ◽  
Ninfa del Carmen Lozano-Rincón ◽  
Arturo Mandujano-Nava ◽  
Yosafat Jetsemaní Sámano-Flores

Computer vision systems are an essential part of industrial automation tasks such as identification, selection, measurement, defect detection and quality control of parts and components. Smart cameras are used to perform these tasks; however, their high acquisition and maintenance costs are restrictive. In this work, a novel low-cost artificial vision system is proposed for classifying objects in real time, using the Raspberry Pi 3B+ embedded system, a web camera and the OpenCV computer vision library. The suggested technique comprises training a supervised Haar Cascade classifier on image banks of the object to be recognized, subsequently generating a predictive model which is put to the test with real-time detection, together with the calculation of the prediction error. The aim is a powerful yet affordable vision system developed entirely with free software.
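The Haar cascade approach mentioned above rests on Haar-like features evaluated over an integral image: once the summed-area table is built, any rectangular pixel sum costs O(1), and a feature is the difference of adjacent rectangle sums. The tiny image below is illustrative; actual training and detection, as in the paper, would use OpenCV's cascade tools.

```python
# Haar-like feature evaluation via an integral image (summed-area table).
# Example data only; real cascades are trained on large image banks.

def integral_image(img):
    """ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        run = 0
        for x in range(w):
            run += img[y][x]
            ii[y][x] = run + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum over img[y0..y1][x0..x1], O(1) from the integral image."""
    s = ii[y1][x1]
    if x0 > 0:
        s -= ii[y1][x0 - 1]
    if y0 > 0:
        s -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        s += ii[y0 - 1][x0 - 1]
    return s

# Two-rectangle "edge" feature: bright left half minus dark right half.
img = [[9, 9, 1, 1],
       [9, 9, 1, 1]]
ii = integral_image(img)
feature = rect_sum(ii, 0, 0, 1, 1) - rect_sum(ii, 2, 0, 3, 1)
print(feature)  # -> 36 - 4 = 32
```

A cascade chains thousands of such features into staged classifiers, which is what makes detection fast enough for real time on a Raspberry Pi.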


SIMULATION ◽  
2019 ◽  
Vol 96 (2) ◽  
pp. 169-183
Author(s):  
Saumya R Sahoo ◽  
Shital S Chiddarwar

Omnidirectional robots offer better maneuverability and a greater degree of freedom over conventional wheel mobile robots. However, the design of their control system remains a challenge. In this study, a real-time simulation system is used to design and develop a hardware-in-the-loop (HIL) simulation platform for an omnidirectional mobile robot using bond graphs and a flatness-based controller. The control input from the simulation model is transferred to the robot hardware through an Arduino microcontroller input board. For feedback to the simulation model, a Kinect-based vision system is used. The developed controller, the Kinect-based vision system, and the HIL configuration are validated in the HIL simulation-based environment. The results confirm that the proposed HIL system can be an efficient tool for verifying the performance of the hardware and simulation designs of flatness-based control systems for omnidirectional mobile robots.
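The kinematic map such a controller must invert can be sketched for a three-wheel omnidirectional robot: each wheel's drive speed is a linear function of the desired body twist (vx, vy, ω). The wheel geometry below is an assumption for illustration; the paper derives its dynamic model with bond graphs rather than this pure kinematics.

```python
# Inverse kinematics sketch for a three-wheel omnidirectional robot.
# Wheel placement (angles, radius) is an assumed example geometry.
import math

R = 0.2                                                    # wheel-to-centre distance (m), assumed
WHEEL_ANGLES = [math.radians(a) for a in (90, 210, 330)]   # wheel headings, assumed

def wheel_speeds(vx, vy, omega):
    """Linear roller speed of each wheel for a desired body twist."""
    return [-math.sin(th) * vx + math.cos(th) * vy + R * omega
            for th in WHEEL_ANGLES]

# Pure rotation: every wheel contributes equally.
print([round(s, 3) for s in wheel_speeds(0.0, 0.0, 1.0)])  # -> [0.2, 0.2, 0.2]
```

In a HIL setup like the one described, this mapping runs in the simulation model while the Kinect feedback closes the loop around the physical wheels.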


Electronics ◽  
2019 ◽  
Vol 8 (10) ◽  
pp. 1116 ◽  
Author(s):  
Yushkova ◽  
Sanchez ◽  
de Castro ◽  
Martínez-García

The use of Hardware-in-the-Loop (HIL) systems implemented in Field Programmable Gate Arrays (FPGAs) is constantly increasing because of their advantages compared to traditional simulation techniques. This increase in usage has created new challenges related to improving their performance and features, such as the number of output channels, while the price of HIL systems is diminishing. At present, the use of low-speed Digital-to-Analog Converters (DACs) is becoming a commercial possibility for two reasons: their lower price, and their lower pin count, which determines the number and price of the FPGAs needed to handle those DACs. This paper compares four filtering approaches for providing suitable data to low-speed DACs, which help to filter high-speed input signals, discarding the need for expensive high-speed DACs and therefore decreasing the total cost of HIL implementations. Results show that the selection of the appropriate filter should be based on the type of the input waveform and the relative importance of the dynamics versus the area.
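One simple filtering approach of the kind compared can be sketched as block averaging before decimation: each low-speed DAC sample is the mean of the high-rate samples in its period, instead of a naively dropped sample. The rates and waveform below are illustrative assumptions, not the paper's benchmarked designs.

```python
# Averaging vs. naive decimation when feeding a low-speed DAC from a
# high-rate HIL signal. Example data only.

def average_decimate(samples, ratio):
    """Mean of each block of `ratio` high-rate samples -> one DAC sample."""
    return [sum(samples[i:i + ratio]) / ratio
            for i in range(0, len(samples) - ratio + 1, ratio)]

def drop_decimate(samples, ratio):
    """Naive decimation: keep every ratio-th sample (aliasing-prone)."""
    return samples[::ratio]

# A fast ripple the low-speed DAC cannot represent (e.g. switching noise).
hi_rate = [1.0, 3.0, 1.0, 3.0, 1.0, 3.0, 1.0, 3.0]
print(average_decimate(hi_rate, 4))  # -> [2.0, 2.0]  ripple averaged out
print(drop_decimate(hi_rate, 4))     # -> [1.0, 1.0]  biased by sampling phase
```

In an FPGA the averaging filter costs adders and a small accumulator per channel, which is exactly the dynamics-versus-area trade-off the paper's conclusion points to.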

