Multi-layered optimal navigation system for quadrotor UAV

2019 ◽  
Vol 92 (2) ◽  
pp. 145-155
Author(s):  
Kheireddine Choutri ◽  
Mohand Lagha ◽  
Laurent Dala

Purpose – This paper aims to propose a new multi-layered optimal navigation system that jointly optimizes energy consumption, improves robustness and raises the performance of a quadrotor unmanned aerial vehicle (UAV). Design/methodology/approach – The proposed system is designed as a multi-layered system. First, the control architecture layer links the input and output spaces via quaternion-based differential flatness equations. Then, the trajectory generation layer determines the optimal reference path and avoids obstacles to secure the UAV from collisions. Finally, the control layer allows the quadrotor to track the generated path and guarantees stability using a double-loop non-linear optimal backstepping controller (OBS). Findings – The obtained results are confirmed across several scenarios in different situations, demonstrating the accuracy, energy optimization and robustness of the designed system. Practical implications – The proposed controllers are easily implementable on-board and computationally efficient. Originality/value – The originality of this research is the design of a multi-layered optimal navigation system for a quadrotor UAV. The proposed control architecture presents a direct relation between the states and their derivatives, which simplifies the trajectory generation problem. Furthermore, the derived differentially flat equations allow optimization to occur within the output space as opposed to the control space. This is beneficial because constraints such as obstacle avoidance occur in the output space; hence, the computation time for constraint handling is reduced. For the OBS, the novelty is that all controller parameters are derived using a multi-objective genetic algorithm (MO-GA) that jointly optimizes the cost functions of all the quadrotor states.
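
As a toy illustration of planning in the output space, the sketch below (a hypothetical one-axis case; a minimum-jerk polynomial stands in for the paper's quaternion-based flatness equations) plans position as a smooth flat output and recovers the control input algebraically from its derivatives, so path constraints are checked directly on the planned trajectory:

```python
M, G, T = 1.0, 9.81, 2.0          # hypothetical mass, gravity, duration

def pos(t):                        # minimum-jerk flat output, 0 -> 1 over T
    s = t / T
    return 10*s**3 - 15*s**4 + 6*s**5

def vel(t):
    s = t / T
    return (30*s**2 - 60*s**3 + 30*s**4) / T

def acc(t):
    s = t / T
    return (60*s - 180*s**2 + 120*s**3) / T**2

def thrust(t):
    # flatness: the input follows algebraically from output derivatives
    # (vertical-axis toy case: f = m * (a + g))
    return M * (acc(t) + G)

# constraints are imposed on the output trajectory itself
samples = [pos(0.01 * i * T) for i in range(101)]
corridor_ok = all(0.0 <= p <= 1.0 for p in samples)
```

Because the corridor check acts on `pos` directly, the trajectory optimizer never needs to integrate the dynamics, which is the computational benefit the abstract describes.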

Author(s):  
Ramzi Ben Ayed ◽  
Stéphane Brisset

Purpose – The aim of this paper is to reduce the number of evaluations of the fine model within the output space mapping (OSM) technique in order to reduce its computing time. Design/methodology/approach – In this paper, an n-level OSM is proposed and expected to be even faster than the conventional OSM. The proposed algorithm takes advantage of the availability of n models of the device to optimize, each of them representing an optimal trade-off between model error and computation time. Models with intermediate characteristics between the coarse and fine models are inserted within the proposed algorithm to reduce the number of evaluations of the time-consuming model and hence the computing time. The advantages of the algorithm are highlighted on the optimization problem of a superconducting magnetic energy storage (SMES) device. Findings – A major computing time gain, equal to a factor of three, is achieved using the n-level OSM algorithm instead of the conventional OSM technique on the SMES optimization problem. Originality/value – The originality of this paper is to investigate several models with different granularities within the OSM algorithm in order to reduce its computing time without degrading the performance of the conventional strategy.
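
A minimal sketch of the multi-level idea (hypothetical 1-D response models and a grid search standing in for the real optimizer, not the paper's SMES formulation): each level shifts the cheaper model's output so it matches the costlier model at the current iterate, then re-optimizes the shifted model, so the expensive fine model is evaluated only once:

```python
TARGET = 3.0                                   # design specification

def r_coarse(x): return 1.5 * x                # cheap, biased response model
def r_mid(x):    return 1.5 * x + 0.4          # intermediate fidelity
def r_fine(x):   return 1.5 * x + 0.6          # expensive, accurate model

def argmin(f, lo=0.0, hi=4.0, n=4001):
    """Brute-force 1-D minimizer (stand-in for a real optimizer)."""
    xs = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return min(xs, key=f)

def osm_step(cheap, costly, x0):
    """Align the cheap model's output with the costly model at x0
    (one costly evaluation), then re-optimize the shifted model."""
    delta = costly(x0) - cheap(x0)             # output-space correction
    return argmin(lambda x: abs(cheap(x) + delta - TARGET))

x = argmin(lambda x: abs(r_coarse(x) - TARGET))   # level 1: coarse optimum
x = osm_step(r_coarse, r_mid, x)                  # level 2: mid refinement
x = osm_step(r_mid, r_fine, x)                    # level 3: one fine call
```

Each intermediate level moves the iterate closer to the fine optimum before the fine model is touched, which is where the computing-time gain comes from.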


2020 ◽  
Vol 8 (3) ◽  
pp. 225-238
Author(s):  
David J. Talarico ◽  
Aaron Mazzeo ◽  
Mitsunori Denda

Purpose – Advancements in aerospace technologies that rely on unsteady fluid dynamics are being hindered by a lack of easy-to-use, computationally efficient unsteady computational fluid dynamics (CFD) software. Existing CFD platforms are capable of handling unsteady flapping, but the time, money and expertise required to run even a basic flapping simulation make design iteration and optimization prohibitively expensive for the average researcher. Design/methodology/approach – In the present paper, a remedy to model the effects of viscosity is introduced to the original vortex method, in which the pitching moment amplitude grew over time for simulations involving multiple flapping cycles. The new approach described herein lumps far-field wake vortices to mimic vortex decay, which is shown to improve the accuracy of the solution while keeping the pitching moment amplitude under control, especially for simulations involving many flapping cycles. Findings – In addition to improving the accuracy of the solution, the new method greatly reduces the computation time for simulations involving many flapping cycles. The solutions of the original vortex method and the new method are compared to published Navier–Stokes solver data and show very good agreement. Originality/value – By utilizing a novel unsteady vortex method, designed specifically to handle highly unsteady flapping-wing problems, it has been shown that the time to compute a solution is reduced by several orders of magnitude (Denda et al., 2016). Despite the success of the vortex method, especially for a small number of flapping cycles, the solution deteriorates as the number of flapping cycles increases due to the inherent lack of viscosity in the vortex method.
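
The lumping step might look like the following sketch (hypothetical data and a simple distance cutoff; a real criterion would also consider wake age): far-wake point vortices are merged into one vortex that conserves total circulation and the circulation-weighted centroid, which also caps the pairwise interaction cost:

```python
def lump_far_wake(vortices, cutoff):
    """vortices: list of (gamma, x, y); returns near field + one lump."""
    near = [v for v in vortices if (v[1]**2 + v[2]**2) ** 0.5 <= cutoff]
    far  = [v for v in vortices if (v[1]**2 + v[2]**2) ** 0.5 > cutoff]
    if not far:
        return near
    gamma = sum(g for g, _, _ in far)            # conserve total circulation
    if abs(gamma) > 1e-12:
        # circulation-weighted centroid of the merged far wake
        cx = sum(g * x for g, x, _ in far) / gamma
        cy = sum(g * y for g, _, y in far) / gamma
    else:                                        # net-zero circulation: plain mean
        cx = sum(x for _, x, _ in far) / len(far)
        cy = sum(y for _, _, y in far) / len(far)
    return near + [(gamma, cx, cy)]

wake = [(0.5, 0.2, 0.1), (0.3, 5.0, 0.0), (0.2, 6.0, 1.0)]
lumped = lump_far_wake(wake, cutoff=2.0)
```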


2019 ◽  
Vol 13 (2) ◽  
pp. 174-180
Author(s):  
Poonam Sharma ◽  
Ashwani Kumar Dubey ◽  
Ayush Goyal

Background: With the growing demand for image processing and the use of Digital Signal Processors (DSPs), the efficiency of multipliers and accumulators has become a bottleneck. We reviewed a few patents on Application Specific Instruction Set Processors (ASIPs), where design considerations are proposed for application-specific computing in an efficient way to enhance throughput. Objective: The study aims to develop and analyze a computationally efficient method to optimize the speed performance of the MAC. Methods: The work presented here proposes the design of an Application Specific Instruction Set Processor, exploiting a Multiplier Accumulator (MAC) integrated as dedicated hardware. This MAC is optimized for high-speed performance and is the application-specific part of the processor; here it can be the DSP block of an image processor, while a 16-bit Reduced Instruction Set Computer (RISC) processor core gives the design the flexibility for general computing. The design was emulated on a Xilinx Field Programmable Gate Array (FPGA) and tested on various real-time computing tasks. Results: Synthesis of the hardware logic on FPGA tools gave the operating frequencies of the legacy methods and the proposed method, and simulation of the logic verified its functionality. Conclusion: With the proposed method, a significant improvement of 16% in throughput has been observed for 256 iterations of the multiplier and accumulator on 8-bit sample data. Such an improvement can help reduce the computation time in many digital signal processing applications where multiplication and addition are performed iteratively.
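
Behaviorally, the dedicated MAC loop amounts to the following sketch (a software model for illustration, not the patented hardware design): 256 multiply-accumulate iterations over 8-bit samples into a wide accumulator that wraps at its bit width:

```python
def mac(samples, coeffs, acc_bits=24):
    """Behavioral model of an iterative multiply-accumulate unit."""
    acc, mask = 0, (1 << acc_bits) - 1
    for s, c in zip(samples, coeffs):
        acc = (acc + s * c) & mask      # multiply, accumulate, wrap at width
    return acc

samples = [i % 256 for i in range(256)]   # 8-bit sample data
coeffs  = [1] * 256                       # trivial filter taps for the demo
result = mac(samples, coeffs)
```

In hardware, each iteration of this loop is a single pipelined MAC operation, which is where the throughput gain over a general-purpose multiply-then-add sequence comes from.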


2016 ◽  
Vol 76 (4) ◽  
pp. 512-531 ◽  
Author(s):  
Xiaoguang Feng ◽  
Dermot Hayes

Purpose Portfolio risk in crop insurance due to the systemic nature of crop yield losses has inhibited the development of private crop insurance markets. Government subsidy or reinsurance has therefore been used to support crop insurance programs. The purpose of this paper is to investigate the possibility of converting systemic crop yield risk into “poolable” risk. Specifically, this study examines whether it is possible to remove the co-movement as well as tail dependence of crop yield variables by enlarging the risk pool across different crops and countries. Design/methodology/approach Hierarchical Kendall copula (HKC) models are used to model potential non-linear correlations of the high-dimensional crop yield variables. A Bayesian estimation approach is applied to account for estimation risk in the copula parameters. A synthetic insurance portfolio is used to evaluate the systemic risk and diversification effect. Findings The results indicate that the systemic nature – both positive correlation and lower tail dependence – of crop yield risks can be eliminated by combining crop insurance policies across crops and countries. Originality/value The study applies the HKC in the context of agricultural risks. Compared to other advanced copulas, the HKC achieves both flexibility and parsimony. The flexibility of the HKC makes it appropriate to precisely represent various correlation structures of crop yield risks while the parsimony makes it computationally efficient in modeling high-dimensional correlation structure.
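
The pooling intuition can be sketched with a simple common-shock Monte-Carlo model (an illustration only, not the paper's HKC/Bayesian machinery): yields within one region share a systemic shock, regions are independent, and enlarging the pool across regions dilutes the systemic component of portfolio risk:

```python
import random, statistics
random.seed(0)

def region_losses(n_sims, n_farms):
    """Average per-farm loss in one region; farms share a systemic shock."""
    losses = []
    for _ in range(n_sims):
        systemic = random.gauss(0, 1)            # region-wide weather shock
        farm = [systemic + random.gauss(0, 1) for _ in range(n_farms)]
        losses.append(sum(farm) / n_farms)
    return losses

def pooled_losses(n_sims, n_regions, n_farms):
    # averaging over independent regions dilutes each systemic shock
    sims = [region_losses(n_sims, n_farms) for _ in range(n_regions)]
    return [sum(col) / n_regions for col in zip(*sims)]

single = statistics.pstdev(region_losses(5000, 50))
pooled = statistics.pstdev(pooled_losses(5000, 4, 50))
```

Within one region the systemic shock cannot be diversified away no matter how many farms are added (variance stays near 1), while pooling four independent regions cuts the standard deviation roughly in half.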


Author(s):  
Pavel Karban ◽  
David Pánek ◽  
Ivo Doležel

Purpose – A novel technique for the control of complex physical processes, based on the solution of their sufficiently accurate models, is presented. The technique works with model order reduction (MOR), which significantly accelerates the solution at a still acceptable uncertainty. Its advantages are illustrated with an example of induction brazing. Design/methodology/approach – The complete mathematical model of the above heat treatment process is presented. Considering all relevant nonlinearities, the numerical model is reduced using proper orthogonal decomposition and solved by the finite element method (FEM), at a much lower cost than the classical full-order FEM solution. Findings – The proposed technique is applicable to a wide variety of linear and weakly nonlinear problems and exhibits a good degree of robustness and reliability. Research limitations/implications – The quality of the obtained results strongly depends on the temperature dependencies of the material properties and the degree of nonlinearity involved. In multiphysics problems characterized by low nonlinearities, the results differ only negligibly from those obtained on the full model, while the computation time is lower by two or more orders of magnitude. However, application of the technique to problems with stronger nonlinearities has not been fully evaluated. Practical implications – The presented model and the methodology of its solution may represent a basis for the design of complex technologies connected with induction-based heat treatment of metal materials. Originality/value – A sophisticated methodology for the solution of complex multiphysics problems is proposed, based on MOR, which significantly accelerates their solution while keeping errors acceptable.
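
The reduction step can be sketched with a snapshot-based proper orthogonal decomposition (hypothetical analytic snapshots standing in for the FEM fields): the SVD extracts a few dominant spatial modes, and the full-order state is projected onto that small reduced basis:

```python
import numpy as np

# hypothetical snapshot matrix: each column is a full-order field (n dofs)
n, t = 200, 40
x = np.linspace(0, 1, n)[:, None]
times = np.linspace(0, 1, t)[None, :]
snapshots = np.sin(np.pi * x) * times + 0.3 * np.sin(3 * np.pi * x) * times**2

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 2                              # keep the two dominant POD modes
basis = U[:, :r]                   # reduced basis (n x r)
reduced = basis.T @ snapshots      # project: r coefficients per snapshot
reconstructed = basis @ reduced    # lift back to full order

rel_err = np.linalg.norm(snapshots - reconstructed) / np.linalg.norm(snapshots)
```

In the reduced basis, each solve involves r unknowns instead of n, which is the source of the multiple-orders-of-magnitude speedup mentioned above.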


Sensor Review ◽  
2018 ◽  
Vol 38 (3) ◽  
pp. 369-375 ◽  
Author(s):  
Sathya D. ◽  
Ganesh Kumar P.

Purpose – This study aims to provide secure data aggregation with reduced energy consumption in wireless sensor networks (WSNs). Data aggregation is the process of reducing communication overhead in WSNs. Presently, securing data aggregation is an important research issue in WSNs due to two facts: sensor nodes deployed in sensitive and open environments are easily targeted by adversaries, and the leakage of aggregated data causes damage to the networks, as these data cannot be retrieved in a short span of time. Most traditional cryptographic algorithms provide security for data aggregation, but they do not reduce energy consumption. Design/methodology/approach – Nowadays, homomorphic cryptosystems are widely used to provide security with low energy consumption, as aggregation is performed on the ciphertext without decryption at the cluster head. In the present paper, the Paillier additive homomorphic cryptosystem and Boneh et al.'s aggregate signature method are used to encrypt and to verify aggregated data at the base station. Findings – The combination of the two algorithms reduces computation time and energy consumption when compared with state-of-the-art techniques. Practical implications – Secure data aggregation is useful in health-related applications, military applications, etc. Originality/value – The new combination of encryption and signature methods provides confidentiality and integrity. In addition, it requires less computation time and energy than existing methods.
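
A toy Paillier sketch (tiny, insecure primes for illustration only; real deployments use 2048-bit or larger keys) shows the core property relied on here: the cluster head multiplies ciphertexts to add the underlying readings without ever decrypting, and only the base station holds the private key:

```python
import math, random
random.seed(1)

p, q = 293, 433                 # toy primes; insecure, for illustration only
n = p * q
n2 = n * n
g = n + 1                       # standard Paillier generator choice
lam = math.lcm(p - 1, q - 1)    # private key
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

readings = [21, 35, 17]                       # sensor node measurements
ciphertexts = [enc(m) for m in readings]
aggregate = math.prod(ciphertexts) % n2       # homomorphic add at cluster head
total = dec(aggregate)                        # only the base station decrypts
```

Because the cluster head works only on ciphertexts, a compromised aggregator learns nothing about individual readings, while the base station recovers the exact sum.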


2019 ◽  
Vol 25 (9) ◽  
pp. 1482-1492
Author(s):  
Tong Wu ◽  
Andres Tovar

Purpose – This paper aims to establish a multiscale topology optimization method for the optimal design of non-periodic, self-supporting cellular structures subjected to thermo-mechanical loads. The result is a hierarchically complex design that is thermally efficient, mechanically stable and suitable for additive manufacturing (AM). Design/methodology/approach – The proposed method seeks to maximize thermo-mechanical performance at the macroscale in a conceptual design while obtaining maximum shear modulus for each unit cell at the mesoscale. Then, the macroscale performance is re-estimated, and the mesoscale design is updated until the macroscale performance is satisfied. Findings – A two-dimensional Messerschmitt-Bölkow-Blohm (MBB) beam withstanding a thermo-mechanical load is presented to illustrate the proposed design method. Furthermore, the method is implemented to optimize a three-dimensional injection mold, which is successfully prototyped using 420 stainless steel infiltrated with bronze. Originality/value – By developing a computationally efficient and manufacturing-friendly inverse homogenization approach, the novel multiscale design can generate porous molds that save up to 30 per cent material compared to their solid counterparts without decreasing thermo-mechanical performance. Practical implications – This study provides a useful tool for designers in the molding industry to reduce the cost of injection molds and take full advantage of AM.
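
The alternating two-scale iteration can be sketched schematically (placeholder physics throughout: a toy density-stiffness law stands in for the real macroscale analysis and mesoscale inverse homogenization):

```python
def macro_design(n_cells):
    # conceptual macroscale step: assign a material density per unit cell
    return [0.3 + 0.4 * i / (n_cells - 1) for i in range(n_cells)]

def meso_update(density):
    # conceptual mesoscale step: toy stiffness achievable at this density
    # (stand-in for maximizing each unit cell's shear modulus)
    return density ** 1.5

def performance(stiffnesses):
    # stand-in for the re-estimated macroscale thermo-mechanical response
    return sum(stiffnesses) / len(stiffnesses)

densities = macro_design(10)
perf, target = 0.0, 0.5
for it in range(20):                     # alternate scales until satisfied
    cells = [meso_update(d) for d in densities]
    perf = performance(cells)
    if perf >= target:
        break                            # macroscale performance satisfied
    densities = [min(1.0, d * 1.1) for d in densities]   # macro re-design
```

The real method replaces each placeholder with a physics solve, but the control flow — design, homogenize, re-estimate, repeat — follows this pattern.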


Sensor Review ◽  
2015 ◽  
Vol 35 (4) ◽  
pp. 389-400 ◽  
Author(s):  
Hongyu Zhao ◽  
Zhelong Wang ◽  
Qin Gao ◽  
Mohammad Mehedi Hassan ◽  
Abdulhameed Alelaiwi

Purpose – The purpose of this paper is to develop an online smoothing zero-velocity-update (ZUPT) method that helps achieve smooth estimation of human foot motion for the ZUPT-aided inertial pedestrian navigation system. Design/methodology/approach – The smoothing ZUPT is based on a Rauch–Tung–Striebel (RTS) smoother, using a six-state Kalman filter (KF) as the forward filter. The KF acts as an indirect filter, which allows the sensor measurement error and position error to be excluded from the error state vector, so as to reduce the modeling error and computational cost. A threshold-based strategy is exploited to verify the detected ZUPT periods, with the threshold parameter determined by a clustering algorithm. A quantitative index is proposed to give a smoothness estimate of the position data. Findings – Experimental results show that the proposed method can improve the smoothness, robustness, efficiency and accuracy of pedestrian navigation. Research limitations/implications – Because of the chosen smoothing algorithm, a delay no longer than one gait cycle is introduced. Therefore, the proposed method is suitable for applications with soft real-time constraints. Practical implications – The paper includes implications for the smooth estimation of most types of pedal locomotion that are achieved by legged motion, by using a sole foot-mounted commercial-grade inertial sensor. Originality/value – This paper helps realize smooth transitions between swing and stance phases, helps enable continuous correction of navigation errors during the whole gait cycle, helps achieve robust detection of gait phases and, more importantly, requires lower computational cost.
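
The forward-filter/backward-smoother structure can be sketched in scalar form (a 1-state random walk for illustration, not the paper's six-state error model): the Kalman filter runs forward online, and the RTS pass sweeps backward over the stored window to refine earlier estimates:

```python
def kf_forward(zs, q=0.01, r=0.25):
    """Scalar forward Kalman filter for a random-walk state (F = 1)."""
    x, p = zs[0], 1.0
    xs, ps, xps, pps = [], [], [], []          # filtered and predicted
    for z in zs:
        xp, pp = x, p + q                      # predict
        k = pp / (pp + r)                      # Kalman gain
        x, p = xp + k * (z - xp), (1 - k) * pp # update
        xs.append(x); ps.append(p); xps.append(xp); pps.append(pp)
    return xs, ps, xps, pps

def rts_smooth(xs, ps, xps, pps):
    """Backward RTS pass: refine each filtered state using its successor."""
    sx = xs[:]                                 # last state stays filtered
    for t in range(len(xs) - 2, -1, -1):
        c = ps[t] / pps[t + 1]                 # smoother gain (F = 1)
        sx[t] = xs[t] + c * (sx[t + 1] - xps[t + 1])
    return sx

zs = [0.9, 1.1, 1.0, 0.8, 1.2, 1.05, 0.95, 1.0]   # noisy stance samples
xs, ps, xps, pps = kf_forward(zs)
smoothed = rts_smooth(xs, ps, xps, pps)
```

Because the backward pass needs the whole stored window before it can run, the smoothed output lags the raw filter, matching the at-most-one-gait-cycle delay noted in the limitations above.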

