A Hybrid-Driven Optimization Framework for Fixed-Wing UAV Maneuvering Flight Planning

Electronics ◽  
2021 ◽  
Vol 10 (19) ◽  
pp. 2330
Author(s):  
Renshan Zhang ◽  
Su Cao ◽  
Kuang Zhao ◽  
Huangchao Yu ◽  
Yongyang Hu

Performing autonomous maneuvering flight planning and optimization remains a challenge for unmanned aerial vehicles (UAVs), especially for fixed-wing UAVs, owing to their high maneuverability and model complexity. A novel hybrid-driven fixed-wing UAV maneuver optimization framework, inspired by apprenticeship learning and nonlinear programming approaches, is proposed in this paper. The work consists of two main aspects: (1) identifying the model parameters of a given fixed-wing UAV from flight data demonstrated by a human pilot; the features of the maneuvers can then be described by positional/attitude/compound key-frames, and each maneuver can be decomposed into several motion primitives; (2) formulating the maneuver planning issue as a minimum-time optimization problem, for which a novel nonlinear programming algorithm was developed that does not require the exact times at which the UAV passes the key-frames to be specified in advance. The simulation results illustrate the effectiveness of the proposed framework in several scenarios, as both the preservation of geometric features and the minimization of maneuver times were ensured.
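For intuition, a minimum-time problem with free key-frame passage times can be sketched as below, with segment durations as the decision variables. This is a minimal illustration for a point-mass stand-in, not the authors' identified fixed-wing model or algorithm; the waypoints, speed limit v_max, and solver choice are assumptions.

```python
# Minimal sketch: minimize total time over segment durations; the optimizer,
# not the user, decides when the vehicle passes each key-frame.
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

waypoints = np.array([[0.0, 0.0], [50.0, 20.0], [80.0, -10.0], [120.0, 0.0]])
v_max = 25.0  # assumed speed capability of the vehicle model (m/s)

seg_len = np.linalg.norm(np.diff(waypoints, axis=0), axis=1)
n_seg = len(seg_len)

def total_time(durations):
    return durations.sum()

# Each segment's required average speed must stay within capability.
speed_con = NonlinearConstraint(lambda d: seg_len / d, lb=0.0, ub=v_max)

t0 = seg_len / (0.5 * v_max)  # feasible initial guess: half speed
res = minimize(total_time, t0, method="SLSQP", constraints=[speed_con],
               bounds=[(1e-3, None)] * n_seg)
print("segment durations:", res.x)   # passage times chosen by the optimizer
print("minimum total time:", res.fun)
```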

Author(s):  
Alexander D. Bekman ◽  
Sergey V. Stepanov ◽  
Alexander A. Ruchkin ◽  
Dmitry V. Zelenin

The quantitative evaluation of producer and injector well interference based on well operation data (profiles of flow rates/injectivities and bottomhole/reservoir pressures) with the help of CRM (capacitance-resistive models) is an optimization problem with a large set of variables and constraints. An analytical solution cannot be found because of the complex form of the objective function. Attempts to find the solution with stochastic algorithms take unacceptable time, and the result may be far from the optimal solution. Besides, universal (commercial) optimizers hide the details of the step-by-step solution from the user, for example, the ambiguity of the solution resulting from data inaccuracy. The present article concerns two variants of the CRM problem. The authors present a new algorithm for solving these problems with the help of a "General Quadratic Programming Algorithm". The main advantage of the new algorithm is its greater performance in comparison with other known algorithms; a further advantage is the possibility of ambiguity analysis. The article studies the conditions which guarantee that the first variant of the problem has a unique solution, which can be found with the presented algorithm. Another algorithm, for finding an approximate solution to the second variant of the problem, is also considered. A method for visualizing the set of approximate solutions is presented. The results of experiments comparing the new algorithm with some previously known ones are given.
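For intuition, a CRM interference fit can be posed as a bound-constrained quadratic program, as in the minimal sketch below; the data shapes, bounds, and SciPy solver are illustrative assumptions, not the authors' "General Quadratic Programming Algorithm".

```python
# Minimal sketch: minimize ||I @ f - q||^2 with 0 <= f <= 1, where I holds
# injection-rate histories and q the producer's rates; lsq_linear solves
# this bounded quadratic program directly.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n_steps, n_injectors = 200, 4
I = rng.uniform(50, 150, size=(n_steps, n_injectors))   # injection rates
f_true = np.array([0.4, 0.1, 0.3, 0.0])                 # hidden connectivities
q = I @ f_true + rng.normal(0, 2.0, n_steps)            # noisy producer rates

res = lsq_linear(I, q, bounds=(0.0, 1.0))
print("recovered connectivities:", np.round(res.x, 3))
# Note: near-collinear injector histories flatten the objective, which is
# one way the solution ambiguity discussed in the article can arise.
```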


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2775
Author(s):  
Tsubasa Takano ◽  
Takumi Nakane ◽  
Takuya Akashi ◽  
Chao Zhang

In this paper, we propose a method to detect Braille blocks from an egocentric viewpoint, which is a key capability for many walking support devices for visually impaired people. Our main contribution is to cast this task as a multi-objective optimization problem and to exploit both geometric and appearance features for detection. Specifically, two objective functions were designed under an evolutionary optimization framework, with a line pair modeled as an individual (i.e., a solution). Both objectives follow the basic characteristics of Braille blocks: they aim to clarify the boundaries and to estimate the likelihood of the Braille block surface. Our proposed method was assessed on an originally collected and annotated dataset captured in real scenarios. Both quantitative and qualitative experimental results show that the proposed method can detect Braille blocks in various environments. We also provide a comprehensive comparison of detection performance across different multi-objective optimization algorithms.
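A minimal sketch of this idea follows: each line-pair individual receives a boundary score and a surface score, and non-dominated individuals form the Pareto front. The gradient and hue proxies below are illustrative stand-ins, not the authors' actual objective functions.

```python
# Minimal sketch: two objectives for a line-pair individual plus a simple
# Pareto (non-domination) filter over a small population.
import numpy as np

def boundary_score(image_grad, line_pts):
    """Objective 1: mean gradient magnitude along the candidate lines."""
    ys, xs = line_pts
    return image_grad[ys, xs].mean()

def surface_score(image_hue, region_pts, target_hue=0.12):
    """Objective 2: assumed proxy for Braille-block surface likelihood,
    here closeness of hue to a typical yellow."""
    ys, xs = region_pts
    return -np.abs(image_hue[ys, xs] - target_hue).mean()

def pareto_front(scores):
    """Boolean mask of non-dominated individuals (both objectives maximized)."""
    keep = []
    for s in scores:
        dominated = any((t >= s).all() and (t > s).any() for t in scores)
        keep.append(not dominated)
    return np.array(keep)

rng = np.random.default_rng(4)
grad = rng.random((60, 80))   # stand-in gradient-magnitude image
hue = rng.random((60, 80))    # stand-in hue channel
ys, xs = np.arange(50), np.full(50, 20)   # pixels along one candidate line
pop = np.array([[boundary_score(grad, (ys, xs + dx)),
                 surface_score(hue, (ys, xs + dx))] for dx in range(5)])
print(pareto_front(pop))      # survivors of the non-domination check
```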


2020 ◽  
Vol 8 (2) ◽  
pp. 119
Author(s):  
Cokorda Gde Teresna Jaya ◽  
I Gede Arta Wibawa

A certificate is one of the documents that can serve as evidence of ownership or of an event, for example when a certificate is required in order to participate in an event. When a document is submitted as such a requirement, a file verification process must of course be carried out. Motivated by the problem of optimizing the time this verification takes, the authors segment the important data contained in a certificate as the initial step in developing an automatic document verification system. The segmentation process in this study uses the Connected Component Labeling method to determine the areas to be segmented and Automatic Cropping to cut out the results of the segmentation process. Using these two methods, an accuracy of 60% was obtained on a total of 15 pieces of test data.
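The two-step pipeline the abstract describes can be sketched with OpenCV's connected-component labeling, as below; the input path, Otsu binarization, and area filter are illustrative assumptions rather than the paper's exact settings.

```python
# Minimal sketch: label connected regions, then automatically crop each
# region's bounding box from the certificate image.
import cv2

img = cv2.imread("certificate.png", cv2.IMREAD_GRAYSCALE)  # assumed input
_, binary = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Connected Component Labeling with per-component bounding-box stats.
n_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

crops = []
for i in range(1, n_labels):          # label 0 is the background
    x, y, w, h, area = stats[i]
    if area < 500:                    # assumed filter to discard specks
        continue
    crops.append(img[y:y + h, x:x + w])   # automatic cropping of the region

for j, crop in enumerate(crops):
    cv2.imwrite(f"segment_{j}.png", crop)
```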


2021 ◽  
Author(s):  
Mikhail Sviridov ◽  
Anton Mosin ◽  
Sergey Lebedev ◽  
Ron Thompson ◽  
...  

During proactive geosteering, special inversion algorithms are used to process the readings of logging-while-drilling resistivity tools in real time and provide oil field operators with formation models to support informed steering decisions. Currently, there is no industry standard for inversion deliverables and corresponding quality indicators, because major tool vendors develop their own device-specific algorithms and use them internally. This paper presents the first implementation of a vendor-neutral inversion approach, applicable to any induction resistivity tool, which enables operators to standardize the efficiency of various geosteering services. The need for such a universal inversion approach was inspired by the activity of the LWD Deep Azimuthal Resistivity Services Standardization Workgroup initiated by the SPWLA Resistivity Special Interest Group in 2016. The proposed inversion algorithm utilizes a 1D layer-cake formation model and is performed interval by interval. The following model parameters can be determined: horizontal and vertical resistivities of each layer, positions of layer boundaries, and formation dip. The inversion can support an arbitrary deep azimuthal induction resistivity tool with coaxial, tilted, or orthogonal transmitting and receiving antennas. The inversion is purely data-driven; it works in automatic mode and provides fully unbiased results obtained from tool readings only. The algorithm is based on the statistical reversible-jump Markov chain Monte Carlo method, which does not require any predefined assumptions about the formation structure and enables the search for models explaining the data even when the number of layers in the model is unknown. To globalize the search, the algorithm runs several Markov chains capable of exchanging their states with one another in order to move from the vicinity of a local minimum to a more promising domain of the model parameter space. During execution, the inversion keeps all models it has dealt with to estimate the resolution accuracy of formation parameters and to generate several quality indicators. Eventually, these indicators are delivered together with the recovered resistivity models to help operators evaluate the reliability of the inversion results. To ensure high performance of the inversion, a fast and accurate semi-analytical forward solver is employed to compute the required responses of a tool with specific geometry, together with their derivatives with respect to any parameter of the multi-layered model. Moreover, the reliance on the simultaneous evolution of multiple Markov chains makes the algorithm suitable for parallel execution, which significantly decreases the computational time. Application of the proposed inversion is shown on a series of synthetic examples and field case studies, such as navigating a well along the reservoir roof or near the oil-water contact in oil sands. Inversion results for all scenarios confirm that the proposed algorithm can successfully evaluate formation model complexity, recover model parameters, and quantify their uncertainty within a reasonable computational time. The presented vendor-neutral stochastic approach to data processing leads to standardization of the inversion output, including the resistivity model and its quality indicators, which helps operators better understand the capabilities of tools from different vendors and ultimately make more confident geosteering decisions.
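The trans-dimensional move structure of a reversible-jump MCMC over layered models can be sketched as below: the chain may add ("birth") or remove ("death") a layer boundary, so the number of layers need not be fixed in advance. The toy forward model, priors, and proposal widths are illustrative stand-ins, and the dimension-matching terms a full RJ-MCMC requires are omitted for brevity; this is not the paper's implementation.

```python
# Minimal sketch: RJ-MCMC over a 1D layer-cake resistivity model.
import numpy as np

rng = np.random.default_rng(1)
z_min, z_max = 0.0, 100.0      # assumed depth window (m)
r_min, r_max = 1.0, 100.0      # assumed resistivity prior bounds (ohm.m)
depths = np.linspace(z_min, z_max, 50)
data_sigma = 0.05

def forward_model(boundaries, resistivities):
    # Toy stand-in for the semi-analytical tool-response solver: a
    # piecewise-constant log-resistivity profile sampled at fixed depths.
    return np.log(resistivities[np.searchsorted(boundaries, depths)])

def log_likelihood(boundaries, resistivities, data):
    misfit = forward_model(boundaries, resistivities) - data
    return -0.5 * np.sum((misfit / data_sigma) ** 2)

def rjmcmc_step(state, data):
    b, r = state                                  # len(r) == len(b) + 1
    move = rng.choice(["perturb", "birth", "death"])
    if move == "birth":                           # add a layer boundary
        z = rng.uniform(z_min, z_max)
        new_b = np.sort(np.append(b, z))
        new_r = np.insert(r, np.searchsorted(b, z), rng.uniform(r_min, r_max))
    elif move == "death" and len(b) > 1:          # merge two layers
        k = rng.integers(len(b))
        new_b, new_r = np.delete(b, k), np.delete(r, k)
    else:                                         # perturb within dimension
        new_b = np.sort(b + rng.normal(0, 1.0, b.shape))
        new_r = np.clip(r * np.exp(rng.normal(0, 0.1, r.shape)), r_min, r_max)
    # Metropolis acceptance; Jacobian/proposal ratios omitted for brevity.
    log_a = log_likelihood(new_b, new_r, data) - log_likelihood(b, r, data)
    return (new_b, new_r) if np.log(rng.uniform()) < log_a else state

true_b, true_r = np.array([40.0]), np.array([5.0, 50.0])
data = forward_model(true_b, true_r) + rng.normal(0, data_sigma, depths.size)
state = (np.array([50.0]), np.array([10.0, 10.0]))
for _ in range(5000):
    state = rjmcmc_step(state, data)
print("boundaries:", state[0], "resistivities:", np.round(state[1], 1))
```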


Author(s):  
Siba Monther Yousif ◽  
Roslina M. Sidek ◽  
Anwer Sabah Mekki ◽  
Nasri Sulaiman ◽  
Pooria Varahram

In this paper, a low-complexity model is proposed for linearizing power amplifiers with memory effects using the digital predistortion (DPD) technique. In the proposed model, the linear, low-order nonlinear, and high-order nonlinear memory effects are computed separately to provide flexibility in controlling the model parameters, so that both high performance and low model complexity can be achieved. The performance of the proposed model is assessed with experimental measurements of a commercial class AB power amplifier driven by a single-carrier wideband code division multiple access (WCDMA) signal. The linearity performance and complexity of the proposed model are compared with the memory polynomial (MP) model and the DPD with single-feedback model. The experimental results show that the proposed model outperforms the latter by 5 dB in terms of adjacent channel leakage power ratio (ACLR) at comparable complexity. Compared to the MP model, the proposed model improves ACLR performance by 10.8 dB while reducing complexity by 17% in terms of the number of floating-point operations (FLOPs) and 18% in terms of the number of model coefficients.
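The MP baseline the paper compares against can be sketched as below: the predistorter output is a sum of delayed signal copies weighted by powers of their envelope, with coefficients fitted by least squares (indirect learning). The orders, toy PA, and data are illustrative assumptions, not the authors' proposed low-complexity model.

```python
# Minimal sketch: memory polynomial predistorter via indirect learning.
import numpy as np

def mp_basis(x, K=5, M=3):
    """Basis matrix with nonlinearity order K and memory depth M."""
    cols = []
    for m in range(M):
        xm = np.roll(x, m)                 # delayed copy x[n-m] (circular,
        for k in range(K):                 # for brevity of the sketch)
            cols.append(xm * np.abs(xm) ** k)
    return np.column_stack(cols)

rng = np.random.default_rng(2)
x = (rng.normal(size=2048) + 1j * rng.normal(size=2048)) / np.sqrt(2)
pa_out = x + 0.1 * x * np.abs(x) ** 2      # toy PA with a 3rd-order term

# Fit coefficients mapping the PA output back to its input, then reuse
# them as the predistorter acting on the drive signal.
A = mp_basis(pa_out)
coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
x_pd = mp_basis(x) @ coeffs                # predistorted drive signal
print("postinverse residual:", np.linalg.norm(A @ coeffs - x))
```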


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0257958
Author(s):  
Miguel Navascués ◽  
Costantino Budroni ◽  
Yelena Guryanova

In the context of epidemiology, policies for disease control are often devised through a mixture of intuition and brute force, whereby the set of logically conceivable policies is narrowed down to a small family described by a few parameters, following which linearization or grid search is used to identify the optimal policy within the set. This scheme runs the risk of leaving out more complex (and perhaps counter-intuitive) policies for disease control that could tackle the disease more efficiently. In this article, we use techniques from convex optimization theory and machine learning to conduct optimizations over disease policies described by hundreds of parameters. In contrast to past approaches for policy optimization based on control theory, our framework can deal with arbitrary uncertainties on the initial conditions and on the model parameters controlling the spread of the disease, as well as with stochastic models. In addition, our methods allow optimization over policies that remain constant over weekly periods, specified by either continuous or discrete (e.g., lockdown on/off) government measures. We illustrate our approach by minimizing the total time required to eradicate COVID-19 within the Susceptible-Exposed-Infected-Recovered (SEIR) model proposed by Kissler et al. (March, 2020).
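The flavor of this policy parameterization can be sketched as below: a discrete-time SEIR model whose transmission rate is scaled by a weekly-constant control u in [0, 1], optimized with SciPy. The rates, horizon, objective weights, and solver (hundreds of parameters in the paper, a handful here) are illustrative assumptions, not the article's method.

```python
# Minimal sketch: optimize weekly-constant measures in a toy SEIR model.
import numpy as np
from scipy.optimize import minimize

beta0, sigma, gamma, dt = 0.4, 1 / 5.2, 1 / 14, 1.0  # assumed rates (1/day)

def simulate(u_weekly, days=140):
    S, E, I, R = 0.99, 0.0, 0.01, 0.0
    infected = []
    for t in range(days):
        u = u_weekly[min(t // 7, len(u_weekly) - 1)]
        beta = beta0 * (1.0 - u)                # control scales transmission
        new_E, new_I, new_R = beta * S * I, sigma * E, gamma * I
        S -= dt * new_E
        E += dt * (new_E - new_I)
        I += dt * (new_I - new_R)
        R += dt * new_R
        infected.append(I)
    return np.array(infected)

def objective(u):
    # Trade off epidemic size against the social cost of the measures.
    return simulate(u).sum() + 0.5 * u.sum()

u0 = np.full(20, 0.5)                           # 20 weeks of measures
res = minimize(objective, u0, bounds=[(0.0, 1.0)] * len(u0))
print("optimal weekly measures:", np.round(res.x, 2))
```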


2021 ◽  
Vol 21 (8) ◽  
pp. 2447-2460
Author(s):  
Stuart R. Mead ◽  
Jonathan Procter ◽  
Gabor Kereszturi

Abstract. The use of mass flow simulations in volcanic hazard zonation and mapping is often limited by model complexity (i.e. uncertainty in correct values of model parameters), a lack of model uncertainty quantification, and limited approaches to incorporate this uncertainty into hazard maps. When quantified, mass flow simulation errors are typically evaluated on a pixel-pair basis, using the difference between simulated and observed (“actual”) map-cell values to evaluate the performance of a model. However, these comparisons conflate location and quantification errors, neglecting possible spatial autocorrelation of evaluated errors. As a result, model performance assessments typically yield moderate accuracy values. In this paper, similarly moderate accuracy values were found in a performance assessment of three depth-averaged numerical models using the 2012 debris avalanche from the Upper Te Maari crater, Tongariro Volcano, as a benchmark. To provide a fairer assessment of performance and evaluate spatial covariance of errors, we use a fuzzy set approach to indicate the proximity of similarly valued map cells. This “fuzzification” of simulated results yields improvements in targeted performance metrics relative to a length scale parameter at the expense of decreases in opposing metrics (e.g. fewer false negatives result in more false positives) and a reduction in resolution. The use of this approach to generate hazard zones incorporating the identified uncertainty and associated trade-offs is demonstrated and indicates a potential use for informed stakeholders by reducing the complexity of uncertainty estimation and supporting decision-making from simulated data.
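The fuzzy comparison idea can be sketched as below: a simulated cell counts as a hit if an observed cell of the same class lies within a length-scale neighborhood, rather than requiring exact pixel-pair agreement. The maps, the dilation-based neighborhood, and the length scales are illustrative assumptions, not the paper's exact fuzzy-set operators.

```python
# Minimal sketch: neighborhood-tolerant ("fuzzified") agreement between a
# simulated and an observed inundation map.
import numpy as np
from scipy.ndimage import maximum_filter

def fuzzy_hits(simulated, observed, length_scale):
    """Fraction of simulated inundated cells with an observed inundated
    cell within `length_scale` cells, and vice versa."""
    size = 2 * length_scale + 1
    obs_near = maximum_filter(observed.astype(float), size=size) > 0
    sim_near = maximum_filter(simulated.astype(float), size=size) > 0
    tp_sim = (simulated & obs_near).sum() / max(simulated.sum(), 1)
    tp_obs = (observed & sim_near).sum() / max(observed.sum(), 1)
    return tp_sim, tp_obs

rng = np.random.default_rng(3)
observed = rng.random((100, 100)) > 0.9
simulated = np.roll(observed, shift=2, axis=0)   # offset flow footprint
for L in (0, 1, 2, 3):                 # targeted metrics improve with L,
    print(L, fuzzy_hits(simulated, observed, L))  # at a cost in resolution
```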




2021 ◽  
Author(s):  
Lilian Schuster ◽  
David Rounce ◽  
Fabien Maussion

A recent large model intercomparison study (GlacierMIP) showed that differences between the glacier models are a dominant source of uncertainty in future glacier change projections, particularly in the first half of the century. Each glacier model has its own unique set of process representations and climate forcing methodology, which makes it impossible to determine the model components that contribute most to the projection uncertainty. This study aims to improve our understanding of the sources of large-scale glacier model uncertainty using the Open Global Glacier Model (OGGM), focusing on the surface mass balance (SMB) as a first step. We calibrate and run a set of interchangeable SMB model parameterizations (e.g., monthly vs. daily, constant vs. variable lapse rates, albedo, snowpack evolution and refreezing) under controlled boundary conditions. Based on ensemble approaches, we explore the influence of (i) the parameter calibration strategy and (ii) SMB model complexity on regional to global glacier change. These uncertainties are then put in relation to a qualitative selection of other model design choices, such as the forcing climate dataset and the ice dynamics model parameters.
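A monthly temperature-index SMB parameterization of the kind such studies interchange can be sketched as below; the melt factor, lapse rate, threshold temperatures, and climate inputs are assumed values for illustration, not calibrated OGGM parameters.

```python
# Minimal sketch: monthly temperature-index surface mass balance with a
# constant lapse-rate option for downscaling temperature to the glacier.
import numpy as np

def monthly_smb(t_ref, prcp, z, z_ref, melt_f=8.0, lapse=-0.0065,
                t_melt=0.0, t_solid=2.0):
    """Monthly mass balance (mm w.e.) at elevation z from reference-height
    monthly temperature t_ref (degC) and precipitation prcp (mm)."""
    t = t_ref + lapse * (z - z_ref)                  # constant lapse rate
    accumulation = np.where(t <= t_solid, prcp, 0.0)  # solid precipitation
    ablation = melt_f * np.clip(t - t_melt, 0.0, None) * 30.4  # degree-days
    return accumulation - ablation

t_ref = np.array([-8, -6, -2, 2, 6, 10, 12, 11, 7, 2, -3, -7], float)
prcp = np.full(12, 120.0)                            # mm per month
smb = monthly_smb(t_ref, prcp, z=3000.0, z_ref=2500.0)
print("annual SMB (mm w.e.):", smb.sum())
```

Swapping this module for a daily variant or a variable lapse rate, while holding the calibration strategy fixed, is the kind of controlled comparison the ensemble experiments describe.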


Sensors ◽  
2019 ◽  
Vol 19 (9) ◽  
pp. 1993
Author(s):  
Jun Liu ◽  
Wenxue Guan ◽  
Guangjie Han ◽  
Jun-Hong Cui ◽  
Lance Fiondella ◽  
...  

Deployment of surface-level gateways holds potential as an effective method to alleviate the high propagation delays and high error probability in an underwater wireless sensor network (UWSN). This promise comes from reducing the distances to underwater nodes and using radio waves to forward information to a control station. In a UWSN, dynamic, energy-efficient surface-level gateway deployment is required to cope with the mobility of underwater nodes while accounting for the remote and three-dimensional nature of the marine environment. In general, deployment problems are modeled as optimization problems that satisfy multiple constraints given a set of parameters. One previously published static deployment optimization framework makes assumptions about network workload, routing, medium access control performance, and node mobility. However, in real underwater environments, all these parameters are dynamic, so the accuracy of performance estimates calculated through a static UWSN deployment optimization framework tends to be limited by nature. This paper presents the Prediction-Assisted Dynamic Surface Gateway Placement (PADP) algorithm to maximize the coverage and minimize the average end-to-end delay of a mobile underwater sensor network over a specified period. PADP implements the Interacting Multiple Model (IMM) tracking scheme to predict the positions of sensor nodes. The deployment is determined from both the current and predicted positions of the sensor nodes, which enables better coverage and shorter end-to-end delays. PADP uses a branch-and-cut approach to solve the optimization problem efficiently and employs a disjoint-set data structure to ensure connectivity. Simulation results illustrate that PADP significantly outperforms a static gateway deployment scheme.
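The disjoint-set (union-find) structure the abstract names can be sketched as below for checking that a candidate gateway deployment stays connected; the node count and link list are illustrative, not taken from the paper.

```python
# Minimal sketch: union-find connectivity check over gateway links.
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:          # path halving keeps trees flat
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

# Gateways 0-4 with assumed links; test whether the deployment is connected.
links = [(0, 1), (1, 2), (3, 4)]
ds = DisjointSet(5)
for a, b in links:
    ds.union(a, b)
connected = len({ds.find(i) for i in range(5)}) == 1
print("deployment connected:", connected)   # False: {0,1,2} vs {3,4}
```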

