error metric
Recently Published Documents

TOTAL DOCUMENTS: 110 (FIVE YEARS: 34)
H-INDEX: 7 (FIVE YEARS: 2)

2022
Author(s): Stephen Adams, Brian Bledsoe, Eric Stein

Abstract. Environmental streamflow management can improve the ecological health of streams by returning modified flows to more natural conditions. The Ecological Limits of Hydrologic Alteration (ELOHA) framework for developing regional environmental flow criteria has been implemented to reverse hydromodification across the heterogeneous region of coastal southern California (So. CA) by focusing on two elements of the flow regime: streamflow permanence and flashiness. Within ELOHA, classification groups streams by hydrologic and geomorphic similarity to stratify flow-ecology relationships. Analogous grouping techniques are used by hydrologic modelers to facilitate streamflow prediction in ungaged basins (PUB) through regionalization. Most watersheds, including those needed for stream classification and environmental flow development, are ungaged. Furthermore, So. CA is a highly heterogeneous region spanning a gradient of urbanization, which presents a challenge for regionalizing ungaged basins. In this study, we develop a novel classification technique for PUB modeling that inductively groups regional streams by modeled hydrologic similarity and then deductively determines class membership from hydrologic model errors and watershed metrics. As a new type of classification, this "Hydrologic Model-based Classification" (HMC) prioritizes modeling accuracy, which in turn provides a means to improve model predictions in ungaged basins, while complementing traditional classifications and improving environmental flow management. HMC is developed by calibrating a regional catalog of process-based rainfall-runoff models, quantifying the hydrologic reciprocity of calibrated parameters that would be unknown in ungaged basins, and grouping sites according to hydrologic and physical similarity. HMC was applied to 25 USGS streamflow gages in the south coast region of California and compared to other hybrid PUB approaches that combine inductive and deductive classification. Judged by an Average Cluster Error metric, HMC provided the most hydrologically similar groups according to calibrated parameter reciprocity. Hydrologic Model-based Classification is relatively complex and time-consuming to implement, but it shows potential for advancing ungaged basin management. This study demonstrates the benefits of thorough stream classification using multiple approaches and suggests that Hydrologic Model-based Classification has advantages for PUB and for building the hydrologic foundation for environmental flow management.
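The parameter-reciprocity idea behind the Average Cluster Error metric can be illustrated with a minimal sketch. Here run_model is a hypothetical stand-in for evaluating a site's rainfall-runoff model under a transferred parameter set (returning an error score such as 1 - NSE); it is an assumption for illustration, not the authors' implementation.

```python
# Minimal sketch: Average Cluster Error from calibrated-parameter reciprocity.
# run_model(site, params) is assumed to return a model error (e.g., 1 - NSE)
# for `site` simulated with the parameter set `params`.
from itertools import permutations
from statistics import mean

def average_cluster_error(clusters, calibrated_params, run_model):
    """clusters: dict mapping cluster id -> list of gauge ids.
    calibrated_params: dict mapping gauge id -> its calibrated parameter set.
    Single-site clusters contribute no donor/receiver pairs."""
    pair_errors = []
    for sites in clusters.values():
        # Transfer each donor's calibrated parameters to every other site
        # (the receiver) in the same cluster and record the resulting error.
        for donor, receiver in permutations(sites, 2):
            pair_errors.append(run_model(receiver, calibrated_params[donor]))
    return mean(pair_errors)
```

A lower value indicates that calibrated parameters transfer well between sites within each cluster, i.e., the classes are hydrologically similar.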


Author(s): Nan Meng, Yun-Bin Zhao

Abstract. Sparse signals can be reconstructed by an algorithm that merges a traditional nonlinear optimization method with a thresholding technique. Different from existing thresholding methods, a novel thresholding technique referred to as optimal k-thresholding was recently proposed by Zhao (SIAM J Optim 30(1):31–55, 2020). This technique simultaneously performs the minimization of an error metric for the problem and the thresholding of the iterates generated by the classic gradient method. In this paper, we propose the Newton-type optimal k-thresholding (NTOT) algorithm, motivated by the appreciable performance of both Newton-type methods and the optimal k-thresholding technique for signal recovery. The guaranteed performance (including convergence) of the proposed algorithms is established in terms of suitable choices of the algorithmic parameters and the restricted isometry property (RIP) of the sensing matrix, which has been widely used in the analysis of compressive sensing algorithms. Simulation results based on synthetic signals indicate that the proposed algorithms are stable and efficient for signal recovery.
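For context, the gradient-plus-thresholding loop the abstract refers to can be sketched with the classic iterative hard thresholding baseline, which keeps the k largest-magnitude entries at each step; optimal k-thresholding replaces that naive keep-k-largest step with a small optimization over the selection, and NTOT further swaps the gradient step for a Newton-type step. The sketch below shows only the simple baseline, under assumed problem sizes.

```python
# Minimal sketch: iterative hard thresholding (IHT), the simple baseline that
# optimal k-thresholding refines. Not the NTOT algorithm itself.
import numpy as np

def hard_threshold(x, k):
    """Zero all but the k largest-magnitude entries of x."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

def iht(A, y, k, step=1.0, iters=300):
    """Recover a k-sparse x from y = A @ x by gradient steps on
    0.5 * ||y - A @ x||^2 followed by hard thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + step * A.T @ (y - A @ x), k)
    return x

# Synthetic test: a 25-sparse signal sensed by a normalized Gaussian matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((400, 1000)) / np.sqrt(400)
x_true = np.zeros(1000)
x_true[rng.choice(1000, 25, replace=False)] = rng.standard_normal(25)
x_hat = iht(A, A @ x_true, k=25)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```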


2021
Author(s): Shams Kalam, Mohammad Rasheed Khan, Rizwan Ahmed Khan

Abstract. This investigation presents a powerful predictive model to determine crude oil formation volume factor (FVF) using state-of-the-art artificial intelligence (AI) techniques. FVF is a vital pressure-volume-temperature (PVT) parameter used to characterize hydrocarbon systems and is pivotal to reserves calculation and reservoir engineering studies. Ideally, FVF is measured at the laboratory scale; however, prognostic tools to evaluate this parameter can optimize time and cost estimates. The database utilized in this study is obtained from the open literature and covers statistics of crude oils of the Middle East region. Multiple AI algorithms are considered, including Artificial Neural Networks (ANN) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS). Models are developed using an optimization strategy for the parameters/hyperparameters of the respective algorithms. Unique permutations and combinations of the number of perceptrons and their layers are investigated to reach the configuration that provides the best output. These intelligent models are produced as a function of the parameters intrinsically affecting FVF: reservoir temperature, solution GOR, gas specific gravity, bubble point pressure, and crude oil API gravity. A comparative analysis of the developed AI models is performed using visualization and statistical analysis, and the best model is identified. Finally, a mathematical equation to determine FVF is extracted, with the respective weights and biases, for the model presented. Graphical analysis is used to evaluate the performance of the developed AI models; scatter plots show that most points lie on the 45-degree line. Moreover, during this study, an error metric comprising multiple analysis parameters is developed: average absolute percentage error (AAPE), root mean squared error (RMSE), and the coefficient of determination (R2). All models investigated are tested on an unseen dataset to prevent the development of a biased model. The performance of the established AI models is gauged with this error metric, demonstrating that ANN outperforms ANFIS, with errors within 1% of the measured PVT values. A computationally derived intelligent model provides the strongest predictive capabilities, as it maps the complex non-linear interactions between the various input parameters leading to FVF.
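The composite error metric described above can be computed in a few lines; this is a generic sketch of AAPE, RMSE and R2 side by side, with illustrative names rather than the authors' code.

```python
# Minimal sketch: a composite error metric combining AAPE, RMSE and R^2.
import numpy as np

def error_metric(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=float)  # measured FVF values (nonzero)
    y_pred = np.asarray(y_pred, dtype=float)  # model predictions
    aape = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {"AAPE_%": aape, "RMSE": rmse, "R2": 1.0 - ss_res / ss_tot}
```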


2021 · Vol 11 (24) · pp. 11754
Author(s): Carlos Flores-Garrigós, Juan Vicent-Camisón, Juan J. Garcés-Iniesta, Emilio Soria-Olivas, Juan Gómez-Sanchís, ...

In ultra-high vacuum systems, obtaining the composition of a mass spectrum is often a challenging task due to the highly overlapping nature of the individual profiles of the gas species that contribute to that spectrum, as well as the large differences in their degree of contribution (several orders of magnitude). The problem is even more complex when not only the presence but also a quantitative estimate of the contribution (partial pressure) of each species is required. This paper estimates the relative contribution of each species in a target mass spectrum by combining a state-of-the-art machine learning method (a multilabel classifier), which yields a pool of candidate species by thresholding the probability scores it produces, with a genetic algorithm that searches for the partial pressure at which each candidate species contributes to the target mass spectrum. For this purpose, we use a dataset of synthetically generated samples. We explore different acceptance thresholds for the generation of initial populations, and we establish comparative metrics against the most recent method for automatically obtaining partial pressure contributions. Our results show a clear advantage in terms of the integral error metric (up to 112 times lower for simpler spectra) and computational time (up to 4 times lower for complex spectra) in favor of the proposed method, which constitutes a substantial improvement for this task.
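The genetic algorithm's role can be made concrete by sketching the objective it minimizes: each candidate solution is a vector of partial pressures that linearly scales the library profile of each candidate species, and the fitness is the integral error between the reconstructed and target spectra. The array layout here is an assumption for illustration, not the authors' code.

```python
# Minimal sketch: the fitness a GA could minimize when fitting partial pressures.
import numpy as np

def spectrum_error(partial_pressures, profiles, target):
    """partial_pressures: (n_species,) candidate solution from the GA.
    profiles: (n_species, n_mz) library profile of each candidate species.
    target: (n_mz,) measured mass spectrum."""
    reconstructed = partial_pressures @ profiles   # linear mixture of species
    return np.sum(np.abs(reconstructed - target))  # integral error to minimize
```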


2021
Author(s): Mohammad Rasheed Khan, Zeeshan Tariq, Mohamed Mahmoud

Abstract. The photoelectric factor (PEF) is one of the functional parameters of a hydrocarbon reservoir that can provide invaluable data for reservoir characterization. Well logs are critical to formation evaluation processes; however, they are not always readily available due to unfeasible logging conditions. In addition, with the call for efficiency in the hydrocarbon E&P business, it has become imperative to optimize logging programs to acquire maximum data with minimal cost impact. As a result, the present study proposes an improved strategy for generating a synthetic PEF log by quantitatively relating conventional well log data and rock mineralogical content to PEF. A total of 230 data points were utilized to implement the machine learning (ML) methodology, which begins with a statistical analysis scheme. The input logs used to establish the architecture are the density and sonic logs. Moreover, rock mineralogical content (carbonate, quartz, clay), which is strongly correlated with PEF, is incorporated into model development. In the next stage of the study, the architecture of an artificial neural network (ANN) was developed and optimized to predict PEF from conventional well log data. A subset of the data points was used for ML model construction, and another unseen set was employed to assess model performance, evaluated through a comprehensive error metrics analysis. The synthetic PEF log generated using the developed ANN correlation is compared with the actual well log data available and demonstrates an average absolute percentage error of less than 0.38. In addition, the comprehensive error metric analysis yields a coefficient of determination of more than 0.99 and a root mean squared error of only 0.003. This numerical analysis of the error metric points towards the robustness of the ANN model and its capability to link mineralogical content with PEF.
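A minimal sketch of this kind of workflow follows, using scikit-learn's MLPRegressor as a stand-in for the authors' network; the placeholder data, feature layout and network size are assumptions, not the paper's.

```python
# Minimal sketch: predicting PEF from conventional logs plus mineralogy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Placeholder features: density log, sonic log, carbonate, quartz and clay
# fractions (stand-ins for the 230 real data points); target: PEF.
X = rng.random((230, 5))
y = rng.random(230)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
model = make_pipeline(
    StandardScaler(),  # scale inputs before the network
    MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=1),
)
model.fit(X_train, y_train)
pef_synthetic = model.predict(X_test)  # synthetic PEF log on unseen data
```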


2021 · Vol 40 (5) · pp. 1-16
Author(s): Ran Zhang, Thomas Auzinger, Bernd Bickel

This article presents a method for designing planar multistable compliant structures. Given a sequence of desired stable states and the corresponding poses of the structure, we identify the topology and geometric realization of a mechanism, consisting of bars and joints, that is able to physically reproduce the desired multistable behavior. In order to solve this problem efficiently, we build on insights from minimally rigid graph theory to identify simple but effective topologies for the mechanism. We then optimize its geometric parameters, such as joint positions and bar lengths, to obtain correct transitions between the given poses. Simultaneously, we ensure adequate stability of each pose based on an effective approximate error metric related to the elastic energy Hessian of the bars in the mechanism. As demonstrated by our results, we obtain functional multistable mechanisms of manageable complexity that can be fabricated using 3D printing. Further, we evaluate the effectiveness of our method on a large number of examples in simulation and fabricate several physical prototypes.
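The stability criterion mentioned above can be sketched directly: a pose is locally stable when the elastic energy Hessian, restricted to the mechanism's free degrees of freedom, is positive definite, and the smallest eigenvalue gives a stability margin. energy_hessian below is a hypothetical callback, not the authors' implementation.

```python
# Minimal sketch: checking pose stability via the elastic energy Hessian.
import numpy as np

def is_stable(energy_hessian, pose, tol=1e-8):
    """energy_hessian(pose) is assumed to return the (n_dof, n_dof) symmetric
    Hessian of the elastic energy at `pose`, reduced to free DOFs.
    Returns a stability flag and the smallest eigenvalue as a margin."""
    H = energy_hessian(pose)
    smallest = np.linalg.eigvalsh(H)[0]  # eigvalsh returns ascending eigenvalues
    return smallest > tol, smallest
```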


Energies · 2021 · Vol 14 (19) · pp. 6274
Author(s): Colin Singleton, Peter Grindrod

We describe our approach to the Western Power Distribution (WPD) Presumed Open Data (POD) 6 MWh battery storage capacity forecasting competition, in which we finished second. The competition entails two distinct forecasting aims: maximising the daily evening peak reduction and using as much solar photovoltaic energy as possible. For the latter, we combine a Bayesian (MCMC) linear regression model with an average generation distribution. For the former, we introduce a new error metric that allows even a simple weighted average combined with a simple linear regression model to score very well under the competition performance metric.
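As an illustration only, the kind of simple combination mentioned for the peak-reduction task might look like the sketch below: an exponentially weighted average of recent daily demand profiles blended with a multi-output linear regression on exogenous features. The decay weight, blend factor and feature choice are assumptions, not the authors' competition entry.

```python
# Minimal sketch: weighted historical average blended with linear regression.
import numpy as np
from sklearn.linear_model import LinearRegression

def weighted_profile(history, decay=0.7):
    """history: (n_days, n_periods) demand, rows ordered oldest to newest;
    newer days receive larger weights."""
    w = decay ** np.arange(len(history))[::-1]
    return (w[:, None] * history).sum(axis=0) / w.sum()

def blended_forecast(history, X_hist, x_next, alpha=0.5):
    """Blend the weighted average with per-period linear regressions on
    exogenous features (e.g., temperature), mixing with weight alpha."""
    reg = LinearRegression().fit(X_hist, history)  # multi-output regression
    return alpha * weighted_profile(history) + (1 - alpha) * reg.predict(x_next[None, :])[0]
```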


Sensors · 2021 · Vol 21 (16) · pp. 5586
Author(s): Yi-Tun Lin, Graham D. Finlayson

Spectral reconstruction (SR) algorithms attempt to recover hyperspectral information from RGB camera responses. Recently, the Mean Relative Absolute Error (MRAE), an ℓ1 relative error (also known as percentage error), has become the most common metric for evaluating the performance of SR algorithms. Unsurprisingly, the leading algorithms based on Deep Neural Networks (DNN) are trained and tested using the MRAE metric. In contrast, the much simpler regression-based methods (which can actually work tolerably well) are trained to optimize a generic Root Mean Square Error (RMSE) and then tested in MRAE. Another issue with the regression methods is that, because in SR the linear systems are large and ill-posed, they must be solved using regularization. However, hitherto the regularization has been applied at the spectrum level, whereas in MRAE the errors are measured per wavelength (i.e., per spectral channel) and then averaged. The two aims of this paper are, first, to reformulate the simple regressions so that they minimize a relative error metric in training (we formulate both ℓ2 and ℓ1 relative error variants, where the latter is MRAE) and, second, to adopt a per-channel regularization strategy. Together, our modifications to how the regressions are formulated and solved lead to up to a 14% improvement in mean performance and up to a 17% improvement in worst-case performance (measured with MRAE). Importantly, our best result narrows the gap between the regression approaches and the leading DNN model to around 8% in mean accuracy.
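Since MRAE is defined as an ℓ1 relative error measured per spectral channel and then averaged, it fits in a couple of lines; the epsilon guard against dark (zero) channels is an assumption for numerical safety, not part of the formal definition.

```python
# Minimal sketch: Mean Relative Absolute Error (MRAE) for spectral reconstruction.
import numpy as np

def mrae(spec_true, spec_pred, eps=1e-8):
    """spec_true, spec_pred: arrays of shape (..., n_channels).
    Per-channel l1 relative error, averaged over channels (and pixels)."""
    rel = np.abs(spec_true - spec_pred) / (np.abs(spec_true) + eps)
    return rel.mean()
```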

