robust algorithms
Recently Published Documents


TOTAL DOCUMENTS

212
(FIVE YEARS 58)

H-INDEX

19
(FIVE YEARS 4)

Energies ◽  
2021 ◽  
Vol 14 (22) ◽  
pp. 7724
Author(s):  
Tao Zhang ◽  
Shuyu Sun

The thermodynamic properties of fluid mixtures play a crucial role in designing physically meaningful models and robust algorithms for simulating multi-component, multi-phase flow in the subsurface, which many subsurface applications require. In this context, the equation-of-state-based flash calculation, which predicts the equilibrium properties of each phase for a given fluid mixture undergoing phase splitting, is a crucial component, and often a bottleneck, of multi-phase flow simulations. In this paper, a capillarity-wise Thermodynamics-Informed Neural Network is developed for the first time as a fast, accurate, and robust approach to calculating phase equilibrium properties for unconventional reservoirs. The trained model performs well in both phase stability tests and phase splitting calculations over a wide range of reservoir conditions, which enables further multi-component, multi-phase flow simulations with a strong thermodynamic basis.
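For a two-phase split with fixed equilibrium ratios, the flash calculation described above reduces to solving the classical Rachford-Rice equation for the vapor fraction. As a point of reference for the step the neural network accelerates, here is a minimal sketch; the feed composition and K-values are illustrative, not taken from the paper:

```python
def rachford_rice(z, K, tol=1e-10):
    """Solve the Rachford-Rice equation for the vapor fraction V.

    z -- feed mole fractions, K -- equilibrium ratios (K-values).
    Finds V in (0, 1) with sum_i z_i (K_i - 1) / (1 + V (K_i - 1)) = 0.
    """
    def f(V):
        return sum(zi * (Ki - 1.0) / (1.0 + V * (Ki - 1.0))
                   for zi, Ki in zip(z, K))

    lo, hi = 0.0, 1.0          # f is monotonically decreasing on (0, 1)
    for _ in range(200):       # plain bisection
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# An equimolar binary feed with K = [2.0, 0.5] splits exactly half-and-half:
V = rachford_rice([0.5, 0.5], [2.0, 0.5])
```

Because f(V) is monotone on (0, 1), bisection is guaranteed to converge; production flash solvers use faster Newton-type iterations, which is precisely where the surrogate-model speedup matters.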


2021 ◽  
Vol 7 (1) ◽  
pp. 60
Author(s):  
Ángel López-Oriona ◽  
Pierpaolo D’Urso ◽  
José A. Vilar ◽  
Borja Lafuente-Rego

Three robust algorithms for clustering multidimensional time series from the perspective of the underlying processes are proposed. The methods are robust extensions of a fuzzy C-means model based on estimates of the quantile cross-spectral density. Robustness to the presence of anomalous elements is achieved by means of the so-called metric, noise, and trimmed approaches. A wide simulation study indicates that the algorithms are highly effective in coping with the presence of outlying series, clearly outperforming alternative procedures. The usefulness of the suggested methods is also highlighted by means of a specific application.
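The trimmed approach mentioned above can be illustrated with a toy one-dimensional fuzzy C-means that discards the points farthest from every center before each update. This is only a sketch of the general idea: the data, the min/median initialisation heuristic, and the fuzzifier m = 2 are assumptions, not the authors' quantile cross-spectral setup.

```python
def trimmed_fuzzy_cmeans(points, m=2.0, trim=1, iters=30):
    """Toy 1-D fuzzy C-means (2 clusters) with trimming: at each iteration
    the `trim` points farthest from their nearest center are discarded
    before the fuzzy centers are updated."""
    pts = sorted(points)
    centers = [pts[0], pts[len(pts) // 2]]  # crude min/median initialisation
    for _ in range(iters):
        # Trimming step: drop the points farthest from every center.
        kept = sorted(pts, key=lambda p: min(abs(p - v) for v in centers))
        kept = kept[: len(pts) - trim]
        new_centers = []
        for j in range(2):
            num = den = 0.0
            for p in kept:
                d = [max(abs(p - v), 1e-12) for v in centers]
                # Standard fuzzy membership of point p in cluster j.
                u = 1.0 / sum((d[j] / dk) ** (2.0 / (m - 1.0)) for dk in d)
                num += (u ** m) * p
                den += u ** m
            new_centers.append(num / den)
        centers = new_centers
    return sorted(centers)

# Two tight groups plus one far outlier that the trimming step removes:
data = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2, 100.0]
centers = trimmed_fuzzy_cmeans(data)
```

Without the trimming step the outlier at 100 would drag one center toward it; with it, the recovered centers sit near the two genuine groups.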


Energies ◽  
2021 ◽  
Vol 14 (22) ◽  
pp. 7496
Author(s):  
Iván Sanz-Gorrachategui ◽  
Pablo Pastor-Flores ◽  
Antonio Bono-Nuez ◽  
Cora Ferrer-Sánchez ◽  
Alejandro Guillén-Asensio ◽  
...  

Battery parameters such as State of Charge (SoC) and State of Health (SoH) are key to modern applications; thus, there is interest in developing robust algorithms for estimating them. Most of the techniques explored to this end rely on a battery model. As batteries age, their behavior starts to differ from the models, so it is vital to update such models in order to track battery behavior after some time in application. This paper presents a method for online battery parameter tracking using the Extremum Seeking (ES) algorithm. The algorithm fits voltage waveforms by tuning the internal parameters of an estimation model and comparing its voltage output with that of the real battery. The goal is to estimate the electrical parameters of the battery model and to keep obtaining them even as batteries age, when the model behaves differently from the cell. To this end, a simple battery model capable of capturing degradation, together with several tests replicating real application scenarios, has been proposed, and the performance of the ES algorithm in those scenarios has been measured. The results are positive: the estimates converge for both new and aged batteries, with outputs accurate enough for the intended purpose.
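The ES idea of tuning a model parameter until the model's voltage output matches the measured one can be sketched with a deliberately simplified two-point (finite-difference) variant acting on a toy one-resistor battery model. The model, cost, and gains below are illustrative assumptions, not the paper's implementation:

```python
def battery_voltage(r_internal, current, ocv=3.7):
    """Toy battery model: terminal voltage under a constant discharge current."""
    return ocv - current * r_internal

def extremum_seeking(measured_v, current, r0=0.2, delta=1e-3, gain=0.4, steps=50):
    """Estimate the internal resistance by seeking the minimum of the squared
    voltage error, here via a simplified two-point perturbation of the cost."""
    def cost(r):
        return (measured_v - battery_voltage(r, current)) ** 2

    r = r0
    for _ in range(steps):
        # Probe the cost on both sides of the current estimate and descend.
        grad = (cost(r + delta) - cost(r - delta)) / (2.0 * delta)
        r -= gain * grad
    return r

true_r = 0.05
v_meas = battery_voltage(true_r, current=1.0)   # stand-in for a measurement
r_est = extremum_seeking(v_meas, current=1.0)
```

The key property carried over from ES is that only cost evaluations are needed, never an analytic model gradient, which is what lets the same loop keep tracking parameters as the aging cell drifts away from the nominal model.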


Author(s):  
Anthony Nguyen ◽  
Shubhra Upadhyay ◽  
Muhammad Ali Javaid ◽  
Abdul Moiz Qureshi ◽  
Shahan Haseeb ◽  
...  

Background: Behcet’s Disease (BD) is a complex inflammatory vascular disorder that follows a relapsing-remitting course with diverse clinical manifestations. The prevalence of the disease varies throughout the globe, and it targets different age groups. Among the many variants of BD, intestinal BD is not only more common but also presents with a wide range of signs and symptoms. Summary: BD is a relapsing-remitting inflammatory vascular disorder with multiple-system involvement, affecting vessels of all types and sizes and predominantly targeting young adults. The etiology of BD is unknown, but many factors, including genetic mechanisms, vascular changes, hypercoagulability, and dysregulation of immune function, are believed to be responsible. BD usually presents with signs and symptoms of ulcerative disease of the small intestine, with endoscopy consistent with the clinical manifestations. The mainstay of treatment depends upon the severity of the disease: corticosteroids are recommended for severe forms, while aminosalicylic acids are used to maintain remission in mild to moderate forms. Key messages: In this review, we summarize the clinical manifestations, differential diagnoses, and management of intestinal BD. We hope it will enable health policymakers to establish clear endpoints for treatment and surveillance investigations and to create robust algorithms.


2021 ◽  
Author(s):  
Rui Wang ◽  
Yi Wang ◽  
Yanping Li ◽  
Wenming Cao

Abstract: In this paper, two new geometric algebra (GA) based adaptive filtering algorithms for non-Gaussian environments are proposed. They are derived, with the help of GA theory, from the robust algorithms based on the minimum error entropy (MEE) criterion and on the joint criterion combining the MEE with the mean square error (MSE). Experiments validate the effectiveness and superiority of the GA-MEE and GA-MSEMEE algorithms in an α-stable noise environment. In addition, the GA-MSEMEE algorithm converges faster than the GA-MEE.
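The robustness mechanism behind the MEE criterion, in which pairwise error differences are weighted by a Gaussian kernel so that samples hit by impulsive noise contribute almost nothing to the update, can be sketched for a plain real-valued (not geometric-algebra) scalar identification problem. The step size, kernel width, window length, and impulse model below are illustrative assumptions:

```python
import math

def mee_identify(samples, window=10, mu=0.5, sigma=2.0):
    """Identify the scalar weight w in d = w*x + noise with an MEE-style
    stochastic update over a sliding window: pairwise error differences are
    weighted by a Gaussian kernel, so impulsive samples are suppressed."""
    w = 0.0
    buf = []  # recent (x, d) pairs
    for x, d in samples:
        e = d - w * x
        grad = 0.0
        for xi, di in buf:
            de = e - (di - w * xi)                       # pairwise error difference
            kern = math.exp(-de * de / (2.0 * sigma ** 2))
            grad += kern * de * (x - xi)                 # information-potential ascent
        w += (mu / window) * grad
        buf.append((x, d))
        if len(buf) > window:
            buf.pop(0)
    return w

# System: d = 2*x, corrupted by sparse large impulses (a crude stand-in
# for the alpha-stable noise considered in the paper).
xs = [math.sin(0.5 * k) for k in range(600)]
ds = [2.0 * x + (50.0 if k % 97 == 0 else 0.0) for k, x in enumerate(xs)]
w_hat = mee_identify(zip(xs, ds))
```

When an impulse makes one error huge, every kernel term involving it collapses toward zero, which is exactly why entropy-based criteria outperform plain MSE under heavy-tailed noise.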


2021 ◽  
Vol 2 (4) ◽  
pp. 1-28
Author(s):  
Anderson Bessa Da Costa ◽  
Larissa Moreira ◽  
Daniel Ciampi De Andrade ◽  
Adriano Veloso ◽  
Nivio Ziviani

Modeling from data usually has two distinct facets: building sound explanatory models or creating powerful predictive models for a system or phenomenon. Most recent literature does not exploit the relationship between explanation and prediction while learning models from data, and recent algorithms do not take advantage of the fact that many phenomena are actually defined by diverse sub-populations and local structures; thus, there are many possible predictive models providing contrasting interpretations or competing explanations for the same phenomenon. In this article, we propose to explore a complementary link between explanation and prediction. Our main intuition is that models whose decisions are explained by the same factors are likely to perform better predictions for data points within the same local structures. We evaluate our methodology by modeling the evolution of pain relief in patients suffering from chronic pain under usual guideline-based treatment. The ensembles generated using our framework are compared with all-in-one approaches using algorithms robust to high-dimensional data, such as Random Forests and XGBoost. Chronic pain can be primary or secondary to diseases. Its symptomatology can be classified as nociceptive, nociplastic, or neuropathic, and it is generally associated with many different causal structures, challenging typical modeling methodologies. Our data include 631 patients receiving pain treatment. We considered 338 features providing information about pain sensation, socioeconomic status, and prescribed treatments. Our goal is to predict, using data from the first consultation only, whether the patient will be successful in treatment for chronic pain relief. As a result of this work, we were able to build ensembles that consistently improve performance by up to 33% when compared to models trained using all the available features. We also obtained relevant gains in interpretability, with the resulting ensembles using only 15% of the total number of features. We show that we can effectively generate ensembles from competing explanations, promoting diversity in ensemble learning and leading to significant gains in accuracy, by enforcing a stable scenario in which models that are dissimilar in terms of their predictions are also dissimilar in terms of their explanation factors.
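The core intuition, grouping models whose decisions are explained by the same factors, can be sketched as a greedy grouping of models by the Jaccard overlap of their top explanation factors. The feature names and threshold below are purely illustrative, not the authors' pipeline:

```python
def jaccard(a, b):
    """Overlap between two sets of explanation factors."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def group_by_explanations(explanations, threshold=0.5):
    """Greedily group model indices whose top explanation factors overlap
    with a group's first member by at least `threshold` (Jaccard)."""
    groups = []
    for i, expl in enumerate(explanations):
        for g in groups:
            if jaccard(expl, explanations[g[0]]) >= threshold:
                g.append(i)
                break
        else:
            groups.append([i])
    return groups

# Hypothetical top features driving each of five models' decisions:
explanations = [
    {"pain_intensity", "sleep_quality", "age"},       # model 0
    {"pain_intensity", "sleep_quality", "duration"},  # model 1 -> groups with 0
    {"income", "education", "employment"},            # model 2
    {"income", "education", "age"},                   # model 3 -> groups with 2
    {"pain_intensity", "age", "sleep_quality"},       # model 4 -> groups with 0
]
groups = group_by_explanations(explanations)
```

Each resulting group can then be ensembled separately (e.g., by voting), so that predictions for a data point come from models that "agree on why", which is the stability condition the article argues for.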


Author(s):  
К.В. Шаталов

New robust algorithms are developed for processing the results of repeated measurements of the composition and properties of petroleum products, taking into account the fact that the empirical distribution function of such measurement results is a mixture of two normal distributions with different location and scale parameters. For measurements of the composition and properties of petroleum products, it is proposed to use M-estimators with preliminary scaling based on a modified Hampel function as robust estimators of the location and scale parameters of a sample. To compute the M-estimator, two iterative procedures based on weighted least squares are proposed, which differ in how the initial estimates of the location and scale parameters of the sample are calculated. When a sample contains more than twenty results, it is expedient to use the α-trimmed mean and the α-trimmed standard deviation with a trimming share of 0.05 as initial values of the location and scale parameters. When a sample contains fewer than twenty results, it is justified to use robust estimators that do not require removing part of the data: the Hodges–Lehmann estimator is proposed as the initial estimate of the location parameter, and the median of absolute differences as that of the scale parameter. The proposed robust algorithms can be used for processing the results of experiments determining the precision, trueness, and accuracy of methods for measuring the composition and properties of petroleum products; for the outcomes of interlaboratory comparison tests of petroleum products; for calculating the certified value of reference materials of the composition and properties of petroleum products; and in other cases of repeated observations.
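For the small-sample case described above (a Hodges–Lehmann starting value for location, a MAD-based starting value for scale, and Hampel-weighted iteratively reweighted least squares), a minimal sketch might look as follows. The Hampel tuning constants a, b, c are common textbook choices, not necessarily those used by the author:

```python
import statistics

def hampel_psi(u, a=1.7, b=3.4, c=8.5):
    """Hampel's three-part redescending psi function."""
    au, s = abs(u), (1.0 if u >= 0 else -1.0)
    if au <= a:
        return u
    if au <= b:
        return a * s
    if au <= c:
        return a * (c - au) / (c - b) * s
    return 0.0

def hodges_lehmann(xs):
    """Median of all pairwise means -- a robust initial location estimate."""
    return statistics.median((xs[i] + xs[j]) / 2.0
                             for i in range(len(xs)) for j in range(i, len(xs)))

def robust_location(xs, iters=30):
    """M-estimate of location via iteratively reweighted least squares,
    with preliminary (fixed) MAD scaling and a Hampel weight function."""
    t = hodges_lehmann(xs)
    med = statistics.median(xs)
    s = 1.4826 * statistics.median(abs(x - med) for x in xs)  # scaled MAD
    for _ in range(iters):
        num = den = 0.0
        for x in xs:
            u = (x - t) / s
            w = hampel_psi(u) / u if abs(u) > 1e-12 else 1.0  # weight = psi(u)/u
            num += w * x
            den += w
        t = num / den
    return t

# One gross outlier barely moves the estimate:
est = robust_location([9.8, 9.9, 10.0, 10.1, 10.2, 30.0])
```

Because the Hampel function redescends to zero, the outlier at 30 receives weight 0 and the estimate settles on the mean of the five inliers, exactly the behavior the mixture-of-two-normals error model calls for.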


Mathematics ◽  
2021 ◽  
Vol 9 (19) ◽  
pp. 2482
Author(s):  
Gerardo Alfonso Perez ◽  
Javier Caballero Villarraso

A nonlinear approach to identifying combinations of CpG DNA methylation sites as biomarkers for Alzheimer’s disease (AD) is presented in this paper. It is shown that the presented algorithm can substantially reduce the number of CpGs used while generating forecasts that are more accurate than using all the CpGs available. It is assumed that the process can, in principle, be nonlinear; hence, a nonlinear approach might be more appropriate. The proposed algorithm selects which CpGs to use as input data in a classification problem that tries to distinguish between patients suffering from AD and healthy control individuals. This type of classification problem is suitable for techniques such as support vector machines. The algorithm was applied both at the single-dataset level and across multiple datasets. Developing robust algorithms for multiple datasets is challenging due to the impact that small differences in laboratory procedures have on the obtained data. The approach followed in the paper can be expanded to multiple datasets, allowing for a gradually more granular understanding of the underlying process. A 92% successful classification rate was obtained using the proposed method, which is higher than the result obtained using all the CpGs available. This is likely due to the reduction in the dimensionality of the data achieved by the algorithm, which in turn helps to reduce the risk of reaching a local minimum.
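The selection step, choosing a small subset of CpG sites whose classification accuracy can beat using all sites, can be sketched with greedy forward selection. To keep the sketch self-contained it scores candidates with a leave-one-out nearest-centroid classifier on synthetic beta values rather than the paper's support vector machine:

```python
import random

def loo_accuracy(data, labels, feats):
    """Leave-one-out accuracy of a nearest-centroid classifier restricted
    to the CpG sites listed in `feats`."""
    hits = 0
    for i in range(len(data)):
        cents = {}
        for cls in set(labels):
            rows = [data[j] for j in range(len(data))
                    if labels[j] == cls and j != i]
            cents[cls] = [sum(r[f] for r in rows) / len(rows) for f in feats]
        pred = min(cents, key=lambda cls: sum(
            (data[i][f] - cents[cls][k]) ** 2 for k, f in enumerate(feats)))
        hits += pred == labels[i]
    return hits / len(data)

def greedy_select(data, labels, n_feats=2):
    """Forward selection: repeatedly add the site that most improves accuracy."""
    selected = []
    for _ in range(n_feats):
        rest = [f for f in range(len(data[0])) if f not in selected]
        best = max(rest, key=lambda f: loo_accuracy(data, labels, selected + [f]))
        selected.append(best)
    return selected

# Synthetic beta values: site 0 separates the groups, sites 1-4 are noise.
random.seed(1)
data, labels = [], []
for i in range(20):
    cls = i % 2                                   # 0 = control, 1 = AD
    row = [0.2 + 0.6 * cls + random.gauss(0, 0.05)]
    row += [random.random() for _ in range(4)]
    data.append(row)
    labels.append(cls)
sites = greedy_select(data, labels)
```

The informative site is picked first because it alone yields perfect leave-one-out accuracy, while the noise sites hover near chance, which mirrors the paper's finding that a small selected subset can outperform the full CpG panel.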


Mathematics ◽  
2021 ◽  
Vol 9 (19) ◽  
pp. 2394
Author(s):  
Kang-Ping Lu ◽  
Shao-Tung Chang

Regression models with change-points have been widely applied in various fields. Most methodologies for change-point regression assume Gaussian errors. For many real datasets with longer-than-normal tails or atypical observations, the use of normal errors may unduly affect the fit of change-point regression models. This paper proposes two robust algorithms, called EMT and FCT, for change-point regression, which incorporate the t-distribution with the expectation-maximization algorithm and the fuzzy classification procedure, respectively. For better resistance to high-leverage outliers, we introduce a modified version of the proposed method that fits the t change-point regression model to the data after moderately pruning high-leverage points. The selection of the degrees of freedom is discussed. The robustness properties of the proposed methods are also analyzed and validated. Simulation studies show the effectiveness and resistance of the proposed methods against outliers and heavy-tailed distributions. Extensive experiments demonstrate the preference for the t-based approach over normal-based methods, owing to its better robustness and computational efficiency. EMT and FCT generally work well, and FCT always produces less biased estimates, especially in cases of data contamination. Real examples show the need for, and the practicability of, the proposed methods.
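The t-based idea can be sketched for a single change-point: fit each candidate segment pair by iteratively reweighted least squares with t-distribution weights w_i = (ν+1)/(ν+r_i²) (scale fixed to 1 for simplicity) and keep the split minimising a robust log loss. This is a toy grid-search sketch, not the paper's EMT or FCT algorithm:

```python
import math

def fit_t_line(xs, ys, nu=4.0, iters=30):
    """Straight-line fit with t-distributed-error weights via IRLS;
    returns (intercept, slope, robust loss)."""
    w = [1.0] * len(xs)                      # first pass is ordinary least squares
    a = b = 0.0
    for _ in range(iters):
        sw = sum(w)
        sx = sum(wi * x for wi, x in zip(w, xs))
        sy = sum(wi * y for wi, y in zip(w, ys))
        sxx = sum(wi * x * x for wi, x in zip(w, xs))
        sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        det = sw * sxx - sx * sx
        a = (sxx * sy - sx * sxy) / det
        b = (sw * sxy - sx * sy) / det
        # t-weights: large residuals are smoothly downweighted.
        w = [(nu + 1.0) / (nu + (y - a - b * x) ** 2) for x, y in zip(xs, ys)]
    loss = sum(math.log(1.0 + (y - a - b * x) ** 2 / nu) for x, y in zip(xs, ys))
    return a, b, loss

def t_changepoint(xs, ys, min_seg=3):
    """Scan admissible splits; keep the one minimising the total robust loss."""
    best = None
    for k in range(min_seg, len(xs) - min_seg + 1):
        loss = fit_t_line(xs[:k], ys[:k])[2] + fit_t_line(xs[k:], ys[k:])[2]
        if best is None or loss < best[1]:
            best = (k, loss)
    return best[0]

# Two regimes with a slope change at x = 10, plus one gross outlier:
xs = list(range(20))
ys = [float(x) for x in xs[:10]] + [35.0 - 2.0 * x for x in xs[10:]]
ys[5] += 30.0
split = t_changepoint(xs, ys)
```

A Gaussian-loss version of the same scan would let the outlier distort both the segment fits and the chosen split; the heavy-tailed weights keep the estimated change-point at the true boundary.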

