multidimensional distribution
Recently Published Documents

TOTAL DOCUMENTS: 28 (FIVE YEARS: 9)
H-INDEX: 6 (FIVE YEARS: 1)

Author(s):  
Vladimir Mikhailovich Levin ◽  
Ammar Abdulazez Yahya

The Bayesian classifier is a priori the optimal solution for minimizing the total error in statistical pattern recognition problems. The article proposes using the classifier as a routine tool to increase the reliability of defect recognition in oil-filled power transformers based on the analysis of gases dissolved in the oil. Wide application of the Bayesian method to technical diagnostics of electrical equipment is limited by the multidimensional distribution of the random parameters (features) and by the nonlinearity of the classification. We propose a generalized defect feature in the form of a nonlinear function of the transformer state parameters. This simultaneously reduces the dimension of the initial space of controlled parameters and significantly improves the stochastic properties of the generalized feature's random distribution. A special algorithm has been developed to perform the statistical calculations and the procedure for recognizing the current technical condition of the transformer using the generated decision rule. The presented results illustrate the practical applicability of the developed method under real operating conditions of power transformers.
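The decision rule described above, a Bayes classifier applied to a one-dimensional generalized feature, can be sketched as follows; the class distributions and priors below are hypothetical placeholders, not the paper's fitted values:

```python
import math

# Hypothetical 1-D distributions of the generalized defect feature g:
# each class maps to (mean, std, prior). These numbers are illustrative only.
PARAMS = {"normal": (0.2, 0.05, 0.9), "defect": (0.6, 0.10, 0.1)}

def gaussian_pdf(x, mu, sigma):
    """Likelihood of the generalized feature under one class."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify(g):
    """Bayes decision rule: pick the class maximising prior * likelihood,
    which minimises the total classification error."""
    scores = {c: p * gaussian_pdf(g, mu, s) for c, (mu, s, p) in PARAMS.items()}
    return max(scores, key=scores.get)
```

Collapsing the dissolved-gas measurements into one scalar feature is what makes this 1-D rule tractable; with the raw multidimensional gas concentrations, the class-conditional densities would be far harder to estimate.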


2022 ◽  
Vol 1 (1) ◽  
pp. 1
Author(s):  
Shuo Zhang ◽  
Baoguo Cai ◽  
Shimin Guan ◽  
Xixi Li ◽  
Shaofeng Rong ◽  
...  

2021 ◽  
Author(s):  
Jan Mudler ◽  
Andreas Hördt ◽  
Dennis Kreith ◽  
Kirill Bazhin ◽  
Lyudmila Lebedeva ◽  
...  

Abstract. The reliable detection of subsurface ice using non-destructive geophysical methods is an important objective in permafrost research. Furthermore, the ice content of frozen ground is an essential parameter for further interpretation, for example in risk analysis or in describing the permafrost carbon feedback driven by thawing processes. The High-Frequency Induced Polarization method (HFIP) enables the measurement of the frequency-dependent electrical signal of the subsurface. In contrast to the well-established Electrical Resistivity Tomography (ERT), the use of the full spectral information provides additional physical parameters of the ground. As the electrical properties of ice exhibit strong characteristic behaviour in the frequency range between 100 Hz and 100 kHz, HFIP is in principle suitable for estimating ice content. Here, we present methodological advancements of the HFIP method and suggest an explicit procedure for ice content estimation. A new measuring device, the Chameleon-II (Radic Research), was used for the first time. It was designed for the application of Spectral Induced Polarization over a wide frequency range and is usable under challenging conditions, for example at field sites under periglacial influence and in the presence of permafrost. Amongst other improvements over the previous generation, the new system is equipped with longer cables and higher transmitter power, such that penetration depths of up to 10 m can now be achieved. Moreover, it is equipped with technology to reduce electromagnetic coupling effects, which can distort the desired subsurface signal. The second development is a method to estimate ice content quantitatively from five Cole-Cole parameters obtained from spectral two-dimensional inversion results. The method is based on a description of the subsurface as a mixture of two components (matrix and ice) and uses a previously suggested relationship between frequency-dependent electrical permittivity and ice content. Measurements at a permafrost site near Yakutsk, Russia, were carried out to test the entire procedure under real field-scale conditions. We demonstrate that the spectral signal of ice can be clearly identified even in the raw data, and show that the spectral 2-D inversion algorithm is suitable for obtaining the multidimensional distribution of electrical parameters. The parameter distribution and the estimated ice content agree reasonably well with previous knowledge of the field site from borehole and geophysical investigations. We conclude that the method is able to provide quantitative ice content estimates, and that relationships tested in the laboratory may be applied at the field scale.
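The Cole-Cole parameterisation at the heart of this ice-content estimation can be sketched as follows; the parameter values are rough illustrative numbers for an ice-like dispersion in the 100 Hz to 100 kHz band, not the calibrated values from the study:

```python
import numpy as np

def cole_cole(freq_hz, eps_inf, d_eps, tau, c):
    """Cole-Cole complex relative permittivity:
    eps(omega) = eps_inf + d_eps / (1 + (i * omega * tau)**c)."""
    omega = 2 * np.pi * freq_hz
    return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** c)

# Ice shows strong dispersion between ~100 Hz and ~100 kHz; the values
# below are illustrative only (eps_inf, relaxation strength, time, exponent).
f = np.logspace(2, 5, 50)                                  # 100 Hz .. 100 kHz
eps = cole_cole(f, eps_inf=3.2, d_eps=90.0, tau=1e-4, c=0.9)
```

Fitting such spectra per model cell is what yields the five Cole-Cole parameters the abstract mentions; the real part's characteristic drop across this band is the spectral fingerprint of ice that the inversion exploits.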


Author(s):  
Fabrizio Angiulli ◽  
Fabio Fassetti

Abstract Enabling information systems to face anomalies in the presence of uncertainty is a compelling and challenging task. In this work we consider the problem of unsupervised outlier detection in large collections of data objects modeled by means of arbitrary multidimensional probability density functions. We present a novel definition of an uncertain distance-based outlier under the attribute-level uncertainty model, according to which an uncertain object is an object that always exists but whose actual value is modeled by a multivariate pdf. According to this definition, an uncertain object is declared to be an outlier on the basis of the expected number of its neighbors in the dataset. To the best of our knowledge, this is the first work that considers the unsupervised outlier detection problem on data objects modeled by means of arbitrarily shaped multidimensional distribution functions. We present the UDBOD algorithm, which efficiently detects the outliers in an input uncertain dataset by taking advantage of three optimized phases: parameter estimation, candidate selection, and candidate filtering. An experimental campaign is presented, including a sensitivity analysis, a study of the effectiveness of the technique, a comparison with related algorithms (also in the presence of high-dimensional data), and a discussion of the behavior of our technique in real-world scenarios.
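The core notion of declaring an uncertain object an outlier by its expected number of neighbors can be sketched with a brute-force Monte-Carlo estimate; the dataset, pdfs, and radius below are hypothetical, and this is deliberately not the optimized three-phase UDBOD algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each uncertain object: a mean vector and a (diagonal) std for its pdf.
# Hypothetical data: objects 0-2 form a cluster, object 3 sits far away.
means = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
stds = np.full_like(means, 0.05)

def expected_neighbors(i, radius=1.0, n_samples=2000):
    """Monte-Carlo estimate of the expected number of objects whose
    realisation falls within `radius` of object i's realisation."""
    dim = means.shape[1]
    xi = rng.normal(means[i], stds[i], size=(n_samples, dim))
    count = 0.0
    for j in range(len(means)):
        if j == i:
            continue
        xj = rng.normal(means[j], stds[j], size=(n_samples, dim))
        count += np.mean(np.linalg.norm(xi - xj, axis=1) <= radius)
    return count
```

An object is then flagged as an outlier when this expectation falls below a chosen threshold; the point of the paper's candidate-selection and filtering phases is precisely to avoid evaluating this expensive expectation for every object.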


2020 ◽  
Vol 498 (3) ◽  
pp. 4365-4378
Author(s):  
Tsutomu T Takeuchi ◽  
Kai T Kono

ABSTRACT The need for methods to construct multidimensional distribution functions has been increasing recently, in the era of huge multiwavelength surveys. We have proposed a systematic method to build a bivariate luminosity or mass function of galaxies by using a copula. It allows us to construct a distribution function when only its marginal distributions are known and the dependence structure has to be estimated from data. A typical example is the situation in which we have univariate luminosity functions at some wavelengths for a survey, but the joint distribution is unknown. The main limitation of the copula method is that it is not easy to extend a joint function to higher dimensions (d > 2), except for some special cases such as the multidimensional Gaussian. Even if we find such a multivariate analytic function in some fortunate case, it would often be inflexible and impractical. In this work, we show a systematic method to extend the copula method to arbitrarily high dimensions using a vine copula. This is based on the pair-copula decomposition of a general multivariate distribution. We show how the vine copula construction is flexible and extendable. We also present an example of the construction of a stellar mass–atomic gas–molecular gas three-dimensional mass function. We demonstrate the maximum likelihood estimation of the best functional form for this function, as well as proper model selection via the vine copula.
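The pair-copula decomposition behind the vine construction can be sketched for d = 3 with Gaussian pair-copulas; in this special case the C-vine, with a partial correlation in its second tree, reproduces the trivariate Gaussian copula exactly, which makes the decomposition easy to verify:

```python
import numpy as np
from scipy.stats import norm

def gauss_copula_pdf(u, v, rho):
    """Density of the bivariate Gaussian copula."""
    x, y = norm.ppf(u), norm.ppf(v)
    return (np.exp((2 * rho * x * y - rho**2 * (x**2 + y**2))
                   / (2 * (1 - rho**2)))
            / np.sqrt(1 - rho**2))

def h(u, v, rho):
    """Conditional cdf h(u | v) for the Gaussian pair-copula."""
    return norm.cdf((norm.ppf(u) - rho * norm.ppf(v)) / np.sqrt(1 - rho**2))

def cvine3_pdf(u1, u2, u3, r12, r13, r23_g1):
    """C-vine pair-copula decomposition of a 3-D copula density:
    c(u1,u2,u3) = c12(u1,u2) * c13(u1,u3) * c23|1(h(u2|u1), h(u3|u1))."""
    return (gauss_copula_pdf(u1, u2, r12)
            * gauss_copula_pdf(u1, u3, r13)
            * gauss_copula_pdf(h(u2, u1, r12), h(u3, u1, r13), r23_g1))
```

In a real application each pair-copula (and each marginal, e.g. a Schechter-type mass function) can be chosen from a different family, which is exactly the flexibility the abstract highlights over a single multivariate analytic form.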


2020 ◽  
Vol 14 (2) ◽  
pp. 2742-2772
Author(s):  
François Bachoc ◽  
Alexandra Suvorikova ◽  
David Ginsbourger ◽  
Jean-Michel Loubes ◽  
Vladimir Spokoiny

2019 ◽  
Vol 8 (6) ◽  
Author(s):  
Alexander K. Rozentsvaig ◽  
Aleksej G. Isavnin ◽  
Anton N. Karamyshev

In economics, the general theory is largely descriptive, and mathematical models are not only statistical but also partial. Studying an economic phenomenon therefore usually requires partial methods that yield only particular solutions limited by specific conditions: the type of activity, its place, and the time of implementation. A realistic picture of the nature of the economic phenomenon of interest is given only by statistical data. Correlation analysis is a time-consuming and hardly formalizable task when the relationship structure of a large number of factors must be justified. In addition, the quality and interpretation of the results of statistical analysis are predetermined by the nature of the statistical models used to obtain sample estimates of their parameters. Due to the complexity of multidimensional statistical models, general theoretical concepts are usually limited by the assumption that the sampled data do not contradict the multidimensional normal distribution law. This greatly simplifies multivariate statistical analysis, but it always leads to linear regression relationships, which correspond to a trivial system of correlations and are rarely observed in reality. The structure of each economic object is unique; it is therefore proposed to refine it using a system of correlation matrices of various orders. It is shown that generalizing large volumes of multidimensional sample data in the form of "portraits" of correlation matrices clearly represents the specific features of the object of study. Moreover, the empirical system of statistically significant relationships is transformed into a corresponding model of economic relationships. This creates the prerequisites for the practical use of universal systems-analysis methods based on modern theoretical and software tools of information technology.
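The idea of a correlation-matrix "portrait" of an economic object can be sketched on synthetic data; the variables, their dependence, and the significance threshold below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sample: x2 depends strongly on x1, x3 is independent noise.
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.3 * rng.normal(size=n)
x3 = rng.normal(size=n)
data = np.column_stack([x1, x2, x3])

R = np.corrcoef(data, rowvar=False)               # sample correlation matrix
# "Portrait": a boolean mask keeping only links whose |r| clears a
# significance threshold (0.3 here, chosen for illustration).
portrait = (np.abs(R) > 0.3) & ~np.eye(3, dtype=bool)
```

The portrait then serves as the empirical system of statistically significant relationships that, per the abstract, is translated into a model of economic relationships; higher-order portraits would repeat this on partial correlations.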


Geophysics ◽  
2019 ◽  
Vol 84 (5) ◽  
pp. R741-R751 ◽  
Author(s):  
Zhen-Dong Zhang ◽  
Tariq Alkhalifah

Obtaining high-resolution models of the earth, especially around the reservoir, is crucial to properly image and interpret the subsurface. We have developed a regularized elastic full-waveform inversion (FWI) method that uses facies as prior information. Deep neural networks (DNNs) are trained to estimate the distribution of facies in the subsurface; here, we use facies extracted from wells as the prior information. Seismic data, well logs, and interpreted facies offer different resolution and illumination of the subsurface. Moreover, physical processes such as anelasticity in the subsurface are often too complicated to be fully considered. Therefore, there are often no explicit formulas to connect the data coming from different geophysical surveys. A deep-learning method can find the statistically correct connection without the need to know the complex physics. In our deep-learning scheme, we specifically use it to assist the inverse problem instead of the widely used labeling task. First, we conduct an adaptive data-selection elastic FWI using the observed seismic data and obtain estimates of the subsurface, which need not be perfect. Then, we use the facies information extracted from the wells and force the estimated model to fit the facies by training DNNs. In this way, a list of facies is mapped to a 2D or 3D inverted model, guided mainly by the structural features of the model. The multidimensional distribution of facies is used either as a regularization term or as an initial model for the next waveform inversion. Our method has two main features: (1) it applies to any kind of distribution of data samples, and (2) it interpolates facies between wells guided by the structure of the estimated models. Results with synthetic and field data illustrate the benefits and limitations of this method.
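The facies-based regularization term can be sketched as follows; mapping each model cell to its nearest representative facies is a simplified stand-in for the DNN-predicted facies distribution described in the abstract, and the facies property values are hypothetical:

```python
import numpy as np

# Hypothetical facies: each row is a representative (vp [m/s], vs [m/s],
# density [g/cm^3]) triple, e.g. as extracted from well logs.
FACIES = np.array([[2000.0,  900.0, 2.1],
                   [3500.0, 1900.0, 2.4],
                   [4500.0, 2600.0, 2.7]])

def facies_regularizer(model):
    """For each model cell (row of vp, vs, rho), find the nearest facies.
    Returns the quadratic penalty (added to the FWI misfit) and the
    facies-projected model (usable as the next starting model)."""
    d = np.linalg.norm(model[:, None, :] - FACIES[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return np.sum(nearest**2), FACIES[d.argmin(axis=1)]
```

The two return values mirror the two uses named in the abstract: the penalty acts as a regularization term, while the projected model can seed the next waveform-inversion iteration.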


2019 ◽  
pp. 36-42
Author(s):  
V. Ignatkin

The article discusses the metrological reliability of measuring equipment (ME) and argues that ME imprecision must be considered not in statics but in dynamics, taking into account the change of its characteristics over time. Measurement imprecision and its components are considered as random processes that are fully characterized by a multidimensional distribution. Owing to the difficulty of solving the problem analytically, it is advisable to determine the metrological reliability characteristics directly from experiment. The characteristics of dynamic imprecision depend both on the values of the measured object and on the ME properties. The physical cause of dynamic imprecision is the inertia of the ME; its exhaustive description relies on the Duhamel integral, which determines the response of an inertial link to the input influence. As a criterion for signal differences one can use quite different functionals, taking into account the further use of the measurement results, the convenience of computing, the properties of the input influences, and so on. It is most expedient to use the dispersion of the signal differences. To calculate the parameters of dynamic imprecision it is necessary to know the energy spectrum of the input signal. The given relations can be used for both stationary and non-stationary processes. The paper provides examples of using these relations and gives recommendations for reducing measurement errors in each particular case.
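The Duhamel-integral description of an inertial measuring link, with the dispersion of the signal difference as the error criterion, can be sketched as follows; the first-order lag, its time constant, and the test signal are illustrative assumptions, not taken from the article:

```python
import numpy as np

def first_order_response(x, dt, tau):
    """Response of an inertial (first-order lag) measuring link, computed
    as the discretised Duhamel (convolution) integral of the input with
    the impulse response h(t) = exp(-t/tau) / tau."""
    t = np.arange(len(x)) * dt
    h = np.exp(-t / tau) / tau
    return np.convolve(x, h)[: len(x)] * dt

dt, tau = 0.001, 0.05                    # sampling step [s], ME time constant [s]
t = np.arange(0, 1, dt)
x = np.sin(2 * np.pi * 2 * t)            # illustrative 2 Hz input influence
y = first_order_response(x, dt, tau)

# Dispersion (variance) of the signal difference: the dynamic-error criterion.
dyn_err_var = np.var(y - x)
```

The same variance can equivalently be obtained from the energy spectrum of the input weighted by the link's frequency response, which is why the article emphasises knowing that spectrum.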

