Sample size, position, and structure effects on magnetization measurements using second-order gradiometer pickup coils

2006 ◽  
Vol 77 (1) ◽  
pp. 015106 ◽  
Author(s):  
P. Stamenov ◽  
J. M. D. Coey


2021 ◽  
Vol 4 (2) ◽  
Author(s):  
Anton Iehorovych Bereznytskyi

To determine the technical condition of power-engineering objects, with the objective of ensuring their operational reliability, durability, and safety, noise-diagnostics systems based on the analysis of acoustic diagnostic signals are used. A promising direction in noise diagnostics is cumulant methods, which rely on cumulant analysis and involve the use of cumulants and cumulant coefficients. The known literature contains no detection characteristics for a signal within an interference-containing additive mixture using the second-order cumulant (the variance). The objective of the paper is therefore to study a cumulant method based on point estimates of the variance of a sample of momentary values for detecting an acoustic signal against a background of noise interference. The research was carried out by modeling the additive mixture of signal and interference in the MATLAB® software package. The interference is a model of the noise acoustic signal that accompanies the operation of properly functioning equipment; the signal is a model of the acoustic signal produced when a malfunction occurs. Since the signal and the interference are independent random variables, the additivity property of cumulants was used: the variance of the mixture equals the sum of the variances of the signal and the interference. The decision about the presence of a signal was made by testing two statistical hypotheses. Under the null hypothesis the signal is absent and the variance equals the variance of the interference; under the alternative hypothesis the signal is present and the variance equals the variance of the mixture. Additional parameters: probability of a Type I error 0.01, probability of correct detection 0.99. The relative error of estimation determined the minimal sample size. These values allowed the calculation of a threshold value; when the variance estimate exceeds it, the decision on the presence of a signal is made.
For each sample, variance estimates were computed. The experimental probability of correct detection, calculated as the number of decisions in favor of the presence of a signal divided by the number of realizations, corresponds to the specified probability of correct detection. Its relative error was calculated in order to verify the validity of the results. Kernel density estimates of the distribution of the variance estimate were also constructed for the case of a signal with a normal distribution; as the graphs show, the estimates have a distribution close to normal. The study demonstrates that a variance-based cumulant method makes it possible to detect a signal against a background of noise interference. The necessary sample size, i.e. the number of momentary values required, is given in the paper. Thus, given the sampling frequency of an analogue-to-digital converter, the required duration of the recording of a real signal for estimating its variance can be obtained, and the decision on the presence or absence of a signal is made on the basis of the specified threshold values. The results of the study can be added to the known sample sizes and threshold values for the coefficients of skewness and kurtosis (asymmetry and excess) under different distributions. Application of the described method requires additional testing on real acoustic signals; its area of use is in noise-diagnostics systems.
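The detection scheme above can be sketched numerically. The following is a minimal Python stand-in (not the paper's MATLAB® code): it assumes Gaussian interference and signal with illustrative standard deviations, approximates the 0.99-quantile threshold for the sample variance via a normal approximation to the chi-square law, and checks the empirical Type I error and detection rate by Monte Carlo simulation.

```python
import numpy as np

rng = np.random.default_rng(0)

sigma_noise = 1.0    # assumed interference (healthy equipment) standard deviation
sigma_signal = 0.5   # assumed fault-signal standard deviation
n = 500              # sample size: number of momentary values per record
n_trials = 2000      # Monte Carlo realizations

# Under H0, (n-1)*s^2/sigma^2 is chi-square with n-1 degrees of freedom.
# For large n a normal approximation gives the 0.99-quantile threshold,
# so the probability of a Type I error is about 0.01.
Z99 = 2.326  # standard normal 0.99-quantile
threshold = sigma_noise**2 * (1.0 + Z99 * np.sqrt(2.0 / (n - 1)))

def signal_present(samples):
    """Decide 'signal present' when the variance estimate exceeds the threshold."""
    return np.var(samples, ddof=1) > threshold

# H1: additive mixture; by the additivity of cumulants its variance is the
# sum of the variances of the independent signal and interference.
false_alarms = np.mean([signal_present(rng.normal(0.0, sigma_noise, n))
                        for _ in range(n_trials)])
detections = np.mean([signal_present(rng.normal(0.0, sigma_noise, n)
                                     + rng.normal(0.0, sigma_signal, n))
                      for _ in range(n_trials)])
print(f"empirical Type I error: {false_alarms:.3f}, detection rate: {detections:.3f}")
```

With these illustrative parameters the empirical false-alarm rate stays near the specified 0.01, while the detection rate depends on the signal-to-interference variance ratio and the sample size, as in the paper.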


Mathematics ◽  
2020 ◽  
Vol 8 (7) ◽  
pp. 1151
Author(s):  
Gerd Christoph ◽  
Vladimir V. Ulyanov

We consider high-dimension, low-sample-size data drawn from the standard multivariate normal distribution under the assumption that the dimension is a random variable. Second-order Chebyshev–Edgeworth expansions, with error bounds, are constructed for the distributions of the angle between two sample observations and of the corresponding sample correlation coefficient. Depending on the type of normalization, three different limit distributions arise: the normal, Student's t-, or Laplace distribution. The paper continues the authors' studies on the approximation of statistics for random-size samples.
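As a loose numerical companion to the normal-limit case only (the random dimension and the Edgeworth corrections of the paper are beyond this sketch), the Python snippet below draws pairs of independent standard normal observations in a fixed high dimension and checks that the cosine of the angle between them, scaled by the square root of the dimension, is approximately standard normal.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 2000        # dimension (fixed here; in the paper it is a random variable)
n_pairs = 3000  # number of observation pairs

# Two independent observations from the standard d-variate normal law.
x = rng.standard_normal((n_pairs, d))
y = rng.standard_normal((n_pairs, d))

# Cosine of the angle between the two observations; scaled by sqrt(d)
# it is approximately standard normal for large d.
cos_angle = np.sum(x * y, axis=1) / (np.linalg.norm(x, axis=1) * np.linalg.norm(y, axis=1))
z = np.sqrt(d) * cos_angle
print(f"mean: {z.mean():.3f}, std: {z.std():.3f}")  # close to 0 and 1
```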


2013 ◽  
Vol 30 (03) ◽  
pp. 1340002 ◽  
Author(s):  
HAILIN SUN ◽  
HUIFU XU ◽  
YONG WANG

In this paper, we propose a smoothing penalized sample average approximation (SAA) method for solving a stochastic minimization problem with second-order dominance constraints. The basic idea is to use the sample average to approximate the expected values of the underlying random functions and then reformulate the discretized problem as an ordinary nonlinear programming problem with a finite number of constraints. An exact penalty function method is proposed to deal with the latter, and an elementary smoothing technique is used to tackle the nonsmoothness of the plus function and of the exact penalty function. We investigate the convergence of the optimal value obtained from solving the smoothed penalized sample average approximation problem as the sample size increases, and show that, with probability approaching one at an exponential rate in the sample size, the optimal value converges to its true counterpart. Some preliminary numerical results are reported.
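The plain SAA idea underlying the method can be illustrated on a toy unconstrained problem; the dominance constraints, exact penalty, and smoothing of the paper are omitted, and the problem and distribution below are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy problem: minimize E[(x - xi)^2] with xi ~ N(1, 1).
# The true minimizer is x* = E[xi] = 1 and the true optimal value is Var(xi) = 1.
def saa_solve(n_samples):
    """Solve the SAA problem min_x (1/N) * sum_i (x - xi_i)^2."""
    xi = rng.normal(1.0, 1.0, n_samples)
    x_hat = xi.mean()  # the quadratic SAA objective is minimized at the sample mean
    return x_hat, np.mean((x_hat - xi) ** 2)

for n in (10, 1000, 100000):
    x_hat, val = saa_solve(n)
    print(f"N = {n:6d}: x_hat = {x_hat:.3f}, SAA optimal value = {val:.3f}")
```

As the sample size N grows, both the SAA minimizer and the SAA optimal value approach their true counterparts, which is the convergence behavior the paper establishes (at an exponential rate in probability) for its much richer constrained setting.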


2021 ◽  
Vol 4 (3) ◽  
pp. 47-63
Author(s):  
Owhondah P.S. ◽  
Enegesele D. ◽  
Biu O.E. ◽  
Wokoma D.S.A.

The study deals with discriminating between second-order models with and without interaction, centered on measures of central tendency, using the ordinary least squares (OLS) method for the estimation of the model parameters. The paper considered two data sets of different sample sizes. The small sample used the unemployment rate as the response and the inflation rate and exchange rate as predictors, from 2007 to 2018; the large sample was flow-rate data on hydrate formation for a Niger Delta deep offshore field. The R^2, AIC, SBC, and SSE were computed for both data sets to test the adequacy of the models. The results show that all three models are similar for the smaller data set, while for the large data set the second-order model centered on the median, with or without interaction, is the best based on the number of significant parameters. The model selection criterion values (R^2, AIC, SBC, and SSE) were found to be equal for the models centered on the median and on the mode for both the large and the small data sets. However, the models centered on the median and mode, with or without interaction, were better than the model centered on the mean for the large data set. The study therefore shows that second-order regression models centered on the median or mode, with or without interaction, outperform the model centered on the mean for large data sets, while the models are similar for smaller data sets.
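A minimal sketch of this model comparison is given below, using synthetic data with hypothetical coefficients rather than the paper's unemployment or flow-rate series: it fits second-order OLS models with predictors centered at the mean and at the median and computes SSE, R^2, and AIC. Note that when an intercept is included, centering is only a reparameterization, so the criterion values coincide across centerings, consistent with the equal criterion values the paper reports.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the paper's data (coefficients are hypothetical).
n = 200
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 0.5 * x1 - 0.3 * x2 + 0.2 * x1**2 + 0.1 * x1 * x2 + rng.normal(0.0, 0.5, n)

def fit_criteria(c1, c2, interaction=True):
    """OLS fit of a second-order model with predictors centered at (c1, c2)."""
    z1, z2 = x1 - c1, x2 - c2
    cols = [np.ones(n), z1, z2, z1**2, z2**2]
    if interaction:
        cols.append(z1 * z2)
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sse = float(resid @ resid)
    r2 = 1.0 - sse / float((y - y.mean()) @ (y - y.mean()))
    k = X.shape[1]
    aic = n * np.log(sse / n) + 2 * k  # AIC up to an additive constant
    return sse, r2, aic

for name, c1, c2 in [("mean", x1.mean(), x2.mean()),
                     ("median", np.median(x1), np.median(x2))]:
    sse, r2, aic = fit_criteria(c1, c2)
    print(f"centered on {name}: SSE = {sse:.3f}, R^2 = {r2:.3f}, AIC = {aic:.2f}")
```

The centerings differ in the interpretation and significance of the individual coefficients (the basis of the paper's comparison), not in the overall fit criteria of the full model.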


2018 ◽  
Vol 61 (4) ◽  
pp. 1701-1717
Author(s):  
Toyoto Tanaka ◽  
Yoshihiro Hirose ◽  
Fumiyasu Komaki

Author(s):  
Gerd Christoph ◽  
Vladimir V. Ulyanov ◽  
Vladimir E. Bening

Author(s):  
W. L. Bell

Disappearance voltages for second-order reflections can be determined experimentally in a variety of ways. The more subjective methods, such as Kikuchi-line disappearance and bend-contour imaging, involve comparing a series of diffraction patterns or micrographs taken at intervals throughout the disappearance range and selecting the voltage that gives the strongest disappearance effect. The estimated accuracies of both methods are within 10 kV, or about 2-4%, of the true disappearance voltage, which is quite sufficient for using these voltages in further calculations. However, the necessity of determining this information by comparing exposed plates, rather than while operating the microscope, detracts from the immediate usefulness of these methods when there is reason to perform experiments at an as-yet-unknown disappearance voltage. The convergent beam technique for determining the disappearance voltage has been found to be a highly objective method when it is applicable, i.e. when reasonable crystal perfection exists and an area of uniform thickness can be found. The criterion for determining this voltage is that the central maximum disappear from the rocking curve for the second-order spot.

