Wavelet estimation in OFBM: Choosing scale parameter in different sampling methods and different parameter values

2020 ◽  
Vol 166 ◽  
pp. 108877
Author(s):  
Jeonghwa Lee


2020 ◽  
Vol 9 (1) ◽  
pp. 189-203
Author(s):  
Abbas Eftekharian ◽  
Mostafa Razmkhah ◽  
Jafar Ahmadi

A flexible ranked set sampling scheme that includes several existing sampling methods as special cases is proposed. This scheme may be used to minimize both the ranking error and the cost of sampling. Based on data obtained from this scheme, maximum likelihood estimation and the Fisher information are studied for the scale family of distributions. The existence and uniqueness of the maximum likelihood estimator of the scale parameter are investigated for the exponential and normal distributions. Moreover, the optimal scheme is derived via simulation and numerical computations.
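
As an illustration of how maximum likelihood estimation proceeds under ranked set sampling, the sketch below simulates one balanced RSS design from an exponential scale family and maximises the resulting order-statistic likelihood numerically. This is a minimal sketch under assumptions not taken from the paper: perfect rankings, a fixed set size m = 3, and standard SciPy routines rather than the authors' flexible scheme.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import comb
from scipy.stats import expon

rng = np.random.default_rng(1)

def rss_sample(theta, m, cycles):
    """Balanced RSS: in each cycle, for r = 1..m, draw a set of m units,
    rank them (perfect ranking assumed), and measure the r-th order statistic."""
    obs, ranks = [], []
    for _ in range(cycles):
        for r in range(1, m + 1):
            s = np.sort(rng.exponential(theta, size=m))
            obs.append(s[r - 1])
            ranks.append(r)
    return np.array(obs), np.array(ranks)

def neg_loglik(theta, obs, ranks, m):
    """Negative log-likelihood: each observation is the r-th order
    statistic of an i.i.d. exponential(theta) sample of size m."""
    F = expon.cdf(obs, scale=theta)
    f = expon.pdf(obs, scale=theta)
    dens = ranks * comb(m, ranks) * F**(ranks - 1) * (1 - F)**(m - ranks) * f
    return -np.sum(np.log(dens))

obs, ranks = rss_sample(theta=2.0, m=3, cycles=50)
res = minimize_scalar(neg_loglik, bounds=(0.01, 20.0), args=(obs, ranks, 3),
                      method="bounded")
print("MLE of the scale parameter:", res.x)
```

The factor r·C(m, r)·F^(r−1)·(1−F)^(m−r)·f is the density of the r-th order statistic of an i.i.d. sample of size m, which is what each RSS measurement contributes to the likelihood.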


Atmosphere ◽  
2021 ◽  
Vol 12 (7) ◽  
pp. 897
Author(s):  
J. Agustín García ◽  
Mario M. Pizarro ◽  
F. Javier Acero ◽  
M. Isabel Parra

A Bayesian hierarchical framework with a Gaussian copula and a generalized extreme value (GEV) marginal distribution is proposed for the description of spatial dependencies in data. This spatial copula model was applied to extreme summer temperatures over the Extremadura Region, in the southwest of Spain, during the period 1980–2015, and compared with the spatial noncopula model. The Bayesian hierarchical model was implemented with a Markov chain Monte Carlo (MCMC) method that allows the distributions of the model’s parameters to be estimated. The results show that the GEV distribution’s shape parameter takes constant negative values, the location parameter is altitude dependent, and the scale parameter values are concentrated around the same value throughout the region. Further, the chosen spatial copula model presents lower deviance information criterion (DIC) values when spatial distributions are assumed for the GEV distribution’s location and scale parameters than when the scale parameter is taken to be constant over the region.
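
For a concrete flavour of the marginal model only, the sketch below fits a GEV distribution by maximum likelihood to hypothetical annual summer maxima for a single station; it does not reproduce the paper's Bayesian hierarchy or Gaussian copula. Note SciPy's sign convention for the GEV shape parameter.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical summer maxima for one station, 1980-2015 (36 values).
# SciPy parameterises the shape as c = -xi, so the constant negative
# shape xi reported in the study corresponds to a positive c here.
rng = np.random.default_rng(2)
maxima = genextreme.rvs(c=0.2, loc=38.0, scale=2.0, size=36, random_state=rng)

c_hat, loc_hat, scale_hat = genextreme.fit(maxima)  # maximum likelihood fit
print(f"shape xi = {-c_hat:.2f}, location = {loc_hat:.1f}, scale = {scale_hat:.2f}")
```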


Author(s):  
Badrinath Roysam ◽  
Hakan Ancin ◽  
Douglas E. Becker ◽  
Robert W. Mackin ◽  
Matthew M. Chestnut ◽  
...  

This paper summarizes recent advances made by this group in the automated three-dimensional (3-D) image analysis of cytological specimens that are much thicker than the depth of field, and much wider than the field of view, of the microscope. The imaging of thick samples is motivated by the need to sample large volumes of tissue rapidly, to make more accurate measurements than are possible with 2-D sampling, and to perform analysis in a manner that preserves the relative locations and 3-D structures of the cells. The motivation to study specimens much wider than the field of view arises when measurements and insights at the tissue, rather than the cell, level are needed. The term “analysis” covers activities ranging from cell counting, neuron tracing, cell morphometry, and measurement of tracers, through characterization of large populations of cells with regard to higher-level tissue organization by detecting patterns such as 3-D spatial clustering, the presence of subpopulations, and their relationships to each other. Of even more interest are changes in these parameters as a function of development, and as a reaction to external stimuli. There is a widespread need to measure structural changes in tissue caused by toxins, physiologic states, biochemicals, aging, development, and electrochemical or physical stimuli. These agents could affect the number of cells per unit volume of tissue, affect cell volume and shape, and cause structural changes in individual cells or their interconnections, or subtle changes in higher-level tissue architecture. It is important to process large intact volumes of tissue to achieve adequate sampling and sensitivity to subtle changes. It is desirable to perform such studies rapidly, with utmost automation, and at minimal cost. Automated 3-D image analysis methods offer unique advantages and opportunities, without making simplifying assumptions of tissue uniformity, unlike random sampling methods such as stereology [1, 2]. Although stereological methods are known to be statistically unbiased, they may not be statistically efficient. Another disadvantage of sampling methods is the lack of full visual confirmation, an attractive feature of image-analysis-based methods.
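
As one concrete instance of the kind of automated analysis described, the sketch below counts bright objects in a synthetic 3-D volume by thresholding and connected-component labelling with SciPy; the volume, threshold, and object sizes are made up for illustration, and a real pipeline would add proper segmentation and artifact rejection.

```python
import numpy as np
from scipy import ndimage

# Hypothetical 64x64x64 image stack with two bright synthetic "cells".
rng = np.random.default_rng(3)
volume = rng.normal(scale=0.5, size=(64, 64, 64))
volume[20:25, 20:25, 20:25] += 6.0
volume[40:44, 40:44, 40:44] += 6.0

mask = volume > 3.0                    # global intensity threshold
labels, n_cells = ndimage.label(mask)  # 6-connected components in 3-D
# Centroids preserve the relative 3-D locations of the detected cells.
centroids = ndimage.center_of_mass(mask, labels, range(1, n_cells + 1))
print("cells found:", n_cells)
print("3-D centroids (z, y, x):", centroids)
```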


1987 ◽  
Vol 26 (06) ◽  
pp. 248-252 ◽  
Author(s):  
M. J. van Eenige ◽  
F. C. Visser ◽  
A. J. P. Karreman ◽  
C. M. B. Duwel ◽  
G. Westera ◽  
...  

Optimal fitting of a myocardial time-activity curve is accomplished with a monoexponential plus a constant, resulting in three parameters: the amplitude and half-time of the monoexponential, and the constant. The aim of this study was to estimate the precision of the calculated parameters. The variability of the parameter values as a function of the acquisition time was studied in 11 patients with cardiac complaints. Of the three parameters, the half-time value varied most strongly with the acquisition time. An acquisition time of 80 min was needed to keep the standard deviation (SD) of the half-time value within ±10%. A model experiment was used to estimate the SD of the half-time value as a function of the parameter values, the noise content of the time-activity curve, and the acquisition time. In most cases the SD decreased by 50% when the acquisition time was increased from 60 to 90 min. A low amplitude/constant ratio and a high half-time value result in a high SD of the half-time value. Tables are presented for estimating the SD in a particular case.
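
The model is y(t) = A·exp(−ln 2 · t / T½) + C. Below is a minimal sketch of such a fit on hypothetical data, using nonlinear least squares with parameter SDs read off the covariance matrix (all values are made up):

```python
import numpy as np
from scipy.optimize import curve_fit

def washout(t, A, T_half, C):
    """Monoexponential plus a constant: amplitude A, half-time T_half, constant C."""
    return A * np.exp(-np.log(2.0) * t / T_half) + C

# Hypothetical noisy time-activity curve over an 80 min acquisition.
rng = np.random.default_rng(0)
t = np.arange(0.0, 81.0, 1.0)  # minutes
y = washout(t, A=100.0, T_half=25.0, C=20.0) + rng.normal(scale=3.0, size=t.size)

popt, pcov = curve_fit(washout, t, y, p0=[80.0, 20.0, 10.0])
perr = np.sqrt(np.diag(pcov))  # 1-sigma SDs of A, T_half, C
for name, val, err in zip(("A", "T_half", "C"), popt, perr):
    print(f"{name} = {val:.1f} +/- {err:.1f}")
```

Truncating t (e.g. to 60 min) and refitting shows the half-time SD growing, in line with the acquisition-time effect reported above.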


2017 ◽  
Vol 24 (1) ◽  
pp. 35-53
Author(s):  
Sulastiningsih Sulastiningsih ◽  
Intan Ayu Candra

The purpose of this study is to test: (1) whether time pressure, locus of control, the action of supervision, and materiality partially affect the premature termination of audit procedures; and (2) whether time pressure, locus of control, supervision, and materiality simultaneously affect the premature termination of audit procedures. The research was conducted in public accounting firms (KAP) in the Yogyakarta region, with a sample of 12 firms; 105 questionnaires were distributed, 57 were returned (54%), and 34 of the returned questionnaires could be processed (34%). The samples were determined using non-probability sampling, specifically purposive sampling. Data analysis consisted of: (1) validity, reliability, and classical-assumption tests, which showed that the instruments used are reliable and valid; (2) multiple linear regression analysis, which showed that (a) some of the independent variables partially affect premature termination of the audit procedures, while the action of supervision has no influence, (b) all independent variables simultaneously influence premature termination of the audit procedures, and (c) the independent variables explain 55% of the variation in premature termination of the audit procedures, with the remainder influenced by other variables; and (3) a Friedman test, which shows that there is an order of priority among the audit procedures being terminated.
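
For the third step, a Friedman test checks whether respondents agree on an ordering. A minimal sketch on made-up rankings (34 usable respondents, five hypothetical procedures; the real questionnaire items are not reproduced here):

```python
import numpy as np
from scipy.stats import friedmanchisquare

# 34 respondents each rank 5 hypothetical audit procedures
# (1 = terminated prematurely most often); the data are made up.
rng = np.random.default_rng(4)
ranks = np.array([rng.permutation(5) + 1 for _ in range(34)])

# friedmanchisquare expects one sample per procedure across respondents.
stat, p = friedmanchisquare(*ranks.T)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.3f}")
```

A small p-value, as found in the study, indicates a consistent priority order across respondents; the random rankings above would typically not show one.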


2000 ◽  
Author(s):  
Z. Bai ◽  
J. Zhang ◽  
G. Rhoads ◽  
P. Lioy ◽  
S. Tsai ◽  
...  

1999 ◽  
Author(s):  
N. McCullough ◽  
L. Brosseau ◽  
C. Pilon ◽  
D. Vesley

Author(s):  
Yaniv Aspis ◽  
Krysia Broda ◽  
Alessandra Russo ◽  
Jorge Lobo

We introduce a novel approach for the computation of stable and supported models of normal logic programs in continuous vector spaces by a gradient-based search method. Specifically, the application of the immediate consequence operator of a program reduct can be computed in a vector space. To do this, Herbrand interpretations of a propositional program are embedded as 0-1 vectors in $\mathbb{R}^N$ and program reducts are represented as matrices in $\mathbb{R}^{N \times N}$. Using these representations, we prove that the underlying semantics of a normal logic program is captured through matrix multiplication and a differentiable operation. As supported and stable models of a normal logic program can now be seen as fixed points in a continuous space, non-monotonic deduction can be performed using an optimisation process such as Newton's method. We report the results of several experiments using synthetically generated programs that demonstrate the feasibility of the approach and highlight how different parameter values can affect the behaviour of the system.
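
A minimal sketch of the matrix encoding of the immediate consequence operator, on a toy definite program and with a hard threshold in place of the paper's differentiable operation (the program and the one-rule-per-head restriction are illustrative assumptions):

```python
import numpy as np

# Toy definite program (think of it as the reduct of a normal program):
#   p <- q, r.    q <- .    r <- q.
atoms = ["p", "q", "r"]
N = len(atoms)
idx = {a: i for i, a in enumerate(atoms)}
rules = [("p", ["q", "r"]), ("q", []), ("r", ["q"])]

# Rule matrix M: weight 1/|body| per body atom, so (M @ v)[head] reaches 1
# exactly when the whole body is true in the 0-1 interpretation vector v.
M = np.zeros((N, N))
facts = np.zeros(N)
for head, body in rules:
    if body:
        M[idx[head], [idx[b] for b in body]] = 1.0 / len(body)
    else:
        facts[idx[head]] = 1.0  # facts are always derived

def t_p(v):
    """One application of the immediate consequence operator T_P
    (hard threshold here, in place of a differentiable operation)."""
    return np.maximum(facts, (M @ v >= 1.0 - 1e-9).astype(float))

# Iterate to the least fixed point: the least model of the reduct.
v = np.zeros(N)
while not np.array_equal(t_p(v), v):
    v = t_p(v)
print(dict(zip(atoms, v)))  # {'p': 1.0, 'q': 1.0, 'r': 1.0}
```

Fixed points of $T_P$ over such vectors correspond to supported models, which is what makes a search over the continuous relaxation, as in the paper, possible.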

