Multiple Comparisons for Exponential Median Lifetimes with the Control Based on Doubly Censored Samples

Mathematics ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 76
Author(s):  
Shu-Fei Wu

Under double censoring, one-stage multiple comparison procedures with the control in terms of exponential median lifetimes are presented. The uniformly minimum variance unbiased estimator for the median lifetime is found. Upper bounds, lower bounds, and two-sided confidence intervals for the difference between each median lifetime and the median lifetime of the control population are developed. Statistical tables of critical values are constructed for the practical use of the proposed procedures. Users can apply these simultaneous confidence intervals to determine whether treatment populations perform better or worse than the control population in the agricultural and pharmaceutical industries. Finally, a practical example is provided to illustrate the proposed procedures.
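As a rough illustration of the quantity being estimated (not the authors' doubly censored procedure): for an exponential lifetime distribution with mean theta, the median is theta·ln 2, so a complete-sample plug-in estimate and a treatment-minus-control median difference can be sketched as:

```python
import math

def exp_median_estimate(lifetimes):
    """Plug-in estimate of the median of an exponential lifetime
    distribution: median = theta * ln(2), with theta estimated by the
    sample mean of a complete (uncensored) sample -- a simplification
    of the doubly censored setting treated in the paper."""
    theta_hat = sum(lifetimes) / len(lifetimes)
    return theta_hat * math.log(2)

# Hypothetical treatment vs. control lifetimes (illustrative data only)
control = [1.2, 0.8, 2.5, 1.9, 0.4]
treatment = [2.1, 3.0, 1.7, 2.8, 2.2]
diff = exp_median_estimate(treatment) - exp_median_estimate(control)
```

A positive `diff` would suggest the treatment population's median lifetime exceeds the control's; the paper's simultaneous intervals make that comparison rigorous under censoring.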

2007 ◽  
Vol 15 (1) ◽  
pp. 41-72
Author(s):  
Won Cheol Yun

This study empirically compares the hedging performance of the newly listed Japanese yen (JPY) and European euro (EUR) currency futures on the KRX with that of the US dollar (USD) currency futures. For this purpose, assuming a foreign-asset investment setting, minimum variance hedging models based on OLS and ECM are compared with a simple 1:1 hedge. This study differs from previous work in that it uses several kinds of hedging performance measures and analyzes hedging performance over different hedging horizons. According to the empirical results, the USD currency futures outperform the JPY and EUR currency futures when considering risk only. However, the results are reversed when incorporating return as well as risk. With respect to the comparative advantages among hedging types, the ECM hedge turns out to be better than the others when evaluating risk only, and the 1:1 hedge proves superior when considering both the return and risk aspects. On the risk-reduction criterion, hedging performance gradually improves as the hedging period lengthens, while it deteriorates when both return and risk are considered.
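The minimum variance hedging models mentioned above rest on the classical OLS hedge ratio h* = Cov(Δs, Δf)/Var(Δf); a minimal sketch (illustrative only, not the study's ECM specification) is:

```python
def min_variance_hedge_ratio(spot_changes, futures_changes):
    """OLS estimate of the minimum-variance hedge ratio:
    h* = Cov(ds, df) / Var(df)."""
    n = len(spot_changes)
    ms = sum(spot_changes) / n
    mf = sum(futures_changes) / n
    cov = sum((s - ms) * (f - mf)
              for s, f in zip(spot_changes, futures_changes)) / n
    var = sum((f - mf) ** 2 for f in futures_changes) / n
    return cov / var
```

A 1:1 hedge simply fixes this ratio at one; the study's comparison asks whether estimating h* actually pays off out of sample.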


2013 ◽  
Vol 321-324 ◽  
pp. 561-567 ◽  
Author(s):  
Ming Xiao ◽  
Liang Pan ◽  
Yan Long Bu ◽  
Li Li Wan

The traditional Kalman filter obtains the optimal estimate of the estimated signals but does not account for their reliability. In real applications the estimated signals may include outliers; fortunately, the reliability of the signals is often known a priori. In this paper, we derive one-dimensional data fusion formulas that incorporate signal reliability under a minimum variance restriction, and we propose a corresponding data fusion scheme. Experimental results show the proposed data fusion method performs much better than traditional methods.
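For two scalar estimates with known variances, the standard minimum-variance fusion weights each estimate inversely to its variance; a minimal one-dimensional sketch (a generic illustration, not the paper's reliability-weighted formulas) is:

```python
def fuse(x1, var1, x2, var2):
    """Minimum-variance fusion of two scalar estimates: each estimate
    is weighted inversely to its variance, and the fused variance is
    smaller than either input variance."""
    w1 = var2 / (var1 + var2)
    w2 = var1 / (var1 + var2)
    fused = w1 * x1 + w2 * x2
    fused_var = var1 * var2 / (var1 + var2)
    return fused, fused_var
```

With equal variances the fusion is a plain average; an outlier with a large (unreliable) variance is automatically down-weighted, which is the intuition behind using reliability in the fusion rule.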


2010 ◽  
Vol 138 (6) ◽  
pp. 2174-2187 ◽  
Author(s):  
Lucas M. Harris ◽  
Dale R. Durran

Abstract Most mesoscale models can be run with either one-way (parasitic) or two-way (interactive) grid nesting. This paper presents results from a linear 1D shallow-water model to determine whether the choice of nesting method can have a significant impact on the solution. Two-way nesting was found to be generally superior to one-way nesting. The only situation in which one-way nesting performs better than two-way is when very poorly resolved waves strike the nest boundary. A simple filter is proposed for use exclusively on the coarse-grid values within the sponge zone of an otherwise conventional sponge boundary condition (BC). The two-way filtered sponge BC gives better results than any of the other methods considered in these tests. Results for all wavelengths were found to be robust to other changes in the formulation of the sponge boundary, including the width of the sponge layer. The increased reflection for longer-wavelength disturbances in the one-way case is due to a phase difference between the coarse- and nested-grid solutions at the nested-grid boundary that accumulates because of the difference in numerical phase speeds between the grids. Reflections for two-way nesting may be estimated from the difference in numerical group velocities between the coarse and nested grids, which only becomes large for waves that are poorly resolved on the coarse grid.


2019 ◽  
Author(s):  
Wen Li Zhao ◽  
Yu Jiu Xiong ◽  
Kyaw Tha Paw U ◽  
Pierre Gentine ◽  
Baoyu Chen ◽  
...  

Abstract. Quantifying the uncertainties induced by resistance parameterization is fundamental to understanding, improving, and developing terrestrial evapotranspiration (ET) models. Using high-density eddy covariance (EC) tower observations in a heterogeneous oasis in Northwest China, this study evaluates the impact of resistances on estimates of latent heat flux (LE), the energy equivalent of ET, by comparing resistance parameterizations of varied complexity under one- and two-source Penman-Monteith (PM) equations. We then discuss possible solutions for reducing such uncertainties by employing a three-temperature (3T) model, which does not explicitly include resistance-related parameters. The results show that the mean absolute percent error (MAPE) varied from 32 % to 39 % for the LE estimates from the one- and two-source PM equations. When only surface resistance (rs) was parameterized under the one-source network, the uncertainty (defined as the difference between MAPEs) dropped to 12 %. When both rs and aerodynamic resistance (ra) were parameterized differently under the one- and two-source networks, the uncertainties in the estimates were 11 % to 23 %, emphasizing that multiple resistances add uncertainty. Additionally, the 3T model performed better than the PM equations, with a MAPE of 19 %. The results suggest that 1) although prior calibration of the parameters required in resistance estimation can improve PM-based LE estimates, the resistance parameterization process can generate considerable uncertainty; 2) more complex resistance parameterizations lead to more uncertainty in the LE estimates; and 3) the relatively simple 3T model avoids resistance parameterization, thus introducing less uncertainty in the LE estimates.
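The mean absolute percent error used above to quantify uncertainty can be computed as follows (a generic sketch, not the study's code):

```python
def mape(estimates, observations):
    """Mean absolute percent error (in %): the average of
    |estimate - observation| / |observation| across samples."""
    return 100.0 * sum(abs(e - o) / abs(o)
                       for e, o in zip(estimates, observations)) / len(estimates)
```

The paper's "uncertainty" is then the spread of MAPE values across different resistance parameterizations of the same LE flux.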


2019 ◽  
Vol 48 (4) ◽  
pp. 58-89
Author(s):  
Ajit Chaturvedi ◽  
Suk-Bok Kang ◽  
Ananya Malhotra

We consider two measures of reliability, namely R(t) = P(X > t) and P = P(X > Y), for the Moore and Bilikam (1978) family of lifetime distributions, which covers fourteen distributions as special cases. For record data from this family, preliminary test estimators (PTEs) and a preliminary test confidence interval (PTCI) based on the uniformly minimum variance unbiased estimator (UMVUE), the maximum likelihood estimator (MLE), and the empirical Bayes estimator (EBE) are obtained for the parameter. The bias and mean square error (MSE), exact and asymptotic, of the proposed estimators are derived to study their relative efficiency, and through simulation studies we establish that the PTEs perform better than the ordinary UMVUE, MLE, and EBE. We also obtain the coverage probability (CP) and expected length of the PTCI of the parameter and establish that the confidence intervals based on the MLE are more precise. An application of the ordinary preliminary test estimator is also considered. To the best of the authors' knowledge, no PTEs have been derived for R(t) and P based on records, and we therefore define improved PTEs based on the MLE and UMVUE of R(t) and P. A comparative simulation study of the different estimation methods establishes that the PTEs perform better than the ordinary UMVUE and MLE.
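For the exponential special case of this family, both reliability measures have closed forms: R(t) = exp(-t/θ) and, for independent exponentials X and Y with means θx and θy, P(X > Y) = θx/(θx + θy). A minimal sketch (illustrative, not the paper's record-data estimators):

```python
import math

def reliability(t, theta):
    """R(t) = P(X > t) for an exponential lifetime with mean theta."""
    return math.exp(-t / theta)

def stress_strength(theta_x, theta_y):
    """P = P(X > Y) for independent exponential lifetimes with means
    theta_x and theta_y."""
    return theta_x / (theta_x + theta_y)
```

In practice the unknown means are replaced by estimators (UMVUE, MLE, or EBE), which is exactly where the preliminary-test machinery above comes in.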


2016 ◽  
Vol 2016 ◽  
pp. 1-14 ◽  
Author(s):  
Hai Nan ◽  
Bin Fang ◽  
Guixin Wang ◽  
Weibin Yang ◽  
Emily Sarah Carruthers ◽  
...  

War chess gaming has so far received insufficient attention, although it is a significant component of turn-based strategy (TBS) games; it is studied in this paper. First, a common game model is proposed from various existing war chess types. Based on this model, we propose a theoretical frame involving combinatorial optimization on the one hand and game tree search on the other. We also discuss a key problem, namely that the branching factor of each turn in the game tree is huge. We then propose two algorithms for searching within one turn to address this problem: (1) enumeration by order and (2) enumeration by recursion. The main difference between the two is the permutation method used: the former uses the dictionary-sequence method, while the latter uses recursive permutation. Finally, we prove that both algorithms are optimal and analyze the difference between their efficiencies. An important factor is the total time taken for a unit to expand until it reaches each of its reachable positions; this factor is defined as the total number of expansions each unit makes over its reachable positions. The conclusion, stated in terms of this factor, is that enumeration by recursion is better than enumeration by order in all situations.
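The two enumeration ideas can be sketched generically (hypothetical illustrations, not the paper's war chess implementation): recursion fixes each element in turn and recurses on the rest, while the dictionary-sequence method steps to the next permutation in lexicographic order:

```python
def perms_by_recursion(items):
    """Enumerate all permutations recursively: fix each element as the
    head in turn, then recurse on the remaining elements."""
    if not items:
        yield []
        return
    for i, x in enumerate(items):
        for rest in perms_by_recursion(items[:i] + items[i + 1:]):
            yield [x] + rest

def next_lex_permutation(seq):
    """One step of the dictionary-sequence method: rewrite seq in place
    as the next permutation in lexicographic order; return False if seq
    is already the last permutation."""
    i = len(seq) - 2
    while i >= 0 and seq[i] >= seq[i + 1]:
        i -= 1
    if i < 0:
        return False
    j = len(seq) - 1
    while seq[j] <= seq[i]:
        j -= 1
    seq[i], seq[j] = seq[j], seq[i]
    seq[i + 1:] = reversed(seq[i + 1:])
    return True
```

Both enumerate the same n! orderings; the paper's efficiency comparison concerns how much per-unit expansion work each method repeats while doing so.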


Author(s):  
L. Chen ◽  
F. Rottensteiner ◽  
C. Heipke

In this paper we present several descriptors for feature-based matching based on autoencoders, and we evaluate the performance of these descriptors. In a training phase, we learn autoencoders from image patches extracted in local windows surrounding key points determined by the Difference of Gaussian extractor. In the matching phase, we construct key point descriptors based on the learned autoencoders and use these descriptors as the basis for local keypoint descriptor matching. Three types of descriptors based on autoencoders are presented. To evaluate their performance, recall and 1-precision curves are generated for different kinds of transformations, e.g. zoom and rotation or viewpoint change, using a standard benchmark data set. We compare the performance of these descriptors with that achieved by SIFT. Early results presented in this paper show that, whereas SIFT in general performs better than the new descriptors, the descriptors based on autoencoders show some potential for feature-based matching.
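Once descriptors are computed (by autoencoders, SIFT, or otherwise), matching reduces to nearest-neighbour search under a distance threshold; a minimal sketch (illustrative only; the `max_dist` parameter is a hypothetical choice, not from the paper) is:

```python
def match_descriptors(desc_a, desc_b, max_dist):
    """Nearest-neighbour matching of keypoint descriptors (lists of
    equal-length feature vectors): for each descriptor in desc_a, find
    its closest descriptor in desc_b by Euclidean distance and keep the
    pair if that distance is below max_dist."""
    matches = []
    for i, da in enumerate(desc_a):
        best_j, best_d = None, float("inf")
        for j, db in enumerate(desc_b):
            d = sum((x - y) ** 2 for x, y in zip(da, db)) ** 0.5
            if d < best_d:
                best_j, best_d = j, d
        if best_d < max_dist:
            matches.append((i, best_j))
    return matches
```

Sweeping the threshold and counting correct versus incorrect matches against ground-truth correspondences is what produces the recall and 1-precision curves mentioned above.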


1993 ◽  
Vol 75 (5) ◽  
pp. 2003-2012 ◽  
Author(s):  
M. Modarreszadeh ◽  
K. S. Kump ◽  
H. J. Chizeck ◽  
D. W. Hudgel ◽  
E. N. Bruce

We have designed and implemented a computer-controlled system that uses an adaptive control algorithm (generalized minimum variance) to buffer the breath-by-breath variations of the end-tidal CO2 fraction (FETCO2) that occur spontaneously or are exaggerated in certain experimental protocols (e.g., induced hypoxia, any type of induced variations in the ventilatory pattern). Near the end of each breath, FETCO2 of the following breath is predicted and the inspired CO2 fraction (FICO2) of the upcoming breath is adjusted to minimize the difference between the predicted and desired FETCO2 of the next breath. The one-breath-ahead prediction of FETCO2 is based on an adaptive autoregressive with exogenous inputs (ARX) model: FETCO2 of a given breath is related to FICO2, FETCO2 of the previous breath, and inspiratory ventilation. Adequacy of the prediction is demonstrated using data from experiments in which FICO2 was varied pseudorandomly in wakefulness and sleep. The algorithm for optimally buffering changes in FETCO2 is based on the coefficients of the ARX model. We have determined experimentally the frequency of FETCO2 variations that can be buffered adequately by our controller, testing both spontaneous variations in FETCO2 and variations induced by hypoxia in young awake human subjects. The controller is most effective in buffering variations of FETCO2 in the frequency range of <0.1 cycle/breath. Some potential applications are discussed.
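A non-adaptive, first-order sketch of the ARX prediction idea can be fitted by ordinary least squares (illustrative only; the actual controller identifies coefficients adaptively, breath by breath, and includes inspiratory ventilation as an additional input):

```python
def fit_arx(y, u):
    """Least-squares fit of the first-order ARX model
    y[n] = a*y[n-1] + b*u[n-1], solved via the 2x2 normal equations.
    Here y plays the role of FETCO2 and u the role of FICO2."""
    Syy = Suu = Syu = Ry = Ru = 0.0
    for n in range(1, len(y)):
        y1, u1 = y[n - 1], u[n - 1]
        Syy += y1 * y1
        Suu += u1 * u1
        Syu += y1 * u1
        Ry += y[n] * y1
        Ru += y[n] * u1
    det = Syy * Suu - Syu * Syu
    a = (Ry * Suu - Ru * Syu) / det
    b = (Ru * Syy - Ry * Syu) / det
    return a, b
```

Given fitted coefficients, the one-breath-ahead prediction is simply `a * y[-1] + b * u[-1]`, and the controller chooses the next inspired fraction to drive that prediction toward the desired end-tidal value.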


Author(s):  
John P. Langmore ◽  
Brian D. Athey

Although electron diffraction indicates better than 0.3 nm preservation of biological structure in vitreous ice, the imaging of molecules in ice is limited by low contrast. Thus, low-dose images of frozen-hydrated molecules have significantly more noise than images of air-dried or negatively stained molecules. We have addressed the question of the origins of this loss of contrast. One unavoidable effect is the reduction in scattering contrast between a molecule and the background. In effect, the difference in scattering power between a molecule and its background is 2-5 times less in a layer of ice than in vacuum or negative stain. A second, previously unrecognized, effect is the large, incoherent background of inelastic scattering from the ice. This background reduces both scattering and phase contrast by an additional factor of about 3, as shown in this paper. We have used energy filtration on the Zeiss EM902 to eliminate this second effect and also to increase scattering contrast in bright-field and dark-field.

