On the Estimation of Means and Variances in the Case of Unequal Components

1975 ◽  
Vol 8 (3) ◽  
pp. 323-335 ◽  
Author(s):  
Esa Hovinen

In practical statistical work one frequently meets certain problems. For instance, we may have data about loss ratios p_i in certain insurance companies and the corresponding numbers n_i of insurances in force. I assume further that we have no reason to believe that the companies, their loss ratios and their structure of insurances in force differ in any other way than by the size of the companies. The problem is how to get quick estimates of mean losses and their variances in the different companies. A straightforward way to estimate the mean loss ratio would be to compute the usual mean of the numbers p_i, (Σ p_i)/6 = 14; its standard deviation is 6.5. As this procedure of the "first statistician" seems too simple and naive, a "more cautious" statistician would compute the weighted mean loss ratio Σ n_i p_i / Σ n_i = 7.1. The "more cautious" statistician would argue that his result is much better than the other result, 14. But what would be the variance of the estimate 7.1, and what is the variance in the different companies?
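For concreteness, here is a minimal sketch of the two estimators in Python. The loss ratios and policy counts below are hypothetical stand-ins (the paper's data table is not reproduced in this abstract), so the outputs will not reproduce the quoted 14 and 7.1 exactly; the variance remark assumes each policy contributes independent noise, i.e. Var(p_i) ∝ 1/n_i.

```python
# Hypothetical data: loss ratios p_i (%) and numbers of insurances in force n_i.
p = [25, 20, 16, 10, 8, 5]
n = [1000, 5000, 20000, 50000, 100000, 300000]

# "First statistician": plain mean of the six loss ratios.
mean_simple = sum(p) / len(p)

# "More cautious" statistician: weight each company by its policy count.
mean_weighted = sum(ni * pi for ni, pi in zip(n, p)) / sum(n)

# If Var(p_i) = s2 / n_i, the weighted mean has variance s2 / sum(n) --
# which is exactly what the closing question about 7.1 is probing.
print(mean_simple, mean_weighted)
```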

1983 ◽  
Vol 104 ◽  
pp. 185-186
Author(s):  
M. Kalinkov ◽  
K. Stavrev ◽  
I. Kuneva

An attempt is made to establish the membership of Abell clusters in superclusters of galaxies. Distances to the clusters of galaxies are calibrated using two redshift estimates. One is m_10, the magnitude of the tenth-ranked galaxy, and the other is the "mean population" P, defined in terms of the richness count p and the apparent radius r, where p = 40, 65, 105, … galaxies for richness groups 0, 1, 2, …, and r is the apparent radius in degrees. The first iteration for redshift, z_1, is obtained from m_10 alone (Eq. (1)). The standard deviation for Eq. (1) is 0.105, the number of clusters with known velocities is 342, and the correlation coefficient between observed and fitted values is 0.921. With z_1 from Eq. (1), we define Cartesian galactic coordinates X_i = R_i h^-1 cos B_i cos L_i, Y_i = R_i h^-1 cos B_i sin L_i, Z_i = R_i h^-1 sin B_i for each Abell cluster, i = 1, …, 2712, where R_i is the distance to the cluster (Mpc) and H_0 = 100 h km s^-1 Mpc^-1.
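A minimal sketch of the quoted coordinate transformation, assuming angles in degrees and distances in Mpc; the redshift-distance calibration of Eq. (1) is not reproduced in the abstract, so the distance R is taken as given.

```python
import math

def galactic_to_cartesian(R, L_deg, B_deg, h=1.0):
    """Cartesian galactic coordinates X, Y, Z from distance R (Mpc) and
    galactic longitude/latitude L, B (degrees), per the formulas above."""
    L = math.radians(L_deg)
    B = math.radians(B_deg)
    X = (R / h) * math.cos(B) * math.cos(L)
    Y = (R / h) * math.cos(B) * math.sin(L)
    Z = (R / h) * math.sin(B)
    return X, Y, Z

# Illustrative cluster at 300 Mpc toward L = 120 deg, B = +30 deg.
print(galactic_to_cartesian(300.0, 120.0, 30.0))
```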


Energies ◽  
2021 ◽  
Vol 14 (3) ◽  
pp. 739
Author(s):  
Kamil Krasuski ◽  
Damian Wierzbicki

The paper presents a new concept for determining the resultant position of a UAV (Unmanned Aerial Vehicle) from the individual SBAS (Satellite-Based Augmentation System) determinations of all available EGNOS (European Geostationary Navigation Overlay Service) satellites for the SPP (Single Point Positioning) code method. To achieve this, the authors propose a weighted mean model to integrate the EGNOS data. The weights are based on the inverse of the square of the mean position error along the component axes of the BLh ellipsoidal frame. The calculations included navigation data from the EGNOS S123, S126 and S136 satellites, and the resultant UAV position model was computed in the Scilab v.6.0.0 software. With the proposed computational strategy, the mean standard deviations of the UAV BLh coordinates were better than 0.2 m (e.g., 0.0000018°, or about 0.006″, in angular measure). Additionally, the numerical solution increased the UAV's position accuracy by about 29% for latitude, 46% for longitude and 72% for ellipsoidal height compared with standard SPP positioning in the GPS receiver. It is also worth noting that the standard deviation of the UAV position calculated from the weighted mean model improved by about 21–50% compared with the arithmetic mean model's solution. It can be concluded that the proposed research method significantly improves the accuracy of UAV positioning with EGNOS augmentation systems.
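The weighting scheme can be sketched as follows; the coordinate values and per-satellite mean position errors below are invented for illustration and are not the paper's S123/S126/S136 results.

```python
import numpy as np

# One coordinate (e.g., latitude) as determined from three EGNOS satellites.
lat = np.array([52.2297001, 52.2296995, 52.2297010])  # hypothetical solutions (deg)
m = np.array([0.8, 1.2, 1.5])                          # hypothetical mean position errors (m)

w = 1.0 / m**2                        # weights: inverse square of the mean error
lat_weighted = np.sum(w * lat) / np.sum(w)
lat_arithmetic = lat.mean()           # the arithmetic-mean model, for comparison
print(lat_weighted, lat_arithmetic)
```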


1. It is widely felt that any method of rejecting observations with large deviations from the mean is open to some suspicion. Suppose that by some criterion, such as Peirce's and Chauvenet's, we decide to reject observations with deviations greater than 4σ, where σ is the standard error, computed from the standard deviation by the usual rule; then we reject an observation deviating by 4.5σ, and thereby alter the mean by about 4.5σ/n, where n is the number of observations, and at the same time we reduce the computed standard error. This may lead to the rejection of another observation deviating from the original mean by less than 4σ, and if the process is repeated the mean may be shifted so much as to lead to doubt as to whether it is really sufficiently representative of the observations. In many cases, where we suspect that some abnormal cause has affected a fraction of the observations, there is a legitimate doubt as to whether it has affected a particular observation. Suppose that we have 50 observations. Then there is an even chance, according to the normal law, of a deviation exceeding 2.33σ. But a deviation of 3σ or more is not impossible, and if we make a mistake in rejecting it the mean of the remainder is not the most probable value. On the other hand, an observation deviating by only 2σ may be affected by an abnormal cause of error, and then we should err in retaining it, even though no existing rule will instruct us to reject such an observation. It seems clear that the probability that a given observation has been affected by an abnormal cause of error is a continuous function of the deviation; it is never certain or impossible that it has been so affected, and a process that completely rejects certain observations, while retaining with full weight others with comparable deviations, possibly in the opposite direction, is unsatisfactory in principle.
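The cascade the author objects to is easy to reproduce. A minimal sketch with synthetic data, using a deliberately strict threshold so the repeated-rejection loop is visible:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 50), [4.5, 3.8]])  # two large deviations

k = 2.5   # rejection threshold in units of sigma (strict, for illustration)
kept = x.copy()
while True:
    mu, sigma = kept.mean(), kept.std(ddof=1)
    inliers = kept[np.abs(kept - mu) <= k * sigma]
    if len(inliers) == len(kept):
        break
    # Each rejection shifts the mean and shrinks sigma, which can push
    # further observations past the threshold on the next pass.
    kept = inliers

print(f"rejected {len(x) - len(kept)} of {len(x)}, final mean {kept.mean():.3f}")
```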


Author(s):  
Mingwen Yang ◽  
Zhiqiang (Eric) Zheng ◽  
Vijay Mookerjee

Online reputation has become a key marketing-mix variable in the digital economy. Our study helps managers decide how much effort to devote to managing online reputation. We consider an online reputation race in which it is important to manage not just absolute reputation but also relative rating: to stay ahead, a firm should try to keep its ratings better than those of its competitors. Our findings are particularly significant for platform owners (such as Expedia or Yelp) seeking to strategically grow their base of participating firms: growing the middle of the market (firms with average ratings) is the best option considering the goals of the platform and the other stakeholders, namely incumbents and consumers. For firms, we find that they should increase their effort when the mean market rating increases. Another key insight for firms is that adversity can sometimes come disguised as opportunity: when an adverse event strikes the industry (such as a reduction in sales margin or an increase in the cost of effort), a firm's profit can increase if it manages the event better than its competitors.


2010 ◽  
Vol 67 (5) ◽  
pp. 1655-1666 ◽  
Author(s):  
David M. Romps ◽  
Zhiming Kuang

Abstract Tracers are used in a large-eddy simulation of shallow convection to show that stochastic entrainment (and not cloud-base properties) determines the fate of convecting parcels. The tracers are used to diagnose the correlations between a parcel’s state above the cloud base and both the parcel’s state at the cloud base and its entrainment history. The correlation with the cloud-base state goes to zero a few hundred meters above the cloud base. On the other hand, correlations between a parcel’s state and its net entrainment are large. Evidence is found that the entrainment events may be described as a stochastic Poisson process. A parcel model is constructed with stochastic entrainment that is able to replicate the mean and standard deviation of cloud properties. Turning off cloud-base variability has little effect on the results, which suggests that stochastic mass-flux models may be initialized with a single set of properties. The success of the stochastic parcel model suggests that it holds promise as the framework for a convective parameterization.
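A toy version of such a stochastic parcel model, in the spirit of the abstract: entrainment events arrive as a Poisson process in height, and each event mixes a fixed fraction of environmental air into the parcel. The event rate, mixing fraction, and environmental profile below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
dz, ztop = 10.0, 2000.0      # vertical step and depth above cloud base (m)
rate = 1.0 / 400.0           # assumed rate: ~1 entrainment event per 400 m
frac = 0.1                   # assumed environmental fraction mixed in per event

def run_parcel(theta0=302.0):
    theta = theta0                         # parcel property (e.g., theta, K)
    for z in np.arange(0.0, ztop, dz):
        theta_env = 300.0 + 0.003 * z      # idealized environmental profile
        if rng.random() < rate * dz:       # Poisson event in this layer
            theta = (1.0 - frac) * theta + frac * theta_env
    return theta

samples = [run_parcel() for _ in range(1000)]
print(np.mean(samples), np.std(samples))   # ensemble mean and spread
```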


2011 ◽  
Vol 7 (4) ◽  
pp. 47-64 ◽  
Author(s):  
Toly Chen

This paper presents a dynamically optimized fluctuation smoothing rule to improve the performance of job scheduling in a wafer fabrication factory. The rule modifies the four-factor bi-criteria nonlinear fluctuation smoothing (4f-biNFS) rule by dynamically adjusting its factors. Some properties of the dynamically optimized fluctuation smoothing rule are also discussed theoretically. In addition, a production simulation was used to generate test data for evaluating the effectiveness of the proposed methodology. According to the experimental results, the proposed methodology outperformed some existing approaches in reducing both the average cycle time and the cycle time standard deviation. The results also showed that it is possible to improve one of these performance metrics without sacrificing the others.
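The abstract does not spell out the 4f-biNFS formula, so the following is only a schematic of a fluctuation-smoothing-style dispatching rule with tunable factors; the slack expression and factor values are placeholders, not the paper's rule.

```python
def slack(release_time, remaining_cycle_time_est, alpha=1.0, beta=1.0):
    # Jobs with smaller slack are dispatched first; alpha and beta stand in
    # for the factors that the proposed rule adjusts dynamically.
    return alpha * release_time - beta * remaining_cycle_time_est

jobs = [
    {"id": "lot1", "release": 0.0, "rct_est": 120.0},
    {"id": "lot2", "release": 30.0, "rct_est": 200.0},
    {"id": "lot3", "release": 10.0, "rct_est": 90.0},
]
queue = sorted(jobs, key=lambda j: slack(j["release"], j["rct_est"]))
print([j["id"] for j in queue])  # dispatch order under these placeholder factors
```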


2011 ◽  
Vol 125 (12) ◽  
pp. 1244-1246 ◽  
Author(s):  
A Hesham ◽  
A Ghali

Abstract
Objective: To compare Rapid Rhino and Merocel packs for nasal packing after septoplasty, in terms of patient tolerance (both with the pack in place and during removal) and post-operative complications.
Materials and methods: Thirty patients (aged 18–40 years) scheduled for septoplasty were included. Following surgery, one nasal cavity was packed with Rapid Rhino and the other with Merocel. Patients were asked to record pain levels on a visual analogue scale, on both sides, with the packs in situ and during their removal the next day. After pack removal, bleeding was compared on both sides.
Results: The mean ± standard deviation pain score for the Rapid Rhino pack in situ (4.17 ± 1.78) was less than that for the Merocel pack (4.73 ± 2.05), but not significantly so (p = 0.314). The mean pain score for Rapid Rhino pack removal (4.13 ± 1.76) was significantly less than that for Merocel (6.90 ± 1.67; p = 0.001). Bleeding after pack removal was significantly less on the Rapid Rhino sides than on the Merocel sides (p < 0.05).
Conclusion: Rapid Rhino nasal packs are less painful and cause less bleeding than Merocel packs, with no side effects. Thus, their use for nasal packing after septal surgery is recommended.
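As a rough plausibility check of the removal-pain comparison, one can run a two-sample t-test on the reported summary statistics. This ignores the paired, within-patient design (which would require the per-patient differences the abstract omits), so it is only an approximation.

```python
from math import sqrt
from scipy import stats

n = 30
m1, s1 = 4.13, 1.76   # Rapid Rhino removal pain (mean, SD)
m2, s2 = 6.90, 1.67   # Merocel removal pain (mean, SD)

se = sqrt(s1**2 / n + s2**2 / n)
t = (m1 - m2) / se
df = n - 1            # conservative degrees of freedom
p = 2 * stats.t.sf(abs(t), df)
print(t, p)           # strongly significant, consistent with the reported p = 0.001
```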


2010 ◽  
Vol 27 (3) ◽  
pp. 470-480 ◽  
Author(s):  
Chee-Kiat Teo ◽  
Tieh-Yong Koh

Abstract A statistical method to correct for the limb effect in off-nadir Atmospheric Infrared Sounder (AIRS) channel radiances is described, using the channel radiance itself and principal components (PCs) of the other channel radiances to account for the multicollinearity. A method of selecting an optimal set of predictors is proposed and demonstrated for one- and two-PC predictors. Validation results with a subset of AIRS channels in the spectral region 649–2664 cm^-1 show that the mean nadir-corrected brightness temperature (BT) is largely independent of scan angle. More than 66% of the channels have a root-mean-square (rms) bias less than 0.10 K after nadir correction. The limb effect on the standard deviation (SD) of BT is discernible at larger scan angles, mainly for the atmospheric windows and the water vapor channels around 6.7 μm. After nadir correction, nearly all atmospheric window channels unaffected by solar glint and more than 76% of the water vapor channels examined have BT SDs brought closer to nadir values. For the window channels affected by solar glint (wavenumber > 2490 cm^-1), BT SDs at the scan angles with the strongest impact from solar reflection were improved on average by more than 0.6 K after nadir correction.
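Schematically, the predictor construction amounts to regressing on a channel's own radiance plus a few leading principal components of the other channels. A minimal sketch with synthetic data (the paper's channel selection and optimal-predictor search are not shown):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 20))     # stand-in for radiances of the other channels
x_self = rng.normal(size=500)      # the channel's own radiance
y = 0.8 * x_self + 0.3 * X[:, 0] + rng.normal(scale=0.1, size=500)  # target

# Leading principal components of the other channels, via SVD of centered data.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pcs = Xc @ Vt[:2].T                # two PCs as additional predictors

A = np.column_stack([np.ones_like(x_self), x_self, pcs])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)                        # intercept, own-radiance slope, PC slopes
```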


2020 ◽  
Vol 4 (8) ◽  
pp. 38-54
Author(s):  
Luzmila De Jesús Carvajal Andrade ◽  
◽  
Belén del Rocío Logacho Villacís ◽  
Ramiro Rogelio Rojas Jaramillo ◽  
◽  
...  

This research was conducted to determine the prevalence of burnout syndrome among third- to eighth-semester students attending the Nursing School. It was a prevalence study; the data were collected by survey, using a questionnaire divided into five sections applied to 172 students. The data were analyzed using the mean and standard deviation for academic burnout, while labor burnout was scored on high, medium and low scales. The results showed that the prevalence of burnout syndrome, both academic and labor, was low. Academic burnout affected 2.3% of students (95% confidence interval: 0.44% to 4.21%), with a probability of 3.52%; labor burnout affected 4% of students in the rotating internship, with a probability of 1.22%. Key words: burnout syndrome, stress, nursing.
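For reference, the prevalence interval can be approximated from the reported figures with a normal (Wald) interval; the case count of 4 is inferred from 2.3% of 172, and the study may have used a different interval method, so exact agreement with the quoted limits is not expected.

```python
from math import sqrt

n, cases = 172, 4        # 4/172 ≈ 2.3% (case count inferred from the abstract)
p = cases / n
se = sqrt(p * (1 - p) / n)
lo, hi = p - 1.96 * se, p + 1.96 * se
print(f"{100*p:.1f}% (95% CI {100*lo:.2f}% to {100*hi:.2f}%)")
```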


2020 ◽  
pp. 1776
Author(s):  
Shadi F. Gharaibeh ◽  
Linda Tahaineh

Objective: To determine the accuracy, variability, and weight uniformity of tablet subdivision techniques used to divide the tablets of five drug products that are commonly prescribed for use as half tablets in Jordan.  Methods: Ten random tablets of each of five commonly subdivided drug products (warfarin 5 mg, levothyroxine 50 μg, levothyroxine 100 μg, candesartan 16 mg, and carvedilol 25 mg) were weighed and subdivided using three subdivision techniques: hand breaking, kitchen knife, and tablet cutter. The weights were analyzed for acceptance, accuracy, and variability. Weight variation acceptance criteria were adopted in this work as a tool to indicate the suitability of the subdivision techniques for producing acceptable half tablets. Other relevant physical characteristics of the five products, such as tablet shape, dimensions, face curvature, score depth, and crushing strength, were measured.  Results: All tablets were round, had weights that ranged between 100.63 mg (standard deviation = 0.99) and 379.04 mg (standard deviation = 3.00), and had crushing strengths that ranged between 23.29 N (standard deviation = 3.58) and 103.35 N (standard deviation = 14.98). Both candesartan and carvedilol were bi-convex, with an extent of face curvature of about 33%. In addition, the percentage score depth of the tablets ranged between 0% and 24%. The accuracy and variability of subdivision varied according to the subdivision technique used and the tablet characteristics. Accuracy ranged between 81% and 109.8%, and the relative standard deviation ranged between 1.5% and 17.4%. Warfarin 5 mg subdivided tablets failed the weight variation test regardless of the subdivision technique used. Subdivision by hand produced acceptable half tablets for levothyroxine 50 μg and levothyroxine 100 μg. Subdivision by knife produced acceptable half tablets only for candesartan. However, the tablet cutter produced half tablets that passed the weight variation test for four of the five drug products tested in this study. Conclusions: The tablet cutter performed better than the other subdivision techniques, producing half tablets that passed the weight uniformity test for four of the five drug products.
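The accuracy and variability measures reported above can be computed as follows; the half-tablet weights are hypothetical, with accuracy taken as the mean half weight relative to half the mean whole-tablet weight, and variability as the relative standard deviation (RSD) of the halves.

```python
import statistics

whole_mean = 379.04   # mean whole-tablet weight (mg), from the abstract
halves = [185.2, 192.7, 178.9, 190.4, 201.3, 176.5]  # hypothetical half weights (mg)

accuracy = statistics.mean(halves) / (whole_mean / 2) * 100
rsd = statistics.stdev(halves) / statistics.mean(halves) * 100
print(f"accuracy {accuracy:.1f}%, RSD {rsd:.1f}%")
```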

