Expected consequences of the segregation of a major gene in a sheep population in relation to observations on the ovulation rate of a flock of Cambridge sheep

1990 ◽  
Vol 51 (2) ◽  
pp. 277-282 ◽  
Author(s):  
J. B. Owen ◽  
C. J. Whitaker ◽  
R. F. E. Axford ◽  
I. Ap Dewi

ABSTRACT A simple model was derived relating the phenotypic effect (g) of a major gene to observed values of the population mean and variance for a trait, at specified values of the major gene frequency and at specified basal values of the population mean and variance (in the absence of the major gene). This model was applied to a total of 549 observed values of ovulation rate in ewes of the Cambridge breed at Bangor under a range of assumptions. The mean values of ovulation rate were 2·44 for 243 ewes of 1 year of age and 37·54 for 306 ewes of 2 and 3 years of age, with a coefficient of variation of 0·50 for both age sets. The results indicate a minimum value for g, in this data set, of 1·07 for 1-year-old and 1·72 for 2- and 3-year-old ewes. The results are also consistent with a frequency value in the region of 0·3 to 0·4, with the absence of dominance and with reasonable concordance with Hardy-Weinberg equilibrium. The results also indicate that the value of g varies according to the background phenotype, since it is lower for younger as compared with older ewes.
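The abstract's relationship can be illustrated with a minimal sketch. This is not the authors' exact model; it assumes Hardy-Weinberg genotype frequencies and a purely additive gene (no dominance, consistent with the abstract), where each copy of the major allele adds g to the basal mean. The parameter values below are illustrative, loosely echoing the reported estimates.

```python
# Hypothetical sketch: expected population mean and variance for a trait when
# a single additive major gene segregates at Hardy-Weinberg frequencies.
# p: major allele frequency; g: effect of one allele copy; basal_mean/basal_var:
# population mean and variance in the absence of the gene.

def major_gene_mean_var(p, g, basal_mean, basal_var):
    q = 1.0 - p
    freqs = {0: q * q, 1: 2 * p * q, 2: p * p}  # genotype = copies of allele
    mean = sum(f * (basal_mean + k * g) for k, f in freqs.items())
    # total variance = within-genotype (basal) variance
    #                + between-genotype variance introduced by the gene
    between = sum(f * (basal_mean + k * g - mean) ** 2 for k, f in freqs.items())
    return mean, basal_var + between

m, v = major_gene_mean_var(p=0.35, g=1.72, basal_mean=1.5, basal_var=0.3)
```

Under these assumptions the mean shifts by 2pg and the variance inflates by 2pq·g², which is how an observed mean/variance pair constrains g at a given gene frequency.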

1969 ◽  
Vol 13 (2) ◽  
pp. 117-126 ◽  
Author(s):  
Derek J. Pike

Robertson (1960) used probability transition matrices to estimate changes in gene frequency when sampling and selection are applied to a finite population. Curnow & Baker (1968) used Kojima's (1961) approximate formulae for the mean and variance of the change in gene frequency from a single cycle of selection applied to a finite population to develop an iterative procedure for studying the effects of repeated cycles of selection and regeneration. To do this they assumed a beta distribution for the unfixed gene frequencies at each generation. These two methods are discussed and a result used in Kojima's paper is proved. A number of sets of calculations are carried out using both methods and the results are compared to assess the accuracy of Curnow & Baker's method in relation to Robertson's approach. It is found that the one real fault in the Curnow-Baker method is its tendency to fix too high a proportion of the genes, particularly when the initial gene frequency is near a fixation point. This fault is largely overcome when more individuals are selected. For selection of eight or more individuals the Curnow-Baker method is very accurate and appreciably faster than the transition matrix method.
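The transition-matrix approach can be sketched as follows. This is a simplified illustration in the spirit of Robertson (1960), not the paper's calculations: it models pure drift (no selection) in a population of N diploids, where the allele count among 2N gene copies follows a binomial transition, and iterates the matrix to track the probability mass absorbed at loss and fixation.

```python
import math

# Simplified sketch of a gene-frequency transition matrix under drift alone.
# State i = number of copies of the allele among n = 2N gene copies; the next
# generation's count j is binomial(n, i/n). States 0 and n are absorbing.

def transition_matrix(N):
    n = 2 * N
    return [[math.comb(n, j) * (i / n) ** j * (1 - i / n) ** (n - j)
             for j in range(n + 1)] for i in range(n + 1)]

def iterate(dist, P, generations):
    for _ in range(generations):
        dist = [sum(dist[i] * P[i][j] for i in range(len(dist)))
                for j in range(len(P))]
    return dist

N = 4
P = transition_matrix(N)
start = [0.0] * (2 * N + 1)
start[N] = 1.0                       # initial allele frequency 0.5
dist = iterate(start, P, 50)
loss, fixation = dist[0], dist[-1]   # mass in the absorbing states
```

With a symmetric start the loss and fixation probabilities are equal, and after many generations nearly all probability mass sits at the fixation boundaries, which is exactly the quantity the Curnow-Baker approximation tends to overestimate early on.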


2021 ◽  
pp. 58-60
Author(s):  
Naziru Fadisanku Haruna ◽  
Ran Vijay Kumar Singh ◽  
Samsudeen Dahiru

In This paper a modied ratio-type estimator for nite population mean under stratied random sampling using single auxiliary variable has been proposed. The expression for mean square error and bias of the proposed estimator are derived up to the rst order of approximation. The expression for minimum mean square error of proposed estimator is also obtained. The mean square error the proposed estimator is compared with other existing estimators theoretically and condition are obtained under which proposed estimator performed better. A real life population data set has been considered to compare the efciency of the proposed estimator numerically.


2015 ◽  
Vol 2015 ◽  
pp. 1-5 ◽  
Author(s):  
Mursala Khan ◽  
Rajesh Singh

A chain ratio-type estimator is proposed for the estimation of the finite population mean under a systematic sampling scheme using two auxiliary variables. The mean square error of the proposed estimator is derived up to the first order of approximation and is compared with those of other relevant existing estimators. To illustrate the performance of the different estimators in comparison with the usual simple estimator, we have taken a real data set from the survey sampling literature.


Author(s):  
Hani M. Samawi ◽  
Eman M. Tawalbeh

The performance of a regression estimator based on the double ranked set sample (DRSS) scheme, introduced by Al-Saleh and Al-Kadiri (2000), is investigated when the mean of the auxiliary variable X is unknown. Our primary analysis and simulation indicate that using the DRSS regression estimator for estimating the population mean substantially increases relative efficiency compared to using a regression estimator based on simple random sampling (SRS) or the ranked set sampling (RSS) regression estimator (Yu and Lam, 1997). Moreover, the regression estimator using DRSS is also more efficient than the naïve estimators of the population mean using SRS, RSS (when the correlation coefficient is at least 0.4) and DRSS (for a high correlation coefficient, at least 0.91). The theory is illustrated using a real data set of trees.
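Ranked set sampling, the building block of the DRSS scheme above, can be sketched briefly. This is an illustrative implementation assuming perfect ranking (units are ranked on their true values); DRSS simply applies the same procedure a second time to an RSS sample, and the regression estimator then uses the resulting means in the usual form ybar + b(X_mean − xbar). Population and parameters below are invented.

```python
import random

# Sketch of ranked set sampling (RSS) with perfect ranking: for each rank
# position, draw a set of m units, sort them, and keep only the unit at that
# rank. Repeating over several cycles yields the RSS sample.

def rss_sample(population, m, cycles, rng):
    sample = []
    for _ in range(cycles):
        for rank in range(m):
            group = sorted(rng.sample(population, m))
            sample.append(group[rank])  # keep the rank-th order statistic
    return sample

rng = random.Random(1)
pop = [rng.gauss(50, 10) for _ in range(10_000)]
srs = rng.sample(pop, 100)                        # simple random sample
rss = rss_sample(pop, m=5, cycles=20, rng=rng)    # RSS sample of equal size
mean_srs = sum(srs) / len(srs)
mean_rss = sum(rss) / len(rss)
```

Both sample means are unbiased for the population mean, but the RSS mean has lower variance because each observation carries rank information, which is the source of the efficiency gains reported for RSS and DRSS estimators.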


the ‘Area Under the Curve’ or AUC. The AUC is taken as a measure of exposure of the drug to the subject. The peak or maximum concentration is referred to as Cmax and is an important safety measure. For regulatory approval of bioequivalence it is necessary to show from the trial results that the mean values of AUC and Cmax for T and R are not significantly different. The AUC is calculated by adding up the areas of the regions identified by the vertical lines under the plot in Figure 7.1 using an arithmetic technique such as the trapezoidal rule (see, for example, Welling, 1986, 145–149, Rowland and Tozer, 1995, 469–471). Experience (e.g., FDA Guidance, 1992, 1997, 1999b, 2001) has dictated that AUC and Cmax need to be transformed to the natural logarithmic scale prior to analysis if the usual assumptions of normally distributed errors are to be made. Each of AUC and Cmax is analyzed separately and there is no adjustment to significance levels to allow for multiple testing (Hauck et al., 1995). We will refer to the derived variates as log(AUC) and log(Cmax), respectively. In bioequivalence trials there should be a wash-out period of at least five half-lives of the drugs between the active treatment periods. If this is the case, and there are no detectable pre-dose drug concentrations, there is no need to assume that carry-over effects are present and so it is not necessary to test for a differential carry-over effect (FDA Guidance, 2001). The model that is fitted to the data will be the one used in Section 5.3 of Chapter 5, which contains terms for subjects, periods and treatments. Following common practice we will also fit a sequence or group effect and consider subjects as a random effect nested within sequence. An example of fitting this model will be given in the next section. In the following sections we will consider three forms of bioequivalence: average (ABE), population (PBE) and individual (IBE).
To simplify the following discussion we will refer only to log(AUC); the discussion for log(Cmax) is identical. To show that T and R are average bioequivalent it is only necessary to show that the mean log(AUC) for T is not significantly different from the mean log(AUC) for R. In other words we need to show that, ‘on average’, in the population of intended patients, the two drugs are bioequivalent. This measure does not take into account the variability of T and R. It is possible for one drug to be much more variable than the other, yet be similar in terms of mean log(AUC). It was for this reason that PBE was introduced. As we will see in Section 7.5, the measure of PBE that has been recommended by the regulators is a mixture of the mean and variance of the log(AUC) values (FDA Guidance, 1997, 1999a,b, 2000, 2001). Of course, two drugs could be similar in mean and variance over the
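The AUC computation described above is a direct application of the trapezoidal rule to the concentration-time points, after which the natural-log transform is taken for analysis. The sampling times and concentrations below are invented for illustration.

```python
import math

# Sketch of the AUC and Cmax derivation: join successive concentration-time
# points by straight lines and accumulate trapezoid areas, then take natural
# logs as required for the bioequivalence analysis. Data are illustrative.

def auc_trapezoid(times, conc):
    return sum((conc[i] + conc[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

times = [0, 0.5, 1, 2, 4, 8, 12]           # hours post-dose
conc  = [0, 4.1, 6.8, 5.9, 3.2, 1.1, 0.4]  # ng/mL
auc = auc_trapezoid(times, conc)
cmax = max(conc)
log_auc, log_cmax = math.log(auc), math.log(cmax)
```

With unequally spaced sampling times the trapezoidal rule handles each interval individually, which is why it is the standard arithmetic technique here.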


2020 ◽  
Author(s):  
Faezeh Bayat ◽  
Maxwell Libbrecht

Abstract. Motivation: A sequencing-based genomic assay such as ChIP-seq outputs a real-valued signal for each position in the genome that measures the strength of activity at that position. Most genomic signals lack the property of variance stabilization. That is, a difference between 100 and 200 reads usually has a very different statistical importance from a difference between 1,100 and 1,200 reads. A statistical model such as a negative binomial distribution can account for this pattern, but learning these models is computationally challenging. Therefore, many applications—including imputation, and segmentation and genome annotation (SAGA)—instead use Gaussian models and use a transformation such as log or inverse hyperbolic sine (asinh) to stabilize variance. Results: We show here that existing transformations do not fully stabilize variance in genomic data sets. To solve this issue, we propose VSS, a method that produces variance-stabilized signals for sequencing-based genomic signals. VSS learns the empirical relationship between the mean and variance of a given signal data set and produces transformed signals that normalize for this dependence. We show that VSS successfully stabilizes variance and that doing so improves downstream applications such as SAGA. VSS will eliminate the need for downstream methods to implement complex mean-variance relationship models, and will enable genomic signals to be easily understood. Contact: [email protected]. Availability: https://github.com/faezeh-bayat/Variance-stabilized-units-for-sequencing-based-genomic-signals.
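The two transformations the abstract names (this is not the VSS method itself, just the standard baselines it compares against) can be shown in a few lines. Both compress large counts; asinh is linear near zero and logarithmic for large values, which is why it is a common variance-stabilizing choice for signals that include zeros.

```python
import math

# The standard variance-stabilizing baselines mentioned in the abstract:
# a shifted log (pseudocount avoids log(0)) and the inverse hyperbolic sine.

def shifted_log(x, pseudocount=1.0):
    return math.log(x + pseudocount)

def asinh(x):
    return math.asinh(x)  # equivalent to log(x + sqrt(x*x + 1))

# The abstract's example: 100 -> 200 reads is a much larger change on the
# transformed scale than 1,100 -> 1,200 reads.
d_low  = asinh(200) - asinh(100)
d_high = asinh(1200) - asinh(1100)
```

VSS goes further by learning the empirical mean-variance curve of the data set rather than assuming the fixed functional form these transforms imply.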


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Erliza Yuniarti ◽  
Wardiman Wardiman ◽  
Wirangga Wirangga ◽  
Bengawan Alfarezi

This paper discusses improving the accuracy of electrical load forecasting by imputation of missing load data. Estimating electricity demand is important for the power plant operating system, fuel supply and maintenance of the power system. The electrical load is forecast on the basis of historical load data, generally represented as a load curve. The load curves studied at the Singkarak substation, Borang, show several load patterns, some missing data and sudden increases in the data. The percentage of missing data in 2015 was 1.8379%, with the highest monthly share in September at 0.5137%, or 45 hours. To fill in the missing data, three imputation techniques were used: filling in from the previous day's data at the same time; regression analysis on the month in which the data were missing; and using the mean values of the monthly data. The moving average method forecasts an electrical load of 138 kW on 1 January 2016. The Mean Absolute Error (MAE) of the best load forecast is 9.59, obtained with the data set completed by mean imputation.
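Two of the imputation techniques described above, plus the moving average forecast step, can be sketched as follows. The hourly load values are invented for illustration; the paper's regression-based imputation is omitted here.

```python
# Illustrative sketch of gap-filling on an hourly load series (None = missing):
# (1) copy the previous day's value at the same hour, or (2) fill with the
# period mean; a simple moving average then gives the next-step forecast.

def impute_previous_day(series, hours_per_day=24):
    out = list(series)
    for i, v in enumerate(out):
        if v is None and i >= hours_per_day:
            out[i] = out[i - hours_per_day]  # same hour, previous day
    return out

def impute_mean(series):
    known = [v for v in series if v is not None]
    mean = sum(known) / len(known)
    return [mean if v is None else v for v in series]

def moving_average_forecast(series, window=3):
    return sum(series[-window:]) / window

load = [120, 130, None, 140, 150, None, 138]  # kW, with gaps
filled = impute_mean(load)
forecast = moving_average_forecast(filled, window=3)
```

Comparing forecast MAE across the differently imputed data sets, as the paper does, is how one decides which gap-filling strategy to keep.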


2016 ◽  
Vol 16 (21) ◽  
pp. 13399-13416 ◽  
Author(s):  
L. Paige Wright ◽  
Leiming Zhang ◽  
Frank J. Marsik

Abstract. The current knowledge concerning mercury dry deposition is reviewed, including dry-deposition algorithms used in chemical transport models (CTMs) and at monitoring sites and related deposition calculations, measurement methods and studies for quantifying dry deposition of gaseous oxidized mercury (GOM) and particulate-bound mercury (PBM), and measurement studies of litterfall and throughfall mercury. Measured median GOM plus PBM dry deposition in Asia (10.7 µg m−2 yr−1) is almost double that in North America (6.1 µg m−2 yr−1) due to the higher anthropogenic emissions in Asia. The measured mean GOM plus PBM dry deposition in Asia (22.7 µg m−2 yr−1), however, is less than that in North America (30.8 µg m−2 yr−1). The variations between the median and mean values reflect the influence that single extreme measurements can have on the mean of a data set. Measured median litterfall and throughfall mercury are, respectively, 34.8 and 49.0 µg m−2 yr−1 in Asia, 12.8 and 16.3 µg m−2 yr−1 in Europe, and 11.9 and 7.0 µg m−2 yr−1 in North America. The corresponding measured mean litterfall and throughfall mercury are, respectively, 42.8 and 43.5 µg m−2 yr−1 in Asia, 14.2 and 19.0 µg m−2 yr−1 in Europe, and 12.9 and 9.3 µg m−2 yr−1 in North America. The much higher litterfall mercury than GOM plus PBM dry deposition suggests an important contribution of gaseous elemental mercury (GEM) to mercury dry deposition to vegetated canopies. Over all the regions, including the Amazon, dry deposition, estimated as the sum of litterfall and throughfall minus open-field wet deposition, contributes more than wet deposition to total Hg deposition. Regardless of the measurement or modelling method used, uncertainties of a factor of 2 or larger in GOM plus PBM dry deposition need to be kept in mind when using these numbers for mercury impact studies.


2002 ◽  
Vol 2 (1) ◽  
pp. 75-107
Author(s):  
J. Schneider ◽  
R. Eixmann

Abstract. We have performed a three-year series of routine lidar measurements on a climatological basis. To obtain an unbiased data set, the measurements were taken at preselected times. The measurements were performed between 1 December 1997 and 30 November 2000 at Kühlungsborn, Germany (54°07' N, 11°46' E). Using a Rayleigh/Mie/Raman lidar system, we measured the aerosol backscatter coefficients at three wavelengths in and above the planetary boundary layer. The aerosol extinction coefficient has been determined at 532 nm, but here the majority of the measurements have been restricted to heights above the boundary layer. Only after-sunset measurements are included in this data set since the Raman measurements were restricted to darkness. For the climatological analysis, we selected the cloud-free days out of a fixed measurement schedule. The annual cycle of the boundary layer height has been found to have a phase shift of about 25 days with respect to the summer/winter solstices. The mean values of the extinction and backscatter coefficients do not show significant annual differences. The backscatter coefficients in the planetary boundary layer were found to be about 10 times higher than above it. The mean aerosol optical depth above the boundary layer and below 5 km is 0.26 (±1.0) × 10-2 in summer and 1.5 (±0.95) × 10-2 in winter, which is almost negligible compared to values measured in the boundary layer. A cluster analysis of the backward trajectories yielded two major directions of air mass origin above the planetary boundary layer and four major directions inside it. A marked difference in the total aerosol load dependent on the air mass origin was found for air masses originating from the west and travelling at high wind speeds.
Comparing the measured spectral dependence of the backscatter coefficients with data from the Global Aerosol Data Set, we found general agreement, but only a few conclusions with respect to the aerosol type could be drawn due to the high variability of the measured backscatter coefficients.


2003 ◽  
Vol 31 (1) ◽  
pp. 31-46 ◽  
Author(s):  
Ferdinand Moldenhauer

The international validation study on alternative methods to replace the Draize rabbit eye irritation test, funded by the European Commission (EC) and the British Home Office (HO), took place during 1992–1994, and the results were published in 1995. The results of this EC/HO study are analysed by employing discriminant analysis, taking into account the classification of the in vivo data into eye irritation classes A (risk of serious damage to eyes), B (irritating to eyes) and NI (non-irritant). A data set for 59 test items was analysed, together with three subsets: surfactants, water-soluble chemicals, and water-insoluble chemicals. The new statistical methods of feature selection and estimation of the discriminant function's classification error were used. Normally distributed random numbers were added to the mean values of each in vitro endpoint, depending on the observed standard deviations. Thereafter, the reclassification error of the random observations was estimated by applying the fixed function of the mean values. Moreover, the leaving-one-out cross-classification method was applied to this random data set. Subsequently, random data were generated r times (for example, r = 1000) for a feature combination. Eighteen features were investigated in nine in vitro test systems to predict the effects of a chemical in the rabbit eye. 72.5% of the chemicals in the undivided sample were correctly classified when applying the in vitro endpoints lgNRU of the neutral red uptake test and lgBCOPo5 of the bovine corneal opacity and permeability test. The accuracy increased to 80.9% when six in vitro features were used and the sample was subdivided. The subset of surfactants was correctly classified in more than 90% of cases, which is an excellent performance.
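The noise-based error estimation described above can be sketched in miniature. This is a simplified stand-in, not the study's procedure: a one-dimensional threshold rule replaces the full discriminant function, two classes replace the three irritation classes, and the endpoint values, threshold and standard deviation are invented. The essential idea survives: add Gaussian noise (scaled by the observed standard deviation) to each item's mean endpoint value r times and reapply the fixed classification rule to estimate a reclassification error.

```python
import random

# Hypothetical sketch of noise-based reclassification-error estimation:
# perturb each item's mean endpoint value with Gaussian noise r times and
# score the fixed threshold rule against the known class labels.

def reclassification_error(items, threshold, sd, r, rng):
    # items: (mean_endpoint_value, true_class) pairs; the fixed rule predicts
    # "irritant" when the noisy value is at or above the threshold.
    errors = 0
    for _ in range(r):
        for mean, label in items:
            noisy = rng.gauss(mean, sd)
            pred = "irritant" if noisy >= threshold else "NI"
            errors += pred != label
    return errors / (r * len(items))

rng = random.Random(42)
items = [(0.2, "NI"), (0.4, "NI"), (1.6, "irritant"), (2.1, "irritant")]
err = reclassification_error(items, threshold=1.0, sd=0.3, r=1000, rng=rng)
```

Items whose mean values sit close to the decision boundary relative to their measurement noise dominate the estimated error, which is what makes this a useful robustness check on the discriminant function.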

