Pendugaan Standar Deviasi untuk Sampel Kecil dalam Penelitian Pertanian (Standard Deviation Estimation for Small Samples in Agricultural Research)

2019 ◽  
Author(s):  
Weksi Budiaji

There is a rule-of-thumb formula for estimating the standard deviation of a small sample (2 to 15 observations). The formula is based on the ratio between the data range (maximum minus minimum) and the square root of the sample size. Although this formula has only a small bias, the Mantel formula, which replaces the square root of the sample size with a set of constants, produces an even smaller bias for normally and uniformly distributed data. In this manuscript, we modify three of the constants that the Mantel formula proposes, for the cases where the number of objects is 2 or 3. We also add the requirement that the data be assumed to follow a standard normal (N(0,1)) or a standard uniform (U(0,1)) distribution. (Citation: Budiaji, W., Suherna, S., Salampessy, Y. L. A. 2012. Pendugaan Standar Deviasi untuk Sampel Kecil dalam Penelitian Pertanian. Jurnal Ilmu Pertanian dan Perikanan Vol. 1 No. 1, pp. 37–42.)
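The rule-of-thumb estimator described above can be sketched in a few lines. This is a minimal illustration of SD ≈ range/√n compared against the usual sample standard deviation; the modified Mantel-style constants from the paper are not reproduced here.

```python
import math
import random

def range_rule_sd(sample):
    """Rule-of-thumb estimate: (max - min) / sqrt(n)."""
    n = len(sample)
    return (max(sample) - min(sample)) / math.sqrt(n)

def sample_sd(sample):
    """Ordinary sample standard deviation (n - 1 denominator)."""
    n = len(sample)
    mean = sum(sample) / n
    return math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))

random.seed(1)
data = [random.gauss(0, 1) for _ in range(10)]  # small N(0,1) sample
print(range_rule_sd(data), sample_sd(data))
```

For small samples from N(0,1) the two estimates are typically close, which is the regime the rule of thumb targets.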

Foods ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 169
Author(s):  
Rosa Maria Fanelli

The principal aim of this study is to explore the effects of the first lockdown of the Coronavirus Disease 2019 (COVID-19) pandemic on changes in food consumption and food-related behaviour in a diverse sample of Italian consumers aged ≥18 years. To achieve this aim, the research path starts with an investigation of some of the first few studies conducted on Italian consumers. It then reports the findings of a pilot survey carried out on a small sample of Italian consumers who live in Molise. The studies chosen for investigation were published as articles or research reports. In total, six relevant studies were chosen, each involving a different sized sample of Italian consumers. The average number of respondents is 2142, with a standard deviation of 1260.56. A distinction is made between the results of the articles, the research reports, and the pilot survey. The latter was conducted to develop and validate the components of a new questionnaire and, furthermore, to assess changes in the eating habits of individuals during the COVID-19 pandemic. The results suggest that the effects of the pandemic on consumer behaviour can, above all, be grouped into changes related to shopping for food, eating habits, and food-related behaviour. This article can serve as the basis for future research in this area as it identifies and highlights key changes, in addition to comparing the earliest evidence available, using a critical approach.


2002 ◽  
Vol 738 ◽  
Author(s):  
Christian L. Petersen ◽  
Daniel Worledge ◽  
Peter R. E. Petersen

Abstract: We have investigated the reproducibility of micro- and nano-scale measurements of sheet resistance performed with micro-fabricated multi-point probes. The probes consisted of Au coated SiO2 cantilevers extending from a Si base. The measurements were done with a four-point probe technique on thin Au films, with probe electrode spacings ranging from 18 μm to 1.5 μm. We find that the standard deviation of repeated sheet resistance measurements ranges from 0.2% at 18 μm spacing to 2.6% at 1.5 μm spacing, inversely proportional to the probe electrode spacing. This behaviour is expected if the resolution of the measurements is governed by the positional errors of the probe electrode tips. The corresponding standard deviation of the probe tip positions (in both lateral directions) is calculated to be approximately 20 nm. We argue that these positional errors depend on the probe cantilever amplitude at the time of contacting the surface. The amplitude is inversely proportional to the square root of the cantilever spring constant, indicating that stiff cantilevers give the best reproducibility. We estimate the limiting reproducibility of multi-point probes with nano-scale electrode spacing.
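The 1/spacing scaling quoted above can be sanity-checked numerically: if the scatter is dominated by tip-position jitter, then (relative standard deviation) × (spacing) should be roughly constant and of the order of the positional error. A quick sketch, using only the two data points quoted in the abstract (the exact geometry factor relating this product to the ~20 nm figure is omitted):

```python
# Two (spacing, relative SD) pairs quoted in the abstract.
spacings_um = [18.0, 1.5]   # probe electrode spacings, micrometers
rel_sd = [0.002, 0.026]     # relative SD of repeated sheet-resistance measurements

# If error ~ positional jitter / spacing, this product is ~constant (in nm).
implied_error_nm = [r * s * 1000 for r, s in zip(rel_sd, spacings_um)]
print(implied_error_nm)
```

Both products come out at a few tens of nanometers, consistent with the ~20 nm positional standard deviation up to an order-one geometry factor.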


1989 ◽  
Vol 147 ◽  
Author(s):  
D. L. Dugger ◽  
M. B. Stern ◽  
T. M. Rubico

Abstract: The distributions of Mg+ (a p-type dopant for GaAs) and As+ (an n-type dopant for Si) implanted into both photoresist (PR) and polyimide (PI) have been determined experimentally. Range data for Mg ions at 200 keV and 300 keV and As ions at 150 keV have been measured by Secondary Ion Mass Spectrometry (SIMS). SIMS values for the projected range Rp and its standard deviation ΔRp were compared to range profile data calculated using the Projected Range Algorithm (PRAL) of Biersack [1] as well as standard LSS theory [2]. While the values of Rp calculated from the PRAL model generally agreed within 10% of the SIMS values, the calculations underestimated Rp for PR but were in good agreement for PI. The LSS calculations underestimated Rp in both materials.
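To first order, LSS/PRAL-style range statistics describe the implanted profile as a Gaussian centered at the projected range Rp with width ΔRp. A minimal sketch of that approximation (dose, Rp, and ΔRp values below are illustrative, not the SIMS data from the paper):

```python
import math

def gaussian_profile(x_nm, dose_cm2, rp_nm, delta_rp_nm):
    """First-order Gaussian implant profile.

    Returns concentration (atoms/cm^3) at depth x_nm, for a dose in
    atoms/cm^2 and range parameters Rp, delta-Rp in nm.
    """
    # Normalization so the profile integrates to the dose (nm -> cm).
    norm = dose_cm2 / (math.sqrt(2 * math.pi) * delta_rp_nm * 1e-7)
    return norm * math.exp(-((x_nm - rp_nm) ** 2) / (2 * delta_rp_nm ** 2))

# Peak concentration sits at the projected range Rp.
peak = gaussian_profile(300, 1e14, 300, 80)
print(f"{peak:.3e} atoms/cm^3")
```

Real profiles in polymers can be skewed (higher moments), which is one reason PRAL and LSS predictions diverge from SIMS in PR versus PI.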


2018 ◽  
Vol 36 (6_suppl) ◽  
pp. 563-563
Author(s):  
Kevin George King ◽  
Sumeet Bhanvadia ◽  
Saum Ghodoussipour ◽  
Darryl Hwang ◽  
Bino Varghese ◽  
...  

563 Background: In metastatic nonseminomatous testicular germ cell tumor (NSGCT), post-chemotherapy retroperitoneal lymph node dissection (PC-RPLND) is indicated for residual masses > 1 cm because, of these, 45% will be fibrosis/necrosis, 45% will be teratoma, and 15% will be viable malignancy. There is no imaging test that reliably distinguishes lymph nodes (LNs) with tumor (teratoma or malignancy) from LNs with fibrosis/necrosis. We evaluated whether quantitative CT texture analysis (TA) could make this differentiation. Methods: Pre- and post-chemotherapy CTs (all same phase and slice thickness) were reviewed in 22 NSGCT patients with RP LNs > 1 cm post chemotherapy. After manual segmentation of RP LNs on a 3D workstation, 187 TA metrics were derived, using 2D/3D gray-level co-occurrence matrix (GLCM), 2D/3D gray-level difference matrix (GLDM), and spectral analysis. Metrics were derived 2 ways: from post-chemotherapy CTs alone, and also as a difference between pre- and post-chemotherapy CTs, resulting in 374 metrics. PC-RPLND pathology was correlated with CT data at 88 LN stations in these 22 patients. Results: 15 imaging metrics showed a significant difference (p ≤ 0.05) between LN stations with only fibrosis/necrosis and those with teratoma or viable tumor. Seven were derived from the difference between pre- and post-chemotherapy CTs: 4 using a 2D GLCM (coronal standard deviation, coronal square root of variance, coronal mean, and coronal sum of average), and 3 using a 2D GLDM (axial variance, axial square root of variance, and coronal variance). The other 8 were derived from post-chemotherapy CTs alone: 7 using a 2D GLCM (sagittal square root of variance, sagittal standard deviation, coronal square root of variance, coronal mean, coronal standard deviation, coronal sum of average, and coronal entropy) and 1 using a 2D GLDM (sagittal sum entropy).
Conclusions: CT TA shows promise in differentiating necrosis from teratoma or viable tumor in RP LNs in post-chemotherapy NSGCT. A larger study is needed to further test this method, towards a long-term goal of potentially allowing some patients to avoid PC-RPLND.
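The GLCM metrics named above are statistics of a co-occurrence distribution over pixel-value pairs at a fixed offset. A minimal hand-rolled sketch of a 2D GLCM and the mean/standard-deviation metrics (the full 187-metric pipeline, 3D variants, and GLDM are not reproduced; the random ROI stands in for a segmented LN station):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    p = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            p[img[y, x], img[y + dy, x + dx]] += 1
    return p / p.sum()

rng = np.random.default_rng(0)
roi = rng.integers(0, 8, size=(16, 16))   # stand-in for a quantized CT ROI
p = glcm(roi)

i = np.arange(8)
marg = p.sum(axis=1)                       # marginal over the first pixel
mean = (marg * i).sum()                    # GLCM "mean" metric
std = np.sqrt((marg * (i - mean) ** 2).sum())  # GLCM "standard deviation"
print(mean, std)
```

Libraries such as scikit-image provide optimized GLCM routines with multiple offsets and angles; the loop above is only to make the definition explicit.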


2006 ◽  
Vol 2 (S240) ◽  
pp. 261-263 ◽  
Author(s):  
R. Neuhäuser ◽  
A. Seifahrt ◽  
T. Röll ◽  
A. Bedalov ◽  
M. Mugrauer

Abstract: Many planet candidates have been detected by radial-velocity variations of the primary star; they are planet candidates because of the unknown orbit inclination. Detection of the wobble in the two other dimensions, to be measured by astrometry, would yield the inclination and, hence, the true mass of the companions. We aim to show that planets can be confirmed or discovered in a close visual stellar binary system by measuring the astrometric wobble of the exoplanet host star as a periodic variation of the separation, even from the ground. We test the feasibility with HD 19994, a visual binary with one radial velocity planet candidate. We use the adaptive optics camera NACO at the VLT with its smallest pixel scale (∼13 mas) for high-precision astrometric measurements. The separations measured in 120 single images taken within one night are shown to follow white noise, so that the standard deviation can be divided by the square root of the number of images to obtain the precision. In this paper we present the first results and investigate the achievable precision in relative astrometry with adaptive optics. With careful data reduction it is possible to achieve a relative astrometric precision as low as 50 μas for a 0″.6 binary with VLT/NACO observations in one hour, the best relative astrometric precision ever achieved with a single telescope from the ground. The relative astrometric precision demonstrated here with AO at an 8-m mirror is sufficient to detect the astrometric signal of the planet HD 19994 Ab as a periodic variation of the separation between HD 19994 A and B.
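The averaging step described above is the standard error of the mean: for white noise, the precision of the averaged separation is the single-image scatter divided by √N. A small sketch (the simulated values are illustrative, not the HD 19994 measurements):

```python
import numpy as np

rng = np.random.default_rng(42)
n_images = 120
# Simulated per-image separation measurements: white noise around a
# nominal 600 mas separation with 2 mas single-image scatter.
seps_mas = 600.0 + rng.normal(0.0, 2.0, n_images)

sd_single = seps_mas.std(ddof=1)          # scatter of one exposure
sd_mean = sd_single / np.sqrt(n_images)   # precision of the averaged separation
print(sd_single, sd_mean)
```

This √N gain is only valid while the noise is uncorrelated between images, which is why the abstract verifies the white-noise behaviour before applying it.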


Author(s):  
Zhigang Wei ◽  
Limin Luo ◽  
Fulun Yang ◽  
Robert Rebandt

Fatigue design curve construction is commonly used for durability and reliability assessment of engineering components subjected to cyclic loading. A wide variety of design curve construction methods have been developed over the last decades. Some of these methods have been adopted by engineering codes and are widely used in industry. However, the traditional design curve construction methods usually require significant amounts of test data in order for the constructed design curves to be consistently and reliably used in product design and validation. In order to reduce the test sample size and the associated testing time and cost, several design curve construction methods based on Bayesian statistics have recently been developed by several research groups. Among these methods, an efficient Monte Carlo simulation based resampling method developed by the authors of this paper is of particular importance. That method is based on a large amount of reliable historical fatigue test data, the associated probabilistic distributions of the mean and standard deviation of the failure cycles, and an advanced acceptance-rejection resampling algorithm. However, finite element analysis (FEA) methods and a special stress recovery technique are required to process the test data, which is usually a time-consuming process. A more straightforward approach that does not require these intermediate processes is strongly preferred. This study presents such an approach, in which the only historical information needed is the distribution of the standard deviation of the cycles to failure. The distribution of the mean is calculated directly from the current test data and the Central Limit Theorem. Neither FEA nor a stress recovery technique is required for this approach, and the effort put into design curve construction can be significantly reduced. This method can be used to complement the previously developed Bayesian methods.
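The core idea can be sketched with a small Monte Carlo: draw the standard deviation from a historical distribution, use the Central Limit Theorem to treat the sample mean as N(x̄, s/√n), and take a lower quantile as the design value. All numbers below (the test data, the historical SD distribution, and the "mean minus 2 SD" design rule) are hypothetical stand-ins, not the paper's data or its exact procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

# Current (small) fatigue test: log10(cycles to failure) at one stress level.
log_cycles = np.array([5.1, 5.3, 4.9, 5.2])
n = log_cycles.size
xbar = log_cycles.mean()

# Hypothetical historical distribution of the standard deviation of
# log-cycles -- the only prior information this approach requires.
hist_sd = rng.normal(0.20, 0.03, 100_000).clip(min=1e-6)

# Central Limit Theorem: the sample mean is approximately N(xbar, sd/sqrt(n)).
mean_draws = rng.normal(xbar, hist_sd / np.sqrt(n))

# Illustrative design value: 5th percentile of (mean - 2*SD) over the draws.
design = np.percentile(mean_draws - 2 * hist_sd, 5)
print(design)
```

The point of the sketch is the structure: no FEA or stress recovery is involved, only the current sample and a historical SD distribution.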


Author(s):  
M. Forhad Hossain ◽  
Mian Arif Shams Adnan ◽  
A.H. Joarder

2000 ◽  
Vol 6 (3) ◽  
pp. 364-364 ◽  
Author(s):  
NANCY R. TEMKIN ◽  
ROBERT K. HEATON ◽  
IGOR GRANT ◽  
SUREYYA S. DIKMEN

Hinton-Bayre (2000) raises a point that may occur to many readers who are familiar with the Reliable Change Index (RCI). In our previous paper comparing four models for detecting significant change in neuropsychological performance (Temkin et al., 1999), we used a formula for calculating Sdiff, the measure of variability for the test–retest difference, that differs from the one Hinton-Bayre has seen employed in other studies of the RCI. In fact, there are two ways of calculating Sdiff—a direct method and an approximate method. As stated by Jacobson and Truax (1991, p. 14), the direct method is to compute “the standard error of the difference between the two test scores” or equivalently √(s1² + s2² − 2s1s2rxx′), where si is the standard deviation at time i and rxx′ is the test–retest correlation or reliability coefficient. Jacobson and Truax also provide a formula for the approximation of Sdiff when one does not have access to retest data on the population of interest, but does have a test–retest reliability coefficient and an estimate of the cross-sectional standard deviation, i.e., the standard deviation at a single point in time. This approximation assumes that the standard deviations at Time 1 and Time 2 are equal, which may be close to true in many cases. Since we had the longitudinal data to directly calculate the standard error of the difference between scores at Time 1 and Time 2, we used the direct method. Which method is preferable? When the needed data are available, it is the one we used.
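The two ways of computing Sdiff can be written out directly. A small sketch of the direct formula √(s1² + s2² − 2s1s2rxx′) and the Jacobson–Truax approximation √(2s²(1 − rxx′)), which assumes equal standard deviations at the two time points:

```python
import math

def sdiff_direct(s1, s2, r):
    """Direct Sdiff: SD of the difference between the two test scores."""
    return math.sqrt(s1**2 + s2**2 - 2 * s1 * s2 * r)

def sdiff_approx(s, r):
    """Jacobson & Truax approximation, assuming SD(Time 1) == SD(Time 2)."""
    return math.sqrt(2 * s**2 * (1 - r))

# With equal SDs the two formulas coincide; with unequal SDs they differ.
print(sdiff_direct(10, 10, 0.8), sdiff_approx(10, 0.8))
print(sdiff_direct(10, 12, 0.8), sdiff_approx(10, 0.8))
```

Setting s1 = s2 = s in the direct formula gives √(2s² − 2s²r) = √(2s²(1 − r)), which is exactly the approximation, making the equal-SD assumption explicit.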


2021 ◽  
Vol 11 (24) ◽  
pp. 11632
Author(s):  
En Xie ◽  
Yizhong Ma ◽  
Linhan Ouyang ◽  
Chanseok Park

The conventional sample range is widely used for the construction of an R-chart. In an R-chart, the sample range is used to estimate the standard deviation, and it performs well for small sample sizes. It is well known, however, that the performance of the sample range degrades as the sample size grows. In this paper, we investigate the sample subrange as an alternative to the range; the subrange includes the range as a special case. We show that the subrange improves the performance of estimating the standard deviation, especially in the case of a large sample size. Note that the original sample range is biased, so a correction factor is used to make it unbiased. Likewise, the original subrange is also biased, and in this paper we provide the correction factor for the subrange. To compare sample subranges with different trims against the conventional sample range and the sample standard deviation, we provide the theoretical relative efficiency and its values, which can be used to select the best trim of the subrange in the sense of maximizing the relative efficiency. As a practical guideline, we also provide a simple formula for the best trim amount, obtained by the least-squares method. It is worth noting that the breakdown point of the conventional sample range is always zero, while that of the sample subrange increases proportionally with the trim amount. As an application of the proposed method, we illustrate how to incorporate it into the construction of the R-chart.
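A subrange with trim j is simply the distance between the (j+1)-th smallest and (j+1)-th largest order statistics, so trim = 0 recovers the ordinary range. A minimal sketch (the unbiasing correction factors derived in the paper are not reproduced), which also shows the breakdown-point advantage mentioned above:

```python
import numpy as np

def subrange(sample, trim=0):
    """Sample subrange: x_(n-trim) - x_(trim+1) in order-statistic notation.

    trim=0 gives the ordinary sample range.
    """
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    if 2 * trim >= n:
        raise ValueError("trim too large for this sample size")
    return x[n - 1 - trim] - x[trim]

data = [3.1, 9.7, 4.2, 8.8, 5.0, 6.4, 100.0]   # one gross outlier
print(subrange(data, trim=0))  # ordinary range, dominated by the outlier
print(subrange(data, trim=1))  # one-point trim discards the outlier
```

Trimming one point from each end lets the estimator ignore one gross error per tail, which is exactly the nonzero breakdown point the abstract contrasts with the ordinary range.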

