random samples
Recently Published Documents

TOTAL DOCUMENTS: 954 (FIVE YEARS: 244)
H-INDEX: 42 (FIVE YEARS: 5)

2022 ◽  
Author(s):  
Prabhu Govindasamy ◽  
Sonu Kumar Mahawer ◽  
Jake Mowrer ◽  
Muthukumar Bagavathiannan ◽  
Mahendra Prasad ◽  
...  

Abstract. Purpose: Cost-effective methods for measuring water holding capacity (WHC) are widely used in underdeveloped and developing countries, but their accuracy relative to more sophisticated and expensive alternatives is unclear. Methods: To compare different WHC measurement methods, 30 random samples of clay loam and sandy clay loam soils from Jhansi, India, were used. The methods compared were the FAO in-situ method (FAO), the Keen–Raczkowski box method (KM), the funnel method (FM), the column method (CM) and the pressure plate method (PPA). Results: The WHC measurements from PPA were comparable to those from the KM and FM methods for sandy clay loam, and to the KM and FAO methods for clay loam. Conclusion: Until a reliable method that matches the results of sophisticated analytical methods of soil water measurement is available, different inexpensive analytical methods can be used, but they must be chosen with caution. The findings of this study will facilitate selection of a suitable method.


2022 ◽  
Vol 19 (4) ◽  
Author(s):  
Ebrahim Alinia-Ahandani ◽  
Ali Akbar Malekirad ◽  
Habibollah Nazem ◽  
Mohammad Fazilati ◽  
Hossein Salavati ◽  
...  

Heavy metals cause significant health problems when people are exposed to them: they can cause many disorders and disrupt biochemical pathways in the body. Herbs are known as one of the richest sources of modern patented drugs, particularly in Iranian references. Many metals, particularly heavy metals, are toxic, and various studies have found heavy metal levels above permissible standards in countries such as Iran, Pakistan, Egypt, and Nigeria. A preliminary study was conducted to determine some toxic elements in powdered Ziziphora (Ziziphora persica) collected from local markets in Lahijan city, northern Iran. Twenty random samples were gathered from various markets, and a flame atomic absorption spectrophotometer (FAAS) was used to measure selected toxic elements: copper (Cu), cadmium (Cd), lead (Pb), zinc (Zn), and mercury (Hg). The results showed Pb, Cd, and Hg levels above the standards, while Cu and Zn were found to be below the standard limits.


2021 ◽  
Vol 4 (2) ◽  
pp. 122-132
Author(s):  
Joachim I. Krueger

Historiographic analysis is underused in academic psychology. In this expository essay, I intend to show that historical events or persons can be described with reference to theory and research provided by empirical psychology. Besides providing evidence-based grounds for a more penetrating historical account, the conclusions drawn from a historiographic analysis may feed back into psychological theory by generating new testable hypotheses. Whereas standard empirical research focuses on statistical associations among quantitative variables obtained in random samples, historiographic analysis is most informative when applied to extreme cases, that is, when probing the limits of what is possible. This essay focuses on the story of Gonzalo Guerrero to explore the psychological processes involved in identity transformation.


2021 ◽  
Vol 12 (3) ◽  
pp. 4-16
Author(s):  
N. M. Bulanov ◽  
A. Yu. Suvorov ◽  
O. B. Blyuss ◽  
D. B. Munblit ◽  
D. V. Butnaru ◽  
...  

Descriptive statistics provides tools to explore, summarize and illustrate research data. In this tutorial we discuss the two main types of data, qualitative and quantitative variables, and the most common approaches to characterizing a data distribution numerically and graphically. This article presents two important sets of parameters, measures of central tendency (mean, median and mode) and measures of variation (standard deviation, quantiles), and suggests the conditions under which each is most suitable. We explain the difference between the general population and the random samples that are usually analyzed in studies. The parameters that characterize the sample (for example, measures of central tendency) are point estimates, which can differ from the corresponding parameters of the general population. We introduce the concept of the confidence interval: the range of values that likely includes the true value of the parameter for the general population. All concepts and definitions are illustrated with examples that simulate research data.
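The measures described here can be computed with Python's standard library alone. A minimal sketch on illustrative data (the sample values and the normal-approximation 95% interval are assumptions for demonstration, not taken from the article):

```python
import math
import statistics

# Simulated research data, e.g. systolic blood pressure (mmHg) in a random sample
sample = [118, 125, 131, 122, 140, 128, 119, 135, 127, 124]

# Measures of central tendency
mean = statistics.mean(sample)
median = statistics.median(sample)

# Measures of variation
sd = statistics.stdev(sample)                    # sample standard deviation
q1, q2, q3 = statistics.quantiles(sample, n=4)   # quartiles

# Normal-approximation 95% confidence interval for the population mean
se = sd / math.sqrt(len(sample))
ci = (mean - 1.96 * se, mean + 1.96 * se)

print(f"mean={mean:.1f}, median={median:.1f}, sd={sd:.1f}")
print(f"quartiles: {q1:.1f}, {q2:.1f}, {q3:.1f}")
print(f"95% CI for the mean: ({ci[0]:.1f}, {ci[1]:.1f})")
```

The point estimate (the sample mean) sits inside the interval; a wider interval signals greater uncertainty about the corresponding population parameter.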


Author(s):  
Alexander Tucci ◽  
Elena Plante ◽  
John J. Heilmann ◽  
Jon F. Miller

Purpose: This exploratory study sought to establish the psychometric stability of a dynamic norming system using the Systematic Analysis of Language Transcripts (SALT) databases. Dynamic norming is the process by which clinicians select a subset of the normative database sample matched to their individual client's demographic characteristics. Method: The English Conversation and Student-Selected Story (SSS) Narrative databases from SALT were used to conduct the analyses in two phases. Phase 1 was an exploratory examination of the standard error of measure (SEM) of six clinically relevant transcript metrics at predetermined sampling intervals, to determine (a) whether the dynamic norming process resulted in samples with adequate stability and (b) the minimum sample size required for stable results. Phase 2 was confirmatory: random samples were taken from the SALT databases to simulate clinical comparison samples, and these samples were examined (a) for stability of the SEM estimations and (b) to confirm the sample-size findings from Phase 1. Results: The SEMs for the six transcript metrics across both databases were low relative to each metric's scale. Samples as small as 40–50 children in the Conversation database and 20–30 children in the SSS Narrative database yielded stable SEM estimations. Phase 2 confirmed these findings, indicating that age bands as narrow as ±4 months from a given center-point produced stable estimations provided there were approximately 35 or more children in the comparison sample. Conclusion: SALT's dynamic norming system can produce psychometrically stable comparison samples that are much smaller than the standard sample size recommended in most tests of children's language.
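The Phase 2 idea can be illustrated in miniature: draw many random comparison samples of each size from a normative database and check how much the SEM estimate itself varies from sample to sample. The sketch below is entirely hypothetical, using simulated scores, an invented metric, an assumed reliability, and the classical psychometric formula SEM = SD × √(1 − reliability); it is not SALT data or the study's exact procedure:

```python
import random
import statistics

random.seed(42)

# Hypothetical normative database: 500 values of a language-sample metric
# (e.g. mean length of utterance); the distribution is illustrative only.
database = [random.gauss(4.0, 0.8) for _ in range(500)]
RELIABILITY = 0.90  # assumed reliability of the metric (hypothetical)

def sem(scores, reliability=RELIABILITY):
    """Psychometric standard error of measure: SD * sqrt(1 - reliability)."""
    return statistics.stdev(scores) * (1 - reliability) ** 0.5

# Draw 200 random comparison samples at each size and summarise how much
# the SEM estimate varies across samples of that size.
results = {}
for n in (20, 40, 80, 160):
    estimates = [sem(random.sample(database, n)) for _ in range(200)]
    results[n] = (statistics.mean(estimates), statistics.stdev(estimates))
    print(f"n={n:3d}: mean SEM={results[n][0]:.3f}, "
          f"sampling spread={results[n][1]:.3f}")
```

As the comparison sample grows, the SEM estimate stabilises (its sample-to-sample spread shrinks), which is the kind of stability criterion the study applies to the real databases.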


Author(s):  
Louis J. M. Aslett

Abstract: Models constructed to represent the uncertainty arising in engineered systems can be quite complex, to ensure they provide a reasonably faithful reflection of the real-world system. As a result, even the computation of simple expectations, event probabilities, variances, or integrals over utilities for a decision problem can be analytically intractable. Indeed, such models are often sufficiently high dimensional that even traditional numerical methods perform poorly. However, access to random samples drawn from the probability model under study typically simplifies such problems substantially. The methodologies for generating and using such samples fall under the stable of techniques usually referred to as 'Monte Carlo methods'. This chapter provides motivation, a simple primer introduction to the basics, and signposts to further reading and literature on Monte Carlo methods, in a manner that should be accessible to those with an engineering mathematics background. The mathematical presentation is deliberately informal and avoids measure-theoretic formalism. The accompanying lecture can be viewed at https://www.louisaslett.com/Courses/UTOPIAE/.
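A minimal instance of the basic Monte Carlo recipe described here: estimate an event probability that would be awkward to obtain analytically by simulating the model many times and averaging. The engineering model and all its parameters below are illustrative assumptions, not taken from the chapter:

```python
import math
import random

random.seed(0)

# Illustrative reliability model: the system fails if the sum of two
# lognormally distributed component loads exceeds a fixed capacity.
CAPACITY = 10.0
N = 100_000  # number of Monte Carlo replications

def simulate_load():
    # Two independent lognormal loads (parameters chosen for illustration)
    return math.exp(random.gauss(0.5, 0.6)) + math.exp(random.gauss(0.8, 0.4))

hits = sum(simulate_load() > CAPACITY for _ in range(N))
p_hat = hits / N

# Monte Carlo standard error for a probability estimate
se = math.sqrt(p_hat * (1 - p_hat) / N)
print(f"P(failure) ≈ {p_hat:.4f} ± {1.96 * se:.4f} (95% CI)")
```

The estimate's standard error shrinks as 1/√N, so accuracy improves with more samples regardless of the model's dimensionality, which is exactly the property that makes Monte Carlo attractive for the high-dimensional models discussed above.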


Author(s):  
Satish Konda ◽  
Mehra, K.L. ◽  
Ramakrishnaiah Y.S.

The problem considered in the present paper is the estimation of the mixing proportion in a mixture of two known distributions using the minimum weighted square distance (MWSD) method. Two classes of smoothed and unsmoothed parametric estimators of the mixing proportion are proposed, in the MWSD sense due to Wolfowitz (1953), in the mixture model F(x) = p F1(x) + (1 − p) F2(x), based on three independent and identically distributed random samples of sizes n and n_i, i = 1, 2, from the mixture and the two component populations, respectively. Comparisons are made on the basis of the derived mean square errors (MSE). The superiority of the smoothed estimator over the unsmoothed one is established theoretically, and also through a Monte Carlo study, under the minimum mean square error criterion. Large-sample properties, such as rates of almost-sure convergence and asymptotic normality of these estimators, are also established. The results established here are completely new in the literature.
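The minimum-distance idea can be illustrated with a small simulation: because the mixture CDF is linear in p, minimising a weighted squared distance between the empirical mixture CDF and p·F̂1 + (1 − p)·F̂2 has a closed-form solution. The sketch below uses uniform weights, normal components, and empirical (unsmoothed) CDFs, all of which are illustrative assumptions rather than the paper's exact set-up:

```python
import bisect
import random

random.seed(1)

def ecdf(sample):
    """Return the empirical CDF of a sample as a callable."""
    s = sorted(sample)
    return lambda x: bisect.bisect_right(s, x) / len(s)

# Three independent samples: one from the mixture, one from each component.
TRUE_P = 0.3
mixture = [random.gauss(0, 1) if random.random() < TRUE_P else random.gauss(3, 1)
           for _ in range(1000)]
comp1 = [random.gauss(0, 1) for _ in range(1000)]
comp2 = [random.gauss(3, 1) for _ in range(1000)]

Fn, F1, F2 = ecdf(mixture), ecdf(comp1), ecdf(comp2)

# Minimise  sum_j w_j * (Fn(x_j) - [p*F1(x_j) + (1-p)*F2(x_j)])**2  over p.
# With uniform weights this is ordinary least squares in p, so the
# minimiser is sum(a*b)/sum(b*b) with a = Fn - F2 and b = F1 - F2.
grid = [i * 0.1 for i in range(-20, 51)]   # evaluation points x_j
a = [Fn(x) - F2(x) for x in grid]
b = [F1(x) - F2(x) for x in grid]
p_hat = sum(ai * bi for ai, bi in zip(a, b)) / sum(bi * bi for bi in b)
p_hat = min(max(p_hat, 0.0), 1.0)          # keep the estimate inside [0, 1]
print(f"estimated mixing proportion: {p_hat:.3f} (true value {TRUE_P})")
```

Replacing the raw empirical CDFs with kernel-smoothed versions gives the smoothed variant whose lower mean square error the paper establishes.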


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Federico Barravecchia ◽  
Luca Mastrogiacomo ◽  
Fiorenzo Franceschini

Purpose: Digital voice-of-customer (digital VoC) analysis is gaining much attention in the field of quality management. Digital VoC can be a great source of knowledge about customer needs, habits and expectations. To this end, the most popular approach is based on the application of text mining algorithms known as topic modelling. These algorithms can identify latent topics discussed within digital VoC and categorise each source (e.g. each review) based on its content. This paper proposes a structured procedure for validating the results produced by topic modelling algorithms. Design/methodology/approach: The proposed procedure compares, on random samples, the results produced by topic modelling algorithms with those generated by human evaluators. Specific metrics allow a comparison between the two approaches and provide a preliminary empirical validation. Findings: The proposed procedure can guide users of topic modelling algorithms in validating the obtained results. An application case study related to car-sharing services supports the description. Originality/value: Despite the vast success of topic modelling-based approaches, metrics and procedures to validate the obtained results are still lacking. This paper provides a first practical and structured validation procedure specifically intended for quality-related applications.
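One common way to score agreement between an algorithm's topic assignments and human labels on a random sample of reviews is Cohen's kappa, which corrects raw agreement for chance. The sketch below uses hypothetical labels and is only one plausible metric; the paper's specific metrics are not reproduced here:

```python
from collections import Counter

# Hypothetical topic labels for a random sample of 12 car-sharing reviews:
# the topic assigned by a topic-modelling algorithm vs. a human evaluator.
algorithm = ["price", "comfort", "booking", "price", "comfort", "booking",
             "price", "comfort", "price", "booking", "comfort", "price"]
human     = ["price", "comfort", "booking", "price", "booking", "booking",
             "price", "comfort", "price", "booking", "comfort", "comfort"]

def cohens_kappa(a, b):
    """Chance-corrected agreement between two labellings of the same items."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in ca) / n ** 2
    return (observed - expected) / (1 - expected)

raw = sum(x == y for x, y in zip(algorithm, human)) / len(human)
print(f"raw agreement: {raw:.2f}")
print(f"Cohen's kappa: {cohens_kappa(algorithm, human):.2f}")
```

A kappa near 1 indicates the algorithm's categorisation closely matches the human evaluators'; values near 0 mean the agreement is no better than chance, flagging topics that need revision.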

