Comparing Baseline and Intervention Phases

2021 ◽  
pp. 57-89
Author(s):  
Charles Auerbach

In this chapter readers will learn about methodological issues to consider in analyzing the success of an intervention and how to conduct visual analysis. The chapter begins with a discussion of descriptive statistics that can aid the visual analysis of findings by summarizing patterns of data across phases. An example data set is used to illustrate the use of specific graphs, including box plots, standard deviation band graphs, and line charts showing the mean, median, and trimmed mean, that can be used to compare any two phases. SSD for R provides three standard methods for computing effect size, which are discussed in detail. Additionally, four methods of evaluating effect size using non-overlap methods are examined. The use of the goal line is discussed. The chapter concludes with a discussion of autocorrelation in the intervention phase and strategies for dealing with it.
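The chapter's effect-size computations are done in SSD for R, whose exact formulas are not reproduced in this abstract. As an illustration only, one standard way to compare two phases, the standardized mean difference (Cohen's d with a pooled standard deviation), can be sketched in Python; whether SSD for R uses this pooled-SD variant or another denominator is an assumption here, and the phase data are invented:

```python
import statistics

def cohens_d(baseline, intervention):
    """Standardized mean difference between two phases, using a pooled SD."""
    n1, n2 = len(baseline), len(intervention)
    s1, s2 = statistics.stdev(baseline), statistics.stdev(intervention)
    # Pool the two phase variances, weighted by degrees of freedom
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(intervention) - statistics.mean(baseline)) / pooled_sd

baseline = [4, 5, 6, 5, 4, 6]      # hypothetical baseline-phase scores
intervention = [7, 8, 7, 9, 8, 8]  # hypothetical intervention-phase scores
print(round(cohens_d(baseline, intervention), 2))  # → 3.43
```

The package documentation describes its three effect-size methods in detail; this sketch shows only the general shape of such a computation.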

2020 ◽  
pp. 393-421
Author(s):  
Sandra Halperin ◽  
Oliver Heath

This chapter deals with quantitative analysis, and especially description and inference. It introduces the reader to the principles of quantitative research and offers a step-by-step guide on how to use and interpret a range of commonly used techniques. The first part of the chapter considers the building blocks of quantitative analysis, with particular emphasis on different ways of summarizing data, both graphically and with tables, and ways of describing the distribution of one variable using univariate statistics. Two important measures are discussed: the mean and the standard deviation. After elaborating on descriptive statistics, the chapter explores inferential statistics and explains how to make generalizations. It also presents the concept of confidence intervals and the associated margin of error, along with measures of central tendency.
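The two key measures and the margin of error discussed in the chapter can be illustrated with a short sketch; the sample values below are invented:

```python
import math
import statistics

scores = [62, 58, 71, 64, 66, 59, 70, 63, 68, 61]  # invented survey scores

mean = statistics.mean(scores)   # measure of central tendency
sd = statistics.stdev(scores)    # sample standard deviation
# 95% confidence interval via the normal approximation; for a sample this
# small, a t-distribution multiplier would give a slightly wider interval.
margin = 1.96 * sd / math.sqrt(len(scores))
print(f"mean = {mean:.1f}, sd = {sd:.2f}, "
      f"95% CI = {mean - margin:.1f} to {mean + margin:.1f}")
```

The margin of error here is half the width of the confidence interval, which is why the two terms are often used together.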


2015 ◽  
Vol 8 (4) ◽  
pp. 1799-1818 ◽  
Author(s):  
R. A. Scheepmaker ◽  
C. Frankenberg ◽  
N. M. Deutscher ◽  
M. Schneider ◽  
S. Barthlott ◽  
...  

Abstract. Measurements of the atmospheric HDO/H2O ratio help us to better understand the hydrological cycle and improve models to correctly simulate tropospheric humidity and therefore climate change. We present an updated version of the column-averaged HDO/H2O ratio data set from the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY). The data set is extended with 2 additional years, now covering 2003–2007, and is validated against co-located ground-based total column δD measurements from Fourier transform spectrometers (FTS) of the Total Carbon Column Observing Network (TCCON) and the Network for the Detection of Atmospheric Composition Change (NDACC, produced within the framework of the MUSICA project). Even though the time overlap among the available data is not yet ideal, we determined a mean negative bias in SCIAMACHY δD of −35 ± 30‰ compared to TCCON and −69 ± 15‰ compared to MUSICA (the uncertainty indicating the station-to-station standard deviation). The bias shows a latitudinal dependency, being largest (∼ −60 to −80‰) at the highest latitudes and smallest (∼ −20 to −30‰) at the lowest latitudes. We have tested the impact of an offset correction to the SCIAMACHY HDO and H2O columns. This correction leads to a humidity- and latitude-dependent shift in δD and an improvement of the bias by 27‰, although it does not lead to an improved correlation with the FTS measurements nor to a strong reduction of the latitudinal dependency of the bias. The correction might be an improvement for dry, high-altitude areas, such as the Tibetan Plateau and the Andes region. For these areas, however, validation is currently impossible due to a lack of ground stations. The mean standard deviation of single-sounding SCIAMACHY–FTS differences is ∼ 115‰, which is reduced by a factor ∼ 2 when we consider monthly means. 
When we relax the strict matching of individual measurements and focus on the mean seasonalities using all available FTS data, we find that the correlation coefficients between SCIAMACHY and the FTS networks improve from 0.2 to 0.7–0.8. Certain ground stations show a clear asymmetry in δD during the transition from the dry to the wet season and back, which is also detected by SCIAMACHY. This asymmetry points to a transition in the source region temperature or location of the water vapour and shows the added information that HDO/H2O measurements provide when used in combination with variations in humidity.


2018 ◽  
Vol 97 (7) ◽  
pp. 208-212 ◽  
Author(s):  
T. Edward Imbery ◽  
Brian D. Nicholas ◽  
Parul Goyal

The study objective was to analyze Medicare payment data for otologists compared with otolaryngologists, using the publicly released Centers for Medicare and Medicaid Services dataset. Charges, payments, and common Current Procedural Terminology codes were obtained. Otology providers were selected from the roster of the American Otological Society. Descriptive statistics and unequal variance two-tailed t tests were used for comparisons between otologists (n = 147) and otolaryngologists (n = 8,318). The mean overall submitted charge was $204,851 per otology provider and $211,209 per other otolaryngology provider (non-otologist) (p = 0.92). The mean payment to otologists was $56,191 (range: $297 to $555,274, standard deviation [SD] ±$68,540), significantly lower (p = 0.005) than the $77,275 paid to otolaryngologists (range: $94 to $2,123,900, SD ±$86,423). The mean submitted charge-to-payment ratio (fee multiplier) per otology provider was 3.87 (range: 1.50 to 9.10, SD ±1.70), significantly higher (p < 0.0001) than the ratio for otolaryngologists (mean 2.91; range: 1.25 to 17.51, SD ±1.22). Office visit evaluation and management (E&M) codes made up the majority of both use and payments. Interestingly, allergy-based services accounted for a substantial amount of repeat use among a small subset of otologists. Audiology services were billed by a similar percentage of otologists and other otolaryngologists (52%), but otologists received significantly higher overall payments for these services.
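The unequal variance (Welch's) two-tailed t test used for these comparisons can be sketched as follows. The t statistic and the Welch–Satterthwaite degrees of freedom are computed directly; the p-value would normally come from a t-distribution table or a statistics library. The payment figures below are invented for illustration, not the study's data:

```python
import statistics

def welch_t(a, b):
    """Welch's unequal-variance t statistic and approximate degrees of freedom."""
    v1, v2 = statistics.variance(a), statistics.variance(b)
    n1, n2 = len(a), len(b)
    se2 = v1 / n1 + v2 / n2  # squared standard error of the mean difference
    t = (statistics.mean(a) - statistics.mean(b)) / se2**0.5
    # Welch–Satterthwaite approximation for the degrees of freedom
    df = se2**2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

group_a = [56, 60, 48, 70, 52]      # hypothetical payments (thousands)
group_b = [77, 90, 65, 84, 70, 80]  # hypothetical payments (thousands)
t, df = welch_t(group_a, group_b)
```

Unlike the classic Student t test, this version does not assume equal variances in the two groups, which fits the very different SDs reported above.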


Author(s):  
Mark J. DeBonis

One classic example of a binary classifier is one which employs the mean and standard deviation of the data set as a mechanism for classification. Indeed, principal component analysis has played a major role in this effort. In this paper, we propose that one should also include skew in order to make this method of classification somewhat more precise. What is needed is a simple probability distribution function that can be easily fit to a data set; we use this pdf to create a classifier with improved error rates, comparable to those of other classifiers.
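The abstract does not specify the pdf the paper uses, so the following sketch should not be read as the authors' method. It illustrates the general idea with a Gram–Charlier expansion, one simple way to add a skew term to a normal density; the per-class parameters would be fit from training data:

```python
import math
import statistics

def skewness(xs):
    """Sample skewness (Fisher–Pearson moment coefficient)."""
    m, s = statistics.mean(xs), statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s**3)

def gc_density(x, mean, sd, skew):
    """Normal pdf with a first-order Gram–Charlier skew correction.
    (The corrected density can go slightly negative in the far tails;
    this is a sketch, not a production classifier.)"""
    z = (x - mean) / sd
    phi = math.exp(-z * z / 2) / (sd * math.sqrt(2 * math.pi))
    return phi * (1 + skew / 6 * (z**3 - 3 * z))

def classify(x, params_a, params_b):
    """Assign x to whichever class gives the larger skew-corrected density."""
    return "A" if gc_density(x, *params_a) >= gc_density(x, *params_b) else "B"
```

With skew set to zero this reduces to the classic mean-and-standard-deviation Gaussian classifier the paper takes as its starting point.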


2018 ◽  
Author(s):  
Khairatul Aini ◽  
Halim Tamuri ◽  
Syafrimen Syafril

The aim of this study is to examine the level of computer utilization among Islamic education teachers during teaching and learning at SMAN Bandar Padang. An explanatory mixed-methods design was used. The study was conducted in two phases: the first phase used a quantitative approach and the second phase used a qualitative approach. There were 64 Islamic education teachers involved in the study. The first data set was processed and analyzed using SPSS for Windows; the second was processed and analyzed using NVivo 7. The first phase answered three research questions and found that: the skill level in using computers was low, with a total mean score of 2.32 and a standard deviation of 0.22; the attitude of the Islamic education teachers toward computer utilization in teaching and learning was very enthusiastic and positive, with a total mean score of 3.95 and a standard deviation of 0.36; and the problems faced by the teachers in using computers during the teaching and learning process were at an average level, with a mean score of 3.07 and a standard deviation of 0.35. The second phase showed that the problems faced by the teachers in using computers were: a lack of skill in using computer applications, inadequate computer facilities, the school's lack of adequate computer-based software for Islamic education courses, the difficulty of providing such software, and the teachers' work overload at school.


2021 ◽  
Vol 6 (2) ◽  
pp. 325-333
Author(s):  
Toni Indrayadi

This study examines each reading comprehension indicator as well as reading comprehension in general. Forty-five first-semester students of the English Education Department at Public Islamic Institute Jambi, Indonesia, comprising twenty-one males and twenty-four females, voluntarily participated in this study. They were purposively selected as the research sample. The study used data on six indicators of reading comprehension and on reading comprehension in general, employing a descriptive quantitative approach as its methodology. The descriptive statistics module of SPSS 23 was applied to analyze the mean and standard deviation of the six reading comprehension indicators and of reading comprehension in general, and criterion-referenced and norm-referenced interpretations were used to investigate their intervals. The results of the descriptive statistics and the criterion-referenced and norm-referenced interpretation analyses showed that five of the reading comprehension indicators, including author's purpose, topic details, reference, and vocabulary in context, were in the moderate criteria, while the main idea indicator was in the high criteria. Reading comprehension in general was in the moderate criteria, similar to those five indicators. English lecturers are therefore encouraged to select an appropriate strategy and to be well prepared to teach reading.


2021 ◽  
Vol 6 (38) ◽  
pp. 209-222
Author(s):  
Nan Poh Lai ◽  
Mei Kin Tai ◽  
Abdull Kareem Omar

The study aims to identify the level of Teacher Attitudes Toward Change (TATC) among teachers in National Schools (NS) in Peninsular Malaysia. In particular, it aims to identify TATC levels along the Cognitive, Affective, and Behavioral dimensions. The study applied a quantitative method using a survey approach; a total of 384 NS teachers were involved. The study employed descriptive statistics, obtaining mean scores and standard deviations. The findings revealed that the overall TATC level was located in the quadrant of Acceptance (Mean=4.38, SD=.57). In addition, the Cognitive (Mean=4.53, SD=.71), Affective (Mean=4.18, SD=.68), and Behavioral (Mean=4.43, SD=.59) dimensions of TATC were also located in the quadrant of Acceptance. The study contributes to teachers' understanding of the importance of their attitudes toward change in the school. The findings also provide information to the related parties for planning appropriate training programmes to enhance TATC in schools.


2006 ◽  
Vol 6 (3) ◽  
pp. 831-846 ◽  
Author(s):  
X. Calbet ◽  
P. Schlüssel

Abstract. The Empirical Orthogonal Function (EOF) retrieval technique consists of calculating the eigenvectors of the spectra and then performing a linear regression between these and the atmospheric states; this first step is known as training. At a later stage, known as performing the retrievals, atmospheric profiles are derived from measured atmospheric radiances. When EOF retrievals are trained with a statistically different data set than the one used for retrievals, two basic problems arise: significant biases appear in the retrievals, and differences between the covariances of the training data set and the measured data set degrade them. The retrieved profiles will show a bias with respect to the real profiles, which comes from the combined effect of the mean difference between the training and the real spectra, projected into the atmospheric state space, and the mean difference between the training and the atmospheric profiles. The standard deviations of the difference between the retrieved profiles and the real ones show different behavior depending on whether the covariance of the training spectra is bigger than, equal to, or smaller than the covariance of the measured spectra with which the retrievals are performed. The procedure to correct for these effects is shown both analytically and with a measured example. It consists of first calculating the average and standard deviation of the difference between the real observed spectra and the calculated spectra obtained from the real atmospheric state and the radiative transfer model used to create the training spectra. In a later step, the measured spectra must be bias-corrected with this average before performing the retrievals, and the linear regression of the training must be performed with noise added to the spectra, corresponding to the aforementioned calculated standard deviation. This procedure is optimal in the sense that, to improve the retrievals further, one must resort to a different training data set or a different algorithm.
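The correction procedure described in the abstract can be sketched schematically. This is a deliberately simplified version (a single scalar bias and noise level rather than per-channel statistics), not the authors' implementation:

```python
import random
import statistics

def bias_and_noise(observed, calculated):
    """Mean and SD of the observed-minus-calculated spectral differences."""
    diffs = [o - c for o, c in zip(observed, calculated)]
    return statistics.mean(diffs), statistics.stdev(diffs)

def bias_correct(spectrum, bias):
    """Subtract the mean bias from a measured spectrum before retrieval."""
    return [s - bias for s in spectrum]

def add_training_noise(spectrum, sigma, rng):
    """Perturb training spectra with Gaussian noise of the estimated SD,
    so the training covariance better matches the measured covariance."""
    return [s + rng.gauss(0.0, sigma) for s in spectrum]
```

The two steps mirror the text: the bias estimate corrects the mean difference, while the injected noise addresses the covariance mismatch between training and measured spectra.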


2011 ◽  
Vol 197-198 ◽  
pp. 1626-1630
Author(s):  
Chih Chung Ni

Three sets of fatigue crack growth data, tested under different constant-amplitude loads for CT specimens made of 2024-T351 aluminum alloy, are presented, and the analysis in this study emphasizes the correlation between the statistics of these scattered fatigue data and their applied loads. Investigating the scatter of initiation cycle and specimen life, it was found that both the mean and standard deviation of the initiation cycle, as well as the mean and standard deviation of specimen life, decrease as the applied stress amplitude increases. Moreover, a negative linear correlation was found between the median values of initiation cycle and applied stress amplitudes on a linear scale, and between the median values of specimen life and applied stress amplitudes on a logarithmic scale, where the initiation cycle and specimen life are best depicted by normal distributions for all three data sets. Finally, the mean of the intercepts and the mean of the exponents of the Paris-Erdogan law for each data set were studied; it was found that the mean of the intercepts decreases greatly as the applied stress amplitude increases, while the mean of the exponents decreases slightly.


Author(s):  
Eddy Alecia Love Lavalais ◽  
Tayler Jackson ◽  
Purity Kagure ◽  
Myra Michelle DeBose ◽  
Annette McClinton

Background: Identifying nurse burnout is of significance, as it directly impacts work ethic, patient satisfaction, safety, and best practice. Nurses are especially susceptible to fatigue and burnout because they work in highly stressful environments and care for people in their most vulnerable state. It is imperative to pinpoint and alleviate the factors that can lead to nurse burnout. Research Hypothesis: Educating nurses to recognize the factors influencing nurse burnout and offering effective interventions to combat stress will lead to better coping and adaptation skills, thereby decreasing the level of nurse fatigue and burnout. Assisting nurses to be cognizant of the symptoms of stress and nurse burnout will lead to the development of positive adaptive mechanisms, whereas nurses without this recognition tend to develop maladaptive psychological responses. Research Methodology: This quality improvement project gathered data on factors influencing burnout via the Maslach Burnout Inventory (MBI). The MBI is the most commonly used instrument for measuring burnout, capturing three subscales: emotional exhaustion (EE), depersonalization (DP), and personal accomplishment (PA). Results: The MBI survey was administered via SurveyMonkey to a sample of 31 employed graduate nursing students. The gathered data (n=31) were summarized with descriptive statistics, with standard deviations representing the extent of deviation within the sample. The project revealed a low SD of 0.3 for emotional exhaustion, indicating that data points lie close to the mean (expected value) of the emotional exhaustion data set. The depersonalization data, although spread across a range of values, also yielded a low SD of 0.42 from the mean. Higher scores on the Maslach Burnout Inventory's personal accomplishment subscale suggest increased levels of personal accomplishment. Thus, the data set revealed lower levels of depersonalization for this sample. Moreover, the Pearson correlation coefficient (Pearson r) identified a positive correlation between the independent variable of stress levels and the factors influencing nurse burnout, combined with teaching on ways to combat stress in the workplace; ninety-eight percent (98%) of participants reported this teaching to be effective. Significance: This study maintains that limited emotional exhaustion, a strong sense of identity, and achieving personal accomplishments minimize burnout.

