Clinical Pharmacy Activities Documented (ClinPhADoc): Development, Reliability and Acceptability of a Documentation Tool for Community Pharmacists

Pharmacy ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. 162
Author(s):  
Nour Hamada ◽  
Patricia Quintana Bárcena ◽  
Karen Alexandra Maes ◽  
Olivier Bugnon ◽  
Jérôme Berger

Documentation of community pharmacists’ clinical activities, such as the identification and management of drug-related problems (DRPs), is recommended. However, documentation is not systematic in Swiss community pharmacies, and relevant information about DRPs, such as their consequences or the partners involved, is frequently missing. This study aimed to evaluate the interrater and test-retest reliability, appropriateness, and acceptability of the Clinical Pharmacy Activities Documented (ClinPhADoc) tool. Ten community pharmacists participated in the study. Interrater reliability coefficients were computed using 24 standardized cases. One month later, test-retest reliability was assessed using 10 standardized cases. To assess appropriateness, pharmacists were asked to document clinical activities in their own practice using ClinPhADoc. Acceptability was assessed with an online satisfaction survey. Kappa coefficients showing a moderate level of agreement (>0.40) were observed for both interrater and test-retest reliability. Pharmacists were able to document 131 clinical activities. The good level of acceptability and the brief documentation time (fewer than seven minutes) indicate that ClinPhADoc is well suited to the community pharmacy setting. To optimize the tool, pharmacists proposed developing an electronic version. These results support the reliability and acceptability of the ClinPhADoc tool.
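For readers who want to reproduce this kind of agreement analysis, the sketch below computes Cohen's kappa for two raters with scikit-learn; the raters and DRP category labels are hypothetical stand-ins, not the study's actual case ratings.

```python
# Minimal sketch of an interrater kappa computation, assuming two raters
# assigning hypothetical DRP categories to the same standardized cases
# (illustrative data only, not the study's ratings).
from sklearn.metrics import cohen_kappa_score

rater_a = ["dose", "interaction", "adherence", "dose", "interaction",
           "adherence", "dose", "dose", "interaction", "adherence"]
rater_b = ["dose", "interaction", "dose", "dose", "interaction",
           "adherence", "adherence", "dose", "interaction", "adherence"]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values > 0.40 are commonly read as moderate
```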

2021 ◽  
pp. 179-180
Author(s):  
Levy A. de-Oliveira ◽  
Matheus V. Matos ◽  
Iohanna G. S. Fernandes ◽  
Diêgo A. Nascimento ◽  
Marzo E. da Silva-Grigoletto

A new technology (BlazePod™) that measures response time (RT) is currently on the market and has been used by strength and conditioning professionals. Nevertheless, before a new device is used to measure any outcome in a research or clinical setting, the reliability of its measurement must be established (Koo and Li, 2016). Hence, we assessed the test-retest reliability (repeatability) of the BlazePod™ (Play Coyotta Ltd., Tel Aviv, Israel) technology during a pre-defined activity to provide information about the level of agreement and the magnitude of errors incurred when using the technology. This information can assist practitioners and researchers in the use of BlazePod™ technology.

We recruited 24 physically active young adults (age = 23.9 ± 4.0 years; height = 1.67 ± 0.09 m; body mass = 68.2 ± 13.1 kg) who were free of injuries and of any orthopedic or cardiorespiratory diseases. Participants reported to the laboratory on two occasions, separated by one week, and performed a familiarization session with the instrument one week before the first session. During the first session, the one-leg balance activity (OLBA) was performed; this activity was chosen at random from the BlazePod™ pre-defined activities. All sessions took place in a physiology laboratory at the same time of day for each participant and under similar environmental conditions (~23 °C; ~60% humidity). The OLBA consisted of a unipedal balance activity performed with four pods arranged in a square on the floor. Participants stood in the center of the square and aimed to tap out as many lights as possible with the dominant foot during 30 seconds. The pods lit up in a random order known to neither the participants nor the researchers. The distance between the pods was set to each participant's lower limb length. Three trials were performed, with a one-minute rest between trials, and the best value was recorded. The total number of taps and the average RT of all taps in the OLBA were recorded for further analysis.

Data are presented as mean ± SD or 95% confidence interval (CI). We confirmed the normal distribution of the data using the Shapiro-Wilk test. A paired t-test and Cohen’s d effect size (ES) with its 95% CI were calculated to assess the magnitude of the mean difference between sessions. The ES was interpreted as trivial (<0.20), small (0.20-0.59), moderate (0.60-1.19), large (1.20-2.00), or very large (>2.00) (Hopkins et al., 2009). The intraclass correlation coefficient (ICC) and its 95% CI were used to assess reliability, based on a single-measurement, absolute-agreement, two-way mixed-effects model. ICC values were interpreted as poor (<0.5), moderate (0.5-0.75), good (0.75-0.9), or excellent (>0.9) reliability (Koo and Li, 2016). We also calculated the standard error of measurement (SEM), the coefficient of variation (CV), the smallest detectable change (SDC), the level of agreement between sessions via a Bland-Altman plot, the systematic bias, and its 95% limits of agreement (LoA = bias ± 1.96 SD) (Bland and Altman, 1986).

We observed a small to moderate increase between sessions in the number of taps (Day 1 = 20 ± 3 taps, Day 2 = 22 ± 4 taps; t(23) = -4.121; p < 0.001; ES = 0.55, 95% CI = 0.43 to 0.67) and a trivial to small decrease in RT (Day 1 = 1418 ± 193 ms, Day 2 = 1358 ± 248 ms; t(23) = 1.721; p = 0.099; ES = -0.27, 95% CI = -0.38 to -0.15). All reliability indexes for both outcome measures are shown in Table 1.
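As a rough illustration of the reliability indices just described, the sketch below computes a single-measurement, absolute-agreement ICC with the pingouin library and derives SEM, SDC, and CV from it. The two-session data are simulated, not the study's; pingouin's ICC2 row is used because the absolute-agreement, single-measurement estimate is numerically identical under two-way random- and mixed-effects models, and the CV shown is one common definition (SEM as a percentage of the mean).

```python
# Sketch of the reliability workflow described above, on simulated
# two-session tap counts (not the study's data).
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(42)
day1 = rng.normal(20, 3, 24).round()        # session 1 taps
day2 = day1 + rng.normal(2, 2, 24).round()  # session 2 taps, slightly higher

df = pd.DataFrame({
    "subject": np.tile(np.arange(24), 2),
    "session": np.repeat(["day1", "day2"], 24),
    "taps": np.concatenate([day1, day2]),
})

icc_table = pg.intraclass_corr(data=df, targets="subject",
                               raters="session", ratings="taps")
# ICC2 = single measurement, absolute agreement; the point estimate is the
# same for two-way random and two-way mixed models.
icc = icc_table.loc[icc_table["Type"] == "ICC2", "ICC"].item()

all_scores = np.concatenate([day1, day2])
sem = all_scores.std(ddof=1) * np.sqrt(1 - icc)  # standard error of measurement
sdc = 1.96 * np.sqrt(2) * sem                    # smallest detectable change
cv = sem / all_scores.mean() * 100               # CV as % of the grand mean
print(f"ICC={icc:.2f}  SEM={sem:.2f}  SDC={sdc:.2f}  CV={cv:.1f}%")
```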
ICC values (with 95% CI) indicated moderate to excellent reliability, and the CV indicated acceptable reliability, for both measures. Bland-Altman plots are depicted in Figure 1. The systematic bias showed that, on average, participants achieved two more taps and were 59 ms faster on the second day than on the first. The LoA indicated that the number of taps measured on Day 1 might be 7 taps below or 3 taps above the Day 2 value; likewise, the RT measured on Day 1 might be 272 ms below or 391 ms above the Day 2 value. In conclusion, the BlazePod™ technology provides reliable information during the OLBA in physically active young adults. We considered the measurement error acceptable for practical use, given the low systematic bias and measurement errors reported in this study, together with a moderate ICC and an excellent CV. These results suggest that practitioners can use the information provided by the BlazePod™ technology to monitor performance changes during cognitive training and to evaluate the effects of a training intervention.
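A Bland-Altman bias and limits-of-agreement computation of the kind reported here is equally compact; the paired session scores below are hypothetical.

```python
# Sketch: Bland-Altman systematic bias and 95% limits of agreement
# (hypothetical paired session scores, not the study's data).
import numpy as np

day1 = np.array([20, 18, 22, 19, 21, 17, 23, 20, 19, 22], dtype=float)
day2 = np.array([22, 19, 24, 21, 22, 18, 25, 23, 20, 24], dtype=float)

diff = day2 - day1
mean_pair = (day1 + day2) / 2          # x-axis of a Bland-Altman plot
bias = diff.mean()                     # systematic bias (Day 2 minus Day 1)
sd_diff = diff.std(ddof=1)
loa = (bias - 1.96 * sd_diff, bias + 1.96 * sd_diff)  # LoA = bias ± 1.96 SD
print(f"bias = {bias:.2f} taps, 95% LoA = [{loa[0]:.2f}, {loa[1]:.2f}]")
```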


2018 ◽  
Vol 46 (8) ◽  
pp. 2004-2010 ◽  
Author(s):  
Phillip R. Worts ◽  
Philip Schatz ◽  
Scott O. Burkhart

Background: The Vestibular/Ocular Motor Screening (VOMS) and King-Devick (K-D) test are tools designed to assess ocular or vestibular function after a sport-related concussion. Purpose: To determine the test-retest reliability and rate of false-positive results of the VOMS and K-D test in a healthy athlete sample. Study Design: Cohort study (diagnosis); Level of evidence, 2. Methods: Forty-five healthy high school student-athletes (mean age, 16.11 ± 1.43 years) completed self-reported demographics and medical history and were administered the VOMS and K-D test during rest on day 1 (baseline). The VOMS and K-D test were administered again once during rest (prepractice) and once within 5 minutes of removal from sport practice on day 2 (removal). The Borg rating of perceived exertion scale was administered at removal. Intraclass correlation coefficients were used to determine test-retest reliability of the K-D test and of the average near point of convergence (NPC) distance on the VOMS. Level of agreement was used to examine VOMS symptom provocation over the 3 administration times. Multivariate base rates were used to determine the rate of false-positive results when simultaneously considering multiple clinical cutoffs. Results: Test-retest reliability of total time on the K-D test (0.91 [95% CI, 0.86-0.95]) and NPC distance (0.91 [95% CI, 0.85-0.95]) was high across the 3 administration times. Level of agreement ranged from 48.9% to 88.9% across all 3 times for the VOMS items. Using established clinical cutoffs, false-positive results occurred in 2% of the sample on the VOMS at removal and in 36% on the K-D test. Conclusion: The VOMS displayed a false-positive rate of 2% in this high school student-athlete cohort. The K-D test showed a false-positive rate of 36% while maintaining high test-retest reliability (0.91). Results from this study support future investigation of VOMS administration in an acutely injured high school athletic sample. Going forward, the VOMS may be more stable than other neurological and symptom-report screening measures and less vulnerable to false-positive results than the K-D test.
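To illustrate the multivariate base-rate idea, i.e., how often a healthy athlete exceeds at least one clinical cutoff when several cutoffs are applied at once, here is a minimal sketch. The scores and cutoff values are invented for illustration; they are not the study's data or the published VOMS/K-D cutoffs.

```python
# Sketch: rate of healthy athletes flagged when several clinical cutoffs
# are applied simultaneously (simulated scores; illustrative cutoffs only).
import numpy as np

rng = np.random.default_rng(7)
n = 45
scores = {
    "npc_cm": rng.normal(3.0, 2.0, n).clip(min=0),  # near point of convergence
    "symptom_change": rng.poisson(0.5, n),          # symptom provocation change
}
cutoffs = {"npc_cm": 5.0, "symptom_change": 2}      # hypothetical cutoffs

flagged = np.zeros(n, dtype=bool)
for key, cut in cutoffs.items():
    flagged |= scores[key] >= cut   # positive if any one measure exceeds its cutoff

print(f"false-positive rate in the healthy sample: {flagged.mean():.0%}")
```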


Author(s):  
Matthew L. Hall ◽  
Stephanie De Anda

Purpose The purposes of this study were (a) to introduce “language access profiles” as a viable alternative construct to “communication mode” for describing experience with language input during early childhood for deaf and hard-of-hearing (DHH) children; (b) to describe the development of a new tool for measuring DHH children's language access profiles during infancy and toddlerhood; and (c) to evaluate the novelty, reliability, and validity of this tool. Method We adapted an existing retrospective parent report measure of early language experience (the Language Exposure Assessment Tool) to make it suitable for use with DHH populations. We administered the adapted instrument (DHH Language Exposure Assessment Tool [D-LEAT]) to the caregivers of 105 DHH children aged 12 years and younger. To measure convergent validity, we also administered another novel instrument: the Language Access Profile Tool. To measure test–retest reliability, half of the participants were interviewed again after 1 month. We identified groups of children with similar language access profiles by using hierarchical cluster analysis. Results The D-LEAT revealed DHH children's diverse experiences with access to language during infancy and toddlerhood. Cluster analysis groupings were markedly different from those derived from more traditional grouping rules (e.g., communication modes). Test–retest reliability was good, especially for the same-interviewer condition. Content, convergent, and face validity were strong. Conclusions To optimize DHH children's developmental potential, stakeholders who work at the individual and population levels would benefit from replacing communication mode with language access profiles. The D-LEAT is the first tool that aims to measure this novel construct. Despite limitations that future work aims to address, the present results demonstrate that the D-LEAT represents progress over the status quo.
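The hierarchical cluster analysis mentioned in the Results can be sketched as follows; the profile features and cluster count are invented for illustration and do not reflect the actual D-LEAT items or the study's groupings.

```python
# Sketch: grouping children by language access profiles with agglomerative
# clustering (simulated profile vectors; feature set is hypothetical).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Rows = children; columns = hypothetical proportions of language input,
# e.g., directly accessible spoken, aided spoken, and signed language.
profiles = rng.dirichlet(alpha=[2.0, 2.0, 2.0], size=105)

Z = linkage(profiles, method="ward")             # Ward's minimum-variance criterion
labels = fcluster(Z, t=4, criterion="maxclust")  # cut the dendrogram into 4 groups
print(np.bincount(labels)[1:])                   # number of children per cluster
```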


1982 ◽  
Vol 25 (4) ◽  
pp. 521-527 ◽  
Author(s):  
David C. Shepherd

In 1977, Shepherd and colleagues reported significant correlations (–.90, –.91) between speechreading scores and the latency of a selected negative peak (the VN130 measure) on the averaged visual electroencephalic waveform. The primary purpose of the current study was to examine the stability, or repeatability, of this relation between these cognitive and neurophysiologic measures over a period of several months and thus support its test-retest reliability. Repeated speechreading word and sentence scores were gathered during three test-retest sessions from each of 20 normal-hearing adults. An average of 56 days elapsed between the end of one speechreading session and the beginning of the next. During each of four other test-retest sessions, averaged visual electroencephalic responses (AVERs) were evoked from each subject. An average of 49 days intervened between AVER sessions. Product-moment correlations computed between repeated word scores and VN130 measures ranged from –.61 to –.89. Based on these findings, it was concluded that the VN130 measure of visual neural firing time is a reliable correlate of speechreading in normal-hearing adults.
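The product-moment correlations at the heart of this study are straightforward to compute; the values below are invented stand-ins for speechreading scores and VN130 latencies, not Shepherd's data.

```python
# Sketch: Pearson product-moment correlation between speechreading scores
# and an evoked-response latency (hypothetical values).
import numpy as np
from scipy.stats import pearsonr

speechreading = np.array([62, 75, 58, 80, 70, 66, 85, 59, 73, 68], dtype=float)
vn130_ms = np.array([142, 128, 149, 122, 131, 138, 118, 147, 130, 135], dtype=float)

r, p = pearsonr(speechreading, vn130_ms)
print(f"r = {r:.2f}, p = {p:.3f}")  # a strong negative r would mirror the reported range
```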


2000 ◽  
Vol 16 (1) ◽  
pp. 53-58 ◽  
Author(s):  
Hans Ottosson ◽  
Martin Grann ◽  
Gunnar Kullgren

Summary: Short-term stability, or test-retest reliability, of self-reported personality traits is likely to be biased if the respondent is affected by a depressive or anxiety state. However, in some studies, DSM-oriented self-report instruments have proved reasonably stable in the short term, regardless of co-occurring depressive or anxiety disorders. In the present study, we examined the short-term test-retest reliability of a new self-report questionnaire for personality disorder diagnosis (DIP-Q) in a clinical sample of 30 individuals with either a depressive disorder, an anxiety disorder, or no Axis I disorder. Test-retest scores from subjects with depressive disorders were mostly unstable, with a significant change in fulfilled criteria between entry and retest for three of ten personality disorders: borderline, avoidant, and obsessive-compulsive personality disorder. Scores from subjects with anxiety disorders were unstable only for cluster C and dependent personality disorder items. In the absence of comorbid depressive or anxiety disorders, mean dimensional DIP-Q scores showed no significant differences between entry and retest. Overall, the effect of state on trait scores was moderate, and it is concluded that the test-retest reliability of the DIP-Q is acceptable.
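As an illustration of the entry-versus-retest comparison described here, the sketch below runs a paired t-test on hypothetical dimensional scores; the numbers are invented and do not come from the DIP-Q study.

```python
# Sketch: short-term test-retest comparison of dimensional scores with a
# paired t-test (hypothetical entry/retest scores, not the study's data).
import numpy as np
from scipy.stats import ttest_rel

entry  = np.array([4, 6, 3, 5, 7, 2, 4, 5, 6, 3], dtype=float)
retest = np.array([4, 5, 3, 6, 7, 2, 5, 5, 6, 4], dtype=float)

t, p = ttest_rel(entry, retest)
print(f"t = {t:.2f}, p = {p:.3f}")  # a non-significant difference suggests stable scores
```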


2013 ◽  
Author(s):  
Kristen M. Dahlin-James ◽  
Emily J. Hennrich ◽  
E. Grace Verbeck-Priest ◽  
Jan E. Estrellado ◽  
Jessica M. Stevens ◽  
...  

2018 ◽  
Vol 30 (12) ◽  
pp. 1652-1662 ◽  
Author(s):  
Sophie J. M. Rijnen ◽  
Sophie D. van der Linden ◽  
Wilco H. M. Emons ◽  
Margriet M. Sitskoorn ◽  
Karin Gehring
