Infant Nipple Feeding Assessment and Communication Tool: Face Validity and Inter-Rater Reliability Testing

Author(s):
James Maryman
Staci H. Sullivan
Julie V. Duet
Suzette G. Fontenot
Mary Johnson
...
2018
Vol 33 (4)
pp. 527-536

Author(s):
Leonel A. do Nascimento
Ligia F. Fonseca
Claudia B. dos Santos

BMJ Open
2020
Vol 10 (1)
pp. e035239
Author(s):
Gillian Ray-Barruel
Marie Cooke
Vineet Chopra
Marion Mitchell
Claire M Rickard

Objective: To describe the clinimetric validation of the I-DECIDED tool for peripheral intravenous catheter assessment and decision-making.

Design and setting: I-DECIDED is an eight-step tool derived from international vascular access guidelines into a structured mnemonic for device assessment and decision-making. The clinimetric evaluation process was conducted in three distinct phases.

Methods: Initial face validity was confirmed with a vascular access working group. Next, content validity testing was conducted via online survey with vascular access experts and clinicians from Australia, the UK, the USA and Canada. Finally, inter-rater reliability testing was conducted between 34 pairs of assessors for a total of 68 peripheral intravenous catheter (PIVC) assessments. Assessments were timed to ensure feasibility, and the second rater was blinded to the first’s findings. The content validity index (CVI), mean item-level CVI (I-CVI), internal consistency, mean proportion of agreement, observed and expected inter-rater agreements, and prevalence-adjusted bias-adjusted kappas (PABAK) were calculated. Ethics approvals were obtained from university and hospital ethics committees.

Results: The I-DECIDED tool demonstrated strong content validity among international vascular access experts (n=7; mean I-CVI=0.91; mean proportion of agreement=0.91) and clinicians (n=11; mean I-CVI=0.93; mean proportion of agreement=0.94), and high inter-rater reliability in seven adult medical-surgical wards of three Australian hospitals. Overall inter-rater agreement was 87.13%, with PABAK for each principle ranging from 0.5882 (‘patient education’) to 1.0000 (‘document the decision’). Time to complete assessments averaged 2 min, and nurse-reported acceptability was high.

Conclusion: This is the first comprehensive, evidence-based, valid and reliable PIVC assessment and decision tool. We recommend studies to evaluate the outcome of implementing this tool in clinical practice.

Trial registration number: 12617000067370
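The PABAK values reported in this abstract are a simple transform of the observed proportion of agreement between two raters over a binary rating. A minimal Python sketch (the input value below reuses the abstract's overall agreement figure for illustration; it is not a recomputation from raw study data):

```python
def pabak(p_observed: float) -> float:
    """Prevalence-adjusted bias-adjusted kappa for two raters and a
    binary rating: PABAK = 2 * p_o - 1, which fixes chance agreement
    at 0.5 regardless of category prevalence or rater bias."""
    return 2 * p_observed - 1

# An observed agreement of 87.13% corresponds to:
print(round(pabak(0.8713), 4))  # 0.7426
```

Because chance agreement is fixed at 0.5, PABAK avoids the paradox where ordinary kappa collapses when one category is very prevalent, which is common in device-assessment audits.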


2018
Vol 12 (2)
pp. 57-66
Author(s):  
Deanna Gallichan ◽  
Carol George

Purpose: The purpose of this paper is to assess whether the Adult Attachment Projective (AAP) Picture System is a reliable and face-valid measure of internal working models of attachment in adults with intellectual disabilities (ID).

Design/methodology/approach: The AAPs of 20 adults with ID were coded blind by two reliable judges and classified into one of four groups: secure, dismissing, preoccupied, or unresolved. Inter-rater reliability was calculated using κ. Six participants repeated the assessment for test-retest reliability. Two independent experts rated ten cases on the links between the AAP analysis and the clinical history.

Findings: There was significant agreement between AAP judges, κ=0.677, p<0.001. Five out of six participants showed stability in their classifications over time. The majority of expert ratings were “good” or “excellent”. There was a significant inter-class correlation between raters, r=0.51 (p<0.05), suggesting good agreement between them. The raters’ feedback suggested that the AAP had good clinical utility.

Research limitations/implications: The inter-rater reliability, stability, face validity, and clinical utility of the AAP in this population are promising. Further examination of these findings with a larger sample of individuals with ID is needed.

Originality/value: This is the first study attempting to investigate the reliability and validity of the AAP in this population.
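The κ statistic used for the AAP classifications corrects raw agreement for the chance agreement implied by each judge's marginal category frequencies. A hedged sketch of Cohen's kappa in Python (the example classifications below are invented for illustration and are not the study's data):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters over the same cases:
    kappa = (p_o - p_e) / (1 - p_e), where p_e is derived from the
    product of the raters' marginal category frequencies."""
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    m1, m2 = Counter(rater1), Counter(rater2)
    p_e = sum(m1[c] * m2[c] for c in m1.keys() | m2.keys()) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical attachment classifications from two judges:
a = ["secure", "secure", "dismissing", "preoccupied"]
b = ["secure", "dismissing", "dismissing", "preoccupied"]
print(round(cohens_kappa(a, b), 3))  # 0.636
```

Kappa of 0.677, as reported above, is conventionally read as substantial agreement under the Landis and Koch benchmarks.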


2021
Vol 22 (1)
pp. 118-131
Author(s):
Milad Abolhasani
Ashraf Karbalaee Nouri
Enayatollah Bakhshi
...

Objective: This study aimed to translate the Assessment of Interpersonal Problem-Solving Skills (AIPSS) into Persian and to evaluate the validity and reliability of the Persian version of the AIPSS for use with adults with schizophrenia.

Materials & Methods: In this methodological study, the translation process was performed according to the International Quality of Life Assessment (IQOLA) protocol. The face validity of the translated AIPSS was determined based on the opinions of experts, and the Content Validity Index (CVI) and Content Validity Ratio (CVR) were also calculated for each item. The Persian version of the test was administered to 52 patients with schizophrenia at Tehran’s Razi Mental Hospital, selected using a convenience sampling method. Cronbach’s alpha coefficient was used to evaluate internal consistency. Inter-rater reliability was determined by the intraclass correlation coefficient (ICC). A retest was completed on 15 patients after a 2-week interval, and the ICC was used to determine test-retest reliability.

Results: Face validity was confirmed by the experts’ opinions. The CVR and CVI were equal to one for all scenes. Cronbach’s alpha coefficients for the scales ranged between 0.511 and 0.821. The ICCs for all scales were above 0.98 for inter-rater reliability. For test-retest reliability, the ICCs across scales ranged from 0.733 to 0.893.

Conclusion: The results show that the Persian version of the AIPSS has acceptable face validity, content validity, internal consistency, inter-rater reliability, and test-retest reliability. Therefore, this instrument can be used in clinical and research settings to assess the social skills of Iranian patients with schizophrenia.
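Cronbach's alpha, reported above for internal consistency, compares the sum of per-item variances to the variance of the total score. A minimal pure-Python sketch (the scores below are invented for illustration, not AIPSS data):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum(item variances) /
    variance(total score)). `item_scores` is a list of per-item
    score lists, each ordered by the same respondents."""
    k = len(item_scores)

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    totals = [sum(resp) for resp in zip(*item_scores)]
    item_var_sum = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Two perfectly correlated items yield maximal internal consistency:
print(cronbach_alpha([[1, 2, 3], [1, 2, 3]]))  # 1.0
```

Values in the 0.5-0.8 range, as reported for some AIPSS scales, are commonly treated as modest-to-acceptable consistency for short multi-item scales.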


2015
Vol 78 (9)
pp. 563-569
Author(s):
Marie Cederfeldt
Gunnel Carlsson
Synneve Dahlin-Ivanoff
Gunilla Gosman-Hedstrom

2020
Vol 4 (Supplement_2)
pp. 1346-1346
Author(s):  
Margaret Samson ◽  
Sarah Amin ◽  
Karen McCurdy ◽  
Amy Moore ◽  
Alison Tovar

Abstract

Objectives: Home-based interventions have used video-recorded meals to assess feeding practices, yet no studies have used videos to directly provide feedback on the practices observed. This study describes the development and initial validation of a video coding tool to assess feeding practices observed in video-recorded family meals in order to provide feedback to caregivers.

Methods: The tool, with operational definitions, was developed based on the previous literature and other tools that capture caregiver feeding practices. To assess face validity, a sample of child feeding experts (n = 6) reviewed the tool and completed an 8-item online survey. Usability and content were assessed on a scale of 0–100, with 100 representing high usability and importance, respectively. The tool was modified based on expert feedback and used by trained research assistants to code 10 video-recorded family meals. Inter-rater reliability (IRR) was calculated, and 3 videos were then randomly selected and coded by each research assistant at two different time points to assess test-retest reliability.

Results: Expert ratings of tool usability (81.83 ± 11.67) and content (87.67 ± 13.98) included feedback regarding the need to expand the operational definitions to better code practices, in particular with regard to pressure and encouragement. Experts also suggested merging practices, such as nutrition education and reasoning, which may be difficult to discern in an observational setting. Average IRR was 86.4%, with pressure (75.1%) and encouragement (66.9%) having the lowest rates of agreement between coders. Limited choices (100%) and restriction (95.9%) had the highest rates of agreement. For test-retest reliability, the average agreement between the two time points was 80.0%.

Conclusions: Following the feedback from experts, the face validity of the developed tool improved, and the inter-rater reliability and test-retest reliability of the tool were acceptable. Future studies should focus on expanding the operational definitions and on training efforts to further improve inter-rater and test-retest reliabilities.

Funding Sources: This work was supported by the National Institutes of Health, National Heart, Lung, and Blood Institute [Tovar/R34HL140229].
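Both the inter-rater and test-retest figures in this abstract are proportions of matching codes across paired observations. A minimal sketch (the code labels below are hypothetical, loosely based on the practices the abstract names):

```python
def percent_agreement(codes_a, codes_b):
    """Percentage of paired observations on which two coders (or the
    same coder at two time points) assigned the same code."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100 * matches / len(codes_a)

coder1 = ["pressure", "encouragement", "restriction", "limited choices"]
coder2 = ["pressure", "reasoning", "restriction", "limited choices"]
print(percent_agreement(coder1, coder2))  # 75.0
```

Raw percent agreement does not correct for chance, so chance-corrected statistics such as kappa are often reported alongside it when category prevalence is uneven.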


2006
Vol 3 (s1)
pp. S190-S207
Author(s):  
Brian E. Saelens ◽  
Lawrence D. Frank ◽  
Christopher Auffrey ◽  
Robert C. Whitaker ◽  
Hillary L. Burdette ◽  
...  

Background: Reliable and comprehensive measurement of physical activity settings is needed to examine environment-behavior relations.

Methods: Surveyed park professionals (n = 34) and users (n = 29) identified park and playground elements (e.g., trail) and qualities (e.g., condition). Responses guided development of an observational instrument for environmental assessment of public recreation spaces (EAPRS). Item inter-rater reliability was evaluated following observations in 92 parks and playgrounds. Instrument revision and further reliability testing were conducted with observations in 21 parks and 20 playgrounds.

Results: EAPRS evaluates trail/path, specific use (e.g., picnic), water-related, amenity (e.g., benches), and play elements, and their qualities. Most EAPRS items had good to excellent reliability, particularly presence/number items. Reliability improved from the original (n = 1088 items) to the revised (n = 646 items) instrument for condition, coverage/shade, and openness/visibility items. Reliability was especially good for play features, but cleanliness items were generally unreliable.

Conclusions: The EAPRS instrument provides comprehensive assessment of parks’ and playgrounds’ physical environment, with generally high reliability.

