Development and Validation of a Food Literacy Assessment Tool for Community-Dwelling Elderly People

Author(s): Hyeona So, Dahyun Park, Mi-Kyung Choi, Young-Sun Kim, Min-Jeong Shin, ...

Food literacy refers to the knowledge, skills, and attitudes required for individuals to choose foods that promote health. As the rate of diet-related diseases increases, food literacy is becoming more important; however, no tools are available to evaluate food literacy among the Korean elderly. We derived 547 questions from a literature review and, after three rounds of Delphi surveys, selected 33 preliminary questions. We calculated the content validity ratio of the questions and applied a face validity procedure; we then selected 32 questions, assessed their validity, and distributed them as a questionnaire to 205 elderly people. We conducted exploratory factor analysis (EFA) to determine the validity of the questionnaire and used an internal consistency index (Cronbach's α coefficient) to determine reliability. Based on the factor analysis, 13 questions distributed across three factors were selected and evaluated using the Kaiser–Meyer–Olkin (KMO) measure and Bartlett's test of sphericity. The KMO value was 0.872, which is highly acceptable, and Bartlett's test of sphericity gave χ² = 1,374.69 (p = 0.00). The food literacy questionnaire developed in this study will likely be helpful for improving the healthcare of elderly people.
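For readers unfamiliar with this workflow, the sketch below shows how a KMO measure, Bartlett's test, a three-factor EFA, and Cronbach's α can be computed with the factor_analyzer and pingouin packages. The file name, column layout, factor count, and rotation are placeholders for illustration, not the authors' data or code.

```python
# Minimal sketch of the EFA / reliability workflow described above.
# The CSV of item responses (one column per question) is hypothetical.
import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

items = pd.read_csv("food_literacy_items.csv")  # hypothetical item responses

# Sampling adequacy and sphericity (reported above as KMO and Bartlett's test)
chi_square, p_value = calculate_bartlett_sphericity(items)
kmo_per_item, kmo_overall = calculate_kmo(items)
print(f"Bartlett chi2={chi_square:.2f}, p={p_value:.3f}, KMO={kmo_overall:.3f}")

# Exploratory factor analysis with a three-factor solution
efa = FactorAnalyzer(n_factors=3, rotation="varimax")
efa.fit(items)
print(pd.DataFrame(efa.loadings_, index=items.columns))

# Internal consistency of the retained items
alpha, ci = pg.cronbach_alpha(data=items)
print(f"Cronbach's alpha = {alpha:.3f} (95% CI {ci})")
```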

2021, Vol 5 (Supplement_2), pp. 876-876
Author(s): Austin Katona, Caroline Riewe, Angela Bruzina, Francoise Knox-Kazimierczuk, Abigail Peairs

Abstract Objectives Food literacy, the interrelated knowledge, skills, and behaviors needed to successfully navigate a complex food-choice environment, has yet to be formally explored in athletes. However, it is important for this population to understand and apply specialized food-related recommendations to optimize health and performance outcomes. The goal of this study was to develop and test the validity of the Sports Food Literacy Assessment Tool (SFLAT) to evaluate food literacy concepts relevant to National Collegiate Athletic Association (NCAA) athletes. Methods The SFLAT was developed based on current food literacy definitions, items from validated food literacy and sports nutrition knowledge assessment tools, and current sports nutrition recommendations. Content validity was assessed using a 2-round modified Delphi expert panel of Registered Dietitians (RDs, n = 16) who gave comments and rated items based on importance. Content Validity Ratios (CVRs) were calculated for each item based on these ratings and used, along with participant comments, to improve items after each round of rating. In-depth interviews (n = 5) and written comments (n = 6) from collegiate athletes were used to assess face validity, and feedback was used to improve item wording and clarity. Results The first draft of the SFLAT contained demographic, food frequency, behavior frequency, self-efficacy, and nutrition knowledge questions. The expert panel of RDs had 2 to 27 years of experience working with NCAA Division I athletes. Their expert feedback led to the addition, adjustment, and removal of items, and an increase in the average CVR of the SFLAT from 0.58 to 0.68. Comprehension was high among face validity participants, all of whom were NCAA Division I collegiate athletes. Comments were used to make minimal wording changes and combine two questions. The final draft of the SFLAT contained 108 items. Conclusions The SFLAT has adequate content and face validity and, with further reliability tests, may be used to identify gaps in food choice-related knowledge, skills, and behaviors specific to collegiate athletes, which can inform the development of more effective nutrition interventions in this population. Funding Sources No funding.
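Content Validity Ratios of the kind used above are typically computed with Lawshe's formula, CVR = (n_e − N/2) / (N/2), where n_e is the number of panelists rating an item essential and N is the panel size. The sketch below is a generic illustration of that formula, not the authors' code.

```python
# Lawshe's content validity ratio for a single item:
# CVR = (n_essential - N/2) / (N/2), ranging from -1 to +1.
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    half = n_panelists / 2
    return (n_essential - half) / half

# Example: 13 of 16 panelists rate an item "essential"
print(content_validity_ratio(13, 16))  # 0.625
```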


2019, Vol 3 (Supplement_1)
Author(s): Kathryn Hitchcock, Debra Krummel, Seung-Yeon Lee

Abstract Objectives The objective of this study was to develop an instrument to assess food literacy and to test its face validity with food pantry clients. Methods The Food Literacy Assessment Tool (FLAT), which targets food-insecure populations, was developed using Vidgen's food literacy framework after an intensive literature review. The FLAT assesses knowledge, self-efficacy, and practices of the four components of food literacy: planning, managing & selecting, preparing, and eating. A total of 64 items were included, with four subscales based on common attributes. Nine items measure food consumption behavior, 16 measure knowledge related to food literacy, 18 measure behavior on the four components of food literacy, and 21 measure self-efficacy for those food literacy practices. The face validity of the FLAT was tested by conducting cognitive interviews with 10 clients from an urban food pantry. Semi-structured, open-ended questions were used for the cognitive interviews, and probing questions were based on common sources of error: unclear instructions and item wording, inappropriate assumptions or bias about the target population, and inadequate response options. Results The majority of participants were female (n = 7), had some high school education (n = 6), and had a household income of less than $10,000/year (n = 7). Major sources of error included unclear diction, inappropriate response options, and assumptions about the target population. For self-efficacy questions, participants mentioned that the wording "I am confident that I can" was more appropriate than "I can" because they answered items phrased with "I can" based on what they actually did rather than on their confidence level. For questions on reading Nutrition Facts labels and unit price, some participants suggested adding a "Don't know" response option because they did not know how to read them. Assumption errors were identified in questions on knowledge, preparation/cooking, consumption, and self-efficacy related to dairy products and meat, because not all participants consumed meat and/or dairy products. Conclusions The findings of the cognitive interviews provided feedback that improved the face validity of the FLAT by increasing the clarity of items and reducing inappropriate assumptions and bias. Funding Sources The Center for Clinical and Translational Science and Training, Community Health Grant.


2001, Vol 27 (6), pp. 857-864
Author(s): Charlotte Reese Nath, Shirley Theriot Sylvester, Van Yasek, Erdogan Gunel

2018, Vol 26 (1), pp. 79
Author(s): Paul Graham Kebble

The C-Test, a tool for assessing language competence, has been in existence for nearly 40 years, having been designed by Professors Klein-Braley and Raatz for implementation in German and English. Much research has been conducted over the ensuing years, particularly with regard to reliability and construct validity, for which it is reported to perform well across multiple languages. The author engaged in C-Test research in 1995, focusing on concurrent, predictive, and face validity, and through this work developed an appreciation for the C-Test assessment process, particularly the multiple cognitive and linguistic test-taking strategies it requires. When digital technologies became accessible, versatile, and societally integrated, the author believed the C-Test would function well in this environment. This conviction prompted a series of investigations into the development and assessment of a digital C-Test design to be utilised in multiple linguistic settings. This paper describes the protracted design process, concluding with the publication of mobile apps.


2021, Vol 5 (Supplement_1), pp. 789-789
Author(s): Mariana Wingood, Salene Jones, Nancy Gell, Denise Peters, Jennifer Brach

Abstract Addressing physical activity (PA) barriers is an essential component of increasing PA among the 56-73% of community-dwelling adults 50 years and older who are not performing the recommended 150 minutes of moderate-to-vigorous PA per week. As there is no feasible, multi-factorial tool to assess PA barriers in this population, we developed and validated a PA barrier assessment tool called the Inventory of Physical Activity Barriers (IPAB). We collected cross-sectional data on 503 adults (mean age 70.1), with 79 participants completing the scale twice for test-retest reliability and 64 completing a cross-over design examining whether two administration formats can be used interchangeably. Our analyses consisted of exploratory and confirmatory factor analysis, Cronbach's alpha, the intraclass correlation coefficient, a Bland-Altman plot, and t-tests. Using factor analysis, we identified and confirmed an eight-factor solution consisting of 27 items. The 27-item IPAB is internally consistent (alpha = 0.91), has high test-retest reliability (intraclass correlation coefficient = 0.99), and can differentiate between individuals who meet the recommended levels of PA and those who do not (p < 0.001). The IPAB scores ranged from 1.00 to 3.11 for the paper format (mean = 1.78) and from 1.07 to 3.48 for the electronic format (mean = 1.78), with no statistical difference between the paper and electronic administration formats (p = 0.94), indicating that the two formats can be used interchangeably. Participant feedback illustrates that the IPAB is easy to use, has clear instructions, and is an appropriate length. The newly validated IPAB scale can be used to develop individualized PA interventions that address PA barriers among patients 50 years and older.
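For context, the sketch below shows one way to produce the kinds of statistics reported for the IPAB: a test-retest ICC via pingouin, Bland-Altman limits of agreement, and a paired t-test comparing paper and electronic formats. The data frame layout, file names, and ICC variant are assumptions, not the authors' analysis code.

```python
# Hypothetical layout: a wide frame with paper/electronic IPAB scores per person,
# and a long-format frame with repeated administrations for test-retest ICC.
import pandas as pd
import pingouin as pg
from scipy import stats

scores = pd.read_csv("ipab_scores.csv")        # columns: id, paper, electronic
retest = pd.read_csv("ipab_retest_long.csv")   # columns: id, session, total

# Test-retest reliability (pingouin reports the standard ICC variants)
icc = pg.intraclass_corr(data=retest, targets="id", raters="session", ratings="total")
print(icc[["Type", "ICC", "CI95%"]])

# Bland-Altman bias and limits of agreement between administration formats
diff = scores["paper"] - scores["electronic"]
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias={bias:.3f}, limits of agreement=({bias - loa:.3f}, {bias + loa:.3f})")

# Paired t-test for a systematic difference between formats
t_stat, p_val = stats.ttest_rel(scores["paper"], scores["electronic"])
print(f"t={t_stat:.2f}, p={p_val:.3f}")
```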


2020, Vol 52 (7), pp. S61
Author(s): Audrey Hemmer, Carmen Fightmaster, Youn Seon Lim, Melinda Butsch Kovacic, Seung-Yeon Lee

2019, Vol 7 (3), pp. e000124
Author(s): Fatemeh Rezaei, Mohammad R Maracy, Mohammad H Yarmohammadian, Ali Ardalan, Mahmood Keyvanara

The purpose of this study was to develop a tool for community-based health organisations (CBHOs) to evaluate their preparedness for biological hazards such as epidemics or bioterrorism. We searched for concepts on partnerships between CBHOs and health systems in Centers for Disease Control and Prevention guidelines and the literature. We then validated the researcher-made tool through face validity, content validity, exploratory factor analysis (EFA), confirmatory factor analysis (CFA), and criterion validity. Data were collected by sending the tool to 620 CBHOs serving under the supervision of Iran's ministry of health. The opinions of health professionals and stakeholders in CBHOs were used to assess face and content validity. Factor loadings in the EFA were based on a three-factor structure, which was verified by CFA. We used SPSS V.18 and Mplus7 software for statistical analysis. In total, 105 health-based CBHOs participated. After conducting face validity and calculating the content validity ratio and content validity index, we retained 54 items in the fields of planning, training, and infrastructure. We assessed construct validity using the 105 CBHOs. Three items were exchanged between fields according to the EFA factor loadings, and CFA verified the model fit: the Comparative Fit Index, Tucker-Lewis index, and root mean square error of approximation were 0.921, 0.918, and 0.052, respectively. Cronbach's α for the whole tool was 0.944. The Spearman correlation coefficient (0.736) confirmed criterion validity. Planning, training, and infrastructure are the most important aspects of preparedness in health-based CBHOs. Applying the new assessment tool in future studies will reveal the weaknesses and capabilities of health-based CBHOs regarding biological hazards and clarify the intervention actions needed from health authorities.
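The CFA fit indices reported above (CFI, TLI, RMSEA) have standard definitions based on the model and baseline chi-square statistics. The sketch below computes them purely as a reference for how such values are obtained; the chi-square inputs are placeholders, not figures from the study.

```python
# Standard fit-index formulas computed from the fitted model's chi-square and the
# baseline (independence) model's chi-square. All inputs below are placeholders.
import math

def cfa_fit_indices(chi2_model, df_model, chi2_base, df_base, n_obs):
    # Comparative Fit Index
    cfi = 1 - max(chi2_model - df_model, 0) / max(chi2_base - df_base,
                                                  chi2_model - df_model, 1e-12)
    # Tucker-Lewis Index
    tli = ((chi2_base / df_base) - (chi2_model / df_model)) / ((chi2_base / df_base) - 1)
    # Root mean square error of approximation
    rmsea = math.sqrt(max(chi2_model - df_model, 0) / (df_model * (n_obs - 1)))
    return cfi, tli, rmsea

print(cfa_fit_indices(chi2_model=120.0, df_model=87, chi2_base=950.0, df_base=105, n_obs=105))
```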


Author(s): Mikki H. Phan, Joseph R. Keebler, Barbara S. Chaparro

Objective: The aim of this study was to develop and psychometrically validate a new instrument that comprehensively measures video game satisfaction based on key factors. Background: Playtesting is often conducted in the video game industry to help game developers build better games by providing insight into the players’ attitudes and preferences. However, quality feedback is difficult to obtain from playtesting sessions without a quality gaming assessment tool. There is a need for a psychometrically validated and comprehensive gaming scale that is appropriate for playtesting and game evaluation purposes. Method: The process of developing and validating this new scale followed current best practices of scale development and validation. As a result, a mixed-method design that consisted of item pool generation, expert review, questionnaire pilot study, exploratory factor analysis (N = 629), and confirmatory factor analysis (N = 729) was implemented. Results: A new instrument measuring video game satisfaction, called the Game User Experience Satisfaction Scale (GUESS), with nine subscales emerged. The GUESS was demonstrated to have content validity, internal consistency, and convergent and discriminant validity. Conclusion: The GUESS was developed and validated based on the assessments of over 450 unique video game titles across many popular genres. Thus, it can be applied across many types of video games in the industry both as a way to assess what aspects of a game contribute to user satisfaction and as a tool to aid in debriefing users on their gaming experience. Application: The GUESS can be administered to evaluate user satisfaction of different types of video games by a variety of users.
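One common way to quantify the convergent and discriminant validity mentioned above is through average variance extracted (AVE) and composite reliability computed from standardized CFA loadings. The sketch below illustrates those formulas under illustrative loadings; it is not the authors' procedure.

```python
# Average variance extracted (AVE) and composite reliability (CR) for one subscale,
# computed from standardized factor loadings. The loadings here are illustrative only.
def ave_and_cr(loadings):
    squared = [l * l for l in loadings]
    ave = sum(squared) / len(loadings)                 # mean squared loading
    error_var = [1 - s for s in squared]               # item error variances
    cr = sum(loadings) ** 2 / (sum(loadings) ** 2 + sum(error_var))
    return ave, cr

ave, cr = ave_and_cr([0.72, 0.81, 0.68, 0.75])
print(f"AVE={ave:.3f}, CR={cr:.3f}")  # convergent validity is often judged by AVE > 0.5
```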


2017, Vol 25 (97), pp. 1014-1031
Author(s): Cristina Costa-Lobo, Marta Abelha, Themys Carvalho

Abstract This paper examines the validity and reliability of the Scale of Satisfaction with Teachers Dynamics (ESDTD). The ESDTD evaluates teachers' conceptual representations of curriculum conceptions, curriculum development, curriculum management, the educational project, and collaborative work, as well as teachers' satisfaction with the work done by the direction, the sub-departments, direct coordination and class councils, and the heads of educational units. The final items of the ESDTD assess teachers' perceptions of the grouping culture, signaling the aspects considered positive and problematic. The suitability of the data for factor analysis was tested, and the psychometric properties and reliability of each dimension were subsequently assessed in order to test the internal validity of the scale. There is evidence that factor analysis was appropriate: the Kaiser-Meyer-Olkin measure of sampling adequacy was adequate and Bartlett's test of sphericity was highly significant. The variance explained was assessed through principal component analysis, with the analysis set to six factors with eigenvalues greater than 1. With six principal components retained, the dimensions explained more than 55% of the total variance. The reliability analysis and the assessment of item homogeneity yielded very high internal consistency values for all items and all dimensions. The values obtained support maintaining the structure and distribution of the initial items. The scale shows good validity and reliability; further studies are expected to complement its psychometric analysis.
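The principal component step described above (components retained by the Kaiser criterion of eigenvalues greater than 1, with cumulative variance above 55%) can be sketched with scikit-learn on standardized item responses, as below. The data file and column layout are placeholders, not the authors' data.

```python
# Principal component analysis on standardized items: Kaiser criterion
# (eigenvalues > 1) and cumulative explained variance. The data file is hypothetical.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

items = pd.read_csv("esdtd_items.csv")
z = StandardScaler().fit_transform(items)   # standardize so PCA works on the correlation scale

pca = PCA().fit(z)
eigenvalues = pca.explained_variance_
n_kaiser = int(np.sum(eigenvalues > 1))     # components with eigenvalue > 1
cum_var = pca.explained_variance_ratio_.cumsum()

print(f"components with eigenvalue > 1: {n_kaiser}")
print(f"variance explained by the first 6 components: {cum_var[5]:.1%}")
```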

