Cognitive Assessments for Patients With Neurological Conditions: A Preliminary Survey of Speech-Language Pathology Practice Patterns

Author(s):  
Jane Roitsch ◽  
Jessica Prebor ◽  
Anastasia M. Raymer

Purpose Speech-language pathologists (SLPs) are often responsible for assessing cognitive disorders that affect communication in individuals with diagnosed or suspected acute or degenerative neurological conditions. However, consensus on appropriate assessment tools for the various neurological disorders remains elusive. This preliminary survey was conducted to study current practices in the use of published and unpublished tools by SLPs when assessing cognitive-communication impairments across common neurologic conditions. Method An 18-item web-based survey was distributed to SLPs through ASHA Communities and social media, asking them to select which cognitive assessment tools they use to evaluate the cognitive-communication status of individuals with Parkinson's disease, multiple sclerosis, dementia, stroke (i.e., cerebrovascular accident), and traumatic brain injury. The 100 SLPs who completed the online survey represent a spectrum of professionals seeing neurologic patients across the United States. Results A chi-square analysis revealed no unique pattern of assessment tool use across neurologic disorders among the 100 responding SLPs. Instead, a common set of nonstandardized and observational assessment practices was reported regardless of the neurologic condition. Conclusions This study shows consistent cognitive assessment practices by SLPs across various neurological conditions rather than unique protocols tailored to the impairment patterns typical of each disorder. Moreover, the proportion of clinical evaluations supported only by informal observation and/or select subtests of standardized assessment tools is considerable. This preliminary information conflicts with principles of rigorous assessment and increases the risk of erroneous findings when identifying cognitive impairments. Further research into how clinicians select assessments is warranted to encourage consistent, evidence-based practice for persons with cognitive impairments. Better recognition of the service-delivery constraints that affect the reliability and validity of cognitive assessments can drive future clinical policy and practice recommendations.
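
The chi-square analysis described above tests whether the distribution of assessment approaches depends on the neurologic condition. A minimal sketch in Python, using an invented contingency table of tool-use counts (the row labels, column labels, and numbers are hypothetical, not the survey's data):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of respondents endorsing each assessment approach (rows)
# for each neurologic condition (columns); not the study's data.
approaches = ["Informal observation", "Full standardized battery", "Select subtests"]
conditions = ["PD", "MS", "Dementia", "CVA", "TBI"]
counts = np.array([
    [42, 40, 45, 44, 43],   # informal observation
    [18, 15, 20, 22, 19],   # full standardized battery
    [30, 28, 27, 29, 31],   # select subtests only
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
# A non-significant p-value, as reported in the survey, suggests that tool use
# does not differ meaningfully by condition.
```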

2020 ◽  
Vol 39 (5) ◽  
pp. 270-282
Author(s):  
Julie Jensen DelFavero ◽  
Amy J. Jnah ◽  
Desi Newberry

Glucose-6-phosphate dehydrogenase (G6PD) deficiency, the most common enzymopathy worldwide, results from an insufficient amount of the G6PD enzyme, which is vital to the protection of the erythrocyte. Deficient enzyme levels lead to oxidative damage, hemolysis, and resultant severe hyperbilirubinemia. If not promptly recognized and treated, G6PD deficiency can lead to bilirubin-induced neurologic dysfunction, acute bilirubin encephalopathy, and kernicterus. Glucose-6-phosphate dehydrogenase deficiency is one of the three most common causes of pathologic hyperbilirubinemia. Changing migration patterns and intercultural marriages have increased the incidence of G6PD deficiency in the United States. Currently, there is no universally mandated metabolic screening or clinical risk assessment tool for G6PD deficiency in the United States. Mandatory universal screening for G6PD deficiency, including surveillance and hospital-based risk assessment tools, can identify at-risk infants and foster early identification, diagnosis, and treatment to eliminate neurotoxicity.


2021 ◽  
Vol 28 (5) ◽  
pp. 3987-4003
Author(s):  
Gina Tuch ◽  
Wee Kheng Soo ◽  
Ki-Yung Luo ◽  
Kinglsey Frearson ◽  
Ek Leone Oh ◽  
...  

Cognitive assessment is a cornerstone of geriatric care. Cognitive impairment has the potential to significantly impact multiple phases of a person's cancer care experience. Accurately identifying this vulnerability is a challenge for many cancer care clinicians, so the use of validated cognitive assessment tools is recommended. Because international cancer guidelines for older adults recommend Geriatric Assessment (GA), which includes an evaluation of cognition, clinicians need to be familiar with the overall interpretation of the commonly used cognitive assessment tools. This rapid review investigated the cognitive assessment tools most frequently recommended by Geriatric Oncology guidelines: the Blessed Orientation-Memory-Concentration test (BOMC), Clock Drawing Test (CDT), Mini-Cog, Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and Short Portable Mental Status Questionnaire (SPMSQ). A detailed appraisal of the strengths and limitations of each tool was conducted, with a focus on the practical aspects of implementing cognitive assessment tools in real-world clinical settings. Finally, recommendations on choosing an assessment tool and additional considerations beyond screening are discussed.


Children ◽  
2020 ◽  
Vol 7 (10) ◽  
pp. 183
Author(s):  
Wei-Sheng Lin ◽  
Shan-Ju Lin ◽  
Ting-Rong Hsu

Cognitive impairment is increasingly recognized as an important clinical issue in pediatric multiple sclerosis (MS). However, variation in its assessment and remediation is noted in the clinical arena. This scoping review aims to collate the available evidence concerning cognitive assessment tools and cognitive rehabilitation for pediatric MS. We performed a systematic search of electronic databases (MEDLINE, PubMed, CINAHL Plus, and Web of Science) from inception to February 2020. Reference lists of included articles and trial registers were also searched. We included original studies published in English that addressed cognitive assessment tools or cognitive rehabilitation for pediatric-onset MS. Fourteen studies fulfilled our inclusion criteria. Among them, 11 studies evaluated the psychometric aspects of various cognitive assessment tools in the context of pediatric MS, and different neurocognitive domains were emphasized across studies. Only three pilot studies reported cognitive rehabilitation for pediatric-onset MS, all of which used home-based computerized programs targeting working memory or attention. Overall, more systematic research on cognitive assessment tools and rehabilitation for pediatric MS is needed to inform evidence-based practice. Computer-assisted cognitive assessment and rehabilitation appear feasible and deserve further study.


2021 ◽  
Vol 36 (6) ◽  
pp. 1244-1244
Author(s):  
Joshua T Fox-Fuller ◽  
Sandra Rizer ◽  
Stacy L Andersen ◽  
Preeti Sunderaraman

Abstract Objective Teleneuropsychology (TeleNP) has experienced tremendous uptake during the coronavirus pandemic, and there is a need to document the challenges of, and practical advice for, conducting remote cognitive assessments. Method 87 respondents (licensed neuropsychologists = 56; others [e.g., trainees] = 31) conducting TeleNP evaluations with adult populations in the United States completed an online survey distributed via social media and list-servs in winter 2020–2021. Respondents were asked about their TeleNP experiences, including issues encountered and solutions to TeleNP challenges. Frequency analyses were conducted to examine the proportion of respondents endorsing specific TeleNP challenges. TeleNP advice was thematically coded to identify the most common suggestions for overcoming or navigating these challenges. Results The most frequently reported TeleNP challenges included: poor internet connectivity (from the examinee's home: 82.8%; from an unknown source: 58.6%); environmental distractions in the examinee's location (78.2%); poor audio quality (55.2%); the examinee's unfamiliarity with the videoconferencing technology (52.9%); inability to easily conduct visuoconstructional tasks (52.9%) or to adapt tests/find TeleNP norms (47.1%); and examinees' limited access to technology (57.5%) or complete lack of access (35.6%). The most common suggestions for mitigating these challenges included: providing detailed instructions about the TeleNP visit and examinee expectations in advance; having a clear back-up or assistive plan (e.g., a telephone call); and using TeleNP sparingly (e.g., for interview only). Conclusion These survey results reflect widely encountered challenges with remote cognitive assessment and identify priority targets for increasing the feasibility and reliability of TeleNP. Findings can be incorporated into discussions about formalized TeleNP competencies.
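
The frequency analysis amounts to counting how many respondents endorsed each challenge and dividing by the sample size. A minimal sketch, using invented multi-select responses rather than the survey data:

```python
from collections import Counter

# Hypothetical multi-select survey responses (one list of endorsed challenges
# per respondent); not the study's data.
responses = [
    ["poor connectivity", "environmental distractions"],
    ["poor connectivity", "poor audio quality", "no TeleNP norms"],
    ["environmental distractions"],
]

n = len(responses)
endorsements = Counter(challenge for resp in responses for challenge in resp)
for challenge, k in endorsements.most_common():
    print(f"{challenge}: {k}/{n} = {100 * k / n:.1f}%")
```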


2015 ◽  
Vol 7 (1) ◽  
Author(s):  
Michael Hodges ◽  
Chong Lee ◽  
Kent A. Lorenz ◽  
Daniel Cipriani

Summary Study aim: This study examined the item difficulty and item discrimination scores for the health-related fitness knowledge (HRFK) PE Metrics cognitive assessment tool for 5th-grade students. Materials and methods: Ten elementary physical education teachers volunteered to participate. Based on convenience, participating teachers selected two 5th-grade physical education classes. Teachers then gave students (N = 633) a 28-question paper-and-pencil HRFK exam using PE Metrics Standards 3 and 4. Item difficulty and discrimination analysis and Rasch modeling were used to identify underperforming items. Results: The analysis suggests that at least three items are problematic. The Rasch model confirmed this result and identified similar items with high outfit mean-square values and low point-biserial correlation values. Conclusions: Teachers need valid and reliable HRFK assessment tools. Unless the three problematic items are removed from the PE Metrics HRFK exam for 5th-grade students, use of the complete exam could lead to incorrect conclusions.
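
Classical item difficulty and discrimination can be computed directly from a students-by-items matrix of 0/1 scores. A minimal sketch with made-up responses (not the study data); the Rasch outfit statistics would require an IRT package and are omitted:

```python
import numpy as np

def item_stats(scores: np.ndarray):
    """Classical item analysis for a students x items matrix of 0/1 scores.

    Difficulty     = proportion of students answering the item correctly.
    Discrimination = corrected point-biserial correlation between the item
                     and the total score on the remaining items.
    """
    n_students, n_items = scores.shape
    difficulty = scores.mean(axis=0)
    discrimination = np.empty(n_items)
    for j in range(n_items):
        rest = scores.sum(axis=1) - scores[:, j]          # rest-score total
        discrimination[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return difficulty, discrimination

# Hypothetical 0/1 responses (5 students x 4 items); not the study's data.
demo = np.array([
    [1, 0, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
])
difficulty, discrimination = item_stats(demo)
print("difficulty:", np.round(difficulty, 2))       # flag items far above ~0.9 or below ~0.2
print("discrimination:", np.round(discrimination, 2))  # flag items below ~0.2
```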


2017 ◽  
Vol 40 (3) ◽  
pp. 295-307 ◽  
Author(s):  
David P. Schary ◽  
Alexis L. Waldron

Challenge course programs influence a variety of psychological, social, and educational outcomes. Yet many challenges exist when measuring challenge course outcomes, such as logistical constraints and a lack of program-specific assessment tools. This study piloted and tested an assessment tool designed for facilitators to measure participant outcomes in challenge course programs. Data collection occurred in three separate but related studies with participants in two different challenge course environments from two regions of the United States. Confirmatory factor analysis supported a two-factor structure of the challenge course experience. The Challenge Course Experience Questionnaire (CCEQ) comprises two subscales: participants' (a) individual experience and (b) feelings of group support. In the first study, the factor structure was developed and initial evidence of reliability was obtained. The second study examined the structure and reliability with a similar population. The third study confirmed the structure and reliability using a different population and challenge course program. The CCEQ is a preliminary step toward helping challenge course professionals improve their programming through statistical evaluation of desired outcomes.
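
One way to reproduce the "initial evidence of reliability" step is to compute Cronbach's alpha for each of the two CCEQ subscales. A minimal sketch, assuming hypothetical Likert-type responses and illustrative item counts (the published scale's items are not reproduced here):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix covering one subscale."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical correlated Likert responses (50 participants) for each assumed
# CCEQ factor; item counts and values are illustrative, not the published scale.
rng = np.random.default_rng(0)
trait = rng.normal(size=(50, 1))
individual_experience = np.clip(np.round(3 + trait + 0.7 * rng.normal(size=(50, 5))), 1, 5)
group_support = np.clip(np.round(3 + trait + 0.7 * rng.normal(size=(50, 4))), 1, 5)

print("alpha, individual experience:", round(cronbach_alpha(individual_experience), 2))
print("alpha, group support:", round(cronbach_alpha(group_support), 2))
```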


2021 ◽  
Vol 10 (1) ◽  
pp. 14-20
Author(s):  
Aliza Imtiaz

BACKGROUND AND AIMS Cognition is defined as the ability to perceive, process, and comprehend information from the surroundings. Impairment in cognitive skills can significantly affect an individual's performance. The objective of this study was to identify the cognitive assessment tools commonly used by occupational therapists and to determine their significance in occupational therapy practice. METHODOLOGY A total of 150 occupational therapists working in the pediatric domain, specifically in outpatient rehabilitation settings, were enrolled in and responded to the cross-sectional survey. The self-structured questionnaire was validated by factor analysis in SPSS. RESULTS The findings of this study revealed that 96% of occupational therapists performed cognitive assessment, but only 9.6% administered standardized assessments; the remainder used informal modes of assessment, citing limited competency and a lack of resources. The Mini-Mental State Examination (MMSE) was found to be the most commonly used cognitive assessment tool, while tools such as the Loewenstein Occupational Therapy Cognitive Assessment (LOTCA) were rarely used, even though the LOTCA is highly relevant to cognition in activities of daily living (ADL). CONCLUSION Cognitive assessment is an integral component of the occupational therapy process in the pediatric setting, supporting valid evaluation and effective intervention planning. Curricula must promote efficient training in standardized assessment, and resources should be provided for better outcomes and prognosis.


2020 ◽  
Author(s):  
Nele Demeyere ◽  
Marleen Haupt ◽  
Sam Sappho Webb ◽  
Lea Strobel ◽  
Elise Milosevich ◽  
...  

Objective Here, we present the Oxford Cognitive Screen-Plus (OCS-Plus), a computerised tablet-based screen designed to briefly assess domain-general cognition and provide more fine-grained measures of memory and executive function. The OCS-Plus was designed to sensitively screen for cognitive impairments and to differentiate between memory and executive deficits. Methods The OCS-Plus contains 10 subtasks and requires approximately 20 minutes to complete. In this study, 320 neurologically healthy ageing participants (age M = 62.66, SD = 13.75) from three sites completed the OCS-Plus. The convergent validity of this assessment was established in comparison with the ACE-R, CERAD, and Rey-Osterrieth; divergent validity was established through comparison with the BDI. The internal consistency of each subtask was evaluated, and test-retest reliability was determined. Results We established normative impairment cut-offs for each of the subtasks. The predicted convergent and divergent validity was found; internal consistency was high for most measures, with the exception of restricted-range tasks; and test-retest reliability was strong, providing evidence of test stability. Further research demonstrating the use and validity of the OCS-Plus in various clinical populations is required. Conclusion The OCS-Plus is presented as a standardised cognitive assessment tool, normed and validated in a sample of neurologically healthy participants. The OCS-Plus will be available as an Android app and provides an automated report of domain-general cognitive impairments in executive attention and memory.
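
The abstract does not specify how the normative impairment cut-offs were derived; a common convention, assumed here purely for illustration, is to flag scores below a low percentile (e.g., the 5th) of the neurologically healthy sample. A minimal sketch with simulated normative data:

```python
import numpy as np

def impairment_cutoff(healthy_scores: np.ndarray, percentile: float = 5.0) -> float:
    """Score below which performance is flagged as impaired, defined here as a
    low percentile of the neurologically healthy normative sample."""
    return float(np.percentile(healthy_scores, percentile))

# Simulated normative scores for one subtask (higher = better); the published
# OCS-Plus norms and cut-off method may differ from this convention.
rng = np.random.default_rng(1)
norm_sample = rng.normal(loc=24, scale=3, size=320)

cutoff = impairment_cutoff(norm_sample)
print(f"impairment cut-off (5th percentile): {cutoff:.1f}")
```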


10.2196/17506 ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. e17506
Author(s):  
Pegah Hafiz ◽  
Jakob Eyvind Bardram

Background Cognitive functioning plays a significant role in individuals’ mental health, since fluctuations in memory, attention, and executive functions influence their daily task performance. Existing digital cognitive assessment tools cannot be administered in the wild and their test sets are not brief enough to capture frequent fluctuations throughout the day. The ubiquitous availability of mobile and wearable devices may allow their incorporation into a suitable platform for real-world cognitive assessment. Objective The aims of this study were threefold: (1) to evaluate a smartwatch-based tool for the assessment of cognitive performance, (2) to investigate the usability of this tool, and (3) to understand participants’ perceptions regarding the application of a smartwatch in cognitive assessment. Methods We built the Ubiquitous Cognitive Assessment Tool (UbiCAT) on a smartwatch-based platform. UbiCAT implements three cognitive tests—an Arrow test, a Letter test, and a Color test—adapted from the two-choice reaction-time, N-back, and Stroop tests, respectively. These tests were designed together with domain experts. We evaluated the UbiCAT test measures against standard computer-based tests with 21 healthy adults by applying statistical analyses significant at the 95% level. Usability testing for each UbiCAT app was performed using the Mobile App Rating Scale (MARS) questionnaire. The NASA-TLX (Task Load Index) questionnaire was used to measure cognitive workload during the N-back test. Participants rated perceived discomfort of wearing a smartwatch during the tests using a 7-point Likert scale. Upon finishing the experiment, an interview was conducted with each participant. The interviews were transcribed and semantic analysis was performed to group the findings. Results Pearson correlation analysis between the total correct responses obtained from the UbiCAT and the computer-based tests revealed a significant strong correlation (r=.78, P<.001). One-way analysis of variance (ANOVA) showed a significant effect of the N-back difficulty level on the participants' performance measures. The study also demonstrated usability ratings above 4 out of 5 in terms of aesthetics, functionality, and information. Low discomfort (<3 out of 7) was reported by our participants after using the UbiCAT. Seven themes were extracted from the transcripts of the interviews conducted with our participants. Conclusions UbiCAT is a smartwatch-based tool that assesses three key cognitive domains. Usability ratings showed that participants were engaged with the UbiCAT tests and did not feel any discomfort. The majority of the participants were interested in using the UbiCAT, although some preferred computer-based tests, which might be due to the widespread use of personal computers. The UbiCAT can be administered in the wild with mentally ill patients to assess their attention, working memory, and executive function.
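
The reported validation statistics, a Pearson correlation between smartwatch and computer-based totals and a one-way ANOVA across N-back difficulty levels, can be sketched with scipy.stats. The data below are simulated stand-ins, not UbiCAT results:

```python
import numpy as np
from scipy.stats import pearsonr, f_oneway

rng = np.random.default_rng(2)

# Hypothetical paired totals of correct responses for 21 participants on the
# smartwatch tests and the corresponding computer-based tests.
computer = rng.normal(loc=50, scale=8, size=21)
smartwatch = 0.9 * computer + rng.normal(scale=5, size=21)

r, p = pearsonr(smartwatch, computer)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")

# Hypothetical accuracy scores at three N-back difficulty levels.
one_back = rng.normal(0.95, 0.03, size=21)
two_back = rng.normal(0.85, 0.06, size=21)
three_back = rng.normal(0.72, 0.08, size=21)

F, p_anova = f_oneway(one_back, two_back, three_back)
print(f"one-way ANOVA across difficulty levels: F = {F:.2f}, p = {p_anova:.4f}")
```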

