The Ubiquitous Cognitive Assessment Tool for Smartwatches: Design, Implementation, and Evaluation Study

10.2196/17506 ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. e17506
Author(s):  
Pegah Hafiz ◽  
Jakob Eyvind Bardram

Background Cognitive functioning plays a significant role in individuals’ mental health, since fluctuations in memory, attention, and executive functions influence their daily task performance. Existing digital cognitive assessment tools cannot be administered in the wild, and their test sets are not brief enough to capture frequent fluctuations throughout the day. The ubiquitous availability of mobile and wearable devices may allow their incorporation into a suitable platform for real-world cognitive assessment. Objective The aims of this study were threefold: (1) to evaluate a smartwatch-based tool for the assessment of cognitive performance, (2) to investigate the usability of this tool, and (3) to understand participants’ perceptions regarding the application of a smartwatch in cognitive assessment. Methods We built the Ubiquitous Cognitive Assessment Tool (UbiCAT) on a smartwatch-based platform. UbiCAT implements three cognitive tests—an Arrow test, a Letter test, and a Color test—adapted from the two-choice reaction-time, N-back, and Stroop tests, respectively. These tests were designed together with domain experts. We evaluated the UbiCAT test measures against standard computer-based tests with 21 healthy adults, applying statistical analyses with significance assessed at the 95% confidence level. Usability testing of each UbiCAT app was performed using the Mobile App Rating Scale (MARS) questionnaire. The NASA-TLX (Task Load Index) questionnaire was used to measure cognitive workload during the N-back test. Participants rated the perceived discomfort of wearing a smartwatch during the tests on a 7-point Likert scale. Upon finishing the experiment, an interview was conducted with each participant. The interviews were transcribed, and semantic analysis was performed to group the findings. Results Pearson correlation analysis between the total correct responses obtained from the UbiCAT and the computer-based tests revealed a strong, significant correlation (r=.78, P<.001). One-way analysis of variance (ANOVA) showed a significant effect of N-back difficulty level on the participants' performance measures. The study also demonstrated usability ratings above 4 out of 5 for aesthetics, functionality, and information. Participants reported low discomfort (<3 out of 7) after using the UbiCAT. Seven themes were extracted from the interview transcripts. Conclusions UbiCAT is a smartwatch-based tool that assesses three key cognitive domains. Usability ratings showed that participants were engaged with the UbiCAT tests and did not feel discomfort. The majority of participants were interested in using the UbiCAT, although some preferred computer-based tests, which might be due to the widespread use of personal computers. The UbiCAT can be administered in the wild to patients with mental illness to assess their attention, working memory, and executive function.
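The two analyses reported above are easy to reproduce in outline. Below is a minimal sketch in Python, assuming simulated data in place of the study's measurements: the sample values, effect sizes, and accuracy levels are illustrative assumptions only (the study itself reports r=.78 across 21 participants).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical total-correct scores for 21 participants on both platforms
# (illustrative only; the study reports r=.78, P<.001).
computer_scores = rng.normal(50, 8, size=21)
ubicat_scores = 0.8 * computer_scores + rng.normal(0, 5, size=21)

r, p = stats.pearsonr(ubicat_scores, computer_scores)
print(f"Pearson r={r:.2f}, p={p:.4f}")

# One-way ANOVA across three hypothetical N-back difficulty levels
# (per-participant accuracy at each level; values are made up).
level_1 = rng.normal(0.95, 0.04, size=21)
level_2 = rng.normal(0.85, 0.07, size=21)
level_3 = rng.normal(0.70, 0.10, size=21)
f_stat, p_anova = stats.f_oneway(level_1, level_2, level_3)
print(f"ANOVA F={f_stat:.2f}, p={p_anova:.4f}")
```

Note that with repeated measures from the same participants, a repeated-measures ANOVA would usually be the stricter choice; scipy's f_oneway treats the groups as independent.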



2020 ◽  
Vol 3 (1) ◽  
Author(s):  
Birgit M. Kaiser ◽  
Tamara Stelzl ◽  
Kurt Gedrich

The quality of nutrition apps can be evaluated by applying scientifically validated instruments. The objective of this study was to perform an in-depth quality analysis of nutrition-related apps and to identify commonalities and limitations of different assessment tools. Based on a keyword search for “nutrition” within the German Google Play Store, ten nutrition-related apps were selected and evaluated for quality using the App Quality Evaluation (AQEL), Mobile App Rating Scale (MARS), and ENLIGHT tools. The analyses highlighted discrepancies in app quality regarding performance, credibility, security, and user benefits. Because each of the three evaluation tools focuses on different aspects of quality, together they cover a broad spectrum of quality criteria. However, the tools also overlap in the evaluation categories of function and functionality, credibility, and evidence base. Due to the distinct scoring systems within the tools, overlapping categories were not interchangeable, which complicated a comprehensive app quality rating. Our findings indicate that AQEL, MARS, and ENLIGHT are each, on a stand-alone basis, suitable tools for assessing individual aspects of nutrition app quality, without being exhaustive. A series of additional important quality aspects was identified, which can make an important contribution toward the development of an overarching quality assessment tool specific to nutrition apps.
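The non-interchangeability of scoring systems mentioned above can be made concrete: even for overlapping categories, the raw scores live on different ranges. A hedged sketch follows, assuming plausible scale ranges (the MARS uses 1-5 item ratings; the AQEL and ENLIGHT ranges below are assumptions for illustration, not the instruments' official specifications):

```python
def rescale(score: float, lo: float, hi: float) -> float:
    """Map a raw score from the interval [lo, hi] onto [0, 1]."""
    return (score - lo) / (hi - lo)

# Illustrative raw scores for one app's functionality-type category.
mars_functionality = rescale(4.2, lo=1, hi=5)   # MARS: 1-5 item ratings
enlight_usability  = rescale(3.1, lo=1, hi=5)   # ENLIGHT: assumed 1-5
aqel_function      = rescale(78,  lo=0, hi=100) # AQEL: assumed 0-100

print(mars_functionality, enlight_usability, aqel_function)
```

Even after such rescaling, the categories weight different criteria, which is why the authors conclude that overlapping categories cannot simply be swapped between tools.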


2020 ◽  
Author(s):  
Mahsa Roozrokh Arshadi Montazer ◽  
Roohollah Zahediannasb ◽  
Roxana Sharifian ◽  
Mahshid Tahamtan ◽  
Mahdi Nasiri ◽  
...  

Background Mild cognitive impairment (MCI) is an intermediate stage of cognitive decline between normal cognition and dementia. With the growing aging population, this study aimed to develop and psychometrically validate an Android-based application for early MCI detection in elderly subjects. Method This study was conducted in two phases: (1) initial design and prototyping of the application, named M-Check, and (2) psychometric evaluation. After the design and development of the M-Check app, it was evaluated by experts and elderly subjects. Face validity was determined by two checklists provided to the expert panel and the elderly subjects. Convergent validity of the M-Check app was assessed against the Montreal Cognitive Assessment (MoCA) battery using Pearson correlation. Test-retest reliability and internal consistency were evaluated using the intraclass correlation coefficient (ICC) and the Kuder-Richardson coefficient, respectively. In addition, usability was assessed with the System Usability Scale (SUS) questionnaire. SPSS 16.0 was employed to analyze the data. Results The app's usability, as assessed by the elderly subjects and the experts, scored 77.11 and 82.5, respectively. The M-Check app was negatively correlated with the MoCA test (r = -0.71, p < 0.005), and the ICC was greater than 0.7. Moreover, the Kuder-Richardson coefficient was 0.82, corresponding to acceptable reliability. Conclusion In this study, we validated the M-Check app for the detection of MCI, based on the growing need for cognitive assessment tools that can identify early decline. Such screeners are expected to take much less time than typical neuropsychological batteries. Additional work is still needed to ensure that M-Check is ready to launch and can be used without the presence of a trained person.
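As a rough illustration of the internal-consistency analysis, the sketch below computes the Kuder-Richardson 20 (KR-20) coefficient for a simulated dichotomous response matrix. The matrix dimensions and the data-generating model are hypothetical and do not reflect the actual M-Check item set.

```python
import numpy as np

def kr20(responses: np.ndarray) -> float:
    """Kuder-Richardson 20 reliability for dichotomous (0/1) items;
    `responses` has shape (n_subjects, n_items)."""
    k = responses.shape[1]
    p = responses.mean(axis=0)                     # proportion correct per item
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * (1 - p)).sum() / total_var)

# Simulated 0/1 responses: 30 subjects x 12 items under a simple
# one-parameter logistic model (purely illustrative numbers).
rng = np.random.default_rng(1)
ability = rng.normal(size=(30, 1))
difficulty = rng.normal(size=(1, 12))
prob_correct = 1.0 / (1.0 + np.exp(-(ability - difficulty)))
data = (rng.random((30, 12)) < prob_correct).astype(int)

print(f"KR-20 = {kr20(data):.2f}")  # the study reports 0.82
# Test-retest ICC would be computed separately from two measurement
# occasions (e.g., with pingouin.intraclass_corr).
```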


Author(s):  
Denise Villanyi ◽  
Romain Martin ◽  
Philipp Sonnleitner ◽  
Christina Siry ◽  
Antoine Fischbach

Although student self-assessment is positively related to achievement, skepticism about the accuracy of students’ self-assessments remains. A few studies have shown that even elementary school students are able to provide accurate self-assessments when certain conditions are met. We developed an innovative tablet-computer-based tool for capturing self-assessments of mathematics and reading comprehension. This tool integrates the conditions required for accurate self-assessment: (1) a non-competitive setting, (2) items formulated on the task level, and (3) limited reading and no verbalization required. The innovation consists of using illustrations and a language-reduced rating scale. The correlations between students’ self-assessment scores and their standardized test scores were moderate to large. Independent of their proficiency level, students’ confidence in completing a task decreased as task difficulty increased, but these findings were more consistent in mathematics than in reading comprehension. We conclude that third- and fourth-graders have the ability to provide accurate self-assessments of their competencies, particularly in mathematics, when provided with an adequate self-assessment tool.
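One way to picture the confidence-versus-difficulty finding is to correlate per-task mean confidence with task difficulty. The sketch below assumes a fabricated set of 20 tasks and a 4-point rating scale (both assumptions, not the paper's actual task set or scale) and shows the expected negative trend:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical data: 20 tasks ordered by difficulty (0 = easiest, 1 = hardest)
# and mean student confidence on an assumed language-reduced 1-4 rating scale.
difficulty = np.linspace(0.0, 1.0, 20)
confidence = np.clip(3.8 - 2.0 * difficulty + rng.normal(0, 0.2, size=20), 1, 4)

rho, p = stats.spearmanr(difficulty, confidence)
print(f"Spearman rho={rho:.2f}, p={p:.4f}")  # expect a clearly negative rho
```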


Stroke ◽  
2015 ◽  
Vol 46 (5) ◽  
pp. 1374-1376 ◽  
Author(s):  
Svante Wallmark ◽  
Erik Lundström ◽  
Johan Wikström ◽  
Elisabeth Ronne-Engström

Background and Purpose— The aim of this pilot study was to assess attention deficits in patients with aneurysmal subarachnoid hemorrhage using the test of variables of attention (TOVA). This is a computer-based continuous performance test providing objective measures of attention. We also compared the TOVA results with the attention and concentration domains of the Montgomery Åsberg Depression Rating Scale and the Montreal Cognitive Assessment, 2 examiner-administered neuropsychological instruments. Methods— Nineteen patients with moderate to good recovery (Glasgow Outcome Scale, 4–5) were assessed using the TOVA, Montgomery Åsberg Depression Rating Scale, and Montreal Cognitive Assessment. The measurements were taken when the patients visited the hospital for routine magnetic resonance imaging follow-up of the aneurysm. Results— TOVA performance was pathological in 58% of the patients. The dominant pattern was a worsening of performance in the second half of the test, commonly a failure to react to correct stimuli. We found no correlation between the TOVA and performance in the concentration and attention domains of the Montgomery Åsberg Depression Rating Scale and Montreal Cognitive Assessment. Conclusions— Attention deficits, as measured by the TOVA, were common after subarachnoid hemorrhage. This should be studied further to improve outcomes.
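The abstract does not state how the two halves of the test were compared, so the following is only one plausible analysis: a paired comparison of per-patient correct-response rates in the first versus second half, with all values fabricated for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical proportion of correct responses per patient (n=19)
# in the first and second halves of a continuous performance test.
first_half = np.clip(rng.normal(0.92, 0.05, size=19), 0, 1)
second_half = np.clip(first_half - rng.normal(0.06, 0.04, size=19), 0, 1)

t, p = stats.ttest_rel(first_half, second_half)
print(f"paired t={t:.2f}, p={p:.4f}")
```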


10.2196/14479 ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. e14479 ◽  
Author(s):  
Eva-Maria Messner ◽  
Yannik Terhorst ◽  
Antonia Barke ◽  
Harald Baumeister ◽  
Stoyan Stoyanov ◽  
...  

Background The number of mobile health apps (MHAs), which are developed to promote healthy behaviors, prevent disease onset, manage and cure diseases, or assist with rehabilitation measures, has exploded. App store star ratings and descriptions usually provide insufficient or even false information about app quality, although they are popular among end users. A rigorous systematic approach to establish and evaluate the quality of MHAs is urgently needed. The Mobile App Rating Scale (MARS) is an assessment tool that facilitates the objective and systematic evaluation of the quality of MHAs. However, a German MARS is currently not available. Objective The aim of this study was to translate and validate a German version of the MARS (MARS-G). Methods The original 19-item MARS was forward and backward translated twice, and the MARS-G was created. App description items were extended, and 104 MHAs were rated twice by eight independent bilingual researchers, using the MARS-G and MARS. The internal consistency, validity, and reliability of both scales were assessed. Mokken scale analysis was used to investigate the scalability of the overall scores. Results The retranslated scale showed excellent alignment with the original MARS. Additionally, the properties of the MARS-G were comparable to those of the original MARS. The internal consistency was good for all subscales (ie, omega ranged from 0.72 to 0.91). The correlation coefficients (r) between the dimensions of the MARS-G and MARS ranged from 0.93 to 0.98. The scalability of the MARS (H=0.50) and MARS-G (H=0.48) were good. Conclusions The MARS-G is a reliable and valid tool for experts and stakeholders to assess the quality of health apps in German-speaking populations. The overall score is a reliable quality indicator. However, further studies are needed to assess the factorial structure of the MARS and MARS-G.
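The convergence between the two scale versions can be sketched as a per-app correlation of dimension scores. Everything below is simulated (104 apps, as in the study, but invented scores); the study itself reports dimension correlations between 0.93 and 0.98.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical mean dimension scores for 104 apps rated with both
# the original MARS and the German MARS-G (invented values).
mars = rng.uniform(1, 5, size=104)
mars_g = np.clip(mars + rng.normal(0, 0.15, size=104), 1, 5)

r, p = stats.pearsonr(mars, mars_g)
print(f"MARS vs MARS-G dimension correlation: r={r:.2f}, p={p:.4g}")
```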


2021 ◽  
Vol 28 (5) ◽  
pp. 3987-4003
Author(s):  
Gina Tuch ◽  
Wee Kheng Soo ◽  
Ki-Yung Luo ◽  
Kinglsey Frearson ◽  
Ek Leone Oh ◽  
...  

Cognitive assessment is a cornerstone of geriatric care. Cognitive impairment has the potential to significantly impact multiple phases of a person’s cancer care experience. Accurately identifying this vulnerability is a challenge for many cancer care clinicians, so the use of validated cognitive assessment tools is recommended. Because international cancer guidelines for older adults recommend Geriatric Assessment (GA), which includes an evaluation of cognition, clinicians need to be familiar with the overall interpretation of the commonly used cognitive assessment tools. This rapid review investigated the cognitive assessment tools most frequently recommended by Geriatric Oncology guidelines: the Blessed Orientation-Memory-Concentration test (BOMC), Clock Drawing Test (CDT), Mini-Cog, Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and Short Portable Mental Status Questionnaire (SPMSQ). A detailed appraisal of the strengths and limitations of each tool was conducted, with a focus on practical aspects of implementing cognitive assessment tools in real-world clinical settings. Finally, recommendations on choosing an assessment tool and additional considerations beyond screening are discussed.


Children ◽  
2020 ◽  
Vol 7 (10) ◽  
pp. 183
Author(s):  
Wei-Sheng Lin ◽  
Shan-Ju Lin ◽  
Ting-Rong Hsu

Cognitive impairment is increasingly recognized as an important clinical issue in pediatric multiple sclerosis (MS). However, there is considerable variation in how it is assessed and remediated in the clinical arena. This scoping review aims to collate the available evidence concerning cognitive assessment tools and cognitive rehabilitation for pediatric MS. We performed a systematic search of electronic databases (MEDLINE, PubMed, CINAHL Plus, and Web of Science) from inception to February 2020. Reference lists of included articles and trial registers were also searched. We included original studies published in English that addressed cognitive assessment tools or cognitive rehabilitation for pediatric-onset MS. Fourteen studies fulfilled our inclusion criteria. Among them, 11 studies evaluated the psychometric properties of various cognitive assessment tools in the context of pediatric MS, with different neurocognitive domains emphasized across studies. Only three pilot studies reported cognitive rehabilitation for pediatric-onset MS, all of which used home-based computerized programs targeting working memory and attention. Overall, more systematic research on cognitive assessment tools and rehabilitation for pediatric MS is needed to inform evidence-based practice. Computer-assisted cognitive assessment and rehabilitation appear feasible and deserve further study.


2015 ◽  
Vol 7 (1) ◽  
Author(s):  
Michael Hodges ◽  
Chong Lee ◽  
Kent A. Lorenz ◽  
Daniel Cipriani

Study aim: This study examined the item difficulty and item discrimination scores of the health-related fitness knowledge (HRFK) PE Metrics cognitive assessment tool for 5th-grade students. Materials and methods: Ten elementary physical education teachers volunteered to participate. Based on convenience, participating teachers selected two 5th-grade physical education classes. Teachers then gave students (N = 633) a 28-question paper-and-pencil HRFK exam using PE Metrics Standards 3 and 4. Item difficulty and discrimination analysis and Rasch modeling were used to identify underperforming items. Results: The analysis suggests that at least three items are problematic. The Rasch model confirmed this result and identified similar items with high outfit mean square values and low point-biserial correlation values. Conclusions: Teachers need valid and reliable HRFK assessment tools. Unless three items are removed, use of the complete PE Metrics HRFK exam for 5th-grade students could lead to incorrect conclusions.
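The classical item statistics named above can be computed directly from a 0/1 response matrix: difficulty as the proportion correct, and discrimination as the corrected point-biserial correlation between each item and the rest score. A sketch under simulated data follows; the flagging thresholds are common rules of thumb, not the study's criteria.

```python
import numpy as np

def item_analysis(responses: np.ndarray):
    """Return (difficulty, discrimination) for a 0/1 response matrix
    of shape (n_students, n_items). Discrimination is the corrected
    point-biserial: each item correlated with the rest score."""
    n_items = responses.shape[1]
    difficulty = responses.mean(axis=0)
    discrimination = np.empty(n_items)
    total = responses.sum(axis=1)
    for j in range(n_items):
        rest = total - responses[:, j]
        discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return difficulty, discrimination

# Simulated responses: 633 students x 28 items, matching the study's
# exam size but with invented data from a one-parameter logistic model.
rng = np.random.default_rng(3)
theta = rng.normal(size=(633, 1))
b = rng.normal(size=(1, 28))
data = (rng.random((633, 28)) < 1.0 / (1.0 + np.exp(-(theta - b)))).astype(int)

diff, disc = item_analysis(data)
# Rule-of-thumb flags: very easy/hard items or weak discriminators.
flags = (diff < 0.2) | (diff > 0.9) | (disc < 0.2)
print("flagged item indices:", np.where(flags)[0])
```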


2020 ◽  
Vol 20 (3) ◽  
pp. 171-175 ◽  
Author(s):  
Junta Takahashi ◽  
Hisashi Kawai ◽  
Hiroyuki Suzuki ◽  
Yoshinori Fujiwara ◽  
Yutaka Watanabe ◽  
...  
