Stuttering mHealth Applications: A Qualitative Rubric Assessment (Preprint)

2020 ◽  
Author(s):  
Fazwa M. Fadzilah ◽  
Noreen Izza Arshad ◽  
Izuddin Zainal-Abidin ◽  
Hui Min Low ◽  
Ahmad Kamil Mahmood ◽  
...  

BACKGROUND Mobile applications (apps) that offer a variety of techniques to improve stuttering have been flourishing in the digital marketplace. In evidence-based clinical practice, speech therapists recommend audio-enriched mobile apps to individuals with stuttering problems based on empirical research evidence. Unfortunately, many stuttering mobile apps available in the market are developed without a substantial research base. Hence, speech therapists need a guideline they can use to assess the quality of a stuttering mobile app before recommending it to individuals who stutter. OBJECTIVE The objective of this study is to develop a rubric for assessing the quality of stuttering mobile apps, thereby assisting speech therapists in making informed recommendations. METHODS The rubric was initially developed based on a set of criteria reviewed from the literature. Online surveys and focus group discussions were then conducted to verify the results. RESULTS The outcome of this study is a rubric with four categories and 18 evaluative dimensions tailored to analyze the quality of stuttering mobile apps. The stuttering mobile app assessment rubric presented here can serve multiple purposes: as an evaluation instrument, as a guideline for developing stuttering mobile apps, and as a standard form that can be shared with professionals to facilitate a collective effort. CONCLUSIONS The rubric also offers guidance to steer the future development of stuttering mobile apps that are evidence based and theoretically grounded.
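
For illustration, a rubric of this shape can be represented as four categories, each holding a set of evaluative dimensions that are scored and then aggregated per category. The following is a minimal Python sketch under that assumption; the category names, dimension names, and 1-4 scale are placeholders, since the abstract does not enumerate the actual rubric content.

    from dataclasses import dataclass

    @dataclass
    class Dimension:
        name: str
        score: int  # assumed 1 (poor) to 4 (excellent) scale; the real scale is not stated

    # Placeholder categories and dimensions; the full rubric has 18 dimensions across 4 categories.
    RUBRIC = {
        "Therapy content": [Dimension("Evidence base", 0), Dimension("Technique coverage", 0)],
        "Usability": [Dimension("Ease of navigation", 0), Dimension("Audio quality", 0)],
        "Engagement": [Dimension("Feedback to the user", 0)],
        "Professional support": [Dimension("Therapist involvement", 0)],
    }

    def category_scores(rubric):
        # Average the dimension scores within each category.
        return {cat: sum(d.score for d in dims) / len(dims) for cat, dims in rubric.items()}

    print(category_scores(RUBRIC))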

10.2196/18858 ◽  
2020 ◽  
Vol 8 (10) ◽  
pp. e18858
Author(s):  
Atiyeh Vaezipour ◽  
Jessica Campbell ◽  
Deborah Theodoros ◽  
Trevor Russell

Background Worldwide, more than 75% of people with acquired brain injury (ABI) experience communication disorders. Communication disorders are impairments in the ability to communicate effectively, that is, sending, receiving, processing, and comprehending verbal and nonverbal concepts and symbols. Such disorders may have enduring impacts on employment, social participation, and quality of life. Technology-enabled interventions such as mobile apps have the potential to increase the reach of speech-language therapy to treat communication disorders. However, ensuring that apps are evidence-based and of high quality is critical for facilitating safe and effective treatment for adults with communication disorders. Objective The aim of this review is to identify mobile apps that are currently widely available to adults with communication disorders for speech-language therapy and to assess their content and quality using the validated Mobile App Rating Scale (MARS). Methods Google Play Store, Apple App Store, and webpages were searched to identify mobile apps for speech-language therapy. Apps were included in the review if they were designed for the treatment of adult communication disorders after ABI, were in English, and were either free or for purchase. Certified speech-language pathologists used the MARS to assess the quality of the apps. Results From a total of 2680 apps identified from Google Play Store, Apple App Store, and web searches, 2.61% (70/2680) apps met the eligibility criteria for inclusion. Overall, 61% (43/70) were available for download on the iPhone Operating System (iOS) platform, 20% (14/70) on the Android platform, and 19% (13/70) on both iOS and Android platforms. A content analysis of the apps revealed 43 apps for language, 17 apps for speech, 8 apps for cognitive communication, 6 apps for voice, and 5 apps for oromotor function or numeracy. The overall MARS mean score was 3.7 out of 5, SD 0.6, ranging between 2.1 and 4.5, with functionality being the highest-scored subscale (4.3, SD 0.6), followed by aesthetics (3.8, SD 0.8), information (3.4, SD 0.6), and engagement (3.3, SD 0.6). The top 5 apps were Naming Therapy (4.6/5), Speech Flipbook Standard (4.6/5), Number Therapy (4.5/5), Answering Therapy, and Constant Therapy (4.4/5). Conclusions To our knowledge, this is the first study to systematically identify and evaluate a broad range of mobile apps for speech-language therapy for adults with communication disorders after sustaining ABI. We found a lack of interactive and engaging elements in the apps, a critical factor in sustaining self-managed speech-language therapy. More evidence-based apps with a focus on human factors, user experience, and a patient-led design approach are required to enhance effectiveness and long-term use.
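
As a quick check of the figures above, the overall MARS quality score is conventionally the mean of the four objective subscale means, and averaging the reported subscale means reproduces the reported overall score. A minimal sketch, assuming that standard MARS convention:

    # Subscale means reported in the review; averaging them reproduces the overall score of 3.7.
    subscale_means = {"engagement": 3.3, "functionality": 4.3, "aesthetics": 3.8, "information": 3.4}
    overall = sum(subscale_means.values()) / len(subscale_means)
    print(round(overall, 1))  # -> 3.7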


10.2196/30404 ◽  
2021 ◽  
Vol 9 (10) ◽  
pp. e30404
Author(s):  
Ko-Lin Wu ◽  
Rebeca Alegria ◽  
Jazzlyn Gonzalez ◽  
Harrison Hu ◽  
Haocen Wang ◽  
...  

Background Prenatal genetic testing is an essential part of routine prenatal care. Yet, obstetricians often lack the time to provide comprehensive prenatal genetic testing education to their patients. Pregnant women lack prenatal genetic testing knowledge, which may hinder informed decision-making during their pregnancies. Due to the rapid growth of technology, mobile apps are a potentially valuable educational tool through which pregnant women can learn about prenatal genetic testing and improve the quality of their communication with obstetricians. The characteristics, quality, and number of available apps containing prenatal genetic testing information are, however, unknown. Objective This study aims to conduct a first review to identify, evaluate, and summarize currently available mobile apps that contain prenatal genetic testing information using a systematic approach. Methods We searched both the Apple App Store and Google Play for mobile apps containing prenatal genetic testing information. The quality of apps was assessed based on criteria adopted from two commonly used and validated mobile app scoring systems, the Mobile Application Rating Scale (MARS) and the APPLICATIONS evaluation criteria. Results A total of 64 mobile apps were identified. Of these, only 2 apps were developed for a specific prenatal genetic test. All others were either pregnancy-related (61/64, 95%) or genetics-related (1/64, 2%) apps that provided prenatal genetic testing information. The majority of the apps (49/64, 77%) were developed by commercial companies. The mean quality assessment score of the included apps was 13.5 (SD 2.9), equal to the midpoint of the theoretically possible score range. Overall, the main weaknesses of the mobile apps in this review included the limited number of prenatal genetic tests mentioned; incomprehensiveness of testing information; unreliable and missing information sources; absence of developmental testing with users (not evidence based); high reading level; and the lack of visual information, customization, and a text search field. Conclusions Our findings suggest that the quality of mobile apps with prenatal genetic testing information must be improved and that pregnant women should be cautious when using these apps for prenatal genetic testing information. Obstetricians should carefully examine mobile apps before referring any of them to their patients for use as an educational tool. Improving the quality of existing mobile apps and developing new, evidence-based, high-quality mobile apps targeting all prenatal genetic tests should be the focus of mobile app developers going forward.
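
The "midpoint of the theoretically possible score range" comparison is simple arithmetic over a summed checklist score. A minimal sketch of that arithmetic, with placeholder criteria and point ranges rather than the study's actual MARS/APPLICATIONS items:

    # Placeholder checklist criteria with (min, max) possible points per item.
    criteria = {
        "comprehensiveness of testing information": (0, 3),
        "reliability of information sources": (0, 3),
        "readability": (0, 3),
        # ...remaining criteria omitted
    }
    theoretical_min = sum(lo for lo, hi in criteria.values())
    theoretical_max = sum(hi for lo, hi in criteria.values())
    midpoint = (theoretical_min + theoretical_max) / 2
    # A mean app score equal to this midpoint (13.5 in the review's scoring)
    # indicates middling quality relative to what the criteria allow.
    print(theoretical_min, theoretical_max, midpoint)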


Author(s):  
Pamela Ugwudike ◽  
Gemma Morgan

This chapter presents the findings of a study that examined supervision skills within three youth offending teams. The study focused on youth justice practice in Wales, and its objective was to explore how best to integrate research evidence into frontline practice. It found that participating practitioners employed mainly relationship skills. This is a positive finding, but there was limited use of the evidence-based skills embedded in what is described as the 'structuring principle' of effective interpersonal interactions (Bonta and Andrews 2017). These skills are change-focused: they shape what young people learn during interactions with practitioners and the quality of the influence the practitioners exert over them. This chapter examines the factors that impede the application of structuring skills and concludes with a discussion of the ways in which gaps between research and supervision practice can be bridged to enhance the quality of youth justice practice.


2021 ◽  
Author(s):  
Nicole E Werner ◽  
Janetta C Brown ◽  
Priya Loganathar ◽  
Richard J Holden

BACKGROUND The over 11 million care partners in the US who provide care to people living with Alzheimer's disease and related dementias (ADRD) cite persistent and pervasive unmet needs related to all aspects of their caregiving role. The proliferation of mobile applications (apps) for care partners has the potential to meet care partners' needs, but the quality of these apps is unknown. OBJECTIVE The present study aimed to 1) evaluate the quality of publicly available apps for care partners of people living with ADRD and 2) identify design features of low- and high-quality apps to guide future research and app development. METHODS We searched the US Apple and Google Play app stores for apps that were 1) available in the US Google Play or Apple app stores, 2) directly accessible to users "out of the box", and 3) primarily intended for use by an informal (family, friend) caregiver or caregivers of a person with dementia. The included apps were then evaluated using the Mobile App Rating Scale (MARS), which includes descriptive app classification and rating using 23 items across five dimensions: engagement, functionality, aesthetics, information, and subjective quality. Next, we computed descriptive statistics for each rating. To identify recommendations for future research and app development, we categorized rater comments on the factors driving the score for each item and on what the app could have done to improve that score. RESULTS We evaluated 17 apps (41% iOS only, 12% Android only, 47% both iOS and Android). We found that, on average, the apps were of minimally acceptable quality. Although we identified apps above and below minimally acceptable quality, many apps had broken features and were rated as below acceptable for engagement and information. CONCLUSIONS Minimally acceptable quality is likely insufficient to meet care partner needs. Future research should establish minimum quality standards across dimensions for mobile apps for care partners. The design features of high-quality apps identified in this research can provide a foundation for benchmarking those standards.
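
The descriptive statistics step (mean and SD per MARS dimension across the rated apps) can be sketched as follows; the ratings shown are made-up placeholders, not the study's data.

    from statistics import mean, stdev

    # Placeholder ratings: app -> mean rating per MARS dimension on a 1-5 scale.
    ratings = {
        "App A": {"engagement": 2.8, "functionality": 4.0, "aesthetics": 3.1,
                  "information": 3.0, "subjective": 2.5},
        "App B": {"engagement": 3.5, "functionality": 4.4, "aesthetics": 3.9,
                  "information": 3.6, "subjective": 3.2},
    }
    for dim in ["engagement", "functionality", "aesthetics", "information", "subjective"]:
        values = [app_scores[dim] for app_scores in ratings.values()]
        print(f"{dim}: mean={mean(values):.2f}, SD={stdev(values):.2f}")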


Author(s):  
Ines Carvalho ◽  
Fernando Almeida

mHealth involves the provision of health products, services, and information through mobile and wireless technologies. Companies and institutions in the healthcare sector are progressively proposing innovative mHealth solutions that simultaneously reduce costs and improve the quality of life of citizens. In this chapter, a mobile app is proposed to promote healthy food habits through better management of the food each person has at home. The app is intended to reduce food waste and promote good food practices based on the nutritional value of each recipe and the indication of potential allergies to ingredients. The development of the app was based on best practices in mobile UX, which are fundamental to offering intuitive interaction and rapid learning for the user. Furthermore, other factors relevant in the context of mobile apps were considered in the development, namely usability, data backup, performance, security, scalability, and interoperability.


2020 ◽  
Author(s):  
Tehmina Gladman ◽  
Grace Tylee ◽  
Steve Gallagher ◽  
Jonathan Mair ◽  
Rebecca Grainger

BACKGROUND Mobile apps are widely used in the health professions, which increases the need for simple methods to determine the quality of apps. In particular, teachers need the ability to curate high-quality mobile apps for student learning. OBJECTIVE This study aims to systematically search for and evaluate the quality of clinical skills mobile apps as learning tools. The quality of apps meeting the specified criteria was evaluated using two measures: the widely used Mobile App Rating Scale (MARS), which measures general app quality, and the Mobile App Rubric for Learning (MARuL), a recently developed instrument that measures the value of apps for student learning. The aim was to assess whether MARuL is more effective than MARS in identifying high-quality apps for learning. METHODS Two mobile app stores were systematically searched using clinical skills terms commonly found in medical education, and apps meeting the criteria were identified using an approach based on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. A total of 9 apps were identified during the screening process. The apps were rated independently by 2 reviewers using MARS and MARuL. RESULTS The intraclass correlation coefficients (ICCs) for the 2 raters were the same for both instruments (MARS ICC [two-way]=0.68, P<.001; MARuL ICC [two-way]=0.68, P<.001). Of the 9 apps, Geeky Medics-OSCE revision (MARS Android=3.74; MARS iOS=3.68; MARuL Android=75; MARuL iOS=73) and OSCE PASS: Medical Revision (MARS Android=3.79; MARS iOS=3.71; MARuL Android=69; MARuL iOS=73) scored highly on both measures of app quality and on both Android and iOS. Both measures also agreed on the lowest-rated app, Patient Education Institute (MARS Android=2.21; MARS iOS=2.11; MARuL Android=18; MARuL iOS=21.5), which had the lowest scores in all categories except information (MARS) and professional (MARuL) in both operating systems. MARS and MARuL were both able to differentiate between the highest and lowest quality apps; however, MARuL was better able to differentiate apps based on teaching and learning quality. CONCLUSIONS This systematic search and rating of clinical skills apps for learning found that app quality was highly variable. However, 2 apps, Geeky Medics-OSCE revision and OSCE PASS: Medical Revision, rated highly in both versions and with both quality measures. MARS and MARuL showed similar abilities to differentiate the quality of the 9 apps. However, MARuL's incorporation of teaching and learning elements as part of a multidimensional measure of quality may make it more appropriate for apps focused on teaching and learning, whereas MARS's more general rating of quality may be more appropriate for health apps targeting a general health audience. Ratings of the 9 apps by both measures also highlighted the variable quality of clinical skills mobile apps for learning.
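
For readers unfamiliar with the agreement statistic reported here, the following is a minimal sketch of a two-way intraclass correlation for two raters, using the ICC(2,1) definition of Shrout and Fleiss. Which two-way variant the study computed is not specified beyond "two-way", and the scores below are placeholders rather than the study's ratings.

    import numpy as np

    def icc2_1(scores):
        # scores: array of shape (n_apps, n_raters); returns ICC(2,1), two-way random, single rater.
        x = np.asarray(scores, dtype=float)
        n, k = x.shape
        grand = x.mean()
        row_means = x.mean(axis=1)   # per-app means
        col_means = x.mean(axis=0)   # per-rater means
        msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-apps mean square
        msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-raters mean square
        sse = ((x - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
        mse = sse / ((n - 1) * (k - 1))                         # residual mean square
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Placeholder ratings for 5 apps by 2 raters.
    two_rater_scores = [[3.7, 3.6], [3.8, 3.7], [2.2, 2.1], [4.5, 4.4], [3.1, 3.4]]
    print(round(icc2_1(two_rater_scores), 2))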


PLoS ONE ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. e0246061
Author(s):  
Agustín Ciapponi ◽  
Manuel Donato ◽  
A. Metin Gülmezoglu ◽  
Tomás Alconada ◽  
Ariel Bardach

The use of substandard and counterfeit medicines (SCM) leads to significant health and economic consequences, such as treatment failure, the rise of antimicrobial resistance, extra expenditure by individuals or households, and serious adverse drug reactions including death. Our objective was to systematically search for, identify, and compare relevant available mobile applications (apps) for smartphones and tablets whose use could potentially affect clinical and public health outcomes. We carried out a systematic review of the literature in January 2020, including major medical databases and app stores. We used the validated Mobile App Rating Scale (MARS) to assess the quality of apps (1 = worst score, 3 = acceptable score, 5 = best score). We also planned to evaluate the accuracy of the mobile apps in detecting SCM. We retrieved 335 references through medical databases and 42 from the Apple and Google stores and Google Scholar. We finally included two studies from the medical databases, 25 apps (eight from the App Store, eight from Google Play, eight from both stores, and one from Google Scholar), and 16 websites. We found only one report on the accuracy of a mobile app in detecting SCM. Most apps use the imprint, color, or shape for pill identification, and only a few offer pill detection through photographs or bar codes. The MARS mean score for the apps was 3.17 (acceptable), with a maximum of 4.9 and a minimum of 1.1. The 'functionality' dimension had the highest mean score (3.4), while the 'engagement' and 'information' dimensions had the lowest (3.0). In conclusion, we found a remarkable evidence gap regarding the accuracy of mobile apps in detecting SCM. However, mobile apps could potentially be useful to screen for SCM by assessing the physical characteristics of pills, although this should still be assessed in properly designed research studies.
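
The imprint/color/shape identification approach that most of the reviewed apps rely on amounts to attribute matching against a reference list. A minimal sketch, with made-up reference entries rather than a real pill database:

    # Placeholder reference entries; a real app would query a curated pill database.
    reference_pills = [
        {"name": "Drug X 500 mg", "imprint": "X500", "color": "white", "shape": "oval"},
        {"name": "Drug Y 20 mg",  "imprint": "Y20",  "color": "blue",  "shape": "round"},
    ]

    def match_pill(imprint=None, color=None, shape=None):
        # Return reference entries consistent with every attribute the user supplied.
        def consistent(entry):
            return all(entry[key] == value
                       for key, value in [("imprint", imprint), ("color", color), ("shape", shape)]
                       if value is not None)
        return [entry for entry in reference_pills if consistent(entry)]

    print(match_pill(color="white", shape="oval"))  # -> the Drug X 500 mg entry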


2021 ◽  
Author(s):  
Ko-Lin Wu ◽  
Rebeca Alegria ◽  
Jazzlyn Gonzalez ◽  
Harrison Hu ◽  
Haocen Wang ◽  
...  

BACKGROUND Prenatal genetic testing is an essential part of routine prenatal care. Yet, obstetricians often lack the time to provide comprehensive prenatal genetic testing education to their patients. Pregnant women lack prenatal genetic testing knowledge, which may hinder informed decision-making during their pregnancies. Due to the rapid growth of technology, mobile applications (apps) are a potentially valuable educational tool through which pregnant women can learn about prenatal genetic testing and improve the quality of their communication with obstetricians. The characteristics, quality, and number of available apps containing prenatal genetic testing information were, however, unknown. OBJECTIVE To conduct a first review to identify, evaluate, and summarize currently available prenatal genetic testing mobile apps using a systematic approach. METHODS We searched both the Apple App Store and Google Play to find mobile apps containing prenatal genetic testing information. The quality of apps was assessed using criteria adapted from two commonly used and validated mobile app scoring systems, the Mobile Application Rating Scale (MARS) and the APPLICATIONS evaluation criteria. RESULTS Sixty-four mobile apps were identified. Of these, only two apps were developed for a specific prenatal genetic test. All other apps were either pregnancy-related (95.3%) or genetics-related (1.6%) apps that provided prenatal genetic testing information. The majority of the apps (76.5%) were developed by commercial companies. The mean quality assessment score of the included apps was 13.5, equal to the midpoint of the theoretically possible score range. Overall, the main weaknesses of the mobile apps in this review included the limited number of prenatal genetic tests mentioned, incomprehensiveness of testing information, unreliable and missing information sources, absence of developmental testing with users (not evidence based), high reading level, and the lack of visual information, customization, and a text search field. CONCLUSIONS Our findings suggest that the quality of prenatal genetic testing-related mobile apps must be improved and that pregnant women should be cautious when using these apps for prenatal genetic testing information. Obstetricians should carefully examine mobile apps before referring any of them to their patients for use as an educational tool. Improving the quality of existing mobile apps and developing new, evidence-based, high-quality mobile apps targeting all prenatal genetic tests should be the focus of mobile app developers going forward.

