UXmood - A Tool to Investigate the User Experience (UX) Based on Multimodal Sentiment Analysis and Information Visualization (InfoVis)

Author(s):  
Roberto Yuri Da Silva Franco ◽  
Alexandre Abreu De Freitas ◽  
Rodrigo Santos Do Amor Divino Lima ◽  
Marcelle Pereira Mota ◽  
Carlos Gustavo Resque Dos Santos ◽  
...  
2021 ◽  
Author(s):  
Elton Lobo ◽  
Mohamed Abdelrazek ◽  
Anne Frølich ◽  
Lene Juel Rasmussen ◽  
Patricia M. Livingston ◽  
...  

BACKGROUND Stroke caregivers often experience negative impacts when caring for a person living with stroke. Technology-based interventions such as mHealth apps have demonstrated potential in supporting caregivers during the recovery trajectory. Hence, the number of such apps in popular app stores has increased, with a few addressing the healthcare needs of stroke caregivers. Since most of these apps were published without explanation of their design and evaluation processes, it is necessary to identify their usability and user experience issues to help app developers and researchers understand the factors that affect long-term adherence to and usage of stroke caregiving technology. OBJECTIVE The purpose of this study was to determine the usability and user experience issues in commercially available mHealth apps from user reviews published in the app stores, to help researchers and developers understand the factors that may affect long-term adherence and usage. METHODS User reviews were extracted from 47 previously identified apps that support stroke caregiving needs, using a Python scraper for both app stores (i.e., the Google Play Store and the Apple App Store). The reviews were pre-processed to (i) clean the dataset and ensure Unicode normalization, (ii) remove stop words, and (iii) group together words with similar meanings. The pre-processed reviews were then filtered using sentiment analysis to exclude positive and non-English reviews. The final corpus was classified according to usability and user experience dimensions to highlight issues within each app. RESULTS Of 1,385,337 user reviews, only 162,095 could be extracted due to app store limitations. After filtering based on sentiment analysis, 15,818 reviews were included in the study and classified according to the usability and user experience dimensions.
Findings from the usability and user experience dimensions highlight issues with critical errors/effectiveness, efficiency, and support that contribute to decreased satisfaction, negative affect and emotion, and frustration in using the apps. CONCLUSIONS Commercially available mHealth apps contain several usability and user experience issues, reflecting a limited understanding of how to address the healthcare needs of caregivers. App developers should consider participatory design approaches to promote user participation in design. This may lead to a better understanding of user needs and of the methods to support them, thereby limiting such issues and encouraging continued use.
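The pre-processing and sentiment-filtering steps described in the METHODS can be sketched in Python. The abstract does not name the study's actual tooling, so the stop-word list and negative-term lexicon below are tiny illustrative stand-ins, not the resources the authors used:

```python
import re
import unicodedata

# Illustrative placeholders; the study's real stop-word list and
# sentiment model are not specified in the abstract.
STOP_WORDS = {"the", "a", "an", "is", "it", "to", "and", "of", "very"}
NEGATIVE_WORDS = {"crashing", "slow", "confusing", "broken", "useless"}

def preprocess(review: str) -> list[str]:
    """Normalize Unicode (NFKC), lowercase, tokenize, and drop stop words."""
    text = unicodedata.normalize("NFKC", review).lower()
    tokens = re.findall(r"[a-z']+", text)
    return [t for t in tokens if t not in STOP_WORDS]

def is_negative(tokens: list[str]) -> bool:
    """Stand-in for the sentiment-analysis filter: keep a review only if
    it contains at least one term from the negative lexicon."""
    return any(t in NEGATIVE_WORDS for t in tokens)

reviews = [
    "The app is great!",
    "It keeps crashing, very slow and confusing.",
]
# Positive reviews are excluded; only negative ones enter the final corpus.
corpus = [toks for toks in (preprocess(r) for r in reviews) if is_negative(toks)]
```

The word-grouping step (iii) would typically use stemming or lemmatization; it is omitted here to keep the sketch dependency-free.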


2020 ◽  
Vol 179 ◽  
pp. 02013
Author(s):  
Yi Zou ◽  
Na Qi

The visual design of an infographic compresses complex information and presents it to the audience in an intuitive, easy-to-understand form so that viewers can absorb its content effectively. With the continuous development of science and of information visualization technology, the production methods and presentation forms of infographics have become increasingly rich, evolving from two-dimensional charts toward multi-dimensional and dynamic infographics. Approaching the topic from the perspective of user experience, this paper proposes optimization suggestions for the current state of infographic visual design.


2018 ◽  
Vol 17 (03) ◽  
pp. 883-910 ◽  
Author(s):  
P. D. Mahendhiran ◽  
S. Kannimuthu

Contemporary research in Multimodal Sentiment Analysis (MSA) using deep learning is becoming popular in Natural Language Processing. Enormous amounts of data are generated every day on social media platforms such as Facebook, WhatsApp, YouTube, Twitter and microblogs. Identifying the relevant information within such large multimodal data is difficult; hence, there is a need for a more capable MSA. Here, deep learning is used to improve the understanding and performance of MSA. Deep learning delivers automatic feature extraction and helps achieve strong performance in a combined model that integrates linguistic, acoustic and video information. This paper focuses on the various techniques used for classifying a given portion of natural language text, audio and video according to the thoughts, feelings or opinions expressed in it, i.e., whether the general attitude is Neutral, Positive or Negative. From the results, it is observed that the deep learning classification algorithm gives better results than other machine learning classifiers such as KNN, Naive Bayes, Random Forest, Random Tree and the Neural Net model. The proposed deep-learning MSA identifies sentiment in web videos; in preliminary proof-of-concept experiments on the ICT-YouTube dataset, the proposed multimodal system achieves an accuracy of 96.07%.
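The combination of linguistic, acoustic and video information described above can be illustrated with a toy late-fusion sketch: each modality is reduced to a sentiment score in [-1, 1], and the averaged score is mapped to the three classes. The per-modality scorers and the 0.1 decision threshold are illustrative assumptions, not the paper's deep-learning feature extractors:

```python
def fuse(text_score: float, audio_score: float, video_score: float) -> str:
    """Toy late fusion: average per-modality sentiment scores in [-1, 1]
    and map the result to Neutral / Positive / Negative.
    The 0.1 threshold is an illustrative choice, not from the paper."""
    fused = (text_score + audio_score + video_score) / 3.0
    if fused > 0.1:
        return "Positive"
    if fused < -0.1:
        return "Negative"
    return "Neutral"

# Example: strongly positive cues in all three modalities.
label = fuse(0.8, 0.5, 0.6)
```

In the actual system, each score would come from a learned per-modality model (e.g., a text, audio, or video network) rather than being supplied by hand, and fusion could equally be done at the feature level before classification.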

