Quality and User Experience
Latest Publications

Total documents: 48 (five years: 25)
H-index: 6 (five years: 2)
Published by: Springer-Verlag
ISSN: 2366-0147, 2366-0139

2021, Vol 7 (1)
Author(s): Simon Emberton, Christopher Simons

Abstract: Within the worldwide diving community, underwater photography is becoming increasingly popular. However, the marine environment presents certain challenges for image capture, with resulting imagery often suffering from colour distortions, low contrast and blurring. As a result, image enhancement software is used not only to enhance the imagery aesthetically, but also to address these degradations. Although feature-rich image enhancement software products are available, little is known about the user experience of underwater photographers when interacting with such tools. To address this gap, we conducted an online questionnaire to better understand which software tools are being used, and face-to-face interviews to investigate the characteristics of the image enhancement user experience for underwater photographers. We analysed the interview transcripts using the pragmatic and hedonic categories from the frameworks of Hassenzahl (Funology, Kluwer Academic Publishers, Dordrecht, pp 31–42, 2003; Funology 2, Springer, pp 301–313, 2018) for positive and negative user experience. Our results reveal a moderately negative experience overall for both pragmatic and hedonic categories. We draw some insights from the findings and make recommendations for improving the user experience for underwater photographers using image enhancement tools.
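
As an illustration of the kinds of corrections such tools apply, the sketch below implements two common, generic underwater enhancement steps, gray-world white balance and percentile contrast stretching, in Python with NumPy. This is not the software studied in the paper, only a minimal example of addressing the colour casts and low contrast mentioned above.

```python
# Illustrative sketch (not the authors' method): two corrections that
# underwater enhancement tools commonly apply.
import numpy as np

def gray_world_white_balance(img: np.ndarray) -> np.ndarray:
    """Rescale each channel so its mean matches the global mean,
    counteracting the blue-green colour cast of underwater imagery."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0, 255).astype(np.uint8)

def contrast_stretch(img: np.ndarray, low_pct=1, high_pct=99) -> np.ndarray:
    """Stretch intensities between the given percentiles to the full
    [0, 255] range to address low contrast."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = (img.astype(np.float64) - lo) / max(hi - lo, 1e-6) * 255
    return np.clip(stretched, 0, 255).astype(np.uint8)

# Usage: enhanced = contrast_stretch(gray_world_white_balance(raw_rgb))
# where raw_rgb is an H x W x 3 uint8 array.
```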


2021, Vol 6 (1)
Author(s): Virpi Roto, Johanna Bragge, Yichen Lu, Darius Pacauskas

Abstract: Human experiences have been studied in multiple disciplines, Human–Computer Interaction (HCI) being one of the largest research fields with its user experience (UX) research. Currently, there is little interaction between experience researchers from different disciplines, although cross-disciplinary knowledge sharing has the potential to accelerate the development of UX and other experience research fields to the next level. This article reports a research profiling study of almost 52,000 experience publications over 125 years, showing the breadth of experience research across disciplines. The data analysis reveals the disciplines that study experiences, the prominent authors, institutions and countries in experience research, the most cited works by experience researchers across disciplines, and how UX research is situated on the map of experience research. This descriptive research profiling study is a necessary first step on the journey of mapping the landscape of experience research, guiding researchers towards understanding experience as a multidisciplinary concept, and establishing a more coherent experience research field.
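
For readers unfamiliar with research profiling, the sketch below shows the kind of descriptive aggregation such a study performs over a bibliographic dataset. The file and column names (year, discipline, citations) are hypothetical and do not reflect the authors' actual data pipeline.

```python
# A minimal sketch of research-profiling aggregation over a hypothetical
# bibliographic export; field names are assumptions, not the paper's.
import pandas as pd

records = pd.read_csv("experience_publications.csv")  # hypothetical export

# Publications per discipline, showing the breadth of the field
per_discipline = records.groupby("discipline").size().sort_values(ascending=False)

# Publication volume over time, showing the growth of experience research
per_year = records.groupby("year").size()

# Most cited works across all disciplines
top_cited = records.nlargest(10, "citations")[["year", "discipline", "citations"]]

print(per_discipline.head(), per_year.tail(), top_cited, sep="\n\n")
```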


2021, Vol 6 (1)
Author(s): Asbjørn Følstad, Cameron Taylor

Abstract: The uptake of chatbots for customer service depends on the user experience. For such chatbots, user experience in particular concerns whether the user is provided relevant answers to their queries and whether the chatbot interaction brings them closer to resolving their problem. Dialogue data from interactions between users and chatbots represents a potentially valuable source of insight into user experience. However, there is a need for knowledge of how to make use of these data. Motivated by this, we present a framework for qualitative analysis of chatbot dialogues in the customer service domain. The framework has been developed across several studies involving two chatbots for customer service, in collaboration with the chatbot hosts. We present the framework and illustrate its application with insights from three case examples. Through the case findings, we show how the framework may provide insight into key drivers of user experience, including response relevance and dialogue helpfulness (Case 1), insight to drive chatbot improvement in practice (Case 2), and insight of theoretical and practical relevance for understanding chatbot user types and interaction patterns (Case 3). On the basis of the findings, we discuss the strengths and limitations of the framework, its theoretical and practical implications, and directions for future work.
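
A minimal sketch of how coded dialogue data might be tallied in such a qualitative analysis; the code labels (response_relevant, dialogue_helpful, etc.) are invented for illustration and are not the paper's actual coding scheme.

```python
# Sketch only: representing analyst-coded chatbot dialogues and tallying
# two example drivers of user experience. Labels are hypothetical.
from dataclasses import dataclass, field
from collections import Counter

@dataclass
class Dialogue:
    dialogue_id: str
    codes: list[str] = field(default_factory=list)  # analyst-assigned codes

dialogues = [
    Dialogue("d1", ["response_relevant", "dialogue_helpful"]),
    Dialogue("d2", ["response_irrelevant"]),
    Dialogue("d3", ["response_relevant", "dialogue_unhelpful"]),
]

# Tally codes across all dialogues to quantify the qualitative coding
tally = Counter(code for d in dialogues for code in d.codes)
relevant = tally["response_relevant"]
total = relevant + tally["response_irrelevant"]
print(f"Relevant responses: {relevant}/{total}")
```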


2021, Vol 6 (1)
Author(s): Jolien De Letter, Aleksandra Zheleva, Mathias Maes, Anissa All, Lieven De Marez, ...

2020, Vol 6 (1)
Author(s): Michael Seufert

Abstract: Due to biased assumptions about the underlying ordinal rating scale in subjective Quality of Experience (QoE) studies, Mean Opinion Score (MOS)-based evaluations provide results that are hard to interpret and can be misleading. This paper proposes to consider the full QoE distribution for evaluating, reporting, and modeling QoE results instead of relying on MOS-based metrics derived from results based on ordinal rating scales. The QoE distribution can be represented concisely by the parameters of a multinomial distribution without losing any information about the underlying QoE ratings, and it even keeps backward compatibility with previous, biased MOS-based results. Considering QoE results as a realization of a multinomial distribution makes it possible to rely on a well-established theoretical background, which enables meaningful evaluations also for ordinal rating scales. Moreover, QoE models based on QoE distributions keep detailed information from the results of a QoE study of a technical system, and thus give an unprecedented richness of insights into the end users' experience with that system. In this work, existing and novel statistical methods for QoE distributions are summarized and exemplary evaluations are outlined. Furthermore, using the novel concept of quality steps, simulative and analytical QoE models based on QoE distributions are presented and showcased. The goal is to demonstrate the fundamental advantages of considering QoE distributions over MOS-based evaluations if the underlying rating data is ordinal in nature.
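
The core idea can be illustrated in a few lines: keep the full rating distribution as multinomial parameters, from which the MOS remains recoverable as the distribution's mean. The ratings below are invented; this is a sketch of the concept, not the paper's evaluation code.

```python
# Sketch of the paper's core idea under simplified assumptions: represent
# QoE results on a 5-point scale as a multinomial distribution rather than
# collapsing them to a MOS. Ratings are invented.
import numpy as np

ratings = np.array([3, 4, 4, 5, 2, 4, 3, 5, 4, 3])  # invented 5-point ratings

# Full QoE distribution: multinomial parameters (p_1, ..., p_5), retaining
# all information in the ordinal ratings
counts = np.bincount(ratings, minlength=6)[1:]
p = counts / counts.sum()

# Backward compatibility: the MOS is recoverable as the distribution mean
mos = np.dot(np.arange(1, 6), p)

# A distribution-level insight that the MOS alone hides, e.g. the share
# of users with poor or bad experience
poor_or_bad = p[:2].sum()
print(f"Distribution: {np.round(p, 2)}, MOS: {mos:.2f}, P(rating <= 2): {poor_or_bad:.2f}")
```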


2020, Vol 6 (1)
Author(s): Babak Naderi, Rafael Zequeira Jiménez, Matthias Hirth, Sebastian Möller, Florian Metzger, ...

Abstract: Subjective speech quality assessment has traditionally been carried out in laboratory environments under controlled conditions. With the advent of crowdsourcing platforms, tasks that require human intelligence can be completed by crowd workers over the Internet. Crowdsourcing thus offers a new paradigm for speech quality assessment, promising higher ecological validity of the quality judgments at the expense of potentially lower reliability. This paper compares laboratory-based and crowdsourcing-based speech quality assessments in terms of comparability of results and efficiency. For this purpose, three pairs of listening-only tests have been carried out using three different crowdsourcing platforms and following the ITU-T Recommendation P.808. In each test, listeners judge the overall quality of the speech sample following the Absolute Category Rating procedure. We compare the results of the crowdsourcing approach with the results of standard laboratory tests performed according to the ITU-T Recommendation P.800. Results show that in most cases, both paradigms lead to comparable results. Notable differences are discussed with respect to their sources, and conclusions are drawn that establish practical guidelines for crowdsourcing-based speech quality assessment.
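
A minimal sketch, with invented per-condition MOS values, of a standard way two test paradigms are compared: Pearson correlation and RMSE across shared test conditions. This illustrates the comparison in general, not the paper's actual data or analysis.

```python
# Comparing per-condition MOS from a lab test (ITU-T P.800) and a
# crowdsourcing test (ITU-T P.808). All numbers are invented.
import numpy as np
from scipy.stats import pearsonr

lab_mos = np.array([1.8, 2.6, 3.3, 4.0, 4.5])    # invented per-condition MOS
crowd_mos = np.array([2.0, 2.5, 3.4, 3.9, 4.4])  # invented per-condition MOS

# High correlation and low RMSE indicate the paradigms agree
r, _ = pearsonr(lab_mos, crowd_mos)
rmse = np.sqrt(np.mean((lab_mos - crowd_mos) ** 2))
print(f"Pearson r = {r:.3f}, RMSE = {rmse:.3f}")
```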


2020, Vol 6 (1)
Author(s): Kathrin Borchert, Anika Seufert, Edwin Gamboa, Matthias Hirth, Tobias Hoßfeld

Abstract: Evaluating the Quality of Experience (QoE) of video streaming and its influence factors has become paramount for streaming providers, as they want to maintain high satisfaction among their customers. In this context, crowdsourced user studies have become a valuable tool for evaluating, on a large scale, the different factors that can affect the perceived user experience. Most of these crowdsourcing studies use what we refer to as either an in vivo or an in vitro interface design. In vivo design means that the study participant rates the QoE of a video embedded in an application similar to a real streaming service, e.g., YouTube or Netflix. In vitro design refers to a setting in which the video stream is separated from a specific service, so the video plays on a plain background. Although these interface designs vary widely, their results are often compared and generalized. In this work, we use a crowdsourcing study to investigate the influence of three interface design alternatives, one in vitro and two in vivo designs with different levels of interactivity, on the perceived video QoE. Contrary to our expectations, the results indicate no significant influence of the study's interface design on the video experience in general. Furthermore, we found that the in vivo design does not reduce the test takers' attentiveness. However, we observed that participants who interacted with the test interface reported a higher video QoE than the other groups.
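
One reasonable way to test for an effect of interface design on ordinal ratings across three independent groups is a Kruskal–Wallis test, sketched below with invented ratings. The paper's exact statistical procedure is not specified here; this only illustrates the kind of comparison involved.

```python
# Testing whether interface design (in vitro vs. two in vivo variants)
# influences ordinal QoE ratings. All ratings are invented.
import numpy as np
from scipy.stats import kruskal

in_vitro = np.array([4, 3, 5, 4, 4, 3, 5])             # invented 5-point ratings
in_vivo_plain = np.array([4, 4, 3, 5, 4, 4, 3])
in_vivo_interactive = np.array([5, 4, 4, 5, 4, 5, 4])

# Non-parametric test suited to ordinal data across independent groups
stat, p_value = kruskal(in_vitro, in_vivo_plain, in_vivo_interactive)
print(f"H = {stat:.2f}, p = {p_value:.3f}")  # p >= 0.05: no significant design effect
```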


2020, Vol 5 (1)
Author(s): Alexandre De Masi, Katarzyna Wac

Abstract: Smartphones have progressively become everyone's pocket Swiss army knife, supporting their users' needs to accomplish tasks in numerous contexts. However, the applications executing those tasks regularly do not perform as they should, and the user-perceived experience suffers. In this paper, we present our approach to model and predict the Quality of Experience (QoE) of mobile applications used over WiFi or cellular networks. We aimed to create predictive QoE models and to derive recommendations for mobile application developers to create QoE-aware applications. Previous work on smartphone application QoE prediction focuses on either qualitative or quantitative data; we collected both, "in the wild", through our living lab. We ran a 4-week-long study with 38 Android phone users, focusing on frequently used and highly interactive applications. The participants rated their expectations of, and QoE with, their mobile applications in various contexts, resulting in a total of 6086 ratings. Simultaneously, our smartphone logger (mQoL-Log) collected background information such as network information, user physical activity, battery statistics, and more. We apply various data aggregation approaches and feature selection processes to train multiple predictive QoE models. We obtain better model performance using ratings acquired within 14.85 minutes after application usage, and we further boost performance by adding the user's expectation as a feature. We also create an on-device prediction model using smartphone-only features and compare its performance against the full-feature models; the on-device model performs below them. Surprisingly, alongside the top three features, namely the intended task to accomplish with the app, the application's name (e.g., WhatsApp, Spotify), and the network Quality of Service (QoS), the user's physical activity (e.g., whether they are walking) is the most important feature. Finally, we share our recommendations with application developers and discuss the implications of QoE and expectations in mobile application design.
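
A minimal sketch, assuming hypothetical features and invented data, of how such a predictive QoE model could be trained with scikit-learn. The authors' actual features, models, and pipeline are richer than this; the feature names below merely echo those mentioned in the abstract.

```python
# Sketch only: predicting a QoE rating from context features like those
# named above (task, app name, network QoS, physical activity).
# Feature names and data are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = pd.DataFrame({
    "task": ["messaging", "music", "messaging", "video", "music", "video"] * 50,
    "app_name": ["WhatsApp", "Spotify", "WhatsApp", "YouTube", "Spotify", "YouTube"] * 50,
    "network_latency_ms": [40, 80, 35, 120, 60, 200] * 50,
    "physical_activity": ["still", "walking", "still", "walking", "still", "walking"] * 50,
    "qoe_rating": [5, 4, 5, 3, 4, 2] * 50,  # invented 5-point ratings
})

X = pd.get_dummies(data.drop(columns="qoe_rating"))  # one-hot encode categoricals
y = data["qoe_rating"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"Accuracy: {model.score(X_test, y_test):.2f}")

# Feature importances hint at which context factors drive predicted QoE
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head())
```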

