Human-Centered Recommender Systems: Origins, Advances, Challenges, and Opportunities

AI Magazine ◽  
2021 ◽  
Vol 42 (3) ◽  
pp. 31-42
Author(s):  
Joseph Konstan ◽  
Loren Terveen

From the earliest days of the field, Recommender Systems research and practice have struggled to balance and integrate approaches that focus on recommendation as a machine learning or missing-value problem with ones that focus on recommendation as a discovery tool and perhaps a persuasion platform. In this article, we review 25 years of recommender systems research from a human-centered perspective, looking at the interface and algorithm studies that advanced our understanding of how system designs can be tailored to users' objectives and needs. At the same time, we show how external factors, including commercialization and technology developments, have shaped research on human-centered recommender systems. We show how several unifying frameworks have helped developers and researchers alike incorporate thinking about user experience and human decision-making into their designs. We then review the challenges and opportunities in today's recommenders, looking at how deep learning and optimization techniques can integrate with both interface designs and human performance statistics to improve recommender effectiveness and usefulness.
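The "missing-value" framing mentioned above can be made concrete: treat the user-item rating matrix as only partially observed and predict the blank entries. The sketch below is a minimal illustration (not the authors' method) using stochastic-gradient matrix factorization; the toy ratings, hyperparameters, and variable names are all assumptions for demonstration.

```python
import numpy as np

# Toy user-item rating matrix; 0 marks a missing (unobserved) rating.
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)
observed = R > 0

k, lr, reg, epochs = 2, 0.01, 0.02, 2000
rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(R.shape[0], k))  # user factor vectors
Q = rng.normal(scale=0.1, size=(R.shape[1], k))  # item factor vectors

for _ in range(epochs):
    for u, i in zip(*np.nonzero(observed)):
        pu = P[u].copy()                  # pre-update copy for a clean SGD step
        err = R[u, i] - pu @ Q[i]         # error on an observed entry
        P[u] += lr * (err * Q[i] - reg * pu)
        Q[i] += lr * (err * pu - reg * Q[i])

# Reconstructed matrix; formerly missing cells now hold predicted scores
# that a recommender could rank to produce suggestions.
print(np.round(P @ Q.T, 2))
```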

Author(s):  
Sumi Helal ◽  
Flavia C. Delicato ◽  
Cintia B. Margi ◽  
Satyajayant Misra ◽  
Markus Endler

Author(s):  
Lissette Almonte ◽  
Esther Guerra ◽  
Iván Cantador ◽  
Juan de Lara

Abstract
Recommender systems are information filtering systems used in many online applications, such as music and video streaming and e-commerce platforms. They are also increasingly being applied to facilitate software engineering activities. Following this trend, we are witnessing a growing research interest in recommendation approaches that assist with modelling tasks and model-based development processes. In this paper, we report on a systematic mapping review (based on the analysis of 66 papers) that classifies the existing research work on recommender systems for model-driven engineering (MDE). This study aims to serve as a guide for tool builders and researchers in understanding the MDE tasks that might be subject to recommendations, the applicable recommendation techniques and evaluation methods, and the open challenges and opportunities in this field of research.
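As a purely illustrative sketch of the kind of modelling assistant this survey covers (not any specific surveyed tool), the following toy recommender suggests attributes for a class under construction, based on co-occurrence in a small, made-up corpus of existing class models.

```python
from collections import Counter

# Tiny corpus of existing class models: class name -> attribute names.
# Invented data, used only to illustrate the recommendation idea.
corpus = {
    "Customer": ["name", "email", "address"],
    "Supplier": ["name", "email", "phone"],
    "Invoice":  ["number", "date", "total"],
}

def recommend_attributes(partial_attrs, top_n=3):
    """Suggest attributes co-occurring with those already modelled."""
    scores = Counter()
    for attrs in corpus.values():
        if set(partial_attrs) & set(attrs):   # treat overlap as similarity
            scores.update(a for a in attrs if a not in partial_attrs)
    return [a for a, _ in scores.most_common(top_n)]

print(recommend_attributes(["name"]))  # e.g. ['email', 'address', 'phone']
```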


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Mirjam Pot ◽  
Nathalie Kieusseyan ◽  
Barbara Prainsack

Abstract
The application of machine learning (ML) technologies in medicine in general, and in radiology in particular, is hoped to improve clinical processes and the provision of healthcare. A central motivation in this regard is to advance patient treatment by reducing human error and increasing the accuracy of prognosis, diagnosis, and therapy decisions. There is, however, also increasing awareness of bias in ML technologies and its potentially harmful consequences. Biases refer to systematic distortions of datasets, algorithms, or human decision making. These systematic distortions are understood to have negative effects on the quality of an outcome in terms of accuracy, fairness, or transparency. But biases are not only a technical problem that requires a technical solution. Because they often also have a social dimension, the ‘distorted’ outcomes they yield often have implications for equity. This paper assesses different types of biases that can emerge within applications of ML in radiology and discusses in which cases such biases are problematic. Drawing upon theories of equity in healthcare, we argue that while some biases are harmful and should be acted upon, others might be unproblematic and even desirable, precisely because they can contribute to overcoming inequities.
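One way to make "systematic distortion" measurable is to compare a classifier's performance across patient subgroups. The sketch below uses invented toy predictions and an assumed, illustrative subgroup attribute; a large per-group accuracy gap is one concrete symptom of the biases discussed above.

```python
import numpy as np

# Toy labels and predictions for a binary diagnostic classifier,
# split by a hypothetical patient subgroup attribute (illustrative only).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    acc = (y_true[mask] == y_pred[mask]).mean()
    print(f"group {g}: accuracy = {acc:.2f}")  # A: 0.80, B: 0.60 here
```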


2021 ◽  
Author(s):  
Lun Ai ◽  
Stephen H. Muggleton ◽  
Céline Hocquette ◽  
Mark Gromowski ◽  
Ute Schmid

Abstract
Given the recent successes of deep learning in AI, there has been increased interest in the role of and need for explanations in machine-learned theories. A distinct notion in this context is Michie's definition of ultra-strong machine learning (USML). USML is demonstrated by a measurable increase in human performance of a task following provision to the human of a symbolic machine-learned theory for task performance. A recent paper demonstrated the beneficial effect of a machine-learned logic theory for a classification task, yet to our knowledge no existing work has examined the potential harmfulness of the machine's involvement for human comprehension during learning. This paper investigates the explanatory effects of a machine-learned theory in the context of simple two-person games and proposes a framework, based on the Cognitive Science literature, for identifying the harmfulness of machine explanations. The approach involves a cognitive window consisting of two quantifiable bounds and is supported by empirical evidence collected from human trials. Our quantitative and qualitative results indicate that human learning aided by a symbolic machine-learned theory that satisfies a cognitive window achieves significantly higher performance than human self-learning. Results also demonstrate that human learning aided by a symbolic machine-learned theory that fails to satisfy this window leads to significantly worse performance than unaided human learning.
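The cognitive window above is defined by two quantifiable bounds. The sketch below is only a hypothetical illustration of the general gating idea (present a symbolic theory to a human only if a comprehension-cost measure falls between two bounds); the cost function and thresholds are stand-ins, not the bounds defined in the paper.

```python
# Hypothetical sketch: gate a symbolic theory on comprehension cost.
# The cost proxy and thresholds are illustrative assumptions.

def rule_cost(rule: str) -> int:
    """Crude proxy for comprehension cost: number of conditions in a rule."""
    return rule.count(",") + 1

def within_cognitive_window(theory, lower=1, upper=4):
    """Accept a theory only if every rule's cost lies inside the window."""
    return all(lower <= rule_cost(r) <= upper for r in theory)

# Toy logic theory for a two-person game, written as Prolog-like strings.
theory = ["wins(A) :- line(A)", "draws :- full_board, not line(_)"]
print(within_cognitive_window(theory))  # True under these toy thresholds
```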


Author(s):  
Ryan Mullins ◽  
Deirdre Kelliher ◽  
Ben Nargi ◽  
Mike Keeney ◽  
Nathan Schurr

Recently, cyber reasoning systems demonstrated near-human performance characteristics when they autonomously identified, proved, and mitigated vulnerabilities in software during a competitive event. New research seeks to augment human vulnerability research teams with cyber reasoning system teammates in collaborative work environments. However, the literature lacks a concrete understanding of vulnerability research workflows and practices, limiting designers', engineers', and researchers' ability to successfully integrate these artificially intelligent entities into teams. This paper contributes a general workflow model of the vulnerability research process and identifies specific collaboration challenges and opportunities anchored in this model. The contributions were derived from a qualitative field study of the work habits, behaviors, and practices of human vulnerability research teams. They will inform future work in the vulnerability research domain by establishing an empirically driven workflow model that can be adapted to the specific organizational and functional constraints placed on individuals and teams.


Author(s):  
Peter Brusilovsky ◽  
Marco de Gemmis ◽  
Alexander Felfernig ◽  
Pasquale Lops ◽  
John O'Donovan ◽  
...  
