Escaping the McNamara Fallacy: Towards more Impactful Recommender Systems Research

AI Magazine ◽  
2020 ◽  
Vol 41 (4) ◽  
pp. 79-95
Author(s):  
Dietmar Jannach ◽  
Christine Bauer

Recommender systems are among today’s most successful application areas of artificial intelligence. However, the recommender systems research community has fallen prey to a McNamara fallacy to a worrying extent: in the majority of our research efforts, we rely almost exclusively on computational measures such as prediction accuracy, which are easier to obtain than the results of other evaluation methods. It remains unclear whether small improvements in such computational measures matter greatly and whether they lead to better systems in practice. A paradigm shift in our research culture and goals is therefore needed. We can no longer focus exclusively on abstract computational measures but must direct our attention to research questions that are more relevant and have more impact in the real world. In this work, we review the various ways in which recommender systems may create value; how they impact, positively or negatively, consumers, businesses, and society; and how we can measure the resulting effects. Through our analyses, we identify a number of research gaps and propose ways of broadening and improving our methodology so that it leads to more impactful research in our field.
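To make the critique concrete, the "computational measures" the authors refer to are offline metrics such as prediction accuracy or precision@k, computed on held-out data. The following is a minimal, hypothetical sketch of precision@k (the item IDs and relevance set are invented for illustration):

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are in the relevant set."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

# Hypothetical toy data: items ranked by a recommender vs. items a user liked.
recommended = ["a", "b", "c", "d", "e"]
relevant = {"b", "d", "f"}
print(precision_at_k(recommended, relevant, 3))  # 1 hit ("b") in the top 3 -> 1/3
```

A small gain in such a score is easy to report, which is precisely why, as the abstract argues, it can crowd out harder questions about real-world value.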

2009 ◽  
pp. 1822-1834
Author(s):  
Leigh Jin ◽  
Daniel Robey ◽  
Marie-Claude Boudreau

Open source software has rapidly become a popular area of study within the information systems research community. Most of the research conducted so far has focused on the phenomenon of open source software development, rather than use. We argue for the importance of studying open source software use and propose a framework to guide research in this area. The framework describes four main areas of investigation: the creation of OSS user communities, their characteristics, their contributions and how they change. For each area of the framework, we suggest several research questions that deserve attention.


2021 ◽  
Vol 70 ◽  
Author(s):  
Jess Whittlestone ◽  
Kai Arulkumaran ◽  
Matthew Crosby

Deep Reinforcement Learning (DRL) is an avenue of research in Artificial Intelligence (AI) that has received increasing attention within the research community in recent years, and is beginning to show potential for real-world application. DRL is one of the most promising routes towards developing more autonomous AI systems that interact with and take actions in complex real-world environments, and can more flexibly solve a range of problems for which we may not be able to precisely specify a correct ‘answer’. This could have substantial implications for people’s lives: for example by speeding up automation in various sectors, changing the nature and potential harms of online influence, or introducing new safety risks in physical infrastructure. In this paper, we review recent progress in DRL, discuss how this may introduce novel and pressing issues for society, ethics, and governance, and highlight important avenues for future research to better understand DRL’s societal implications. This article appears in the special track on AI and Society.


Author(s):  
Tajul Rosli Razak ◽  
Mohammad Hafiz Ismail ◽  
Shukor Sanim Mohd Fauzi ◽  
Ray Adderley JM Gining ◽  
Ruhaila Maskat

A recommender system is an algorithm that gives users suggestions for relevant elements or items, such as products to purchase, books to read, or jobs to apply for, depending on the industry or situation. Recently, there has been a surge of interest in developing recommender systems in a variety of areas. One of the most widely used approaches in recommender systems is collaborative filtering (CF). CF is a strategy for automatically creating a filter based on a user's needs by extracting preferences or recommendation information from a large number of users. The CF approach uses multiple correlation steps to do this. However, uncertainty in finding the best similarity measure is unavoidable. This paper outlines a method for improving the configuration of a recommender system tasked with recommending an appropriate study field and supervisor to a group of final-year project students. The framework we suggest is built on a participatory design methodology that allows students' individual opinions to be factored into the recommender system's design. The architecture of the recommender system is also illustrated using a real-world scenario, namely mapping students' fields of interest to possible supervisors for the final-year project.
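The "correlation steps" of user-based CF mentioned above can be sketched in a few lines: compute a similarity measure between users (cosine similarity here, one of several candidates the abstract notes is a source of uncertainty), then score unrated items by similarity-weighted ratings. The rating matrix and function names below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cosine_similarity(u, v):
    # Treat zero vectors as having no similarity.
    norm = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / norm) if norm else 0.0

def recommend(ratings, user, top_n=2):
    """Rank unrated items for `user` by similarity-weighted ratings of other users."""
    sims = np.array([cosine_similarity(ratings[user], ratings[o]) if o != user else 0.0
                     for o in range(len(ratings))])
    scores = sims @ ratings              # weighted sum of every user's ratings
    scores[ratings[user] > 0] = -np.inf  # exclude items the user already rated
    return np.argsort(scores)[::-1][:top_n]

# Toy user-item rating matrix (rows: users, cols: items; 0 = unrated).
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4]], dtype=float)
print(recommend(R, user=0))  # item indices most similar users rated highly
```

Swapping `cosine_similarity` for Pearson correlation or another measure changes the ranking, which is exactly the configuration choice the paper's participatory design approach addresses.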


2005 ◽  
Vol 20 (2) ◽  
pp. 127-141 ◽  
Author(s):  
DANNY WEYNS ◽  
MICHAEL SCHUMACHER ◽  
ALESSANDRO RICCI ◽  
MIRKO VIROLI ◽  
TOM HOLVOET

There is a growing awareness in the multiagent systems research community that the environment plays a prominent role in multiagent systems. Originating from research on behavior-based agent systems and situated multiagent systems, the importance of the environment is now gradually being accepted in the multiagent systems community in general. In this paper, we put forward the environment as a first-order abstraction in multiagent systems. This position is motivated by the fact that several aspects of multiagent systems that conceptually do not belong to the agents themselves should not be assigned to, or hosted inside, the agents. Examples include the infrastructure for communication, the topology of a spatial domain, and support for the action model. These and other aspects should be considered explicitly, and the environment is the natural candidate to encapsulate them. We elaborate on environment engineering, and we illustrate how the environment plays a central role in a real-world multiagent system application.
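The idea of the environment as a first-order abstraction can be sketched as follows: the spatial topology and the communication infrastructure live in an `Environment` object, not inside the agents. This is a hypothetical minimal illustration of the position, not the authors' design:

```python
class Environment:
    """First-order abstraction: hosts topology and messaging, not the agents' logic."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.positions = {}  # agent id -> (x, y) on the spatial topology
        self.inboxes = {}    # agent id -> pending messages

    def place(self, agent_id, x, y):
        self.positions[agent_id] = (x, y)
        self.inboxes.setdefault(agent_id, [])

    def send(self, sender, receiver, message):
        # Communication infrastructure belongs to the environment, not the agents.
        self.inboxes[receiver].append((sender, message))

    def neighbors(self, agent_id, radius=1):
        x, y = self.positions[agent_id]
        return [a for a, (ax, ay) in self.positions.items()
                if a != agent_id and abs(ax - x) <= radius and abs(ay - y) <= radius]

env = Environment(10, 10)
env.place("a1", 2, 2)
env.place("a2", 3, 2)
env.send("a1", "a2", "hello")
print(env.neighbors("a1"))   # ["a2"]
print(env.inboxes["a2"])     # [("a1", "hello")]
```

Agents then interact only through this interface, which keeps concerns like message delivery and spatial perception out of the agents themselves.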




10.28945/4505 ◽  
2020 ◽  
Vol 15 ◽  
pp. 039-064
Author(s):  
Rogerio Ferreira da Silva ◽  
Itana Maria de Souza Gimenes ◽  
José Carlos Maldonado

Aim/Purpose: This paper presents a study of Virtual Communities of Practice (VCoP) evaluation methods that aims to identify their current status and impact on knowledge sharing. The purposes of the study are as follows: (i) to identify trends and research gaps in VCoP evaluation methods; and (ii) to assist researchers in positioning new research activities in this domain. Background: VCoP have become a popular knowledge-sharing mechanism for both individuals and organizations. Their evaluation process is complex; however, it is recognized as an essential means of providing evidence of community effectiveness. Moreover, VCoP have introduced additional features beyond face-to-face Communities of Practice (CoP) that need to be taken into account in evaluation processes, such as geographical dispersion. The fact that VCoP rely on Information and Communication Technologies (ICT) to execute their practices, as well as to store artifacts virtually, makes more consistent data analysis possible; thus, the evaluation process can apply automatic data gathering and analysis. Methodology: A systematic mapping study, based on five research questions, was carried out to analyze existing studies of VCoP evaluation methods and frameworks. The mapping included searching five research databases, resulting in the selection of 1,417 papers, to which a formal analysis process was applied. This process led to the preliminary selection of 39 primary studies for complete reading. After reading them, we selected 28 relevant primary studies from which data was extracted and synthesized to answer the proposed research questions. Contribution: The authors of the primary studies analyzed in this systematic mapping propose a set of methods and strategies for evaluating VCoP, such as frameworks, processes, and maturity models. Our main contribution is the identification of research gaps in the body of studies, in order to stimulate projects that can improve VCoP evaluation methods and support their important role in social learning. Findings: The systematic mapping led to the conclusion that most approaches to VCoP evaluation do not consider combining structured and unstructured data metrics. In addition, there is a lack of guidelines to support community operators' actions based on evaluation metrics.


Author(s):  
Tan Yigitcanlar ◽  
Juan M. Corchado ◽  
Rashid Mehmood ◽  
Rita Yi Man Li ◽  
Karen Mossberger ◽  
...  

The urbanization problems we face may be alleviated using innovative digital technology. However, employing these technologies entails the risk of creating new urban problems and/or intensifying the old ones instead of alleviating them. Hence, in a world with immense technological opportunities and at the same time enormous urbanization challenges, it is critical to adopt the principles of responsible urban innovation. These principles assure the delivery of the desired urban outcomes and futures. We contribute to the existing responsible urban innovation discourse by focusing on local government artificial intelligence (AI) systems, providing a literature and practice overview, and a conceptual framework. In this perspective paper, we advocate for the need for balancing the costs, benefits, risks and impacts of developing, adopting, deploying and managing local government AI systems in order to achieve responsible urban innovation. The statements made in this perspective paper are based on a thorough review of the literature, research, developments, trends and applications carefully selected and analyzed by an expert team of investigators. This study provides new insights, develops a conceptual framework and identifies prospective research questions by placing local government AI systems under the microscope through the lens of responsible urban innovation. The presented overview and framework, along with the identified issues and research agenda, offer scholars prospective lines of research and development; where the outcomes of these future studies will help urban policymakers, managers and planners to better understand the crucial role played by local government AI systems in ensuring the achievement of responsible outcomes.


2021 ◽  
Vol 77 (18) ◽  
pp. 3045
Author(s):  
Oguz Akbilgic ◽  
Liam Butler ◽  
Ibrahim Karabayir ◽  
Patricia Chang ◽  
Dalane Kitzman ◽  
...  

2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Albert T. Young ◽  
Kristen Fernandez ◽  
Jacob Pfau ◽  
Rasika Reddy ◽  
Nhat Anh Cao ◽  
...  

Artificial intelligence models match or exceed dermatologists in melanoma image classification. Less is known about their robustness against real-world variations, and clinicians may incorrectly assume that a model with an acceptable area under the receiver operating characteristic curve or related performance metric is ready for clinical use. Here, we systematically assessed the performance of dermatologist-level convolutional neural networks (CNNs) on real-world non-curated images by applying computational “stress tests”. Our goal was to create a proxy environment in which to comprehensively test the generalizability of off-the-shelf CNNs developed without training or evaluation protocols specific to individual clinics. We found inconsistent predictions on images captured repeatedly in the same setting or subjected to simple transformations (e.g., rotation). Such transformations resulted in false positive or negative predictions for 6.5–22% of skin lesions across test datasets. Our findings indicate that models meeting conventionally reported metrics need further validation with computational stress tests to assess clinic readiness.
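The stress-test idea above can be sketched generically: apply simple transformations (here, 90° rotations) to an image and check whether the model's label stays the same. The `toy_model` below is a deliberately rotation-sensitive stand-in, invented for illustration; it is not the paper's CNN, but it reproduces the kind of prediction flip the stress tests are designed to expose:

```python
import numpy as np

def stress_test(model, image, n_rotations=4):
    """Check prediction consistency under simple transformations (90-degree rotations)."""
    preds = [model(np.rot90(image, k)) for k in range(n_rotations)]
    return len(set(preds)) == 1, preds

# Stand-in "model": thresholds the mean intensity of the top half of the image.
# A rule tied to image orientation like this flips its label under rotation.
def toy_model(img):
    return "malignant" if img[: img.shape[0] // 2].mean() > 0.5 else "benign"

image = np.zeros((4, 4))
image[:2] = 1.0  # bright top half
consistent, preds = stress_test(toy_model, image)
print(consistent, preds)  # inconsistent: the label changes across rotations
```

A real stress test would use richer transformations (blur, lighting, repeated capture), but the pass/fail logic is the same: a clinic-ready model should return one label for all variants.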

