Towards individualized information navigation tools that adapt to user domain expertise

Author(s):  
K. Voigt


Author(s):  
Mansoureh Maadi ◽  
Hadi Akbarzadeh Khorshidi ◽  
Uwe Aickelin

Objective: To review human–Artificial Intelligence (AI) interaction in Machine Learning (ML) applications and inform how best to combine human domain expertise with the computational power of ML methods. The review focuses on the medical field, as the literature on medical ML applications highlights a particular need for medical experts to collaborate with ML approaches. Methods: A scoping literature review is performed on Scopus and Google Scholar using the terms “human in the loop”, “human in the loop machine learning”, and “interactive machine learning”. Peer-reviewed papers published from 2015 to 2020 are included in the review. Results: We design four questions to investigate and describe human–AI interaction in ML applications. These questions are “Why should humans be in the loop?”, “Where does human–AI interaction occur in the ML processes?”, “Who are the humans in the loop?”, and “How do humans interact with ML in Human-In-the-Loop ML (HILML)?”. To answer the first question, we describe three main reasons why human involvement is important in ML applications. To address the second question, human–AI interaction is examined in three main algorithmic stages: 1. data production and pre-processing; 2. ML modelling; and 3. ML evaluation and refinement. To answer the third question, we describe the importance of the expertise level of the humans involved in human–AI interaction. To address the fourth question, the forms of human interaction in HILML are grouped into three categories. We conclude the paper with a discussion of open opportunities for future research in HILML.
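The fourth question, how humans interact with ML in HILML, can be made concrete with a small sketch. The snippet below shows one common interaction pattern, an uncertainty-sampling loop in which a domain expert labels the instances the current model is least confident about; the `ask_expert` stand-in, the scikit-learn model, and the sampling rule are illustrative assumptions for this example rather than components described in the review.

```python
# Minimal human-in-the-loop sketch: uncertainty sampling with an expert oracle.
# The expert is simulated by the ground-truth labels; in a real HILML system
# ask_expert() would route the instance to a clinician or other domain expert.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y_true = make_classification(n_samples=500, n_features=10, random_state=0)

def ask_expert(index):
    # Placeholder for the human in the loop; here we simply reveal the true label.
    return y_true[index]

# Seed pool: a handful of expert-labelled examples from each class.
seed = list(np.where(y_true == 0)[0][:5]) + list(np.where(y_true == 1)[0][:5])
labels = {i: ask_expert(i) for i in seed}
unlabelled = [i for i in range(len(X)) if i not in labels]

model = LogisticRegression(max_iter=1000)
for _ in range(20):                                       # 20 rounds of expert interaction
    idx = list(labels)
    model.fit(X[idx], [labels[i] for i in idx])
    proba = model.predict_proba(X[unlabelled])[:, 1]
    pick = unlabelled[int(np.argmin(np.abs(proba - 0.5)))]  # least confident instance
    labels[pick] = ask_expert(pick)                        # expert supplies the label
    unlabelled.remove(pick)

print("Expert-labelled examples after the loop:", len(labels))
```

In this pattern the human contributes at the ML modelling and refinement stages by labelling only the instances the model is most uncertain about, which keeps the expert's workload small.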


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4890
Author(s):  
Athanasios Dimitriadis ◽  
Christos Prassas ◽  
Jose Luis Flores ◽  
Boonserm Kulvatunyou ◽  
Nenad Ivezic ◽  
...  

Cyber threat information sharing is an imperative process towards achieving collaborative security, but it poses several challenges. One crucial challenge is the sheer volume of shared threat information, so there is a need to advance the filtering of such information. While the state of the art in filtering relies primarily on keyword- and domain-based searching, these approaches require sizable human involvement and domain expertise that is rarely available. Recent research revealed the need to harvest business information to fill this gap, although using such information has so far yielded only coarse-grained filtering. This paper presents a novel contextualized filtering approach that exploits standardized, multi-level contextual information about business processes. The contextual information describes the conditions under which a given piece of threat information is actionable from an organization's perspective. Filtering can therefore be automated by measuring the equivalence between the context of the shared threat information and the context of the consuming organization. The paper contributes directly to the filtering challenge and indirectly to automated, customized threat information sharing. Moreover, the paper proposes the architecture of a cyber threat information sharing ecosystem that operates according to the proposed filtering approach and defines the characteristics that are advantageous to filtering approaches. Implementation of the proposed approach can support compliance with Special Publication 800-150 of the National Institute of Standards and Technology.
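As a rough illustration of what measuring the equivalence between contexts could look like, the sketch below scores shared threat reports against an organization's own context and keeps only those above a threshold. The attribute names (`platform`, `business_process`, `sector`) and the Jaccard-style overlap score are assumptions made for this example; the paper's standardized, multi-level context model is richer than this.

```python
# Illustrative context-equivalence filter for shared cyber threat information.
# Context attributes and the overlap score are assumptions for this sketch,
# not the standardized multi-level model proposed in the paper.
from typing import Dict, List, Set

def context_overlap(shared: Dict[str, Set[str]], org: Dict[str, Set[str]]) -> float:
    """Average Jaccard overlap across the context attributes both sides define."""
    keys = shared.keys() & org.keys()
    if not keys:
        return 0.0
    scores = []
    for k in keys:
        union = shared[k] | org[k]
        scores.append(len(shared[k] & org[k]) / len(union) if union else 0.0)
    return sum(scores) / len(scores)

def filter_threat_info(reports: List[dict], org_context: Dict[str, Set[str]],
                       threshold: float = 0.5) -> List[dict]:
    """Keep only reports whose context is sufficiently equivalent to the organization's."""
    return [r for r in reports if context_overlap(r["context"], org_context) >= threshold]

org = {"platform": {"windows"}, "business_process": {"invoicing"}, "sector": {"manufacturing"}}
reports = [
    {"id": "TI-1", "context": {"platform": {"windows"}, "business_process": {"invoicing"}}},
    {"id": "TI-2", "context": {"platform": {"ios"}, "sector": {"retail"}}},
]
print([r["id"] for r in filter_threat_info(reports, org)])  # expected: ['TI-1']
```

A richer implementation would weight context levels differently and map attributes to a shared vocabulary, but the core idea of scoring shared information against the consuming organization's context stays the same.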


2016 ◽  
Vol 16 (4) ◽  
pp. 219-224 ◽  
Author(s):  
Alex Smith

In a world where articles and tweets discuss how artificial intelligence technology will replace humans, including lawyers and their support functions in firms, it can be hard to understand what the future holds. This article, written by Alex Smith and based on his presentation at the British and Irish Association of Law Librarians conference in Dublin in 2016, aims to demystify the emerging technology boom and identifies the expertise needed to make these tools work and be deployed in law firms. The article then looks at the skills and expertise of the knowledge and information teams based in law firms and suggests that, because of their domain expertise and their existing, well-defined skills that are essential to this new generation of technology, they are ideally placed to lead these challenges. The article covers the new technical environment, the emerging areas of products and legal problems, the skills needed for the new roles this revolution is creating, and how these could fit into a reimagined knowledge team.


2021 ◽  
Author(s):  
Hayley Weir ◽  
Keiran Thompson ◽  
Amelia Woodward ◽  
Benjamin Choi ◽  
Augustin Braun ◽  
...  

Inputting molecules into chemistry software, such as quantum chemistry packages, currently requires domain expertise, expensive software and/or cumbersome procedures. Leveraging recent breakthroughs in machine learning, we develop ChemPix: an offline,...


2021 ◽  
Vol 13 (1) ◽  
pp. 1-25
Author(s):  
Michael Loster ◽  
Ioannis Koumarelas ◽  
Felix Naumann

The integration of multiple data sources is a common problem in a large variety of applications. Traditionally, handcrafted similarity measures are used to discover, merge, and integrate multiple representations of the same entity (duplicates) into a large homogeneous collection of data. Often, these similarity measures do not cope well with the heterogeneity of the underlying dataset. In addition, domain experts are needed to manually design and configure such measures, which is time-consuming and requires extensive domain expertise. We propose a deep Siamese neural network capable of learning a similarity measure that is tailored to the characteristics of a particular dataset. By exploiting the properties of deep learning methods, we eliminate the manual feature engineering process and thus considerably reduce the effort required for model construction. In addition, we show that it is possible to transfer knowledge acquired during the deduplication of one dataset to another, and thus significantly reduce the amount of data required to train a similarity measure. We evaluate our method on multiple datasets and compare our approach to state-of-the-art deduplication methods. Our approach outperforms competitors by up to +26 percent F-measure, depending on task and dataset. In addition, we show that knowledge transfer is not only feasible but, in our experiments, improves F-measure by up to +4.7 percent.
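For readers unfamiliar with the architecture, the following sketch shows a generic Siamese network that scores a pair of record feature vectors as duplicate or non-duplicate. The layer sizes, the absolute-difference comparison head, and the synthetic training data are assumptions for illustration; the paper's actual network and feature extraction are not reproduced here.

```python
# Generic Siamese similarity sketch (PyTorch). Both records pass through the
# same encoder; the comparison head turns the embedding difference into a
# duplicate probability. Sizes and data are illustrative, not the paper's setup.
import torch
import torch.nn as nn

class SiameseSimilarity(nn.Module):
    def __init__(self, in_dim: int = 20, emb_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(            # shared weights for both inputs
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, emb_dim)
        )
        self.head = nn.Linear(emb_dim, 1)        # compares the two embeddings

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        diff = torch.abs(self.encoder(a) - self.encoder(b))
        return torch.sigmoid(self.head(diff)).squeeze(-1)

# Synthetic stand-in for featurized record pairs and duplicate/non-duplicate labels.
pairs_a, pairs_b = torch.randn(256, 20), torch.randn(256, 20)
labels = torch.randint(0, 2, (256,)).float()

model = SiameseSimilarity()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(pairs_a, pairs_b), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```

The knowledge transfer described in the abstract could then amount to reusing the trained encoder weights as the starting point when fine-tuning on a second dataset, reducing the number of labelled pairs needed there.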

