human expert
Recently Published Documents

TOTAL DOCUMENTS: 138 (five years: 37)
H-INDEX: 14 (five years: 3)

Author(s): Rory Wilding, Vivek M. Sheraton, Lysabella Soto, Niketa Chotai, Ern Yu Tan

2021, Vol 11 (1)
Author(s): Marta Lucchetta, Marco Pellegrini

Abstract: Computational drug repositioning aims at ranking and selecting existing drugs for novel diseases or for novel use in old diseases. In silico drug screening has the potential to considerably speed up the shortlisting of promising candidates in response to outbreaks of diseases such as COVID-19, for which no satisfactory cure has yet been found. We describe DrugMerge, a methodology for preclinical computational drug repositioning based on merging multiple drug rankings obtained with an ensemble of disease active subnetworks. DrugMerge uses differential transcriptomic data on drugs and diseases in the context of a large gene co-expression network. Experiments with four benchmark diseases demonstrate that, in all four cases, our method ranks first a drug in clinical use for the specified disease. Applying DrugMerge to COVID-19 produced rankings with many drugs currently in clinical trials for COVID-19 in top positions, showing that DrugMerge can mimic human expert judgment.
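
The abstract does not spell out how the individual rankings are merged; as a rough, hypothetical illustration of rank aggregation (a simple Borda count, not DrugMerge's actual algorithm), a Python sketch might look like this:

    from collections import defaultdict

    def merge_rankings(rankings: list[list[str]]) -> list[str]:
        """Merge several ranked drug lists into one consensus ranking
        using a Borda count: a drug earns more points the higher it
        appears in each individual ranking."""
        scores = defaultdict(float)
        for ranking in rankings:
            n = len(ranking)
            for position, drug in enumerate(ranking):
                scores[drug] += n - position  # top of the list scores highest
        # Sort drugs by total score, best first
        return sorted(scores, key=scores.get, reverse=True)

    # Illustrative rankings from three disease-active subnetworks
    subnetwork_rankings = [
        ["dexamethasone", "remdesivir", "baricitinib"],
        ["remdesivir", "dexamethasone", "tocilizumab"],
        ["dexamethasone", "tocilizumab", "remdesivir"],
    ]
    print(merge_rankings(subnetwork_rankings))
    # ['dexamethasone', 'remdesivir', 'tocilizumab', 'baricitinib']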


2021
Author(s): Thomas Gengenbach, Kerstin Schoch

Previous studies on the classification of fine art show that features of paintings can be captured and categorized using machine learning approaches. This progress can also benefit art psychology by facilitating data collection on artworks without the need to recruit experts as raters. In this study, a machine learning approach is used to predict the ratings of RizbA, a rating instrument for two-dimensional pictorial works. Starting from a pre-trained model, the algorithm was fine-tuned via transfer learning on 886 pictorial works by contemporary professional artists and non-professionals. As quality criteria, artificial intelligence raters (ART) are compared with generic raters (GR) created from the real human expert raters, using error rate and mean squared error (MSE). ART ratings were found to lie in the same error range as randomly chosen human ratings and can therefore be regarded as equivalent to real human expert raters for almost all items in RizbA. Further training with more data will close the gap to the human raters on the remaining items.
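
The abstract names only the general setup (fine-tuning a pre-trained model to regress rating-scale items under an MSE objective); a minimal sketch of such a setup in Python with PyTorch might look as follows. The backbone choice, the item count, and all names are assumptions for illustration, not the authors' actual code:

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_ITEMS = 26  # hypothetical number of RizbA rating items

    # Start from a pre-trained vision backbone and replace its
    # classification head with a regression head over the rating items.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    backbone.fc = nn.Linear(backbone.fc.in_features, NUM_ITEMS)

    # Fine-tune with a mean-squared-error objective against human ratings.
    optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
    criterion = nn.MSELoss()

    def training_step(images: torch.Tensor, ratings: torch.Tensor) -> float:
        """One optimization step: images -> predicted item ratings."""
        optimizer.zero_grad()
        predictions = backbone(images)
        loss = criterion(predictions, ratings)
        loss.backward()
        optimizer.step()
        return loss.item()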


Author(s): Devia Kartika, Rima Liana Gema, Mutiana Pratiwi

An expert system is a computer program designed to model the problem-solving ability of a human expert. The expert system method used here is forward chaining, an inference method that reasons logically from the problem toward its solution. The aim of this research is to design and develop an expert system that can identify severe malnutrition in children from 0 to 5 years old. The knowledge is derived from questions asked to a nutrition expert. The data are taken from questions asked to the user; once all of the questions have been answered, the goal is reached and the nutrition status is displayed. This application enables the user to diagnose the nutritional condition or disease affecting a child and to obtain a solution. The system can be used by any kind of user because it is easy to access. It also presents important information about severe malnutrition and recent news on children's health, giving parents more knowledge about the importance of preventing severe malnutrition.
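
Forward chaining itself is a standard inference technique: repeatedly fire any rule whose premises are all satisfied by the known facts until nothing new can be derived. A minimal Python sketch follows; the rules and facts are invented for illustration and are not taken from the paper:

    # Illustrative rule base: (set of premises, conclusion)
    rules = [
        ({"weight_for_age_very_low", "visible_wasting"}, "severe_malnutrition"),
        ({"severe_malnutrition"}, "refer_to_clinic"),
    ]

    def forward_chain(facts: set[str]) -> set[str]:
        """Derive all conclusions reachable from the initial facts."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)  # rule fires, new fact added
                    changed = True
        return derived

    answers = {"weight_for_age_very_low", "visible_wasting"}
    print(forward_chain(answers))
    # derived facts include 'severe_malnutrition' and 'refer_to_clinic'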


Author(s): Clara Xiaoling Chen, Ryan Hudgins, William F. Wright

We use an experiment to examine how advice valence (i.e., whether the advice suggests good news or bad news) affects the perceived source credibility of data analytics compared to human experts, as a result of motivated reasoning. We predict that individuals will perceive data analytics as less credible than human experts, but only when the advice suggests bad news. Using a forecasting task in which individuals seek advice from either a human expert or data analytics, we find evidence consistent with our prediction. Furthermore, we find that this effect is mediated by the perceived competence of the advice source. We contribute to the nascent accounting literature on data analytics by providing evidence on a potential impediment to successfully transitioning to the use of analytics for decision-making in organizations.


2021, Vol 16 (5), pp. 1912-1928
Author(s): Namhee Yoon, Ha-Kyung Lee

This study investigated the effect of perceived technology quality and personalization quality on behavioral intentions, mediated by perceived empathy, in using an artificial intelligence (AI) recommendation service. The study was based on a theoretical model of artificial intelligence device use acceptance. We also tested the moderating effect of individuals' need for cognition on empathy. Data were collected through an online survey using a nationally recognized consumer research panel service in Korea. Participants were asked to report their preferences and needs regarding sneakers; they then randomly experienced either the AI or the human expert recommendation service, which offered a recommended product. A total of 200 responses were analyzed using SPSS 21.0 for descriptive statistics, reliability analysis, and PROCESS analysis, and AMOS 21.0 for confirmatory factor analysis and structural equation modeling (SEM). Results revealed that, compared with the human (expert) recommendation service, the AI recommendation service increased perceived technology quality, which in turn increased personalization quality. Technology and personalization quality had a positive influence on behavioral intentions, mediated by perceived empathy. In addition, when individuals had a high level of need for cognition, the effect of personalization quality on empathy was stronger. Individuals with a low level of need for cognition, by contrast, perceived greater empathy as technology quality increased. The findings of the current study improve understanding of how consumers accept AI technology-driven services in the online shopping context.


2021
Author(s): David Johnstone, Stewart Jones, Oliver Jones, Steve Tulig

The purpose of our paper is to describe a probability scoring rule that reflects the economic performance of a hypothetical investor who acts upon the probability forecasts emanating from a given model or human expert by trading against a market-clearing consensus of competing models and forecasts. The probability forecasts being compared are aggregated by an equilibrium condition into a market consensus reflecting the wisdom of the crowd. A good forecaster (model or human expert) is one who allows the user to bet profitably against the market consensus. By asking forecasts to beat the market, forecasters are discouraged from herding and motivated to obtain better information than rival forecasters. We illustrate and prove that each trader's personal incentive to hedge or fudge disappears when the number of forecasts in the market is sufficiently large. Our score captures the forecaster's ability to support economically profitable action and reveals how the user's profits depend strongly on both the accuracy of the forecasts and the decision rule (boldness) with which they are acted upon.
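
A rough numerical illustration of the mechanism (not the paper's actual equilibrium condition or score): take the crowd mean as a stand-in for the market consensus and Kelly betting on binary events as the decision rule, and score a forecaster by the realized log-wealth growth from betting its beliefs against consensus prices. All names and the aggregation rule here are assumptions:

    import numpy as np

    def consensus(forecasts: np.ndarray) -> np.ndarray:
        """Stand-in market consensus: the mean of all competing
        probability forecasts (the paper derives its consensus
        from an equilibrium condition instead)."""
        return forecasts.mean(axis=0)

    def kelly_log_score(p, q, outcomes) -> float:
        """Economic score of one forecaster: realized log-wealth growth
        from Kelly-betting beliefs p against consensus prices q on a
        sequence of binary events (1 = occurred, 0 = did not)."""
        log_wealth = 0.0
        for pi, qi, y in zip(p, q, outcomes):
            if pi > qi:    # back the event at price qi
                f = (pi - qi) / (1 - qi)          # Kelly stake fraction
                log_wealth += np.log(1 - f + f / qi) if y else np.log(1 - f)
            elif pi < qi:  # back the complement at price 1 - qi
                f = (qi - pi) / qi
                log_wealth += np.log(1 - f) if y else np.log(1 - f + f / (1 - qi))
            # pi == qi: no edge, no bet
        return log_wealth

    # Example: three events, one forecaster in a three-forecaster market
    all_forecasts = np.array([[0.7, 0.4, 0.6],
                              [0.5, 0.5, 0.5],
                              [0.6, 0.3, 0.7]])
    q = consensus(all_forecasts)              # [0.6, 0.4, 0.6]
    outcomes = np.array([1, 0, 1])
    print(kelly_log_score(all_forecasts[0], q, outcomes))
    # ~0.154 (positive: profitable against the consensus)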


Sensors, 2021, Vol 21 (11), pp. 3834
Author(s): Michał Bukowski, Jarosław Kurek, Izabella Antoniuk, Albina Jegorowa

This paper presents a novel approach to the assessment of decision confidence in multi-class recognition. In many classification problems, eliminating human interaction with the system may be one goal, but it is not the only option: lessening the workload of human experts can also bring huge improvements to the production process. The presented approach provides a tool that significantly decreases the amount of work the human expert needs to do when evaluating samples. Instead of hard classification, which assigns a single label to each sample, the described solution evaluates each case in terms of decision confidence: it checks how sure the classifier is about the currently processed example and decides whether the final classification should be performed or the sample should instead be evaluated manually by a human expert. The method can be easily adjusted to any number of classes, and it can prioritize either classification accuracy or coverage of the dataset, depending on user preferences. Different confidence functions are evaluated in that respect. The results obtained during experiments meet the initial criteria, providing acceptable quality for the final solution.
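
The paper evaluates several confidence functions; as a generic illustration of the underlying reject-option idea (the specific functions and threshold below are common choices, not necessarily the authors'), low-confidence predictions can be routed to a human expert:

    import numpy as np

    def max_prob_confidence(probs: np.ndarray) -> float:
        """Confidence as the highest class probability."""
        return float(probs.max())

    def margin_confidence(probs: np.ndarray) -> float:
        """Confidence as the gap between the two best class probabilities."""
        top2 = np.sort(probs)[-2:]
        return float(top2[1] - top2[0])

    def route(probs: np.ndarray, threshold: float = 0.8,
              confidence=max_prob_confidence):
        """Accept the classifier's label if confident enough,
        otherwise defer the sample to a human expert."""
        if confidence(probs) >= threshold:
            return ("auto", int(probs.argmax()))
        return ("human", None)

    # Raising the threshold increases accuracy on auto-labeled samples
    # but lowers coverage (more samples go to the expert), and vice versa.
    print(route(np.array([0.05, 0.90, 0.05])))  # ('auto', 1)
    print(route(np.array([0.40, 0.35, 0.25])))  # ('human', None)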

