Physiological Indicators for User Trust in Machine Learning with Influence Enhanced Fact-Checking

Author(s):  
Jianlong Zhou ◽  
Huaiwen Hu ◽  
Zhidong Li ◽  
Kun Yu ◽  
Fang Chen
Author(s):  
Robert S. Gutzwiller ◽  
John Reeder

Objective: We examined a method of machine learning (ML) to evaluate its potential to develop more trustworthy control of unmanned vehicle area search behaviors. Background: ML typically lacks interaction with the user. Novel interactive machine learning (IML) techniques incorporate user feedback, enabling observation of emerging ML behaviors and human collaboration during ML of a task. This may foster trust in, and recognition of, these algorithms. Method: Participants judged and selected behaviors in a low- and a high-interaction (IML) condition over the course of behavior evolution using ML. User trust in the outputs, as well as preference and the ability to discriminate and recognize the behaviors, were measured. Results: Compared to noninteractive techniques, IML behaviors were more trusted, preferred, and recognizable as separate from non-IML behaviors, and they approached the performance of pure ML models. Conclusion: IML shows promise for creating behaviors by involving the user; this is the first extension of this technique to vehicle behavior model development targeting user satisfaction, and it is unique in its multifaceted evaluation of how users perceived, trusted, and implemented these learned controllers. Application: There are many contexts where the brittleness of ML cannot be trusted, but the advantage of ML over traditionally programmed behaviors may be large, as in some military operations where such behaviors could be scaled. IML in this early form appears to generate satisfactory behaviors without sacrificing performance, use, or trust in the behavior, but more work is necessary.


2020 ◽  
Vol 10 (8) ◽  
pp. 2890
Author(s):  
Jongseong Gwak ◽  
Akinari Hirao ◽  
Motoki Shino

Drowsy driving is one of the main causes of traffic accidents. To reduce such accidents, early detection of drowsy driving is needed. Previous studies have shown that driver drowsiness affects driving performance, behavioral indices, and physiological indices. The purpose of this study is to investigate the feasibility of classifying the alert states of drivers, particularly the slightly drowsy state, based on hybrid sensing of vehicle-based, behavioral, and physiological indicators, with a view to implementing these classifiers in a detection system. First, we measured the drowsiness level, driving performance, physiological signals (from electroencephalogram and electrocardiogram results), and behavioral indices of a driver using a driving simulator and a driver monitoring system. Next, driver alert and drowsy states were identified by machine learning algorithms, with a dataset constructed from the indices extracted over 10-s periods. Finally, ensemble algorithms were used for classification. The results showed that the ensemble algorithm can obtain 82.4% classification accuracy using hybrid methods to identify the alert and slightly drowsy states, and 95.4% accuracy classifying the alert and moderately drowsy states. Additionally, the results show that the random forest algorithm can obtain 78.7% accuracy when classifying the alert vs. slightly drowsy states if physiological indicators are excluded, and 89.8% accuracy when classifying the alert vs. moderately drowsy states. These results demonstrate the feasibility of highly accurate early detection of driver drowsiness and of implementing a drowsiness detection system based on hybrid sensing using non-contact sensors.
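The classification step described above can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the feature names (lane-position deviation, blink rate, heart rate, EEG alpha power) and the synthetic data are assumptions standing in for the actual indices extracted from each 10-s window.

```python
# Sketch: random-forest classification of alert vs. drowsy driving states
# from fixed-length feature vectors, one row per 10-s window.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 400  # number of 10-s windows

# Synthetic stand-in features: [lane_sd, blink_rate, heart_rate, alpha_power]
alert = rng.normal(loc=[0.2, 15.0, 70.0, 1.0], scale=0.3, size=(n // 2, 4))
drowsy = rng.normal(loc=[0.6, 25.0, 62.0, 2.0], scale=0.3, size=(n // 2, 4))
X = np.vstack([alert, drowsy])
y = np.array([0] * (n // 2) + [1] * (n // 2))  # 0 = alert, 1 = drowsy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Random forest, as used in the physiology-excluded (non-contact) comparison
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.3f}")
```

Dropping the physiological columns from `X` mimics the paper's non-contact comparison, which traded a few points of accuracy for easier deployment.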


2021 ◽  
Author(s):  
Julio C. S. Reis ◽  
Fabrício Benevenuto

Digital platforms, including social media systems and messaging applications, have become a place for campaigns of misinformation that affect the credibility of the entire news ecosystem. The emergence of fake news in these environments has quickly evolved into a worldwide phenomenon, where the lack of scalable fact-checking strategies is especially worrisome. In this context, this thesis aims to investigate practical approaches for the automatic detection of fake news disseminated on digital platforms. In particular, we explore new datasets and features for fake news detection to assess the prediction performance of current supervised machine learning approaches. We also propose an unbiased framework for quantifying the informativeness of features for fake news detection, and present an explanation of the factors contributing to model decisions, considering data from different scenarios. Finally, we propose and implement a new mechanism that accounts for the potential occurrence of fake news within the data, significantly reducing the number of content pieces journalists and fact-checkers have to go through before finding a fake story.


AI Magazine ◽  
2019 ◽  
Vol 40 (2) ◽  
pp. 44-58 ◽  
Author(s):  
David Gunning ◽  
David Aha

Dramatic success in machine learning has led to a new wave of AI applications (for example, transportation, security, medicine, finance, defense) that offer tremendous benefits but cannot explain their decisions and actions to human users. DARPA’s explainable artificial intelligence (XAI) program endeavors to create AI systems whose learned models and decisions can be understood and appropriately trusted by end users. Realizing this goal requires methods for learning more explainable models, designing effective explanation interfaces, and understanding the psychological requirements for effective explanations. The XAI developer teams are addressing the first two challenges by creating ML techniques and developing principles, strategies, and human-computer interaction techniques for generating effective explanations. Another XAI team is addressing the third challenge by summarizing, extending, and applying psychological theories of explanation to help the XAI evaluator define a suitable evaluation framework, which the developer teams will use to test their systems. The XAI teams completed the first phase of this 4-year program in May 2018. In a series of ongoing evaluations, the developer teams are assessing how well their XAI systems’ explanations improve user understanding, user trust, and user task performance.


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
William Godel ◽  
Zeve Sanderson ◽  
Kevin Aslett ◽  
Jonathan Nagler ◽  
Richard Bonneau ◽  
...  

Reducing the spread of false news remains a challenge for social media platforms, as the current strategy of using third-party fact-checkers lacks the capacity to address both the scale and speed of misinformation diffusion. Research on the “wisdom of the crowds” suggests one possible solution: aggregating the evaluations of ordinary users to assess the veracity of information. In this study, we investigate the effectiveness of a scalable model for real-time crowdsourced fact-checking. We select 135 popular news stories and have them evaluated by both ordinary individuals and professional fact-checkers within 72 hours of publication, producing 12,883 individual evaluations. Although we find that machine learning-based models using the crowd perform better at identifying false news than simple aggregation rules, our results suggest that neither approach is able to perform at the level of professional fact-checkers. Additionally, both methods perform best when using evaluations only from survey respondents with high political knowledge, suggesting reason for caution for crowdsourced models that rely on a representative sample of the population. Overall, our analyses reveal that while crowd-based systems provide some information on news quality, they are nonetheless limited—and have significant variation—in their ability to identify false news.
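The two aggregation strategies contrasted in the study can be sketched side by side: a simple majority vote over individual veracity ratings, and a supervised model trained on per-story crowd features. Everything here is synthetic and illustrative; the rater-accuracy probabilities, feature choices, and in-sample evaluation are assumptions, not the study's design.

```python
# Sketch: majority-rule aggregation vs. an ML model on crowd ratings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_stories, n_raters = 135, 20

truth = rng.integers(0, 2, n_stories)  # 1 = false story, 0 = true story
# Each rater flags a story as false more often when it actually is false
p_flag = np.where(truth == 1, 0.7, 0.3)
ratings = rng.random((n_stories, n_raters)) < p_flag[:, None]

# Strategy 1: simple aggregation -- majority vote over raters
majority_pred = ratings.mean(axis=1) > 0.5
majority_acc = (majority_pred == truth).mean()

# Strategy 2: ML on crowd-derived features (share flagged, rating variance)
X = np.column_stack([ratings.mean(axis=1), ratings.var(axis=1)])
model = LogisticRegression().fit(X, truth)
ml_acc = (model.predict(X) == truth).mean()

print(f"majority vote accuracy: {majority_acc:.3f}")
print(f"ML model accuracy:      {ml_acc:.3f}")
```

The study's finding that both fall short of professional fact-checkers corresponds here to the irreducible noise in the individual ratings: no aggregation rule can recover signal the raters never provide.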


2017 ◽  
Vol 24 (14) ◽  
pp. 2012-2020 ◽  
Author(s):  
Akira Yasumura ◽  
Mikimasa Omori ◽  
Ayako Fukuda ◽  
Junichi Takahashi ◽  
Yukiko Yasumura ◽  
...  

Objective: To establish valid, objective biomarkers for ADHD using machine learning. Method: Machine learning was used to predict disorder severity from new brain function data, using a support vector machine (SVM). A multicenter approach was used to collect data for machine learning training, including behavioral and physiological indicators, age, and reverse Stroop task (RST) data from 108 children with ADHD and 108 typically developing (TD) children. Near-infrared spectroscopy (NIRS) was used to quantify the change in prefrontal cortex oxygenated hemoglobin during the RST. Verification data were from 62 children with ADHD and 37 TD children from six facilities in Japan. Results: The SVM general performance results showed a sensitivity of 88.71%, a specificity of 83.78%, and an overall discrimination rate of 86.25%. Conclusion: An SVM using an objective index from the RST may be useful as an auxiliary biomarker for the diagnosis of children with ADHD.
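The evaluation reported above (sensitivity, specificity, overall discrimination rate) can be sketched with an SVM on two stand-in features. The features, group means, and synthetic data are illustrative assumptions; they are not the NIRS/RST indices from the study.

```python
# Sketch: SVM classification scored by sensitivity, specificity, and
# overall discrimination rate, computed from a confusion matrix.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(2)
n = 200
# 1 = ADHD, 0 = TD; two placeholder features (e.g., oxy-Hb change, RST score)
y = np.array([1] * (n // 2) + [0] * (n // 2))
X = np.vstack([
    rng.normal([-0.8, 0.6], 0.4, (n // 2, 2)),   # ADHD group
    rng.normal([0.8, -0.6], 0.4, (n // 2, 2)),   # TD group
])

clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict(X)

# With labels [0, 1], ravel() yields (tn, fp, fn, tp)
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
sensitivity = tp / (tp + fn)   # ADHD cases correctly identified
specificity = tn / (tn + fp)   # TD cases correctly identified
overall = (tp + tn) / n        # overall discrimination rate
print(f"sensitivity={sensitivity:.2%} specificity={specificity:.2%} "
      f"overall={overall:.2%}")
```

Reporting sensitivity and specificity separately, as the study does, matters for a diagnostic aid: overall accuracy alone would mask an imbalance between missed ADHD cases and false positives in the TD group.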


2019 ◽  
Author(s):  
Pablo De Andrades Lima ◽  
Jean Lucas Cimirro ◽  
Erico Amaral ◽  
Gerson Munhos

This work presents a literature review to search for, analyze, and verify the forms of dissemination of fake news and the technologies used to combat it. Brazilian research articles on the topic were included in the study, leading to the investigation of two problems: the use of social/digital bots and the influence of the filter bubble in steering access, and, as part of the solution, the verification of fake news using deep learning. The cited studies show a promising trend in the fight against fake news, combining automated machine learning with human intelligence to distinguish bots from people, alongside other countermeasures such as fact-checking and punitive legislation, as well as the delicate conflict between combating fake news and respecting freedom of expression, where national legal plans, under the justification of fighting false news, may violate this autonomy. After analyzing the articles within the scope of the review, the difficulty of handling fake news in current digital media such as social networks becomes apparent, owing to the complexity of classifying news and to market interests that employ robotic content-dissemination tools. Although advanced artificial intelligence techniques capable of reducing the problem exist, there are still no fully functional and practical tools for combating fake news, and user awareness remains the best form of prevention.

