The Trust in Voting Systems (TVS) Measure

2022 ◽  
Vol 18 (1) ◽  
pp. 0-0

It is essential to democracy that voters trust voting systems enough to participate in elections and use these systems. Unfortunately, voter trust has been found to be low in many situations, which can detrimentally affect human-computer interaction in voting. It is therefore important to understand the degree to which voters trust any specific voting method. Voting researchers have developed and used measures of overall trust in technology, yet researchers have long argued that trust in systems is domain-specific, implying that system-specific measures should be used instead. To address this point, this paper describes the development of a psychometrically reliable and validated instrument, the Trust in Voting Systems (TVS) measure. The TVS allows researchers not only to understand group mean differences in trust across voting systems but also individual differences in trust within systems, all of which serves to inform and improve voting systems.
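As a rough illustration of what "psychometrically reliable" typically involves, the sketch below computes Cronbach's alpha over hypothetical Likert-scale responses to a trust questionnaire. The simulated item data, the eight-item scale, and the 0.70 benchmark are illustrative assumptions, not the TVS validation procedure or its data.

```python
# Toy reliability check: Cronbach's alpha on hypothetical Likert responses.
# The data below are simulated and do not come from the TVS study.
import numpy as np


def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of Likert responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.integers(1, 6, size=(50, 1))       # hypothetical latent trust level
    noise = rng.integers(-1, 2, size=(50, 8))     # per-item noise
    responses = np.clip(base + noise, 1, 5)       # 8 hypothetical 5-point trust items
    print(f"alpha = {cronbach_alpha(responses):.2f}")  # >= 0.70 is a common benchmark
```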

2019 ◽  
Vol 7 (3) ◽  
pp. 919-925
Author(s):  
Kelvin Wambani Siovi ◽  
Cheruiyot Willison Kipruto ◽  
Agnes Mindila

2021 ◽  
Vol 25 (4) ◽  
pp. 1031-1045
Author(s):  
Helang Lai ◽  
Keke Wu ◽  
Lingli Li

Emotion recognition in conversations is crucial, as there is an urgent need to improve the overall experience of human-computer interactions. A promising direction in this field is to develop a model that can effectively extract adequate context for a test utterance. We introduce a novel model, termed hierarchical memory networks (HMN), to address the problem of recognizing utterance-level emotions. HMN divides the contexts into different aspects and employs different step lengths to represent the weights of these aspects. To model self-dependencies, HMN uses independent local memory networks for each aspect. To capture interpersonal dependencies, HMN further employs global memory networks that integrate the local outputs into global storages. These storages generate contextual summaries and help to find the emotionally dependent utterance most relevant to the test utterance. Through an attention-based multi-hop scheme, the storages are then merged with the test utterance by addition at each iteration. Experiments on the IEMOCAP dataset show that our model outperforms the compared methods in accuracy.
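A minimal sketch of the hierarchical memory idea described above, written in PyTorch: per-aspect local memories attend over one slice of the context (self-dependencies), a global memory integrates the local summaries (interpersonal dependencies), and a multi-hop loop merges each contextual summary with the test utterance by addition. All names (LocalMemory, HMNSketch), dimensions, and layer choices are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: hierarchical local/global memories with additive multi-hop updates.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalMemory(nn.Module):
    """Attends over one context aspect (e.g. one speaker) given the test utterance."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, query: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # query: (batch, dim); context: (batch, seq, dim)
        scores = torch.bmm(self.proj(context), query.unsqueeze(-1)).squeeze(-1)
        weights = F.softmax(scores, dim=-1)                          # attention over context
        return torch.bmm(weights.unsqueeze(1), context).squeeze(1)   # (batch, dim)


class HMNSketch(nn.Module):
    """Local memories per aspect, a global memory over their outputs, multi-hop merge."""

    def __init__(self, dim: int, n_aspects: int = 2, n_hops: int = 3, n_emotions: int = 6):
        super().__init__()
        self.locals = nn.ModuleList([LocalMemory(dim) for _ in range(n_aspects)])
        self.global_mem = LocalMemory(dim)   # global storage reader over local summaries
        self.n_hops = n_hops
        self.classifier = nn.Linear(dim, n_emotions)

    def forward(self, query: torch.Tensor, aspects: list) -> torch.Tensor:
        # aspects: one (batch, seq, dim) tensor per context aspect (e.g. per speaker)
        for _ in range(self.n_hops):
            local_out = torch.stack(
                [m(query, ctx) for m, ctx in zip(self.locals, aspects)], dim=1
            )                                        # (batch, n_aspects, dim)
            summary = self.global_mem(query, local_out)   # contextual summary
            query = query + summary                       # merge by addition, then iterate
        return self.classifier(query)


if __name__ == "__main__":
    dim = 100
    model = HMNSketch(dim)
    utterance = torch.randn(4, dim)        # test utterance embeddings
    speaker_a = torch.randn(4, 5, dim)     # context aspect: speaker A
    speaker_b = torch.randn(4, 5, dim)     # context aspect: speaker B
    logits = model(utterance, [speaker_a, speaker_b])
    print(logits.shape)                    # torch.Size([4, 6])
```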


2021 ◽  
Vol 5 (3) ◽  
pp. 13
Author(s):  
Heting Wang ◽  
Vidya Gaddy ◽  
James Ross Beveridge ◽  
Francisco R. Ortega

The role of affect has long been studied in human–computer interactions. Unlike previous studies that focused on seven basic emotions, this work introduces an avatar named Diana who expresses a higher level of emotional intelligence. To adapt to the user's varying affects during interaction, Diana simulates emotions with dynamic facial expressions. When two people collaborated to build blocks, their affects were recognized and labeled using the Affdex SDK, and a descriptive analysis was provided. When participants turned to collaborate with Diana, their subjective responses were collected and task completion times were recorded. Three modes of Diana were involved: a flat-faced Diana, a Diana that used mimicry facial expressions, and a Diana that used emotionally responsive facial expressions. Twenty-one responses were collected through a five-point Likert-scale questionnaire and the NASA TLX. Results from the questionnaires were not statistically different across modes. However, the emotionally responsive Diana received more positive responses, and people spent the longest time with the mimicry Diana. In post-study comments, most participants perceived the facial expressions on Diana's face as natural, while four mentioned discomfort caused by the Uncanny Valley effect.


2010 ◽  
pp. 15-33 ◽  
Author(s):  
Helen Klein ◽  
Katherine Lippa ◽  
Mei-Hua Lin
