subjective trust: Recently Published Documents

TOTAL DOCUMENTS: 54 (five years: 9)
H-INDEX: 8 (five years: 1)

Author(s): Cyrus K. Foroughi, Shannon Devlin, Richard Pak, Noelle L. Brown, Ciara Sibley, ...

Objective: Assess performance, trust, and visual attention during the monitoring of a near-perfect automated system.

Background: Research rarely assesses performance, trust, and visual attention in near-perfect automated systems, even though such systems will be relied on in high-stakes environments.

Methods: Seventy-three participants completed a 40-min supervisory control task in which they monitored three search feeds. All search feeds were 100% reliable with the exception of two automation failures: one miss and one false alarm. Eye-tracking and subjective trust data were collected.

Results: Thirty-four percent of participants correctly identified the automation miss, and 67% correctly identified the automation false alarm. Subjective trust increased when participants did not detect the automation failures and decreased when they did. Participants who detected the false alarm had a more complex scan pattern in the 2 min centered on the automation failure than those who did not. Additionally, those who detected the failures had longer dwell times in, and transitioned to, the center sensor feed significantly more often.

Conclusion: This work not only highlights the limitations of the human operator when monitoring near-perfect automated systems, it also begins to quantify the operator's subjective experience and attentional cost. It further emphasizes the need to (1) reevaluate the role of the operator in future high-stakes environments and (2) understand the human on an individual level and actively design for the given individual when working with near-perfect automated systems.

Application: Multiple operator-level measures should be collected in real time in order to monitor an operator's state and leverage real-time, individualized assistance.


2021, Vol IX(254) (46), pp. 65-70
Author(s):  
M. Naumova

The Internet successfully competes with television as a source of information on socio-political issues. The article analyzes the objective (digitalization) and subjective (trust, tastes, preferences, etc.) factors that contribute to the flow of the audience to online information resources. Consumer sensitivity to distorted, manipulative content and the practice of testing media messages for authenticity are considered.


2021, Vol 125, pp. 01005
Author(s): Zoya Dmitrievna Denikina, Pirmagomed Shikhmagomedovich Shikhgafizov, Aleksandr Valentinovich Sablukov, Vyacheslav Leonidovich Primakov, Valery Aleksandrovich Lapshov

The article examines the epistemological status of the phenomenon of social trust. The research purpose is to explicate the concept of trust in connection with the fundamental transformation of socio-historical practice and social knowledge. The priority methodological task is to study the problem within the framework of the evolution of systems methodology and to consider the parameters of trust in the intervals of non-classical and post-non-classical systems analysis. The study is based on a philosophical-scientific paradigm approach, modeling epistemological situations in which different paradigms are applied to the phenomenon of social trust autonomously or conventionally.

In non-classical systems analysis, social trust is an internal characteristic of a society-system in the mode of its correct functioning and one of the mechanisms of social order. Social trust is also associated with subjective trust, which orders the interaction of subjects; intersubjective interactions in turn legitimate the social order. The transformation of social trust in the modern world is associated with the diminishing role of normativity and the fragmentation of socio-historical existence. Augmented reality emerges, containing to a certain extent elements of pseudo-being, and subjective social trust can no longer support holistic meanings.

In this changed epistemological situation, a promising way to determine the specifics of social trust is a systems analysis of weakly and highly non-equilibrium states of the intersystem environment. Existing formulas of social order remain inadequate for modeling intersystem environmental states (relations between states, blocs, unions, etc.). Within the framework of post-non-classical systems methodology, social trust is a sign of social holism: it contributes to the emergence of stable intersystem conditions and to the formation of a regulatory environment that itself becomes an acting unit. In all philosophical-scientific paradigms, social trust is indicative of the rationality of socio-historical existence.


2021, Vol 11 (1), pp. 1-20
Author(s): Vijay Lingaraddi Hallappanavar, Mahantesh N. Birje

The integration of fog computing and IoT devices is hindered by a lack of trust in IoT devices. Trust is considered to have two notions: subjective trust, in which the user brings individual interests to the interactions, and objective trust, which depends only on individual interaction experiences. This paper proposes a reliable trust computing mechanism based on subjective and objective trust. Subjective trust is calculated from feedback from multiple sources, with an incentive and punishment mechanism applied to discourage malicious devices. Objective trust is calculated from quality of service. The overall trust helps IoT devices determine the trustworthiness of other IoT devices and, in turn, helps establish a trusted environment. Experimental results show that the mechanism outperforms existing methods in terms of the time required to calculate overall trust, and in the reliability and trustworthiness of IoT devices.
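The abstract gives no formulas, but the kind of mechanism it describes can be sketched roughly as follows: a feedback-based subjective score with an incentive/punishment adjustment, a QoS-based objective score, and a weighted blend of the two. This is a minimal illustration under stated assumptions; the function names, thresholds, and weights are hypothetical, not the paper's actual model.

```python
# Hypothetical sketch of a subjective/objective trust combination.
# Thresholds (0.8 / 0.3), reward/penalty sizes, and the blending
# weight are illustrative assumptions, not values from the paper.

def subjective_trust(feedback, reward=0.1, penalty=0.2):
    """Average feedback ratings in [0, 1], nudged up for consistently
    good behavior and down for bad (a simple incentive/punishment rule)."""
    if not feedback:
        return 0.5  # neutral prior when no feedback exists
    base = sum(feedback) / len(feedback)
    if base >= 0.8:
        adjusted = base + reward    # incentive for well-rated devices
    elif base <= 0.3:
        adjusted = base - penalty   # punishment for poorly rated devices
    else:
        adjusted = base
    return min(1.0, max(0.0, adjusted))  # clamp back into [0, 1]

def objective_trust(qos):
    """Average a dict of normalized QoS scores (e.g. latency, availability)."""
    return sum(qos.values()) / len(qos)

def overall_trust(feedback, qos, w=0.5):
    """Weighted blend of subjective (feedback) and objective (QoS) trust."""
    return w * subjective_trust(feedback) + (1 - w) * objective_trust(qos)

score = overall_trust([0.9, 0.85, 0.95], {"latency": 0.8, "availability": 0.9})
```

Blending with a tunable weight lets a deployment favor first-hand QoS evidence over third-party feedback when the feedback sources themselves may be unreliable, which is the usual motivation for combining the two notions.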


Author(s): William H. Sharp, Marc M. Sebrechts

Computer agents are frequently anthropomorphized, giving them appearances and responses similar to humans'. Research has demonstrated that users tend to apply social norms and expectations to such computer agents, and that people interact with computer agents much as they would with another human. Perceived expertise has been shown to affect trust in human-human relationships, but the literature investigating how it influences trust in computer agents is limited. The current study investigated the effect of a computer agent's perceived level of expertise and recommendation reliability on subjective (rated) and objective (compliance) trust during a pattern recognition task. The reliability of agent recommendations had a strong effect on both subjective and objective trust. Expert agents started with higher subjective trust but showed less trust repair. Agent expertise had little impact on objective trust resiliency or repair.


2020, Vol 5 (2), pp. 236-248
Author(s): Hui Xia, Zhetao Li, Yuhui Zheng, Anfeng Liu, Young-June Choi, ...

Author(s): Hunter Rogers, Amro Khasawneh, Jeffrey Bertrand, Kapil Chalil

The use of automation is prevalent in almost every aspect of modern life, and since its inception, researchers have been investigating trust in automation. There are many methods of measuring trust; given that trust means different things to different people and is by nature subjective, most are subjective survey assessments (Freedy, DeVisser, Weltman, & Coeyman, 2007; Jian, Bisantz, & Drury, 2000). Many studies have investigated how the reliability of an automated agent or the level of automation changes subjective trust in the automation (Dixon & Wickens, 2006; Du, Zhang, & Yang, 2018; Khasawneh, Rogers, Bertrand, Madathil, & Gramopadhye, 2019; Rogers, Khasawneh, Bertrand, & Madathil, 2017).


Author(s): Yidu Lu, Nadine Sarter

Creating safe human-machine systems requires that operators can quickly notice changes in system reliability in the interest of trust calibration and proper automation usage. Operators' readiness to trust a system is determined not only by the performance of the automation but also by their confidence in their own abilities. This study therefore compared the usefulness of feedback on the performance of either agent. The experiment required two groups of ten participants each to perform an automation-assisted target identification task with "Automation Performance Feedback" (APF) or "Operator Performance Feedback" (OPF). The four scenarios differed in the degree and duration of changes in system reliability. Findings indicate that APF was more effective for supporting timely adjustments of perceived system reliability, especially with large and long reliability changes. Subjective trust ratings and performance were not affected, however, suggesting that these two factors are closely linked and more relevant for automation reliance.


2018
Author(s): F. Ismagilova, E. Boštjančič

Identifying and understanding the similarities and differences between subjective trust criteria in Russian and European business can help offer scientific recommendations for the development of long-term international business cooperation based on mutual trust, despite the differences between the two cultures. The purpose of this study is to describe the relations between the implicit and explicit trust criteria presented in Russian and European studies, and to compare quantitatively how these criteria are expressed in Russian and European publications. Using content analysis of European and Russian publications for the period 2005-2015, the following main research questions are considered: (1) Are more references made to explicit trust criteria than to implicit trust criteria? (2) Are explicit criteria focused on a partner's business mentioned more often than explicit criteria focused on a partner's competences and personality? (3) Are more references made to implicit trust criteria in Russian or in European scientific articles? As the results reveal, although implicit criteria do not dominate the subjective trust criteria for business partnerships, they nevertheless have a significant presence. Trust criteria based on characteristics related to business dominate over criteria for a partner's competences and personality. The differences between explicit and implicit trust criteria in Russian and European publications are not statistically significant; nevertheless, on average, the difference between implicit and explicit assumptions of confidence is 0.13 in Russian studies and 0.34 in European studies. The study revealed that small companies entering the international market should consider the risks associated with failing to understand what a business partner treats as an implicit sign for initiating business relations and trust.

Keywords: trust in business relations, implicit and explicit trust criteria

