Quantum-Like Interdependence Theory Advances Autonomous Human–Machine Teams (A-HMTs)

Entropy ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. 1227
Author(s):  
William F. Lawless

As humanity grapples with the concept of autonomy for human–machine teams (A-HMTs), the need to control autonomy in a way that instills trust remains unresolved. For non-autonomous systems in states with a high degree of certainty, rational approaches exist to solve, model or control stable interactions, e.g., game theory, scale-free network theory, multi-agent systems and drone swarms. For example, guided by artificial intelligence (AI, including machine learning, ML) or by human operators, swarms of drones have made spectacular gains in applications too numerous to list (e.g., crop management; mapping, surveillance and fire-fighting systems; weapon systems). But under uncertainty, or where conflict exists, rational models fail, which is exactly where interdependence theory thrives. Large, coupled physical or information systems can also experience synergism or dysergism from interdependence. Synergistically, the best human teams are not only highly interdependent; they also exploit interdependence to reduce uncertainty, the focus of this work-in-progress and roadmap. We have long argued that interdependence is fundamental to human autonomy in teams. But for A-HMTs, neither rational theory nor social science offers a mathematics from which to design teams or to operate them safely and effectively, a severe weakness. Compared with rational and traditional social theory, we hope to advance interdependence theory, first, by mapping similarities between quantum theory and our prior findings; e.g., to maintain interdependence, we previously established that boundaries reduce dysergic effects to allow teams to function (akin to blocking interference to prevent quantum decoherence). Second, we extend our prior findings with case studies, predicting with interdependence theory that as uncertainty increases in non-factorable situations for humans, the duality in two-sided beliefs serves debaters who explore alternatives and tradeoffs in search of the best path forward. Third, applied to autonomous teams, we conclude that a machine in an A-HMT must be able to express itself to its human teammates in causal language, however imperfectly.
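The abstract's key term, non-factorability, can be made concrete with a small numerical sketch: a two-teammate joint distribution factors into independent marginals exactly when it has rank 1, so its second singular value can serve as a toy interdependence score. A minimal Python illustration of the terminology, not the authors' formalism:

```python
import numpy as np

# A joint distribution over two teammates' binary states (rows: agent A,
# columns: agent B). A factorable ("independent") team satisfies
# P = outer(p_A, p_B), i.e., rank(P) == 1; anything beyond rank 1
# signals interdependence. Toy example only.

P_indep = np.outer([0.7, 0.3], [0.6, 0.4])   # factorable by construction
P_coupled = np.array([[0.45, 0.05],
                      [0.05, 0.45]])         # strongly interdependent states

def interdependence(P: np.ndarray) -> float:
    """Deviation from factorability: the second singular value is zero
    iff P factorizes into a product of independent marginals."""
    s = np.linalg.svd(P, compute_uv=False)
    return float(s[1])

print(interdependence(P_indep))    # ~0.0 -> factorable, no interdependence
print(interdependence(P_coupled))  # >0   -> non-factorable, interdependent
```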

2021 ◽  
Vol 4 ◽  
Author(s):  
Jonas Lundberg ◽  
Mattias Arvola ◽  
Karljohan Lundin Palmerius

The roles of human operators are changing due to the increased intelligence and autonomy of computer systems. Humans will interact with systems at a more overarching level or only in specific situations. This involves learning new practices and changing habitual ways of thinking and acting, including reconsidering human autonomy in relation to autonomous systems. This paper describes a design case of a future autonomous management system for drone traffic in cities, built around a key scenario we call The Computer in Brussels. Our approach to designing for human collaboration with autonomous systems builds on scenario-based design and cognitive work analysis facilitated by computer simulations. We use a temporal method, the Joint Control Framework, to describe human and automated work in an abstraction hierarchy labeled Levels of Autonomy in Cognitive Control. We use the Score notation to analyze patterns of temporal development that span levels of the abstraction hierarchy, and we discuss implications for human-automation communication in traffic management. We discuss how autonomy at a lower level can prevent autonomy at higher levels, and vice versa. We also discuss the temporal nature of autonomy in minute-to-minute operative work. Our conclusion is that human autonomy in relation to autonomous systems rests on fundamental trade-offs between technological opportunities to automate and what human actors find meaningful.
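As a sketch of what a Score-style temporal analysis might operate on, the snippet below models operative work as timestamped control events tagged with an agent and an abstraction level; the level numbers and event notes are invented placeholders, not the Joint Control Framework's actual labels.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float     # minutes into the scenario
    agent: str   # "human" or "automation"
    level: int   # abstraction level, low (1) to high (4); labels are placeholders
    note: str

# A hypothetical fragment of the Brussels drone-traffic scenario.
score = [
    Event(0.0, "automation", 1, "maintain drone separation"),
    Event(2.5, "automation", 2, "reroute around temporary no-fly zone"),
    Event(4.0, "human",      4, "set priority: medical flights first"),
    Event(4.2, "automation", 3, "replan schedule under the new priority"),
]

# The pattern the paper asks about: does sustained automation at one level
# crowd out (or enable) autonomy at another level over time?
for e in sorted(score, key=lambda e: e.t):
    print(f"{e.t:5.1f} min  L{e.level}  {e.agent:10s}  {e.note}")
```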


Author(s):  
Mica R. Endsley

As autonomous and semiautonomous systems are developed for automotive, aviation, cyber, robotics and other applications, the ability of human operators to effectively oversee and interact with them when needed poses a significant challenge. An automation conundrum exists: as more autonomy is added to a system and its reliability and robustness increase, the situation awareness of human operators declines, and they become less likely to be able to take over manual control when needed. The human–autonomy systems oversight model integrates several decades of relevant autonomy research on operator situation awareness, out-of-the-loop performance problems, monitoring, and trust, all major challenges underlying the automation conundrum. Key design interventions for improving human performance in interacting with autonomous systems are integrated into the model, including human–automation interface features and central automation interaction paradigms comprising levels of automation, adaptive automation, and granularity-of-control approaches. Recommendations for the design of human–autonomy interfaces are presented, and directions for future research are discussed.


2019 ◽  
Vol 12 (1) ◽  
pp. 77-87
Author(s):  
György Kovács ◽  
Rabab Benotsmane ◽  
László Dudás

Recent tendencies, such as shorter product life-cycles and consumers demanding more complex and more unique final products, pose many challenges for production. The industrial sector is going through a paradigm shift: traditional centrally controlled production processes will be replaced by decentralized control, built on the self-regulating ability of intelligent machines, products and workpieces that communicate with each other continuously. This new paradigm is known as Industry 4.0. The conception is the introduction of digitally networked intelligent systems, in which machines and products communicate with one another in order to establish smart factories with self-regulating production. In this article, the essence, main goals and basic elements of the Industry 4.0 conception are described first. Then autonomous systems based on multi-agent systems are introduced. These systems include collaborating robots coordinated via artificial intelligence, an essential element of Industry 4.0.
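To give a flavor of the decentralized control the article describes, here is a toy contract-net-style allocation in Python: workpieces announce tasks and machines bid with their current backlog, so no central planner is needed. The protocol and names are illustrative assumptions, not anything prescribed by the article.

```python
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    load: float = 0.0  # hours of queued work

    def bid(self, duration: float) -> float:
        return self.load  # simplest possible bid: current backlog

    def accept(self, duration: float) -> None:
        self.load += duration

def allocate(duration: float, machines: list[Machine]) -> Machine:
    # Each workpiece awards its task to the lowest bidder.
    winner = min(machines, key=lambda m: m.bid(duration))
    winner.accept(duration)
    return winner

machines = [Machine("M1"), Machine("M2"), Machine("M3")]
for duration in [4.0, 2.0, 3.0, 5.0, 1.0]:
    m = allocate(duration, machines)
    print(f"task({duration}h) -> {m.name}, load now {m.load}h")
```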


Author(s):  
Anthony Merle ◽  
P. F. Ehlers

Pipeline stress-corrosion cracking (SCC) is an ongoing integrity concern for pipeline operators, and a number of different strategies are currently employed to locate and mitigate it. Ultrasonic in-line inspection tools have proven capable of locating SCC, but the reliability of these tools in gas pipelines remains in question. Rotating hydrotest programs are employed effectively by some companies but may not provide useful information about where SCC occurs along the pipeline. NACE Standard RP0204-2004 (SCC Direct Assessment Methodology) outlines factors to consider and methodologies to employ in predicting where SCC is likely to occur, but even this document acknowledges that there are no well-established methods for predicting the presence of SCC with a high degree of certainty. Predictive modelling attempts to date have focused on establishing quantitative relationships between environmental factors and SCC formation and growth; these models have achieved varying degrees of success. A statistical approach to SCC predictive modelling has been developed. In contrast to previous models that attempted to determine direct correlations between environmental parameters and SCC, the new model statistically analyzes data from dig sites where SCC was and was not found. Regression techniques were used to create a multi-variable logistic regression model. The model was applied to the entire pipeline, and verification digs were performed. The dig results indicated that the model was able to predict locations of SCC along the pipeline.
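The modelling step the abstract describes, fitting a logistic regression to binary dig-site outcomes, looks roughly like the sketch below. The feature names and values are hypothetical placeholders for the environmental parameters recorded at dig sites; the paper's actual variable set and data are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical dig-site records. Columns: soil resistivity (ohm-cm),
# distance downstream of compressor station (km), pipe age (years),
# drainage score. These stand in for the paper's real predictors.
X = np.array([
    [1500, 10, 45, 2],
    [9000, 80, 20, 0],
    [1200,  5, 50, 3],
    [7000, 60, 25, 1],
    [2000, 15, 40, 2],
    [8000, 70, 30, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = SCC found at the dig, 0 = not found

model = LogisticRegression().fit(X, y)

# Estimated probability of SCC at a new candidate dig site:
print(model.predict_proba([[1800, 12, 42, 2]])[0, 1])
```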


2008 ◽  
Vol 33 ◽  
pp. 551-574 ◽  
Author(s):  
S. De Jong ◽  
S. Uyttendaele ◽  
K. Tuyls

It is well known that acting in an individually rational manner, according to the principles of classical game theory, may lead to sub-optimal solutions in a class of problems named social dilemmas. In contrast, humans generally do not have much difficulty with social dilemmas, as they are able to balance personal benefit and group benefit. As agents in multi-agent systems are regularly confronted with social dilemmas, for instance in tasks such as resource allocation, these agents may benefit from the inclusion of mechanisms thought to facilitate human fairness. Although many such mechanisms have already been implemented in a multi-agent systems context, their application is usually limited to rather abstract social dilemmas with a discrete set of available strategies (usually two). Given that many real-world examples of social dilemmas are actually continuous in nature, we extend this previous work to more general dilemmas in which agents operate in a continuous strategy space. The social dilemma under study here is the well-known Ultimatum Game, in which an optimal solution is achieved if agents agree on a common strategy. We investigate whether a scale-free interaction network facilitates agents reaching agreement, especially in the presence of fixed-strategy agents that represent a desired (e.g., human) outcome. Moreover, we study the influence of rewiring in the interaction network. The agents are equipped with continuous-action learning automata and play a large number of random pairwise games in order to establish a common strategy. From our experiments, we conclude that results obtained in discrete-strategy games generalize to continuous-strategy games to a certain extent: a scale-free interaction network structure allows agents to achieve agreement on a common strategy, and rewiring in the interaction network greatly enhances the agents' ability to reach agreement. However, it also becomes clear that some alternative mechanisms, such as reputation and volunteering, involve many subtleties and do not have convincing beneficial effects in the continuous case.
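A stripped-down version of the setup can be sketched in a few lines: agents on a Barabási–Albert network play pairwise Ultimatum Games and adjust toward their partners, with one fixed-strategy agent anchoring the fair split. The simple gradient-style update below stands in for the paper's continuous-action learning automata, and networkx is assumed to be available.

```python
import random
import networkx as nx

G = nx.barabasi_albert_graph(n=100, m=2, seed=1)  # scale-free interaction network
edges = list(G.edges())
strategy = {v: random.random() for v in G}         # offer = acceptance threshold
FIXED = {0}                                        # fixed-strategy ("human") agent
strategy[0] = 0.5

def play(proposer, responder):
    offer, demand = strategy[proposer], strategy[responder]
    if offer < demand:
        if proposer not in FIXED:                  # rejected: concede a little
            strategy[proposer] += 0.1 * (demand - offer)
    elif responder not in FIXED:                   # accepted: relax the demand
        strategy[responder] += 0.1 * (offer - demand)

for _ in range(20000):
    u, v = random.choice(edges)                    # random pairwise game on an edge
    play(u, v) if random.random() < 0.5 else play(v, u)

print(sum(strategy.values()) / len(strategy))      # mean strategy after learning
```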


2021 ◽  
Author(s):  
Yuhu Qiu ◽  
Tianyang Lyu ◽  
Xizhe Zhang ◽  
Ruozhou Wang

Network decrease caused by the removal of nodes is an important evolution process that parallels network growth. However, many complex network models lack a sound decrease mechanism and thus fail to capture how real networks cope with decrease. This paper proposes decrease mechanisms for three typical types of networks: ER random networks, WS small-world networks and BA scale-free networks. The proposed mechanisms maintain each model's key features through continuous, independent decrease processes: the random connections of ER networks, the long-range connections over the nearest-neighbor-coupled backbone of WS networks, and the preferential connections and scale-free feature of BA networks. Experimental results show that these mechanisms also maintain other topological characteristics during decrease, including the degree distribution, clustering coefficient, average shortest-path length and diameter. Our studies also show that it is quite difficult to find an efficient decrease mechanism that lets BA networks withstand continuous attacks on high-degree nodes, because of the unequal status of their nodes.
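One plausible reading of a BA-style decrease mechanism, sketched with networkx: remove a node, then reattach its orphaned neighbors preferentially by degree so the scale-free character survives the shrinkage. This is an illustrative assumption, not the paper's exact mechanism.

```python
import random
import networkx as nx

def decrease_step(G: nx.Graph) -> None:
    # Remove one node, then reconnect each former neighbor to a partner
    # chosen with probability proportional to degree (preferential reattachment).
    victim = random.choice(list(G.nodes()))
    orphans = list(G.neighbors(victim))
    G.remove_node(victim)
    nodes, degrees = zip(*G.degree())
    for u in orphans:
        v = random.choices(nodes, weights=degrees, k=1)[0]
        if v != u and not G.has_edge(u, v):
            G.add_edge(u, v)

G = nx.barabasi_albert_graph(1000, 3, seed=7)
for _ in range(200):
    decrease_step(G)
print(G.number_of_nodes(), G.number_of_edges())  # network shrinks, hubs persist
```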


Author(s):  
Anthony L. Baker ◽  
Sean M. Fitzhugh ◽  
Daniel E. Forster ◽  
Kristin E. Schaefer

The development of more effective human-autonomy teaming (HAT) will depend on the availability of validated measures of team performance. Communication provides a critical window into a team's interactions, states, and performance, but much remains to be learned about how to carry communication measures over from the human teaming context to the HAT context. Therefore, the purpose of this paper is to discuss the implementation of three communication assessment methodologies used in two Wingman Joint Capabilities Technology Demonstration field experiments. These field experiments involved Soldiers and Marines maneuvering vehicles and engaging in live-fire target gunnery, all with the assistance of intelligent autonomous systems. Crew communication data were analyzed using aggregate communication flow, relational event models, and linguistic similarity. We discuss how the assessments were implemented, what they revealed about the teaming between humans and autonomy, and lessons learned for future implementation of communication measurement approaches in the HAT context.
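Of the three measures, linguistic similarity is the easiest to sketch: below, cosine similarity over word-count vectors compares speakers' vocabularies. The utterances are invented, and the experiments' actual corpora and similarity metric may differ.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented snippets of crew radio traffic; one "document" per speaker.
utterances = {
    "gunner":    "target identified bearing left engage on my mark",
    "commander": "confirm target bearing left cleared to engage",
    "autonomy":  "route replanned obstacle detected speed reduced",
}

vectors = CountVectorizer().fit_transform(utterances.values())
print(cosine_similarity(vectors).round(2))  # higher = more shared vocabulary
```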


Author(s):  
Ruikun Luo ◽  
Na Du ◽  
Kevin Y. Huang ◽  
X. Jessie Yang

Human-autonomy teaming is a major emphasis in the ongoing transformation of the future work space, wherein human agents and autonomous agents are expected to work as a team. While increasingly complex algorithms empower autonomous systems, one major concern arises from the human factors perspective: human agents have difficulty deciphering autonomy-generated solutions and increasingly perceive autonomy as a mysterious black box. The lack of transparency can lead to a lack of trust in autonomy and sub-optimal team performance (Chen and Barnes, 2014; Endsley, 2017; Lyons and Havig, 2014; de Visser et al., 2018; Yang et al., 2017). In response to this concern, researchers have investigated ways to enhance autonomy transparency. Existing human factors research on autonomy transparency has largely concentrated on conveying automation reliability or likelihood/(un)certainty information (Beller et al., 2013; McGuirl and Sarter, 2006; Wang et al., 2009; Neyedli et al., 2011). Providing explanations of automation's behaviors is another way to increase transparency, leading to higher performance and trust (Dzindolet et al., 2003; Mercado et al., 2016). Specifically, in the context of automated vehicles, studies have shown that informing drivers of the reasons for an automated vehicle's actions decreased drivers' anxiety and increased their sense of control, preference and acceptance (Koo et al., 2014, 2016; Forster et al., 2017). However, the studies mentioned above largely focused on conveying simple likelihood information or used hand-crafted explanations, with only a few exceptions (e.g., Mercado et al., 2016). Further research is needed to examine potential design structures for autonomy transparency. In the present study, we propose an option-centric explanation approach, inspired by research on design rationale. Design rationale is an area of design science focusing on the "representation for explicitly documenting the reasoning and argumentation that make sense of a specific artifact" (MacLean et al., 1991). The theoretical underpinning for design rationale is that what matters to designers is not just the specific artifact itself but its other possibilities: why an artifact is designed in a particular way compared to how it might otherwise be. We aim to evaluate the effectiveness of the option-centric explanation approach on trust, dependence and team performance. We conducted a human-in-the-loop experiment with 34 participants (age: mean = 23.7 years, SD = 2.88 years). We developed a simulated game, Treasure Hunter, in which participants and an intelligent assistant worked together to uncover a map for treasures. The intelligent assistant's ability, intent and decision-making rationale were conveyed in an option-centric rationale display. The experiment used a between-subjects design with one independent variable: whether the option-centric rationale explanation was provided. Participants were randomly assigned to one of the two explanation conditions. Participants' trust in the intelligent assistant, confidence in accomplishing the experiment without the intelligent assistant, and workload for the whole session were collected, as well as their scores for each map. The results showed that by conveying the intelligent assistant's ability, intent and decision-making rationale in the option-centric rationale display, participants achieved higher task performance.
With the display of all the options, participants had a better understanding and overview of the system; they could therefore utilize the intelligent assistant more appropriately and earned higher scores. It is notable that every participant played only 10 maps during the whole session; the advantages of the option-centric rationale display might be more apparent if more rounds were played. Although not significant at the .05 level, there was a trend suggesting lower workload when the rationale explanation was displayed. Our study contributes to the study of human-autonomy teaming by considering the important role of explanation displays, which can help human operators build appropriate trust and improve human-autonomy team performance.
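The between-subjects comparison implied by the design reduces to an independent-samples test on the two groups' scores; a sketch with fabricated placeholder numbers, illustrating only the analysis structure and not the study's data:

```python
import numpy as np
from scipy import stats

# Placeholder scores for the two explanation conditions; not the study's data.
with_rationale    = np.array([78, 85, 91, 74, 88, 82, 79, 90])
without_rationale = np.array([70, 76, 81, 68, 74, 72, 77, 69])

t, p = stats.ttest_ind(with_rationale, without_rationale)
print(f"t = {t:.2f}, p = {p:.4f}")
```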


2021 ◽  
pp. 259-272
Author(s):  
Austin Wyatt ◽  
Jai Galliott

While the Convention on Certain Conventional Weapons (CCW)-sponsored process has steadily slowed, and occasionally stalled, over the past five years, the pace of technological development in both the civilian and military spheres has accelerated. In response, this chapter suggests the development of a normative framework that would establish common procedures and de-escalation channels between states within a given regional security cooperative prior to the demonstration point of truly autonomous weapon systems. Modeled on the Guidelines for Air Military Encounters and the Guidelines for Maritime Interaction, recently adopted by the Association of Southeast Asian Nations, this approach aims to limit the destabilizing and escalatory potential of autonomous systems, which are expected to lower barriers to conflict and encourage brinkmanship while being difficult to definitively attribute. Overall, this chapter focuses on evaluating potential alternative avenues to the CCW-sponsored process by which the ethical, moral, and legal concerns raised by the emergence of autonomous weapon systems could be addressed.

