Deciding, ‘What Happened?’ When We Don’t Really Know: Finding Theoretical Grounding for Legitimate Judicial Fact-Finding

2020 ◽  
Vol 33 (1) ◽  
pp. 1-29
Author(s):  
Nayha Acharya

The crucial question in many legal disputes is “what happened?”, and there is often no easy answer. Fact-finding is an uncertain endeavor, and the risk of inaccuracy is inevitable. As such, I ask: on what basis can we accept the legitimacy of judicial fact-findings? I conclude that acceptable factual determinations depend on adherence to a legitimate process of fact-finding. Adopting Jürgen Habermas’s insights, I offer a theoretical grounding for the acceptability of judicial fact-finding. The theory holds that legal processes must embody respect for legal subjects as equal and autonomous agents. This necessitates two procedural features. First, fact-finding processes must be factually reliable. This requires that: (a) relevant evidence is admissible and exclusions are justified by respect for human autonomy; (b) error-risk management is internally coherent and consistent; (c) the standard of proof is, at minimum, a balance of probabilities; and (d) evidence is used rationally. Second, fact-finding processes must ensure fulsome participation rights. This project is justificatory: civil justice systems are imperfect, but there are attainable conditions that make them good, and these must never be compromised.

Author(s):  
Ruikun Luo ◽  
Na Du ◽  
Kevin Y. Huang ◽  
X. Jessie Yang

Human-autonomy teaming is a major emphasis in the ongoing transformation of the future workspace, wherein human agents and autonomous agents are expected to work as a team. While increasingly complex algorithms empower autonomous systems, one major concern arises from the human factors perspective: human agents have difficulty deciphering autonomy-generated solutions and increasingly perceive autonomy as a mysterious black box. This lack of transparency can lead to a lack of trust in autonomy and to sub-optimal team performance (Chen and Barnes, 2014; Endsley, 2017; Lyons and Havig, 2014; de Visser et al., 2018; Yang et al., 2017). In response to this concern, researchers have investigated ways to enhance autonomy transparency. Existing human factors research on autonomy transparency has largely concentrated on conveying automation reliability or likelihood/(un)certainty information (Beller et al., 2013; McGuirl and Sarter, 2006; Wang et al., 2009; Neyedli et al., 2011). Providing explanations of automation’s behaviors is another way to increase transparency, one that leads to higher performance and trust (Dzindolet et al., 2003; Mercado et al., 2016). Specifically, in the context of automated vehicles, studies have shown that informing drivers of the reasons for an automated vehicle’s actions decreased drivers’ anxiety and increased their sense of control, preference, and acceptance (Koo et al., 2014, 2016; Forster et al., 2017). However, these studies largely conveyed simple likelihood information or used hand-crafted explanations, with only a few exceptions (e.g., Mercado et al., 2016). Further research is needed to examine potential design structures for autonomy transparency.

In the present study, we propose an option-centric explanation approach inspired by research on design rationale. Design rationale is an area of design science focusing on the “representation for explicitly documenting the reasoning and argumentation that make sense of a specific artifact” (MacLean et al., 1991). Its theoretical underpinning is that what matters to designers is not just the specific artifact itself but its other possibilities: why an artifact is designed in a particular way compared to how it might otherwise be. We aim to evaluate the effectiveness of the option-centric explanation approach on trust, dependence, and team performance.

We conducted a human-in-the-loop experiment with 34 participants (age: mean = 23.7 years, SD = 2.88 years). We developed a simulated game, Treasure Hunter, in which participants and an intelligent assistant worked together to uncover a map and find treasures. The intelligent assistant’s ability, intent, and decision-making rationale were conveyed in an option-centric rationale display. The experiment used a between-subjects design with one independent variable: whether the option-centric rationale explanation was provided. Participants were randomly assigned to one of the two explanation conditions. We collected participants’ trust in the intelligent assistant, their confidence in accomplishing the task without the intelligent assistant, and their workload for the whole session, as well as their scores for each map. The results showed that conveying the intelligent assistant’s ability, intent, and decision-making rationale in the option-centric rationale display led to higher task performance.
With the display of all the options, participants gained a better understanding and overview of the system; they could therefore use the intelligent assistant more appropriately and earned higher scores. Notably, each participant played only 10 maps during the session, so the advantages of the option-centric rationale display might become more apparent over more rounds. Although not significant at the .05 level, there was a trend toward lower workload when the rationale explanation was displayed. Our study contributes to research on human-autonomy teaming by demonstrating the important role of the explanation display, which can help human operators build appropriate trust and improve human-autonomy team performance.
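To make the idea of an option-centric display concrete, here is a minimal sketch of how such a display could be structured as data: a ranked list of candidate actions, each carrying the assistant’s own valuation and its rationale. The `Option` and `RationaleDisplay` names, their fields, and the Treasure Hunter-style moves below are hypothetical illustrations, not the authors’ implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch of an option-centric rationale display in the spirit of
# design rationale (MacLean et al., 1991): expose not only the recommended
# action but also the rejected alternatives and the reasoning behind each.

@dataclass
class Option:
    action: str            # a candidate move the assistant considered
    expected_value: float  # the assistant's own estimate of the move's worth
    rationale: str         # why the option ranks where it does

@dataclass
class RationaleDisplay:
    options: list[Option]

    def render(self) -> str:
        # Rank options by the assistant's valuation, best first.
        ranked = sorted(self.options, key=lambda o: o.expected_value, reverse=True)
        return "\n".join(
            f"{i + 1}. {o.action} (value {o.expected_value:.2f}): {o.rationale}"
            for i, o in enumerate(ranked)
        )

# Hypothetical usage for a single turn:
display = RationaleDisplay([
    Option("reveal tile (3, 4)", 0.8, "adjacent to two known treasure clues"),
    Option("reveal tile (0, 0)", 0.3, "unexplored, but no supporting clues"),
])
print(display.render())
```

Showing the rejected options alongside the chosen one is what distinguishes this style of display from a conventional single-recommendation explanation.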


Author(s):  
Thomas O’Neill ◽  
Nathan McNeese ◽  
Amy Barron ◽  
Beau Schelble

Objective: We define human–autonomy teaming and offer a synthesis of the existing empirical research on the topic. Specifically, we identify the research environments, dependent variables, themes representing the key findings, and critical future research directions. Background: Whereas a burgeoning literature on high-performance teamwork identifies the factors critical to success, much less is known about how human–autonomy teams (HATs) achieve success. Human–autonomy teamwork involves humans working interdependently toward a common goal alongside autonomous agents. Autonomous agents exhibit a degree of self-government and self-directed behavior (agency); they take on a unique role or set of tasks and work interdependently with human team members to achieve a shared objective. Method: We searched the literature on human–autonomy teaming. To meet our criteria for inclusion, a paper needed to involve empirical research and meet our definition of human–autonomy teaming. We found 76 articles that met these criteria. Results: We report on the research environments and find that the key independent variables involve autonomous agent characteristics, team composition, task characteristics, human individual differences, training, and communication. We identify themes for each of these and discuss future research needs. Conclusion: There are areas where research findings are clear and consistent, but there are many opportunities for future research. Particularly important will be research that identifies the mechanisms linking team input to team output variables.


Author(s):  
Andrew Ligertwood

The presentation of expert forensic science evidence in rigorous statistical terms raises the question of how lay fact-finders (judges and jurors) might employ such evidence to prove events in issue. Can this simply be left to the common sense of fact-finders, or should the law provide further guidance about how they should reason in applying the criminal standard of proof? Should courts demand that witnesses who give statistical evidence express that evidence in a particular form? This article examines the non-mathematical nature of common law fact-finding and its embodiment in the presumption of innocence principle underlying the criminal standard of proof. It argues that forensic scientists should present evidence in a form that makes the risks of error transparent so that, in determining whether it is satisfied of the accused’s guilt having regard to all the evidence before it, the fact-finder considers the reasonable possibility of doubts necessarily left open by statistical evidence.


Author(s):  
Valentyna I. Borysova ◽  
Bohdan P. Karnaukh

As a result of recent amendments to the procedural legislation of Ukraine, one may observe a tendency in judicial practice to differentiate the standards of proof depending on the type of litigation. Thus, in commercial litigation the so-called “probability of evidence” standard applies, while in criminal proceedings the “beyond a reasonable doubt” standard applies. The purpose of this study was to find a rational justification for the differentiation of the standards of proof applied in civil (commercial) and criminal cases, and to explain how the same fact can be considered proven for the purposes of a civil lawsuit yet not proven for the purposes of a criminal charge. The study is based on the methodology of Bayesian decision theory. The paper demonstrates how the principles of Bayesian decision theory can be applied to judicial fact-finding. According to Bayesian theory, the standard of proof applied depends on the ratio of the disutility of a false positive error to the disutility of a false negative error. Since both types of error have the same disutility in civil litigation, the threshold level of conviction is just over 50 percent. In a criminal case, by contrast, the disutility of a false positive considerably exceeds the disutility of a false negative, and therefore the threshold level of conviction must be much higher, amounting to 90 percent. Bayesian decision theory is premised on probabilistic assessments, and since the concept of probability has many meanings, the results of applying Bayesian theory to judicial fact-finding can be interpreted in a variety of ways. When dealing with statistical evidence, it is crucial to distinguish between subjective and objective probability: statistics indicate objective probability, while the standard of proof refers to subjective probability. Yet in some cases, especially when statistical data are the only available evidence, the subjective probability may be roughly equivalent to the objective probability. In such cases, statistics cannot be ignored.
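The 50-percent and 90-percent thresholds cited in this abstract follow from minimizing expected disutility. A sketch of the underlying arithmetic, in generic notation rather than the authors’ own: let p be the fact-finder’s subjective probability that the claim is true, D₊ the disutility of a false positive (a wrongful finding against the defendant), and D₋ the disutility of a false negative.

```latex
% Decide for the claimant iff the expected disutility of doing so is lower:
(1 - p)\,D_{+} \;<\; p\,D_{-}
\quad\Longleftrightarrow\quad
p \;>\; \frac{D_{+}}{D_{+} + D_{-}}
% Equal disutilities (civil case):            D_+ = D_-    =>  p > 0.5
% False positive nine times worse (criminal): D_+ = 9 D_-  =>  p > 0.9
```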


2015 ◽  
Vol 1 (4) ◽  
pp. 257-265 ◽  
Author(s):  
Zhiru Wang ◽  
Guofeng Su ◽  
Martin Skitmore ◽  
Jianguo Chen ◽  
Albert P. C. Chan ◽  
...  

2021 ◽  
Vol 4 ◽  
Author(s):  
Ryo Nakahashi ◽  
Seiji Yamada

The human-agent team, in which humans and autonomous agents collaborate to achieve a task, is typical of human-AI collaboration. For effective collaboration, humans want an effective plan, but in realistic situations they may have difficulty calculating the best plan due to cognitive limitations. In this case, guidance from an agent with abundant computational resources may be useful. However, if an agent guides human behavior explicitly, the human may feel that they have lost autonomy and are being controlled by the agent. We therefore investigated implicit guidance offered through an agent’s behavior. With this type of guidance, the agent acts in a way that makes it easy for the human to find an effective plan for the collaborative task, which the human can then improve. Since the human improves the plan voluntarily, he or she maintains autonomy. We modeled a collaborative agent with implicit guidance by integrating the Bayesian Theory of Mind into existing collaborative-planning algorithms and demonstrated through a behavioral experiment that implicit guidance enables humans to balance improving their plans with retaining autonomy.
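The Bayesian Theory of Mind component amounts to maintaining a posterior over which plan the human is pursuing, updated from observed actions; the agent can then choose behavior that makes a better joint plan easy for the human to discover without dictating it. A minimal sketch of that posterior update, assuming some action-likelihood model such as Boltzmann rationality (the function name and the numbers below are illustrative, not taken from the paper):

```python
import numpy as np

def update_plan_posterior(prior: np.ndarray, likelihoods: np.ndarray) -> np.ndarray:
    """Bayes' rule over candidate human plans:
    P(plan | action) is proportional to P(action | plan) * P(plan)."""
    posterior = prior * likelihoods
    return posterior / posterior.sum()

# Illustrative update: three candidate plans, uniform prior, and an observed
# human action that is most probable under the second plan.
prior = np.array([1 / 3, 1 / 3, 1 / 3])
likelihoods = np.array([0.2, 0.7, 0.1])  # P(observed action | plan)
print(update_plan_posterior(prior, likelihoods))  # -> [0.2 0.7 0.1]
```

With such a posterior in hand, an implicitly guiding agent can select its own next action to raise the chance that the human notices the improved plan, rather than announcing the plan outright.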


2015 ◽  
Vol 60 (2) ◽  
pp. 173-214
Author(s):  
Kenneth M. Ehrenberg

In his 1827 work Rationale of Judicial Evidence, Jeremy Bentham famously argued against exclusionary rules such as hearsay, preferring a policy of “universal admissibility” unless the declarant is easily available. Bentham’s claim that all relevant evidence should be considered, with appropriate instructions to fact-finders, has been particularly influential among judges, culminating in the “principled approach” to hearsay in Canada articulated in R. v. Khelawon. Furthermore, many scholars attack Bentham’s argument only for ignoring the realities of juror bias, conceding that universal admissibility would be the best policy for an ideal jury. This article uses the theory of epistemic contextualism to justify the exclusion of otherwise relevant evidence, and even reliable hearsay, on the basis of preventing shifts in the epistemic context. Epistemic contextualism holds that the justification standards of knowledge attributions change according to the contexts in which the attributions are made. Hearsay, and other kinds of information whose assessment relies upon fact-finders’ more common epistemic capabilities, pushes the epistemic context of the trial toward one of more relaxed epistemic standards. The exclusion of hearsay helps to maintain a relatively high-standards context hitched to the standard of proof for the case and to prevent shifts that threaten to try defendants under inconsistent standards.

