human judge
Recently Published Documents


TOTAL DOCUMENTS: 7 (five years: 2)
H-INDEX: 3 (five years: 0)

Author(s): André Dao

Abstract: This article examines the requirements of the right to a fair trial in the context of the use of machine-learning algorithms (MLAs) in judicial proceedings, with a focus on a core component of this right, the right to be heard. Though NGOs and scholars have begun to note that the right to a fair trial may be the best framework to address the challenges raised by MLAs, the actual requirements of the right in this novel context are underdeveloped. This article evaluates two normative approaches to filling this gap. The first approach, the argument from fairness, produces three broad categories of measures for ensuring fairness: measures for increasing the transparency and accountability of MLAs, measures for ensuring the participation of litigants, and measures for securing the impartiality of the human judge. However, this article argues that the argument from fairness cannot provide the necessary normative grounding for the right to a fair trial in the context of MLAs, as it collapses into the concept of ‘algorithmic fairness’. The second approach is based on the concept of human dignity as a status. The primary argument of this article is that the concept of human dignity as a status can provide better normative grounding for the right to a fair trial because it offers an account of human personhood that resists the de-humanization of data subjectification. That richer account of human personhood allows us to think of the trial not only as a vehicle for accurate outcomes, but also as a forum for the expression of human dignity.


2020, Vol 11 (2)
Author(s): Jasper Ulenaers

Abstract: This paper seeks to examine the potential influences AI may have on the right to a fair trial when it is used in the courtroom. Essentially, AI systems can assume two roles in the courtroom. On the one hand, “AI assistants” can support judges in their decision-making process by predicting and preparing judicial decisions; on the other hand, “robot judges” can replace human judges and decide cases autonomously in fully automated court proceedings. Both roles will be tested against the requirements of the right to a fair trial as protected by Article 6 ECHR. An important element in this test is the role that a human judge plays in legal proceedings. As the justice system is a social process, the AI assistant is preferred to a situation in which a robot judge would completely replace human judges. Based on extensive literature, various examples and case studies, this paper concludes that the use of AI assistants can better serve legitimacy and guarantee a fair trial.


2013, Vol 859, pp. 586-590
Author(s): Ji Hong Yan, Wen Rong Jiang

This paper aims to find an effective solution for detecting fraudulent clicks in commissioners’ click logs. Given a user’s ad-click information, we want to predict the user’s label – malicious or not. Based on the training set, we build classification models for fraudulent click detection. We first create and extract features from the raw log data described above. Next, we choose models by evaluating candidate classifiers, and then build an ensemble model. Finally, we apply our model to the real dataset for human-judge reference.
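The pipeline this abstract describes (feature extraction from raw click logs, then an ensemble of classifiers producing a malicious/benign label) can be illustrated with a minimal, self-contained sketch. Everything here is hypothetical: the log schema, the per-user features, and the three rule-based “models” stand in for the paper’s trained classifiers, with a simple majority vote standing in for its learned ensemble.

```python
from collections import defaultdict

# Hypothetical raw click-log rows: (user_id, ad_id, timestamp_seconds).
clicks = [
    ("u1", "ad1", 0), ("u1", "ad1", 1), ("u1", "ad1", 2), ("u1", "ad2", 3),
    ("u2", "ad1", 10), ("u2", "ad3", 600),
]

def extract_features(log):
    """Aggregate per-user features from raw click rows."""
    by_user = defaultdict(list)
    for user, ad, ts in log:
        by_user[user].append((ad, ts))
    feats = {}
    for user, rows in by_user.items():
        times = sorted(ts for _, ts in rows)
        gaps = [b - a for a, b in zip(times, times[1:])]
        feats[user] = {
            "n_clicks": len(rows),                    # click volume
            "n_ads": len({ad for ad, _ in rows}),     # diversity of ads clicked
            "min_gap": min(gaps) if gaps else None,   # fastest repeat click
        }
    return feats

# Three weak rule-based "models"; the ensemble takes a majority vote.
def model_burst(f):   return f["min_gap"] is not None and f["min_gap"] <= 1
def model_volume(f):  return f["n_clicks"] >= 4
def model_spread(f):  return f["n_ads"] <= 2 and f["n_clicks"] >= 3

def ensemble_label(f):
    votes = sum(m(f) for m in (model_burst, model_volume, model_spread))
    return "malicious" if votes >= 2 else "benign"

labels = {u: ensemble_label(f) for u, f in extract_features(clicks).items()}
```

In the paper’s setting, the hand-written rules would be trained classifiers and the vote a learned ensemble; the shape of the pipeline (features per user, one vote per model, an aggregated label) is the point of the sketch.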


Author(s): Huma Shah, Kevin Warwick

The Turing Test, originally configured as a game in which a human must distinguish between an unseen and unheard man and woman through a text-based conversational measure of gender, is the ultimate test for deception and hence for thinking. So Alan Turing conceived it when he introduced a machine into the game. His idea was that once a machine deceives a human judge into believing that it is the human, that machine should be attributed with intelligence. What Turing missed is the presence of emotion in human dialogue, without the expression of which an entity can appear non-human. Indeed, humans have been mistaken for machines (the ‘confederate effect’) during instantiations of the Turing Test staged in the Loebner Prizes for Artificial Intelligence. We present results from recent Loebner Prizes and two parallel conversations from the 2006 contest, in which two human judges, both native English speakers, each concomitantly interacted with a non-native-English-speaking hidden human and with Jabberwacky, the 2005 and 2006 Loebner Prize bronze-prize winner for most human-like machine. We find that the machines in those contests appeared conversationally worse than the non-native hidden humans and, as a consequence, attracted a downward trend in the highest scores awarded to them by human judges across the 2004, 2005 and 2006 Loebner Prizes. Analysing the Loebner 2006 conversations, we see that a parallel could be drawn with autistics: the machine was able to broadcast but it did not inform; it talked but it did not emote. The hidden humans were easily identified through their emotional intelligence: their ability to discern the emotional state of others and to contribute their own ‘balloons of textual emotion’.


Author(s): Ellen J. Bass, Amy R. Pritchett

Human-Automated Judgment Learning (HAJL) is a methodology for investigating interaction in human-automated judgment systems, capturing the judgment processes of the human and automated judges, features of the task environment, and the relationships between them. HAJL provides measures of conflict between the judges, of compromise by the human judge, of adaptation of the human judge to the automated one, and of how well the human judge understands the automated one. HAJL was empirically tested using a simplified air traffic conflict prediction task. Two between-subjects manipulations were crossed to investigate HAJL's sensitivity to training and design interventions. Two statistically significant effects were found: 1) males outperformed females in judgment performance before feedback from the automated judge was available, while the judge's subsequent output eliminated this difference; and 2) participants tended to compromise with the automated judge over time. HAJL also identified a trend for participants with higher judgment achievement both to predict the automated judgments better and to believe that their own judgments were closer to the automated judge's than they actually were.
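Two of the measures named above, conflict and compromise, can be given a minimal numerical sketch. This is an illustration under strong assumptions, not the HAJL formulation itself: judgments are treated as scalars (say, predicted minutes until a loss of separation), all numbers are invented, and plain mean absolute differences stand in for HAJL's actual statistics.

```python
# Hypothetical scalar judgments for the same five trials, before and after
# the human judge sees the automated judge's output.
human_initial = [4.0, 6.5, 3.0, 8.0, 5.5]
automated     = [5.0, 5.0, 4.0, 7.0, 5.0]
human_revised = [4.6, 5.4, 3.8, 7.2, 5.1]

def mean_abs_diff(a, b):
    """Average absolute disagreement between two judgment series."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

conflict   = mean_abs_diff(human_initial, automated)  # disagreement pre-feedback
residual   = mean_abs_diff(human_revised, automated)  # disagreement post-feedback
compromise = conflict - residual                      # movement toward automation
```

A positive `compromise` indicates the human judge moved toward the automated judge after seeing its output, which is the pattern the study reports emerging over time.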


1997, Vol 40 (5), pp. 1085-1096
Author(s): Peter Howell, Stevie Sackin, Kazan Glenn

This program of work aims to develop automatic recognition procedures that locate and assess stuttered dysfluencies. This article and the preceding one focus on developing and testing recognizers for repetitions and prolongations in stuttered speech. The automatic recognizers classify the speech in two stages: in the first, the speech is segmented; in the second, the segments are categorized. The units segmented are words. The current article reports results for an automatic recognizer intended to classify words as fluent, or as containing a repetition or prolongation, in a text read by children who stutter that contained only these three types of words. Word segmentations are supplied, and the classifier is an artificial neural network (ANN). Classification performance was assessed on material that was not used for training; a classification was counted as correct when the ANN placed a word into the same category as the human judge whose material was used to train the ANNs. The best ANN correctly classified 95% of fluent words and 78% of dysfluent words in the test material.
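To make the ANN's role concrete, here is a minimal single-layer forward pass that assigns a word to one of the three classes. It is illustrative only: the two "acoustic" input features and the hand-set weights are invented stand-ins for the paper's trained network and its real input representation.

```python
import math

CLASSES = ["fluent", "repetition", "prolongation"]

# Hypothetical per-word features: (duration_seconds, spectral_change_rate).
# Illustrative hand-set weights stand in for the trained ANN parameters:
# prolonged words tend to be long with little spectral change.
W = [
    [-2.0,  3.0],   # fluent
    [ 1.0,  1.0],   # repetition
    [ 3.0, -3.0],   # prolongation
]
B = [1.0, -1.5, -1.0]

def classify(features):
    """Single-layer forward pass with a softmax over the three word classes."""
    logits = [sum(w * x for w, x in zip(row, features)) + b
              for row, b in zip(W, B)]
    m = max(logits)                         # shift for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return CLASSES[probs.index(max(probs))]
```

The paper's recognizer would be trained on the human judge's labels rather than hand-weighted, but the decision step is the same: the word goes to the class with the highest network output.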

