Likelihood Ratio in Evidential Evaluation of Polygraph Examination

2019 ◽  
Vol 11 (1) ◽  
pp. 95-112
Author(s):  
Jerzy Konieczny ◽  
Paulina Wolańska-Nowak

The starting point of the paper is the observation that the likelihood ratio (LR) is not used in the evaluation practice of polygraph examinations, which are so important in the field of internal security. Meanwhile, the LR is the only scientifically justifiable parameter that expresses the evidential weight of a given piece of evidence. The authors present theoretical attempts to use the LR for assessing the evidential value of polygraph examinations and subject them to criticism. The main objective of the paper is to present the LR calculation procedure in the context of interpreting a polygraph examination result treated as evaluative expertise. The following assumptions are made: the analysis covers only comparison question techniques; examination results allow the examinee to be classified into only one of three categories (deception indicated, no deception indicated, inconclusive); there are various ways to assign the LR; and, in the course of LR assignment, the arbitrary adoption of the values of some variables is admissible. Several examples of LR calculations are presented for different tactical configurations of polygraph examinations. The significance of including inconclusive results in the characteristics of the examination technique is analysed. The possibility of applying a cumulative LR is indicated, although this question is left open. Consequences of applying the LR to the interpretation of polygraph examinations, used as an argument in criminal analysis, are also presented. The conclusions show that treating polygraph examinations as evaluative expertise opens a new perspective on this method of forensic identification and deserves further study; however, the issue of the evidential use of polygraph examination results, in the light of evaluation made with the Bayesian approach, requires further discussion among lawyers and scientists.
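The three-way classification described in the abstract can be illustrated with a minimal sketch. The outcome rates below are hypothetical placeholders, not values from the paper; the LR for each outcome is simply the ratio of its conditional probabilities under the two hypotheses:

```python
# Illustrative sketch of LR assignment for a three-way polygraph outcome.
# The outcome rates below are hypothetical, not taken from the paper.

# P(outcome | deceptive examinee) and P(outcome | truthful examinee)
p_given_deceptive = {"DI": 0.80, "NDI": 0.10, "INC": 0.10}
p_given_truthful  = {"DI": 0.15, "NDI": 0.75, "INC": 0.10}

def likelihood_ratio(outcome):
    """LR = P(outcome | deception) / P(outcome | no deception)."""
    return p_given_deceptive[outcome] / p_given_truthful[outcome]

for outcome in ("DI", "NDI", "INC"):
    print(outcome, round(likelihood_ratio(outcome), 3))
```

Note that under these illustrative numbers an inconclusive result yields LR = 1, i.e. it is evidentially neutral; whether inconclusives should be treated this way is exactly the kind of question the paper analyses.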

Author(s):  
T. Aven ◽  
A. Hjorteland

In this paper we discuss how to implement Bayesian thinking for multistate reliability analysis. The Bayesian paradigm provides a unified and consistent framework for analysing and expressing reliability, but in our view the standard Bayesian procedures give too much emphasis to probability models and inference on fictional parameters. We believe there is a need to rethink how the Bayesian approach is implemented, and in this paper we present and discuss such a rethinking for multistate reliability analysis. The starting point of the analysis should be observable quantities expressing states of the world, not fictional parameters.
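The idea of starting from observables rather than fictional parameters can be sketched with a toy predictive assignment: probabilities for the next observed component state are assigned directly from previously observed states. This is a hedged illustration of the predictive style of reasoning, not the authors' method, and all data below are hypothetical:

```python
# Hedged sketch (not the authors' method): a predictive-style assignment
# for the next observed state of a multistate component, based directly
# on previously observed states rather than on a fitted "true" parameter.
from collections import Counter

states = ("failed", "degraded", "working")   # a three-state component
observed = ["working", "working", "degraded", "working", "failed",
            "working", "degraded", "working", "working", "working"]

def predictive(state, history, alpha=1.0):
    """Laplace-smoothed predictive probability P(next = state | history)."""
    counts = Counter(history)
    return (counts[state] + alpha) / (len(history) + alpha * len(states))

for s in states:
    print(s, round(predictive(s, observed), 3))
```

The predictive probabilities sum to one over the state space, and every quantity in the computation is an observable count rather than an estimate of an underlying parameter.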


2019 ◽  
Vol specjalny (XIX) ◽  
pp. 123-137
Author(s):  
Jerzy Konieczny

The aim of the article is to present the role of justification and belief in the course of proving guilt in a criminal trial. The starting point is the inductive character of evidentiary reasoning and the acceptance of its conclusions on the basis of decisions made by the trial authority. Such decisions emerge from a process in which this authority reaches its aspiration level for making them; a second basis may be their expected utility. The requirements for proof are contrasted with the concept of knowledge. If one assumes that attributing knowledge to a subject consists in that subject's possession of a justified, true belief, then one can take the possession of such knowledge to be tantamount to proof in the trial sense. The tools supporting the pursuit of correct judgments are the Shafer-Dempster belief function and the Bayesian approach to making decisions about factual findings.
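The belief-function apparatus mentioned in the abstract can be sketched with a toy example. The frame of discernment and the mass values below are hypothetical, not from the article; belief sums the masses of all subsets of a hypothesis, while plausibility sums the masses of all sets consistent with it:

```python
# Hedged toy example (not from the article): a Dempster-Shafer mass
# function on the frame {guilty, not_guilty}, with hypothetical masses.

frame = frozenset({"guilty", "not_guilty"})

# Mass assignment: some evidence supports guilt, some supports innocence,
# and the remainder stays on the whole frame, representing ignorance.
mass = {
    frozenset({"guilty"}): 0.6,
    frozenset({"not_guilty"}): 0.1,
    frame: 0.3,
}

def belief(hypothesis):
    """Bel(A): total mass committed to subsets of A."""
    return sum(m for s, m in mass.items() if s <= hypothesis)

def plausibility(hypothesis):
    """Pl(A): total mass not contradicting A (sets intersecting A)."""
    return sum(m for s, m in mass.items() if s & hypothesis)

guilty = frozenset({"guilty"})
print(belief(guilty), plausibility(guilty))   # 0.6 0.9
```

The gap between Bel = 0.6 and Pl = 0.9 represents uncommitted evidence, which is exactly what distinguishes the belief-function representation from a single Bayesian probability.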


2021 ◽  
Vol 14 (2) ◽  
pp. 231-232
Author(s):  
Adnan Kastrati ◽  
Alexander Hapfelmeier

Author(s):  
Daiane Aparecida Zuanetti ◽  
Luis Aparecido Milan

In this paper, we propose a new Bayesian approach to QTL mapping of family data. The main purpose is to model a phenotype as a function of QTL effects. The model accounts for the detailed familial dependence and does not rely on random effects. It combines the probability of Mendelian inheritance of the parents' genotypes with the correlation between flanking markers and QTLs. This is an advance over models that use only Mendelian segregation or only the marker-QTL correlation to estimate transmission probabilities. We use the Bayesian approach to estimate the number of QTLs, their locations, and their additive and dominance effects. We compare the performance of the proposed method with variance component and LASSO models using simulated and GAW17 data sets. Under the tested conditions, the proposed method outperforms the others in estimating the number of QTLs, the accuracy of the QTLs' positions, and the estimates of their effects. The results of applying the proposed method to these data sets exceeded all of our expectations.
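The Bayesian estimation of an additive effect can be sketched in its simplest conjugate form. This is a hedged toy illustration, not the paper's model (which handles multiple QTLs, dominance, and familial dependence); all data and variances below are hypothetical:

```python
# Hedged toy sketch (not the paper's model): a conjugate Bayesian update
# for a single additive QTL effect `a` in y = a * g + e, e ~ N(0, sigma2).
# Genotypes g are coded -1/0/+1; prior a ~ N(0, tau2). Values are hypothetical.

g = [-1, 0, 1, 1, -1, 0, 1, -1]                    # genotype codes at a locus
y = [-0.9, 0.1, 1.2, 0.8, -1.1, -0.2, 1.0, -0.8]   # phenotypes
sigma2, tau2 = 0.25, 1.0                           # assumed noise and prior variances

# Posterior for `a` is normal with:
#   precision = 1/tau2 + sum(g_i^2)/sigma2
#   mean      = (sum(g_i * y_i)/sigma2) / precision
precision = 1.0 / tau2 + sum(gi * gi for gi in g) / sigma2
post_mean = sum(gi * yi for gi, yi in zip(g, y)) / sigma2 / precision
post_var = 1.0 / precision
print(round(post_mean, 3), round(post_var, 4))     # 0.928 0.04
```

The posterior mean shrinks the least-squares estimate toward the prior mean of zero; the paper's full model embeds effect estimation like this inside inference over the number and positions of QTLs.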

