Optimizing the selection of fillers in police lineups

2021, Vol. 118(8), e2017292118
Author(s): Melissa F. Colloff, Brent M. Wilson, Travis M. Seale-Carlisle, John T. Wixted

A typical police lineup contains a photo of one suspect (who is innocent in a target-absent lineup and guilty in a target-present lineup) plus photos of five or more fillers who are known to be innocent. To create a fair lineup in which the suspect does not stand out, two filler-selection methods are commonly used. In the first, fillers are selected if they are similar in appearance to the suspect. In the second, fillers are selected if they possess facial features included in the witness’s description of the culprit (e.g., “20-y-old white male”). The police sometimes combine the two methods by selecting description-matched fillers whose appearance is also similar to that of the suspect in the lineup. Despite decades of research, the question of which approach is better remains unsettled. Here, we tested a counterintuitive prediction made by a formal model based on signal-detection theory: From a pool of acceptable description-matched photos, selecting fillers whose appearance is otherwise dissimilar to the suspect should increase the hit rate without affecting the false-alarm rate (increasing discriminability). In Experiment 1, we confirmed this prediction using a standard mock-crime paradigm. In Experiment 2, the effect on discriminability was reversed (as also predicted by the model) when fillers were matched on similarity to the perpetrator in both target-present and target-absent lineups. These findings suggest that signal-detection theory offers a useful theoretical framework for understanding eyewitness identification decisions made from a police lineup.
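To make the signal-detection framing in this abstract concrete, the following is a minimal sketch (not taken from the paper, and using hypothetical identification rates) of the standard equal-variance Gaussian measure of discriminability, d' = z(hit rate) − z(false-alarm rate). A filler-selection method that raises the hit rate while leaving the false-alarm rate unchanged yields a higher d'.

```python
# Minimal sketch of equal-variance Gaussian discriminability (d'),
# the sense of "discriminability" used in signal-detection accounts of lineups.
# The rates below are hypothetical illustrations, not data from the experiments.
from scipy.stats import norm

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """d' = z(hit rate) - z(false-alarm rate)."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# A method that raises hits without changing false alarms increases d'.
print(d_prime(0.60, 0.10))  # hypothetical baseline fillers            -> ~1.54
print(d_prime(0.72, 0.10))  # hypothetical dissimilar, description-matched fillers -> ~1.86
```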

2015, Vol. 2(1), pp. 175-186
Author(s): Steven E. Clark, Aaron S. Benjamin, John T. Wixted, Laura Mickes, Scott D. Gronlund

This article addresses the problem of eyewitness identification errors, which can lead to false convictions of the innocent and false acquittals of the guilty. At the heart of our analysis, which is based on signal detection theory, is the separation of diagnostic accuracy (the ability to discriminate between those who are guilty and those who are innocent) from the relative costs associated with different kinds of errors. Applying this theory suggests that current recommendations for reform have conflated diagnostic accuracy with the evaluation of costs in a way that reduces the accuracy of identification evidence and of adjudicative outcomes. Our analysis points to a revision of recommended procedures and provides a framework for policy analysis.
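A minimal sketch, again with hypothetical numbers, of the separation this abstract describes: under the equal-variance Gaussian signal-detection model, moving the decision criterion trades hits against false alarms (the cost question) while leaving discriminability d' fixed (the accuracy question).

```python
# Sketch of criterion placement vs. discriminability under the
# equal-variance Gaussian signal-detection model. Numbers are hypothetical.
from scipy.stats import norm

def hit_and_fa(d_prime: float, criterion: float) -> tuple[float, float]:
    """Hit and false-alarm rates when innocent-suspect strength ~ N(0, 1),
    guilty-suspect strength ~ N(d', 1), and 'guilty' is chosen above the criterion."""
    hit = 1.0 - norm.cdf(criterion, loc=d_prime)
    false_alarm = 1.0 - norm.cdf(criterion, loc=0.0)
    return hit, false_alarm

# Same d' (diagnostic accuracy) under a liberal and a conservative criterion:
# hits and false alarms move together, but d' itself does not change.
for c in (0.5, 1.2):
    print(c, hit_and_fa(d_prime=1.5, criterion=c))
```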


1995, Vol. 40(10), pp. 972-972
Author(s): Jerome R. Busemeyer

2003
Author(s): Shawn C. Stafford, James L. Szalma, Peter A. Hancock, Mustapha Mouloua
