Who is the Usual Suspect? Evidence of a Selection Bias Toward Faces That Make Direct Eye Contact in a Lineup Task

i-Perception, 2017, Vol 8 (1), pp. 204166951769041
Author(s): Jessica Taubert, Celine van Golde, Frans A. J. Verstraten

The speed and ease with which we recognize the faces of our friends and family members belies the difficulty we have recognizing less familiar individuals. Nonetheless, overconfidence in our ability to recognize faces has carried over into various aspects of our legal system; for instance, eyewitness identification serves a critical role in criminal proceedings. For this reason, understanding the perceptual and psychological processes that underlie false identification is of the utmost importance. Gaze direction is a salient social signal, and direct eye contact, in particular, is thought to capture attention. Here, we tested the hypothesis that differences in gaze direction may influence difficult decisions in a lineup context. In a series of experiments, we show that when a group of faces differed in their gaze direction, the faces that were making eye contact with the participants were more likely to be misidentified. Interestingly, this bias disappeared when the faces were presented with their eyes closed. These findings open a critical conversation between social neuroscience and forensic psychology, and imply that direct eye contact may (wrongly) increase the perceived familiarity of a face.

2013, Vol 3 (3), pp. 147-173
Author(s): Boris Stilman

Abstract We investigate the structure of the Primary Language of the human brain as introduced by J. von Neumann in 1957. Two components have been investigated: the algorithm optimizing warfighting, Linguistic Geometry (LG), and the algorithm for inventing new algorithms, the Algorithm of Discovery. The latter is based on multiple thought experiments, which manifest themselves via mental visual streams ("mental movies"). There are Observation, Construction, and Validation classes of streams. Several visual streams can run concurrently and exchange information with each other. The streams may initiate additional thought experiments, program them, and execute them in due course. The visual streams are focused by employing the algorithm of "a child playing with a construction set," which includes a visual model, a construction set, and the Ghost. Mosaic reasoning, introduced in this paper, is one of the major means of focusing visual streams in a desired direction. It uses an analogy with the assembly of a picture from various colorful tiles, the components of a construction set. To investigate the role of mosaic reasoning in the Algorithm of Discovery, in this paper I replay a series of four thought experiments related to the discovery of the structure of the DNA molecule. Only the fourth experiment was successful. This series of experiments reveals how a sequence of failures eventually leads the Algorithm to a discovery, and it permits us to expose the key components of mosaic reasoning: tiles and aggregates, local and global matching rules, and the unstructured environment. In particular, it reveals the aggregates and rules that played a critical role in the discovery of the structure of DNA: the generator and plug-in aggregates, the transformation and complementarity matching rules, and the type of unstructured environment. For the first time, the Algorithm of Discovery has been applied to replaying discoveries not related to LG, or even to mathematics.


2012, Vol 486, pp. 8-11
Author(s): Yuan Yuan, Bao Min Sun, Xiao Tian Wang, Yang Wang, Yong Hong Guo

Catalysts play a critical role in the synthesis of carbon nanotubes. In this paper, we design a series of experiments to explore the impact of Mo content on the products. Analysis shows that when the molar ratio of Fe:Mo:Al is 1:0.2:16, the carbon nanotubes exhibit the best yield and quality.


2013, Vol 79 (23), pp. 7229-7233
Author(s): Jiyeun Kate Kim, Na Hyang Kim, Ho Am Jang, Yoshitomo Kikuchi, Chan-Hee Kim, ...

ABSTRACT Many insects possess symbiotic bacteria that affect the biology of the host. The level of the symbiont population in the host is a pivotal factor that modulates the biological outcome of the symbiotic association. Hence, the symbiont population should be maintained at a proper level by the host's control mechanisms. Several mechanisms for controlling intracellular symbionts of insects have been reported, while mechanisms for controlling extracellular gut symbionts of insects are poorly understood. The bean bug Riptortus pedestris harbors a betaproteobacterial extracellular symbiont of the genus Burkholderia in the midgut symbiotic organ designated the M4 region. We found that the M4B region, which is directly connected to the M4 region, also harbors Burkholderia symbiont cells, but the symbionts therein are mostly dead. A series of experiments demonstrated that the M4B region exhibits antimicrobial activity, and the antimicrobial activity is specifically potent against the Burkholderia symbiont but not the cultured Burkholderia and other bacteria. The antimicrobial activity of the M4B region was detected in symbiotic host insects, reaching its highest point at the fifth instar, but not in aposymbiotic host insects, which suggests the possibility of symbiont-mediated induction of the antimicrobial activity. This antimicrobial activity was not associated with upregulation of antimicrobial peptides of the host. Based on these results, we propose that the M4B region is a specialized gut region of R. pedestris that plays a critical role in controlling the population of the Burkholderia gut symbiont. The molecular basis of the antimicrobial activity is of great interest and deserves future study.


2010, Vol 33 (6), pp. 458-459
Author(s): Atsushi Senju, Mark H. Johnson

Abstract Eye contact plays a critical role in many aspects of face processing, including the processing of smiles. We propose that this is achieved by a subcortical route, which is activated by eye contact and modulates the cortical areas involved in social cognition, including the processing of facial expression. This mechanism could be impaired in individuals with autism spectrum disorders.


2015, Vol 48 (48), pp. 9
Author(s): Elisabeth Engberg-Pedersen

Linguistic perspective can be used either to denote the way an event is described as seen from the perspective of one of the referents, or as a term for the various linguistic means used to indicate whether a referent is new or given and whether an event is foregrounded or backgrounded. In this article, the former type is called referent perspective, the latter narrator perspective. In Danish Sign Language (DTS), narrator perspective is expressed by the signer's eye contact with the addressee, the sign EN ('one, a') to indicate a new, prominent referent, and nonmanual signals indicating topicalization and accessibility. Referent perspective is expressed by combinations of predicates of motion and location with gaze, facial expression, and head and body orientation that represent a referent. Narratives elicited from DTS-signing adults by means of cartoons are shown to have a strong emphasis on referent perspective compared with narratives in spoken Danish elicited by means of the same cartoons. DTS-signing deaf children of six to nine years of age are shown to be well underway in acquiring the use of EN, but they struggle with the expression of the referent perspective, especially the use of gaze direction and facial expression. The results are discussed in relation to Slobin's (1996) notion of rhetorical style and the role of iconicity in acquisition.


2021, Vol 17 (1), pp. e1008644
Author(s): Daniel A. Burbano-L., Maurizio Porfiri

Understanding how animals navigate complex environments is a fundamental challenge in biology and a source of inspiration for the design of autonomous systems in engineering. Animal orientation and navigation is a complex process that integrates multiple senses, whose function and contribution are yet to be fully clarified. Here, we propose a data-driven mathematical model of adult zebrafish engaging in counter-flow swimming, an innate behavior known as rheotaxis. Zebrafish locomotion in a two-dimensional fluid flow is described within the finite-dipole model, which consists of a pair of vortices separated by a constant distance. The strength of these vortices is adjusted in real time by the fish to afford orientation and navigation control, in response to the multi-sensory input from vision, the lateral line, and touch. Model parameters for the resulting stochastic differential equations are calibrated through a series of experiments in which zebrafish swam in a water channel under different illumination conditions. The accuracy of the model is validated through the study of a series of measures of rheotactic behavior, contrasting the results of real and in silico experiments. Our results point to a critical role of hydromechanical feedback during rheotaxis, in the form of a gradient-following strategy.
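The paper's calibrated finite-dipole model is specified in the article itself; purely as an illustration of the kind of stochastic heading dynamics such a model involves, the following is a minimal Euler-Maruyama simulation of a heading angle drifting toward the upstream direction under noise. The drift form, k, sigma, and the initial condition are assumptions for illustration, not the fitted parameters.

```python
import math
import random

# Minimal Euler-Maruyama sketch of stochastic heading dynamics during
# rheotaxis, of the generic form d(theta) = -k*sin(theta)*dt + sigma*dW,
# where theta is the heading relative to upstream (theta = 0 means
# facing into the flow). All parameter values are illustrative.

def simulate_heading(k=2.0, sigma=0.5, dt=0.01, steps=5000, seed=1):
    rng = random.Random(seed)
    theta = math.pi  # start facing downstream (unstable equilibrium)
    traj = []
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment
        theta += -k * math.sin(theta) * dt + sigma * dW
        # Wrap the heading back to (-pi, pi]
        theta = math.atan2(math.sin(theta), math.cos(theta))
        traj.append(theta)
    return traj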


2015, Vol 1 (1)
Author(s): Elisabeth Engberg-Pedersen

Abstract In gesture studies, character viewpoint and observer viewpoint (McNeill 1992) characterize co-speech gestures depending on whether the gesturer's hand and body imitate a referent's hand and body or the hand represents a referent in its entirety. In sign languages, handling handshapes and entity handshapes are used in depicting predicates. Narratives in Danish Sign Language (DTS), elicited to make signers describe an event from either the agent's or the patient's perspective, demonstrate that discourse perspective is expressed by which referent, the agent or the patient, the signers represent at their own locus. This is reflected in the orientation and movement direction of the manual articulator, not by the type of representation in the articulator. Signers may also imitate the gaze direction of the referent represented at their locus or have eye contact with the addressees. When they represent a referent by their own locus and simultaneously have eye contact with the addressee, the construction mixes referent perspective and narrator perspective. This description accords with an understanding of linguistic perspective as grounded in bodily perspective within a physical scene (Sweetser 2012) and relates the deictic and attitudinal means for expressing perspective in sign languages to the way perspective is expressed in spoken languages.


2009, Vol 108 (2), pp. 565-572
Author(s): Jason A. Williams, Erin L. Burns, Elizabeth A. Harmon

Anecdotal evidence suggests that speakers often gaze away from their listeners during sarcastic utterances; however, this question has not been directly addressed empirically. This study systematically compared the gaze direction of speakers in dyadic conversation when uttering sincere and sarcastic statements. 18 naïve participants were required to recite a series of contradictory statements on a single topic to a naïve listener while at the same time conveying their actual opinion about this topic. This latter task could only be accomplished through prosodic or nonverbal communication, by indicating sincerity or insincerity (sarcasm) for the various statements, and it allowed examination of gaze across the two conditions for each participant. Subsequent analysis of the videotaped interactions indicated that, during the time of the actual utterance, sarcastic utterances were accompanied by greater gaze aversion than were sincere utterances. This effect occurred for 15 of the 18 participants (3 men, 15 women; M age = 19.8, SD = 1.0), who had volunteered for a small credit in an Introductory Psychology course. Results are discussed in terms of nonverbal communication and possible miscommunication which may arise given cultural differences in the use of nonverbal cues.


2017, Vol 29 (10), pp. 1725-1738
Author(s): Colin J. Palmer, Colin W. G. Clifford

The direction of others' gaze is a strong social signal to their intentions and future behavior. Pioneering electrophysiological research identified cell populations in the primate visual cortex that are tuned to specific directions of observed gaze, but the functional architecture of this system is yet to be precisely specified. Here, we develop a computational model of how others' gaze direction is flexibly encoded across sensory channels within the gaze system. We incorporate the divisive normalization of sensory responses—a computational mechanism that is thought to be widespread in sensory systems but has not been examined in the context of social vision. We demonstrate that the operation of divisive normalization in the gaze system predicts a surprising and distinctive pattern of perceptual changes after sensory adaptation to gaze stimuli and find that these predictions closely match the psychophysical effects of adaptation in human observers. We also find that opponent coding, broadband multichannel, and narrowband multichannel models of sensory coding make distinct predictions regarding the effects of adaptation in a normalization framework and find evidence in favor of broadband multichannel coding of gaze. These results reveal the functional principles that govern the neural encoding of gaze direction and support the notion that divisive normalization is a canonical feature of nervous system function. Moreover, this research provides a strong foundation for testing recent computational theories of neuropsychiatric conditions in which gaze processing is compromised, such as autism and schizophrenia.
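The fitted model is specified in the article; as an informal illustration of the general mechanism it builds on, the sketch below combines a small pool of broadly tuned gaze channels (broadband multichannel coding) with divisive normalization, and models adaptation as channel-specific gain loss. Channel preferences, tuning width, the normalization constant, and the adaptation rule are all assumed values for illustration, not the authors' parameters.

```python
import numpy as np

# Illustrative broadband multichannel code for gaze direction: channels
# with broad Gaussian tuning, divisively normalized by pooled activity.

PREFERRED = np.array([-90.0, -45.0, 0.0, 45.0, 90.0])  # channel preferences (deg)
BANDWIDTH = 60.0  # tuning width (deg); broad, per broadband coding
SIGMA = 0.5       # normalization constant

def channel_responses(gaze_deg):
    """Raw response of each channel to an observed gaze direction."""
    return np.exp(-0.5 * ((gaze_deg - PREFERRED) / BANDWIDTH) ** 2)

def normalized_responses(gaze_deg):
    """Divisive normalization: each response divided by pooled activity."""
    r = channel_responses(gaze_deg)
    return r / (SIGMA + r.sum())

def decode(responses):
    """Population-vector readout of perceived gaze direction (deg)."""
    return float(np.sum(responses * PREFERRED) / np.sum(responses))

def adapted_responses(gaze_deg, adaptor_deg, strength=0.5):
    """Adaptation as gain loss proportional to each channel's
    normalized response to the adaptor."""
    n = normalized_responses(adaptor_deg)
    gain = 1.0 - strength * n / n.max()
    r = channel_responses(gaze_deg) * gain
    return r / (SIGMA + r.sum())

# A leftward adaptor (-45 deg) repels the perceived direction of a
# straight-ahead (0 deg) test gaze away from the adaptor:
# decode(adapted_responses(0.0, -45.0)) is positive.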


2020
Author(s): Allana L. dos S. Rocha, Leandro H. de S. Silva, Bruno J. T. Fernandes

Applications of eye-tracking devices aim to understand human activities and behaviors, improve human interactions with robots, and develop assistive technology to help people with communication disabilities. This paper proposes an algorithm to detect the pupil center and the user's gaze direction in real time, using a low-resolution webcam and a conventional computer, with no need for calibration. Given these constraints, the gaze space was reduced to five states: left, right, center, up, and eyes closed. A pre-existing landmark detector was used to identify the user's eyes. We employ image processing techniques to find the center of the pupil, and we use the coordinates of the points found, together with mathematical calculations, to classify the gaze direction. Using this method, the algorithm achieved 81.9% overall accuracy, even under variable and non-uniform environmental conditions. We also performed quantitative experiments with noise, blur, illumination, and rotation variation. Smart Eye Communicator, the proposed algorithm, can be used as an eye-tracking mechanism to help people with communication difficulties express their desires.
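The paper does not include source code; as a rough sketch of the final step it describes (mapping a detected pupil center, relative to eye landmarks, onto the five reduced gaze states), the following is a minimal illustration. The coordinate conventions, the eye-openness ratio, and all thresholds are hypothetical, not the authors' values.

```python
# Hypothetical sketch of the gaze-state classification step: landmark
# detection and pupil localization are assumed to have already run.
# Coordinates follow image convention (y grows downward).

def classify_gaze(pupil, eye_left, eye_right, eye_top, eye_bottom,
                  eye_open_ratio, closed_thresh=0.2):
    """All point arguments are (x, y) pixel coordinates; eye_open_ratio
    is the eyelid aperture divided by the eye width."""
    width = eye_right[0] - eye_left[0]
    # Eyes closed: the eyelid aperture is very small relative to eye width
    if width <= 0 or eye_open_ratio < closed_thresh:
        return "closed"
    # Horizontal position of the pupil within the eye, normalized to [0, 1]
    hx = (pupil[0] - eye_left[0]) / width
    # Vertical position of the pupil within the eye opening
    height = eye_bottom[1] - eye_top[1]
    vy = (pupil[1] - eye_top[1]) / height if height > 0 else 0.5
    if vy < 0.35:
        return "up"
    if hx < 0.35:
        return "left"
    if hx > 0.65:
        return "right"
    return "center"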

