Recognizing Facial Slivers

2018 ◽  
Vol 30 (7) ◽  
pp. 951-962 ◽  
Author(s):  
Sharon Gilad-Gutnick ◽  
Elia Samuel Harmatz ◽  
Kleovoulos Tsourides ◽  
Galit Yovel ◽  
Pawan Sinha

We report here an unexpectedly robust ability of healthy human individuals (n = 40) to recognize extremely distorted needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks of parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available in the two conditions (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical, but not horizontal, spatial relationships in facial representations (n = 20). Finally, we employed magnetoencephalography (n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face-familiarity, but not the M170 face-sensitive, evoked response field component, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that this tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and establish an association between behavioral performance and previously reported neural correlates of face perception.

2020 ◽  
Author(s):  
Elizabeth A. Necka ◽  
Carolyn Amir ◽  
Troy C. Dildine ◽  
Lauren Yvette Atlas

There is a robust link between patients’ expectations and clinical outcomes, as evidenced by the placebo effect. These expectations are shaped by the context surrounding treatment, including the patient-provider interaction. Prior work indicates that the provider’s behavior and characteristics, including warmth and competence, can shape patient outcomes. Yet humans rapidly form trait impressions of others prior to any in-person interaction. Here, we tested whether trait impressions of hypothetical medical providers, based purely on facial images, influence participants’ choice of medical providers and expectations about their health following hypothetical medical procedures performed by those providers in a series of vignettes. Across five studies, participants selected providers who appeared more competent, based on facial visual information alone. Further, providers’ apparent competence predicted participants’ expectations about post-procedural pain and medication use. Participants’ perception of their similarity to providers also shaped expectations about pain and treatment outcomes. Our results suggest that humans develop expectations about their health outcomes before even setting foot in the clinic, based exclusively on first impressions. These findings have strong implications for health care, as individuals increasingly rely on digital services to choose healthcare providers, schedule appointments, and even receive treatment and care, a trend which is exacerbated as the world embraces telemedicine.


2016 ◽  
Vol 45 (2) ◽  
pp. 233-252
Author(s):  
Pepijn Viaene ◽  
Alain De Wulf ◽  
Philippe De Maeyer

Landmarks are ideal wayfinding tools to guide a person from A to B, as they allow fast reasoning and efficient communication. However, very few path-finding algorithms start from the availability of landmarks to generate a path. In this paper, which focuses on indoor wayfinding, a landmark-based path-finding algorithm is presented in which the endpoint partition is proposed as the spatial model of the environment. In this model, the indoor environment is divided into convex sub-shapes, called e-spaces, that are stable with respect to the visual information provided by a person’s surroundings (e.g. walls, landmarks). The algorithm itself implements a breadth-first search on a graph in which mutually visible e-spaces suited for wayfinding are connected. The results of a case study, in which the calculated paths were compared with their corresponding shortest paths, show that the proposed algorithm is a valuable alternative to Dijkstra’s shortest-path algorithm. It is able to calculate a path with a minimal number of actions that are linked to landmarks, while the increase in path length is comparable to that observed when applying other path algorithms that adhere to natural wayfinding behaviour. However, the practicability of the proposed algorithm is highly dependent on the availability of landmarks and on the spatial configuration of the building.
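The core of the approach described above is a breadth-first search over a graph whose nodes are e-spaces and whose edges connect mutually visible e-spaces. A minimal sketch follows; the adjacency-dict encoding, the function name `landmark_path`, and the toy visibility graph are illustrative assumptions, not the authors' implementation.

```python
from collections import deque

def landmark_path(visibility_graph, start, goal):
    """Breadth-first search over a graph whose nodes are convex e-spaces
    and whose edges connect mutually visible e-spaces suited for wayfinding.
    Returns the shortest sequence of e-spaces from start to goal, i.e. the
    path with the fewest landmark-linked hops, or None if goal is unreachable."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in visibility_graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None  # goal not reachable from start

# Toy example: four e-spaces with mutual-visibility edges
graph = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
```

Because BFS explores the graph level by level, the first path that reaches the goal minimizes the number of e-space transitions, which is the quantity the algorithm ties to landmark-linked actions.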


2015 ◽  
Vol 282 (1799) ◽  
pp. 20142384 ◽  
Author(s):  
Aurore Avarguès-Weber ◽  
Adrian G. Dyer ◽  
Noha Ferrah ◽  
Martin Giurfa

Traditional models of insect vision have assumed that insects are only capable of low-level analysis of local cues and are incapable of global, holistic perception. However, recent studies on honeybee (Apis mellifera) vision have refuted this view by showing that this insect also processes complex visual information by using spatial configurations or relational rules. In the light of these findings, we asked whether bees prioritize global configurations or local cues by setting these two levels of image analysis in competition. We trained individual free-flying honeybees to discriminate hierarchical visual stimuli within a Y-maze and tested bees with novel stimuli in which local and/or global cues were manipulated. We demonstrate that even when local information is accessible, bees prefer global information, thus relying mainly on the object's spatial configuration rather than on elemental, local information. This preference can be reversed if bees are pre-trained to discriminate isolated local cues. In this case, bees prefer the hierarchical stimuli with the local elements previously primed even if they build an incorrect global configuration. Pre-training with local cues induces a generic attentional bias towards any local elements as local information is prioritized in the test, even if the local cues used in the test are different from the pre-trained ones. Our results thus underline the plasticity of visual processing in insects and provide new insights for the comparative analysis of visual recognition in humans and animals.


2020 ◽  
Vol 10 (9) ◽  
pp. 3066 ◽  
Author(s):  
Yuki Sakazume ◽  
Sho Furubayashi ◽  
Eizo Miyashita

An eye saccade provides appropriate visual information for motor control. The present study aimed to reveal the role of saccades in hand movements. Two types of movements, i.e., hitting and circle-drawing movements, were adopted, and saccades during the movements were classified as either a leading saccade (LS) or a catching saccade (CS), depending on the position of the gaze relative to the hand at the time of the saccade. The proportion of each saccade type during the movements depended heavily on the skillfulness of the subjects. In the late phase of the movements in a less skillful subject, CS tended to occur in less precise movements, and the precision of the movement tended to improve in the subsequent movement in the hitting task. While an LS directing gaze to a target point was observed in both types of movements regardless of the skillfulness of the subjects, an LS between a start point and a target point, which led gaze to a local minimum-variance point on the hand movement trajectory, was found exclusively in the drawing task in a less skillful subject. These results suggest that LS and some types of CS may provide, respectively, positional information about via-points in addition to the target point, and visual information to improve the precision of a feedforward controller in the brain.
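The LS/CS distinction above rests on whether gaze lands ahead of or behind the hand. One way to operationalize that rule is as the sign of the gaze offset projected onto the hand-to-target direction; the zero threshold, the 2-D position encoding, and the function name `classify_saccade` are assumptions for this sketch, not the authors' actual criterion.

```python
import math

def classify_saccade(gaze_pos, hand_pos, target_pos):
    """Label a saccade as leading (LS) if gaze lands ahead of the hand
    along the hand-to-target direction, otherwise catching (CS).
    Positions are (x, y) tuples; the zero threshold is an assumption."""
    # unit vector pointing from the hand toward the target
    dx, dy = target_pos[0] - hand_pos[0], target_pos[1] - hand_pos[1]
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return "CS"  # hand already at target; no leading direction defined
    ux, uy = dx / norm, dy / norm
    # signed distance of the gaze ahead of the hand along the movement axis
    gx, gy = gaze_pos[0] - hand_pos[0], gaze_pos[1] - hand_pos[1]
    ahead = gx * ux + gy * uy
    return "LS" if ahead > 0 else "CS"
```

For instance, with the hand at the origin and the target at (10, 0), a gaze landing at (5, 0) is ahead of the hand and would be labelled LS, while a gaze at (-2, 1) trails the hand and would be labelled CS.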


2003 ◽  
Vol 15 (1) ◽  
pp. 136-151 ◽  
Author(s):  
Ela I. Olivares ◽  
Jaime Iglesias ◽  
Socorro Rodríguez-Holguín

The N400 brain event-related potential (ERP) is a mismatch negativity originally found in response to semantic incongruences of a linguistic nature; it is used paradigmatically to investigate memory organization in various domains of information, including that of faces. In the present study, we analyzed different mismatch negativities evoked in N400-like paradigms related to recognition of newly learned faces with or without associated verbal information. ERPs were compared in the following conditions: (1) mismatching features (eyes-eyebrows) using a facial context corresponding to the faces learned without associated verbal information (“pure” intradomain facial processing); (2) mismatching features using a facial context corresponding to the faces learned with associated occupations and proper names (“nonpure” intradomain facial processing); (3) mismatching occupations using a facial context (cross-domain processing); and (4) mismatching names using an occupation context (intradomain verbal processing). Results revealed that mismatching stimuli in the four conditions elicited a mismatch negativity analogous to the N400 but with different timing and topographical patterns. The onset of the mismatch negativity occurred earliest in Conditions 1 and 2, followed by Condition 4, and latest in Condition 3. The negativity had the shortest duration in Condition 1 and the longest duration in Condition 3. Bilateral parietal activity was confirmed in all conditions, in addition to a predominant right posterior temporal localization in Condition 1, a predominant right frontal localization in Condition 2, an occipital localization in Condition 3, and a more widely distributed (although with posterior predominance) localization in Condition 4. These results support the existence of multiple N400s, and particularly of a nonlinguistic N400 related to purely visual information, which can be evoked by facial structure processing in the absence of verbal-semantic information.


2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Zahid Sadek Chowdhury ◽  
Mohammed Monzur Morshed ◽  
Mohammad Shahriar ◽  
Mohiuddin Ahmed Bhuiyan ◽  
Sardar Mohd. Ashraful Islam ◽  
...  

Alprazolam is used as an anxiolytic drug for generalized anxiety disorder and has been reported to produce sedation and anterograde amnesia. In the current study, we randomly divided 26 healthy male volunteers into two groups: one taking alprazolam 0.5 mg and the other taking placebo daily for two weeks. We used the Cambridge Neuropsychological Test Automated Battery (CANTAB) software to assess the chronic effects of alprazolam. We administered the Paired Associates Learning (PAL) and Delayed Matching to Sample (DMS) tests for memory, Rapid Visual Information Processing (RVP) for attention, and Choice Reaction Time (CRT) for psychomotor performance, twice: before starting the treatment and after its completion. We found statistically significant impairment of visual memory in one parameter of PAL and three parameters of DMS in the alprazolam group. The PAL mean trials to success and the DMS total correct matchings in the 0-second delay, 4-second delay, and all-delay conditions were impaired in the alprazolam group. RVP total hits improved after two weeks of treatment in the alprazolam group. No such differences were observed in the placebo group. In our study, we found that chronic administration of alprazolam affects memory, whereas attention and psychomotor performance remain unaffected.


2018 ◽  
Vol 30 (10) ◽  
pp. 1499-1516 ◽  
Author(s):  
Valentinos Zachariou ◽  
Zaid N. Safiullah ◽  
Leslie G. Ungerleider

The fusiform and occipital face areas (FFA and OFA) are functionally defined brain regions in human ventral occipitotemporal cortex associated with face perception. There is an ongoing debate, however, whether these regions are face-specific or whether they also facilitate the perception of nonface object categories. Here, we present evidence that, under certain conditions, bilateral FFA and OFA respond to a nonface category equivalently to faces. In two fMRI sessions, participants performed same–different judgments on two object categories (faces and chairs). In one session, participants differentiated between distinct exemplars of each category, and in the other session, participants differentiated between exemplars that differed only in the shape or spatial configuration of their features (featural/configural differences). During the latter session, the within-category similarity was comparable for both object categories. When differentiating between distinct exemplars of each category, bilateral FFA and OFA responded more strongly to faces than to chairs. In contrast, during featural/configural difference judgments, bilateral FFA and OFA responded equivalently to both object categories. Importantly, during featural/configural difference judgments, the magnitude of activity within FFA and OFA evoked by the chair task predicted the participants' behavioral performance. In contrast, when participants differentiated between distinct chair exemplars, activity within these face regions did not predict the behavioral performance of the chair task. We conclude that, when the within-category similarity of a face and a nonface category is comparable and when the same cognitive strategies used to process a face are applied to a nonface category, the FFA and OFA respond equivalently to that nonface category and faces.


Symmetry ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 2106
Author(s):  
Yair Pinto ◽  
Edward H.F. de Haan ◽  
Maria-Chiara Villa ◽  
Sabrina Siliquini ◽  
Gabriele Polonara ◽  
...  

One of the most fundamental, and most studied, human cognitive functions is working memory. Yet it is currently unknown how working memory is unified. In other words, why does a healthy human brain have one integrated working memory capacity, rather than, for instance, one capacity per visual hemifield? Healthy subjects can memorize roughly as many items regardless of whether all items are presented in one hemifield or distributed across both visual hemifields. In the current research, we investigated two patients in whom most, or all, of the corpus callosum had been cut to alleviate otherwise untreatable epilepsy. Crucially, in both patients the anterior parts, which connect the frontal and most of the parietal cortices, were entirely severed. This is essential, since it is often posited that working memory resides in these areas of the cortex. We found that, despite the lack of direct connections between the frontal cortices in these patients, working memory capacity is similar regardless of whether stimuli are all presented in one visual hemifield or across two visual hemifields. This indicates that, in the absence of the anterior parts of the corpus callosum, working memory remains unified. Importantly, however, memory performance was not similar across visual fields: capacity was higher when items appeared in the left visual hemifield than when they appeared in the right visual hemifield. Visual information in the left hemifield is processed by the right hemisphere and vice versa, so this indicates that visual working memory is not symmetric, the right hemisphere having a superior visual working memory. Nonetheless, a (subcortical) bottleneck apparently causes visual working memory to be integrated, such that capacity does not increase when items are presented in two visual hemifields rather than one.

