Real-world expectations and their affective value modulate object processing

2018
Author(s):
Laurent Caplette
Frédéric Gosselin
Martial Mermillod
Bruno Wicker

Abstract It is well known that expectations influence how we perceive the world, yet the neural mechanisms underlying this process remain unclear. Studies of the effects of prior expectations have so far focused on artificial contingencies between simple neutral cues and events. Real-world expectations, however, are often generated from complex associations between contexts and objects learned over a lifetime. Additionally, these expectations may carry affective value, and recent proposals offer conflicting hypotheses about the mechanisms underlying affect in predictions. In this study, we used fMRI to investigate how object processing is influenced by realistic context-based expectations, and how affect impacts these expectations. First, we show that the precuneus, the inferotemporal cortex and the frontal cortex are more active during object recognition when expectations have been elicited a priori, irrespective of their validity or their affective intensity. This result supports previous hypotheses according to which these brain areas integrate contextual expectations with object sensory information. Notably, these brain areas are distinct from those responsible for simultaneous context-object interactions, dissociating the two processes. We then show that early visual areas, on the contrary, are more active during object recognition when no prior expectation has been elicited by a context. Lastly, BOLD activity in early visual areas is enhanced when objects are less expected, but only when contexts are neutral; the reverse effect is observed when contexts are affective. This result supports the proposal that affect modulates the weighting of sensory information during predictions. Together, our results help elucidate the neural mechanisms of real-world expectations.
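The key result in this abstract is a two-way interaction: the expectation effect on early-visual BOLD responses reverses sign between neutral and affective contexts. The sketch below (randomly generated placeholder data and hypothetical variable names, not the authors' analysis pipeline) shows one simple way such an interaction could be quantified from per-trial ROI estimates.

```python
# Minimal sketch (not the authors' pipeline): quantify the reported
# expectation x affect interaction on early-visual BOLD estimates.
# All arrays and names here are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Per-trial BOLD betas from an early visual ROI, one value per trial.
betas = rng.normal(size=200)
# Trial labels: was the object expected given the context, and was
# the context neutral or affective?
expected = rng.integers(0, 2, size=200).astype(bool)
affective = rng.integers(0, 2, size=200).astype(bool)

def cell_mean(exp, aff):
    """Mean beta in one cell of the 2 x 2 design."""
    mask = (expected == exp) & (affective == aff)
    return betas[mask].mean()

# Expectation effect (unexpected minus expected), computed separately
# for neutral and affective contexts; the paper reports opposite signs.
effect_neutral = cell_mean(False, False) - cell_mean(True, False)
effect_affective = cell_mean(False, True) - cell_mean(True, True)
interaction = effect_neutral - effect_affective
print(effect_neutral, effect_affective, interaction)
```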

NeuroImage
2020
Vol 213
pp. 116736

Author(s):  
S. Unsal
A. Shirkhodaie
A. H. Soni

Abstract Adding sensing capability to a robot gives it intelligent perception and greater flexibility in decision making. To perform intelligent tasks, robots must perceive their operating environment and react accordingly. In this regard, tactile sensors extend the scope of a robot's intelligence to tasks that require touching, recognizing, and manipulating objects. This paper presents the design of an inexpensive pneumatic binary-array tactile sensor for such robotic applications. The paper describes some of the techniques implemented for object recognition from binary sensory information. Furthermore, it details the development of software and hardware that allow the sensor to provide useful information to a robot, so that the robot can perceive its operating environment during manipulation of objects.
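As a rough illustration of how objects might be recognized from a binary tactile array, the sketch below extracts a few crude shape features from a boolean contact image and matches them against stored templates. The feature set, array size, and templates are assumptions for illustration only, not the recognition techniques described in the paper.

```python
# Illustrative sketch only: one simple way to classify objects from a
# binary tactile array, not the recognition scheme used in the paper.
import numpy as np

def binary_features(frame: np.ndarray) -> np.ndarray:
    """Extract crude shape features from a boolean contact image."""
    rows, cols = np.nonzero(frame)
    if rows.size == 0:
        return np.zeros(3)
    area = rows.size                          # number of active taxels
    height = rows.max() - rows.min() + 1      # bounding-box height
    width = cols.max() - cols.min() + 1       # bounding-box width
    return np.array([area, height / width, area / (height * width)])

def classify(frame, templates):
    """Nearest-template match in feature space."""
    f = binary_features(frame)
    return min(templates, key=lambda name: np.linalg.norm(f - templates[name]))

# Hypothetical 8 x 8 binary sensor frame and two stored feature templates.
frame = np.zeros((8, 8), dtype=bool)
frame[2:6, 3:5] = True
templates = {"bar": np.array([8, 2.0, 1.0]), "block": np.array([16, 1.0, 1.0])}
print(classify(frame, templates))   # -> "bar"
```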


Author(s):  
Koji Kamei
Yutaka Yanagisawa
Takuya Maekawa
Yasue Kishino
Yasushi Sakurai
...  

The construction of real-world knowledge is required if we are to understand real-world events that occur in a networked sensor environment. Since it is difficult to select suitable ‘events’ for recognition in a sensor environment a priori, we propose an incremental model for constructing real-world knowledge. Labeling is the central plank of the proposed model because the model simultaneously improves both the ontology of real-world events and the implementation of a sensor system based on a manually labeled event corpus. A labeling tool is developed in accordance with the model and is evaluated in a practical labeling experiment.
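A minimal sketch of the kind of manually labeled event corpus such a model could iterate over is given below; the record fields, labels, and promotion rule are hypothetical and are not taken from the paper or its labeling tool.

```python
# Hypothetical sketch of a labeled event corpus and an incremental
# ontology update step; field names are illustrative, not from the paper.
from dataclasses import dataclass
from collections import Counter

@dataclass
class LabeledEvent:
    sensor_id: str      # which sensor observed the event
    start: float        # event onset (seconds)
    end: float          # event offset (seconds)
    label: str          # manually assigned event label

def update_event_ontology(corpus, min_count=2):
    """Incrementally promote labels seen often enough to event types."""
    counts = Counter(e.label for e in corpus)
    return {label for label, n in counts.items() if n >= min_count}

corpus = [
    LabeledEvent("door_1", 0.0, 1.2, "door opened"),
    LabeledEvent("door_1", 30.5, 31.6, "door opened"),
    LabeledEvent("chair_2", 12.0, 14.0, "person sat down"),
]
print(update_event_ontology(corpus))   # {'door opened'}
```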


Author(s):  
Pierpaolo Sorrentino
Michele Ambrosanio
Rosaria Rucco
Fabio Baselice

Abstract Background: Brain areas need to coordinate their activity in order to enable complex behavioral responses. Synchronization is one of the mechanisms neural ensembles use to communicate. While synchronization between signals operating at similar frequencies is fairly straightforward to assess, synchronization occurring between different frequencies of oscillation has proven harder to capture. One particularly hard challenge is to estimate cross-frequency synchronization between broadband signals when no a priori hypothesis is available about the frequencies involved in the synchronization. Methods: In the present manuscript, we expand upon the phase linearity measurement (PLM), an iso-frequency synchronization metric previously developed by our group, in order to provide a conceptually similar approach able to detect the presence of cross-frequency synchronization between any components of the analyzed broadband signals. Results: The methodology was tested on both synthetic and real data. We first exploited Gaussian process realizations to explore the properties of the new metric in a synthetic case study. Subsequently, we analyzed real source-reconstructed data acquired by a magnetoencephalographic system from healthy controls in a clinical setting to study the performance of the metric in a realistic environment. Conclusions: In the present paper we provide an evolution of the PLM methodology able to reveal the presence of cross-frequency synchronization between broadband data.
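For context, the sketch below computes a standard n:m phase-locking value between two narrow bands of broadband signals using Hilbert phases. This is a conventional cross-frequency measure, not the PLM extension proposed by the authors, and the bands and coupling ratio are illustrative assumptions.

```python
# Sketch of a standard n:m phase-locking estimate between two bands,
# shown for context only; it is not the cross-frequency PLM proposed here.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter between lo and hi (Hz)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def nm_plv(x, y, fs, band_x, band_y, n, m):
    """n:m phase-locking value between band_x of x and band_y of y."""
    phx = np.angle(hilbert(bandpass(x, *band_x, fs)))
    phy = np.angle(hilbert(bandpass(y, *band_y, fs)))
    return np.abs(np.mean(np.exp(1j * (n * phx - m * phy))))

fs = 250.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)   # 10 Hz
y = np.sin(2 * np.pi * 20 * t) + 0.1 * np.random.randn(t.size)   # 20 Hz (1:2 coupled)
print(nm_plv(x, y, fs, (8, 12), (18, 22), 2, 1))   # close to 1
```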


2015
Vol 53
pp. 428-436
Author(s):
Zejia Zheng
Xie He
Juyang Weng

2012
Vol 25 (0)
pp. 122
Author(s):
Michael Barnett-Cowan
Jody C. Culham
Jacqueline C. Snow

The orientation at which objects are most easily recognized, the perceptual upright (PU), is influenced by body orientation with respect to gravity. To date, the influence of these cues on object recognition has only been measured within the visual system. Here we investigate whether objects explored through touch alone are similarly influenced by body and gravitational information. Using the Oriented CHAracter Recognition Test (OCHART) adapted for haptics, blindfolded right-handed observers indicated whether the symbol ‘p’ presented in various orientations was the letter ‘p’ or ‘d’ following active touch. The average of ‘p-to-d’ and ‘d-to-p’ transitions was taken as the haptic PU. Sensory information was manipulated by positioning observers in different orientations relative to gravity with the head, body, and hand aligned. Results show that haptic object recognition is equally influenced by body and gravitational reference frames, but with a constant leftward bias. This leftward bias in the haptic PU resembles leftward biases reported for visual object recognition. The influence of body orientation and gravity on the haptic PU was well predicted by an equally weighted vectorial sum of the directions indicated by these cues. Our results demonstrate that information from different reference frames influences the perceptual upright in haptic object recognition. Taken together with similar investigations in vision, our findings suggest that reliance on body and gravitational frames of reference helps maintain optimal object recognition. Equally relying on body and gravitational information may facilitate haptic exploration with an upright posture, while compensating for poor vestibular sensitivity when tilted.
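The equally weighted vector-sum prediction mentioned above can be written down directly: the predicted PU is the direction of the sum of unit vectors along the body and gravity axes. The short sketch below implements that arithmetic with hypothetical cue directions.

```python
# Sketch of the equal-weight vector-sum prediction described above: the
# predicted perceptual upright is the direction of the sum of unit vectors
# along the body and gravity axes (the angles used are hypothetical).
import numpy as np

def predicted_upright(body_deg, gravity_deg):
    """Direction (degrees) of the equally weighted sum of two cue vectors."""
    angles = np.radians([body_deg, gravity_deg])
    v = np.sum(np.column_stack([np.cos(angles), np.sin(angles)]), axis=0)
    return np.degrees(np.arctan2(v[1], v[0]))

# Observer lying on their side: body axis at 90 deg from the gravity axis.
print(predicted_upright(body_deg=90.0, gravity_deg=0.0))   # 45.0
```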


2002
Vol 87 (6)
pp. 3102-3116
Author(s):
Galia Avidan
Michal Harel
Talma Hendler
Dafna Ben-Bashat
Ehud Zohary
...  

An important characteristic of visual perception is that object recognition is largely immune to changes in viewing conditions. This invariance is obtained within a sequence of ventral stream visual areas beginning in area V1 and ending in high-order occipito-temporal object areas (the lateral occipital complex, LOC). Here we studied whether this transformation could be observed in the contrast response of these areas. Subjects were presented with line drawings of common objects and faces at five contrast levels (0, 4, 6, 10, and 100%). Our results show that there was indeed a gradual trend of increasing contrast invariance moving from area V1, which manifested high sensitivity to contrast changes, to the LOC, which showed a significantly higher degree of invariance at suprathreshold contrasts (from 10 to 100%). The trend toward increased invariance could be observed for both face and object images; however, it was more complete for the face images, while object images still manifested substantial sensitivity to contrast changes. Control experiments ruled out the involvement of attention effects or a hemodynamic “ceiling” in producing the contrast invariance. The transition from V1 to LOC was gradual, with areas along the ventral stream becoming increasingly contrast-invariant. These results further stress the hierarchical and gradual nature of the transition from early retinotopic areas to high-order ones in the build-up of abstract object representations.
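One simple way to summarize the reported V1-to-LOC difference is a contrast-invariance index, for example the response at 10% contrast relative to 100%. The sketch below computes such an index on made-up response values; the numbers are illustrative and are not the data reported in the paper.

```python
# Illustrative only: a simple contrast-invariance index (response at 10%
# contrast relative to 100%) of the kind that would separate V1 from LOC.
# The response values below are invented, not the paper's data.
import numpy as np

contrasts = np.array([0, 4, 6, 10, 100])          # % contrast levels used
v1_resp  = np.array([0.0, 0.1, 0.2, 0.4, 1.0])    # hypothetical % signal change
loc_resp = np.array([0.0, 0.3, 0.5, 0.9, 1.0])

def invariance_index(resp, contrasts, lo=10, hi=100):
    """Ratio of responses at two suprathreshold contrasts (1 = invariant)."""
    return resp[contrasts == lo][0] / resp[contrasts == hi][0]

print(invariance_index(v1_resp, contrasts))    # 0.4 -> contrast sensitive
print(invariance_index(loc_resp, contrasts))   # 0.9 -> nearly invariant
```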


Author(s):  
SANTANU CHAUDHURY
ARBIND GUPTA
GUTURU PARTHASARATHY
S. SUBRAMANIAN

This paper describes an abductive-reasoning-based inference engine for image interpretation. The inference strategy finds an acceptable and consistent explanation of the features detected in the image in terms of the objects known a priori. The scheme assumes that domain knowledge about the objects is represented in terms of local and/or relational features. The system can be applied to different types of image interpretation problems, such as 2-D and 3-D object recognition and aerial image interpretation. In this paper, we illustrate the functioning of the system with the help of a 2-D object recognition problem.
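As a toy illustration of abductive image interpretation, the sketch below selects the known object models that consistently explain the features detected in an image and ranks them by how many features they account for. The feature names and models are invented for illustration and are far simpler than the engine described in the paper.

```python
# Toy sketch of abductive feature explanation, much simpler than the
# engine described in the paper: pick the known objects whose feature
# models consistently explain the features detected in the image.
DETECTED = {"circle", "long_edge", "parallel_edges"}

OBJECT_MODELS = {                      # hypothetical 2-D object models
    "wheel":  {"circle"},
    "ruler":  {"long_edge", "parallel_edges"},
    "pencil": {"long_edge", "point"},
}

def abduce(detected, models):
    """Return objects whose models are consistent with (a subset of) the
    detected features, ranked by how many features they explain."""
    consistent = [(name, feats) for name, feats in models.items()
                  if feats <= detected]
    return sorted(consistent, key=lambda nf: -len(nf[1]))

for name, feats in abduce(DETECTED, OBJECT_MODELS):
    print(name, "explains", feats)   # ruler, then wheel; pencil is rejected
```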

