Do animacy effects persist in memory for context?

2018 ◽  
Vol 71 (4) ◽  
pp. 965-974 ◽  
Author(s):  
Margaux Gelin ◽  
Patrick Bonin ◽  
Alain Méot ◽  
Aurélia Bugaiska

The adaptive view of human memory assumes that animates (e.g., rabbit) are remembered better than inanimates (e.g., glass) because animates are ultimately more important for fitness than inanimates. Previous studies provided evidence for this view by showing that animates were recalled or recognized better than inanimates, but they did not assess memory for contextual details (e.g., where animates vs. inanimates occurred). In this study, we tested recollection of spatial information (Study 1) and temporal information (Study 2) associated with animate versus inanimate words. The findings showed that both types of contextual information were remembered better when they were related to animates than to inanimates. These findings provide further evidence for an ultimate explanation of animacy effects.

Author(s):  
Patrick Bonin ◽  
Margaux Gelin ◽  
Betty Laroche ◽  
Alain Méot ◽  
Aurélia Bugaiska

Abstract. Animates are better remembered than inanimates. According to the adaptive view of human memory (Nairne, 2010; Nairne & Pandeirada, 2010a, 2010b), this observation results from the fact that animates are more important for survival than inanimates. This ultimate explanation of animacy effects has to be complemented by proximate explanations. Moreover, animacy currently represents an uncontrolled word characteristic in most cognitive research (VanArsdall, Nairne, Pandeirada, & Cogdill, 2015). In four studies, we therefore investigated the “how” of animacy effects. Study 1 revealed that words denoting animates were recalled better than those referring to inanimates in an intentional memory task. Study 2 revealed that adding a concurrent memory load when processing words for the animacy dimension did not impede the animacy effect on recall rates. Study 3A was an exact replication of Study 2, and Study 3B used a higher concurrent memory load. In these two follow-up studies, animacy effects on recall performance were again not altered by a concurrent memory load. Finally, Study 4 showed that using interactive imagery to encode animate and inanimate words did not alter the recall rate of animate words but did increase the recall of inanimate words. Taken together, the findings suggest that imagery processes contribute to these effects.


1988 ◽  
Vol 53 (3) ◽  
pp. 316-327 ◽  
Author(s):  
Alan G. Kamhi ◽  
Hugh W. Catts ◽  
Daria Mauer ◽  
Kenn Apel ◽  
Betholyn F. Gentry

In the present study, we further examined (see Kamhi & Catts, 1986) the phonological processing abilities of language-impaired (LI) and reading-impaired (RI) children. We also evaluated these children's ability to process spatial information. Subjects were 10 LI, 10 RI, and 10 normal children between the ages of 6:8 and 8:10 years. Each subject was administered eight tasks: four word repetition tasks (monosyllabic, monosyllabic presented in noise, three-item, and multisyllabic), rapid naming, syllable segmentation, paper folding, and form completion. The normal children performed significantly better than both the LI and RI children on all but two tasks: syllable segmentation and repeating words presented in noise. The LI and RI children performed comparably on every task with the exception of the multisyllabic word repetition task. These findings were consistent with those from our previous study (Kamhi & Catts, 1986). The similarities and differences between LI and RI children are discussed.


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 708
Author(s):  
Wenbo Liu ◽  
Fei Yan ◽  
Jiyong Zhang ◽  
Tao Deng

The quality of detected lane lines has a great influence on the driving decisions of unmanned vehicles. However, during unmanned vehicle driving, changes in the driving scene cause considerable trouble for lane detection algorithms: unclear and occluded lane lines cannot be reliably detected by most existing lane detection models in many complex driving scenes, such as crowded scenes or poor lighting conditions. In view of this, we propose a robust lane detection model that uses vertical spatial features and contextual driving information in complex driving scenes. More effective use of contextual information and vertical spatial features enables the proposed model to detect unclear and occluded lane lines more robustly, through two designed blocks: a feature merging block and an information exchange block. The feature merging block provides additional contextual information to the subsequent network, enabling it to learn more feature details that help detect unclear lane lines. The information exchange block is a novel block that combines the advantages of spatial convolution and dilated convolution to enhance information transfer between pixels. The added spatial information allows the network to better detect occluded lane lines. Experimental results show that our proposed model detects lane lines more robustly and precisely than state-of-the-art models in a variety of complex driving scenarios.
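The abstract does not give implementation details, but the design choice behind the information exchange block, combining ordinary (spatial) convolution with dilated convolution, rests on the fact that dilation widens the receptive field without adding parameters. A minimal 1-D sketch (illustrative kernel and sizes, not the paper's architecture) makes this concrete:

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """1-D convolution with a dilation factor (zero padding at the edges).

    Dilation inserts gaps between kernel taps, enlarging the receptive
    field without adding parameters -- the property that dilated
    convolution contributes when combined with spatial convolution.
    """
    k = len(kernel)
    span = (k - 1) * dilation          # receptive field minus one
    pad = span // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        for j in range(k):
            out[i] += kernel[j] * xp[i + j * dilation]
    return out

# Same 3-tap averaging kernel: dilation 1 covers 3 samples, dilation 4 covers 9.
x = np.zeros(11)
x[5] = 1.0                             # unit impulse
k3 = np.ones(3) / 3
narrow = dilated_conv1d(x, k3, dilation=1)
wide = dilated_conv1d(x, k3, dilation=4)
print(np.count_nonzero(narrow), np.count_nonzero(wide))  # → 3 3 (same taps, wider spread)
```

Both outputs have three nonzero taps, but the dilated response spreads them over nine samples instead of three, so each output pixel sees a wider context at the same parameter cost.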


2021 ◽  
Vol 10 (3) ◽  
pp. 166
Author(s):  
Hartmut Müller ◽  
Marije Louwsma

The Covid-19 pandemic put a heavy burden on member states in the European Union. To govern the pandemic, access to reliable geo-information is key for monitoring the spatial distribution of the outbreak over time. This study aims to analyze the role of spatio-temporal information in governing the pandemic in the European Union and its member states. The European Nomenclature of Territorial Units for Statistics (NUTS) system and selected national dashboards from member states were assessed to analyze which spatio-temporal information was used, how the information was visualized, and whether this changed over the course of the pandemic. Initially, member states focused on their own jurisdictions by creating national dashboards to monitor the pandemic. Information between member states was not aligned. Producing reliable data and timely reporting was problematic, as was selecting indicators to monitor the spatial distribution and intensity of the outbreak. Over the course of the pandemic, with more knowledge about the virus and its characteristics, interventions of member states to govern the outbreak became better aligned at the European level. However, further integration and alignment of public health data, statistical data, and spatio-temporal data could provide even better information for governments and actors involved in managing the outbreak, at both the national and supra-national levels. The Infrastructure for Spatial Information in Europe (INSPIRE) initiative and the NUTS system provide a framework to guide future integration and extension of existing systems.


eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Avner Wallach ◽  
Erik Harvey-Girard ◽  
James Jaeyoon Jun ◽  
André Longtin ◽  
Len Maler

Learning the spatial organization of the environment is essential for most animals’ survival. This requires the animal to derive allocentric spatial information from egocentric sensory and motor experience. The neural mechanisms underlying this transformation are mostly unknown. We addressed this problem in electric fish, which can precisely navigate in complete darkness and whose brain circuitry is relatively simple. We conducted the first neural recordings in the preglomerular complex, the thalamic region exclusively connecting the optic tectum with the spatial learning circuits in the dorsolateral pallium. While tectal topographic information was mostly eliminated in preglomerular neurons, the time-intervals between object encounters were precisely encoded. We show that this reliable temporal information, combined with a speed signal, can permit accurate estimation of the distance between encounters, a necessary component of path-integration that enables computing allocentric spatial relations. Our results suggest that similar mechanisms are involved in sequential spatial learning in all vertebrates.
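The path-integration step described above, turning a precisely encoded time interval between object encounters into a distance, is just the integral of speed over that interval. A tiny sketch with hypothetical numbers (sampling rate, speed, and units are illustrative, not from the study):

```python
def distance_between_encounters(speeds, dt):
    """Integrate a sampled speed signal (cm/s) over time steps of dt (s).

    With the interval between two object encounters precisely encoded and
    a speed signal available, distance ≈ sum(speed * dt) over the interval;
    at constant speed this reduces to speed × interval.
    """
    return sum(v * dt for v in speeds)

# Fish swims at a steady 10 cm/s; encounters are 2.0 s apart (20 samples at 0.1 s).
d = distance_between_encounters([10.0] * 20, dt=0.1)
print(d)  # → 20.0 (cm)
```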


Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1298
Author(s):  
Nan Zhao ◽  
Dawei Lu ◽  
Kechen Hou ◽  
Meifei Chen ◽  
Xiangyu Wei ◽  
...  

With the increasing pressures of modern life, fatigue caused by high-pressure work deeply affects people and can even threaten their lives. In particular, fatigued driving has become a leading cause of traffic accidents and deaths. This paper investigates electroencephalography (EEG)-based fatigue detection for driving by mining the latent information in the spatial-temporal changes of the relations between EEG channels. First, the EEG data are partitioned into several segments, the covariance matrix of each segment is calculated, and these matrices are fed into a recurrent neural network to obtain high-level temporal information. Second, the covariance matrices of the whole signals are leveraged to extract two kinds of spatial features, which are fused with the temporal characteristics to obtain comprehensive spatial-temporal information. Experiments on an open benchmark showed that our method achieved an excellent classification accuracy of 93.834% and outperformed several recent methods. These experimental results indicate that our method offers better reliability and feasibility in detecting fatigued driving.
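The first step of the pipeline, segmenting the recording and computing one channel-covariance matrix per segment, can be sketched as follows. Channel count, segment count, and shapes are illustrative assumptions, not values from the paper:

```python
import numpy as np

def segment_covariances(eeg, n_segments):
    """Split a (channels x samples) EEG recording into equal-length segments
    and return one channel-covariance matrix per segment.

    The resulting sequence of covariance matrices captures how the
    relations between channels change over time; it is this sequence
    that would then be fed to a recurrent network.
    """
    channels, samples = eeg.shape
    seg_len = samples // n_segments
    covs = []
    for s in range(n_segments):
        seg = eeg[:, s * seg_len:(s + 1) * seg_len]
        covs.append(np.cov(seg))   # (channels x channels), symmetric
    return covs

rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, 1000))    # 4 channels, 1000 samples (synthetic)
covs = segment_covariances(eeg, n_segments=5)
print(len(covs), covs[0].shape)  # → 5 (4, 4)
```

`np.cov` treats each row as one variable (one channel), so each segment yields a symmetric channels × channels matrix summarizing the inter-channel relations within that window.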


2020 ◽  
Vol 39 (3) ◽  
pp. 3769-3781
Author(s):  
Zhisong Han ◽  
Yaling Liang ◽  
Zengqun Chen ◽  
Zhiheng Zhou

Video-based person re-identification aims to match videos of pedestrians captured by non-overlapping cameras. Video provides both spatial and temporal information. However, most existing methods do not combine these two types of information well and ignore the fact that they are of different importance in most cases. To address these issues, we propose a two-stream network with a joint distance metric for measuring the similarity of two videos. The proposed two-stream network has several appealing properties. First, the spatial stream focuses on multiple parts of a person and outputs robust local spatial features. Second, a lightweight and effective temporal information extraction block is introduced for video-based person re-identification. In the inference stage, the distance between two videos is measured by the weighted sum of the spatial distance and the temporal distance. We conduct extensive experiments on four public datasets, i.e., MARS, PRID2011, iLIDS-VID, and DukeMTMC-VideoReID, to show that our proposed approach outperforms existing methods in video-based person re-ID.
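The joint distance metric used at inference is simply a weighted combination of the two streams' distances. A minimal sketch, assuming Euclidean distance on the stream features and an illustrative weight (the paper does not specify either here):

```python
import numpy as np

def joint_distance(spatial_a, spatial_b, temporal_a, temporal_b, w=0.5):
    """Weighted sum of spatial-stream and temporal-stream distances.

    `w` trades off the two streams, reflecting that spatial and temporal
    information are of different importance in most cases. Euclidean
    distance and the weight value are illustrative assumptions.
    """
    d_spatial = np.linalg.norm(spatial_a - spatial_b)
    d_temporal = np.linalg.norm(temporal_a - temporal_b)
    return w * d_spatial + (1.0 - w) * d_temporal

sa, sb = np.array([1.0, 0.0]), np.array([0.0, 0.0])   # spatial features of two videos
ta, tb = np.array([0.0, 3.0]), np.array([0.0, 0.0])   # temporal features of two videos
print(joint_distance(sa, sb, ta, tb, w=0.25))  # → 0.25*1 + 0.75*3 = 2.5
```

Ranking gallery videos by this combined distance lets the weight emphasize whichever stream is more discriminative for the deployment at hand.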


1981 ◽  
Vol 75 (2) ◽  
pp. 46-49 ◽  
Author(s):  
Janet F. Fletcher

Theories of spatial representation in blind people have focused on the type of representation of which they, as a group, are capable. This approach overlooks an important issue, namely, the differences among individual blind people and the effects that these differences have on the way spatial information is represented. Data from another article by the author on the same study of spatial representation in blind children were subjected to two step-wise regression analyses to determine the relationships between several subject-related variables and responses to “map” (cognitive map) and “route” (sequential memory) questions about the position of furniture in a recently explored room. The independent variables accounted for 70 percent of the variance on map questions but only 46 percent of the variance on route questions. On map questions, general intellectual ability correlated positively with performance (p < .01), children with visual acuity better than light perception in the first 3 years of life performed better than those with less early vision (p < .05), and children who became blind from retrolental fibroplasia performed more poorly than those whose blindness was due to other causes (p < .05). Fewer independent variables contributed to the variance in performance on route questions. Again children with visual acuity better than light perception in their first 3 years performed better than those with less early vision.


2013 ◽  
Vol 62 (1) ◽  
pp. 33-49 ◽  
Author(s):  
Pattathal V. Arun ◽  
Sunil K. Katiyar

Abstract. Image registration is a key component of many image processing operations that involve the analysis of different image data sets. Automatic image registration has seen the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape as well as contextual information has limited the attainable accuracy. In this paper, we propose a framework for accurate feature shape modeling and adaptive resampling using advanced techniques such as vector machines, cellular neural networks (CNNs), SIFT, coresets, and cellular automata. The CNN was found to be effective in improving the feature matching and resampling stages of registration, and the complexity of the approach was considerably reduced using coreset optimization. The salient features of this work are CNN-based SIFT feature point optimization, adaptive resampling, and intelligent object modeling. The developed methodology was compared with contemporary methods using different statistical measures. Investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically used spectral and spatial information to represent contextual knowledge via a CNN-Prolog approach, and the methodology also proved effective in providing intelligent interpretation and adaptive resampling.

