Visual and Verbal Codes: Spatial Information Survives the Icon

1972 ◽  
Vol 24 (4) ◽  
pp. 439-447 ◽  
Author(s):  
Leslie Henderson

Three experiments are reported to demonstrate partial independence of identity and spatial position information concerning visually presented symbols. Experiment I shows that performance on these forms of information improves at different rates as a function of exposure duration. Experiment II shows that performance on one can be traded against the other. Experiment III demonstrates partial statistical independence of item and position responses and shows that increases in the duration and delay of the probe facilitate performance. Some implications of these experiments for theories involving mandatory verbal encoding of visual symbol-arrays are discussed. A model is proposed comprising visual and name stores with different acquisition rates and capacities. Both are indexed by identity but the visual code can also be interrogated by spatial cues.

1974 ◽  
Vol 26 (2) ◽  
pp. 196-205 ◽  
Author(s):  
A. H. Winefield

The performance of rats over 12 brightness discrimination reversals was studied under two experimental conditions. Under one condition all visual cues external to the apparatus were eliminated so that only the relative positions of the discriminanda could serve as a visual cue to spatial position. Under the other condition all visual cues to position were eliminated. Under the former condition performance deteriorated with successive reversals but under the latter condition performance improved. Implications of these results for theories of successive reversal improvement were discussed and two possible explanations were suggested.


2021 ◽  
Vol 13 (22) ◽  
pp. 4533
Author(s):  
Kai Hu ◽  
Dongsheng Zhang ◽  
Min Xia

Cloud detection is a key step in the preprocessing of optical satellite remote sensing images. In the existing literature, cloud detection methods are roughly divided into threshold methods and deep-learning methods. Most traditional threshold methods are based on the spectral characteristics of clouds, so they easily lose spatial location information in high-reflectance areas, resulting in misclassification. In addition, owing to a lack of generalization, traditional deep-learning networks also easily lose details and spatial information when applied directly to cloud detection. To solve these problems, we propose a deep-learning model, Cloud Detection UNet (CDUNet), for cloud detection. The network is designed to refine the division boundary of the cloud layer and capture its spatial position information. In the proposed model, we introduce a High-frequency Feature Extractor (HFE) and a Multiscale Convolution (MSC) to refine the cloud boundary and predict fragmented clouds. Moreover, to improve the accuracy of thin-cloud detection, a Spatial Prior Self-Attention (SPSA) mechanism is introduced to establish cloud spatial position information. Additionally, a dual-attention mechanism is proposed to reduce the proportion of redundant information in the model and improve its overall performance. Experimental results show that the model copes with complex cloud-cover scenes and performs well on cloud datasets and the SPARCS dataset; its segmentation accuracy surpasses existing methods, which is of great significance for cloud-detection work.
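The abstract names a Multiscale Convolution (MSC) component but gives no implementation details. As an illustrative sketch only (not the authors' code; all names here are hypothetical), the NumPy snippet below shows the core idea behind multiscale convolution: filter the same image at several kernel sizes and stack the responses as channels, so both fine boundaries and larger context are represented.

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive single-channel 2D correlation with zero 'same' padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def multiscale_features(img, sizes=(1, 3, 5)):
    """Run mean filters at several kernel sizes and stack the
    responses as channels, mimicking a multiscale convolution block."""
    maps = [conv2d_same(img, np.ones((s, s)) / (s * s)) for s in sizes]
    return np.stack(maps, axis=0)  # shape: (num_scales, H, W)

img = np.random.rand(16, 16)
feats = multiscale_features(img)
print(feats.shape)  # (3, 16, 16)
```

In a real network the fixed mean filters would be learned convolution kernels and the stacked maps would feed the next layer; the sketch only demonstrates the multiscale stacking itself.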


Genes ◽  
2021 ◽  
Vol 12 (9) ◽  
pp. 1385
Author(s):  
Mingyang Zhang ◽  
Yujia Hu ◽  
Min Zhu

Enhancer-promoter interactions (EPIs) play a significant role in the regulation of gene transcription. However, enhancers do not necessarily interact with the closest promoters; they may reach distant promoters via chromatin looping. Considering the spatial position relationship between enhancers and their target promoters is therefore important for predicting EPIs. Most existing methods consider only sequence information and ignore spatial information, and recent computational methods lack generalization capability across different cell line datasets. In this paper, we propose EPIsHilbert, which uses Hilbert curve encoding and two transfer learning approaches. Hilbert curve encoding preserves the spatial position information between enhancers and promoters. Additionally, we use visualization techniques to explore important sequence fragments that have a high impact on EPIs and the spatial relationships between them. Transfer learning improves prediction performance across cell lines; to further demonstrate its effectiveness, we analyze the sequence coincidence of different cell lines. Experimental results demonstrate that EPIsHilbert is a state-of-the-art model superior to most existing methods, both within specific cell lines and across cell lines.
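Hilbert curve encoding, which the abstract relies on, is a standard construction: it maps a 1D index to 2D coordinates so that nearby sequence positions land on nearby grid cells. The sketch below (an illustration of the general technique, not the authors' implementation; the toy sequence and names are hypothetical) uses the classic iterative index-to-coordinate conversion.

```python
def d2xy(n, d):
    """Map distance d along a Hilbert curve filling an n x n grid
    (n a power of two) to (x, y) grid coordinates."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:            # rotate the quadrant when needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Place a toy sequence onto an 8x8 grid along the curve: neighboring
# sequence positions land on neighboring grid cells, which is the
# locality property that makes the encoding useful for 2D models.
seq = "ACGT" * 16                      # 64 symbols -> 8x8 image
grid = [[None] * 8 for _ in range(8)]
for d, base in enumerate(seq):
    x, y = d2xy(8, d)
    grid[y][x] = base
```

Because consecutive indices always map to adjacent cells, convolutional filters over the resulting image see windows of sequence context, which is the motivation for this encoding in sequence models.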


Author(s):  
Odo Diekmann ◽  
Hans Heesterbeek ◽  
Tom Britton

This chapter considers the case of individuals who differ from each other with respect to traits that are relevant for the transmission of an infectious agent. How do we describe the spread of the agent? How do we quantify the infectivity? What happens in the initial phase? Can we characterize the final size? Examples of the “traits” we have in mind are age, sex, sexual activity level, sexual disposition, and spatial position. So a trait may be static or dynamic, it may be discrete or continuous. Traits are considered as i-states, where “i” means “individual” and where “state” signifies that the current value together with the environmental input in the intervening period completely determines future behavior. The heterogeneity of individuals is classified in terms of a component, h-state, of their i-state, while the other component, d-state, summarizes all relevant information about output of infectious material.


2021 ◽  
Vol 92 (12) ◽  
pp. 956-961
Author(s):  
Hector D. Garcia

INTRODUCTION: The Spacecraft Maximum Allowable Concentrations (SMACs) for C2-C9 alkanes set by NASA in 2008 under the guidance and approval of the National Research Council specifically excluded SMACs for n-hexane. Unlike other C2-C9 alkanes, n-hexane can cause polyneuropathy after metabolism in humans or rodents and so requires more stringent SMACs than the other members of this group do. This document reviews the relevant published studies of n-hexane toxicity to develop exposure duration-specific SMACs for n-hexane of 200 ppm for 1 hour, 30 ppm for 24 hours, and 2.4 ppm for 7 days, 30 days, 180 days, and 1000 days.

Garcia HD. Acceptable limits for n-hexane in spacecraft atmospheres. Aerosp Med Hum Perform. 2021; 92(12):956–961.


2020 ◽  
Vol 66 (6) ◽  
pp. 643-648
Author(s):  
Li-Fang Gao ◽  
Wen Zhang ◽  
Hai-Yang Zhang ◽  
Zhen-Qin Zhu ◽  
Xiao-Dan Zhang ◽  
...  

Abstract In altricial birds, which cues parents use to recognize their offspring, and when they switch between cues during reproduction, has not been well determined. In this study, we address this question in a Tibetan population of the azure-winged magpie Cyanopica cyanus by examining parents' dependence on a nest's spatial position in offspring recognition. During the egg and nestling phases, azure-winged magpie nests were translocated to new positions at various distances from their original sites, and parental responses to the translocated nests were investigated. Our findings show that a nest's spatial position is not connected with the survival of its young but might be used as a cue in parental offspring recognition. When nests were translocated to a new position within a certain distance, parents recognized their nests and returned to resume their parenting behaviors. Parental dependence on the nest's spatial position in offspring recognition is higher during the egg phase than during the nestling phase, and it decreases as the nestlings grow. After nestlings reach a certain age, the nest's spatial position is no longer used by parents as the sole cue for offspring recognition. These findings suggest that azure-winged magpies switch cues for offspring recognition across the different stages of reproduction. After parent-offspring communication has been established, the offspring's phenotypic traits may become a more reliable cue than the nest's spatial position in offspring recognition.


1997 ◽  
Vol 8 (3) ◽  
pp. 224-230 ◽  
Author(s):  
Rick O. Gilmore ◽  
Mark H. Johnson

The extent to which infants combine visual (i.e., retinal position) and nonvisual (eye or head position) spatial information in planning saccades relates to the issue of what spatial frame or frames of reference influence early visually guided action. We explored this question by testing infants from 4 to 6 months of age on the double-step saccade paradigm, which has shown that adults combine visual and eye position information into an egocentric (head- or trunk-centered) representation of saccade target locations. In contrast, our results imply that infants depend on a simple retinocentric representation at age 4 months, but by 6 months use egocentric representations more often to control saccade planning. Shifts in the representation of visual space for this simple sensorimotor behavior may index maturation in cortical circuitry devoted to visual spatial processing in general.


2003 ◽  
Vol 27 (3) ◽  
pp. 193-200 ◽  
Author(s):  
Amedeo D’Angiulli ◽  
Stefania Maggi

We studied the development of spontaneous tactile drawing in three 12-year-old children with congenital total blindness and no previous drawing tuition. Over a period of 9 months, starting from an initial phase in which they were taught to draw tangible straight and curved raised lines, the three blind children went on to make spontaneous raised outlines representing edges, surfaces of objects, vantage point, and motion. The corpus of drawings produced by these children shows that several aspects of outline pictures can be implemented through touch. The perceptual principles represented in these drawings are comparable to those commonly found in sighted children's drawings. On the one hand, this convergence indicates similarities in the way vision and touch mediate the acquisition and conceptualisation of spatial information from objects and the environment. On the other hand, it reflects the influence of cross-modal plasticity typically associated with early or congenital blindness. This study suggests that drawing development in general does not depend on learning pictorial conventions; rather, it seems driven by natural generativity based on children's knowledge of space and perceptual principles.


2017 ◽  
Vol 114 (5) ◽  
pp. E717-E726 ◽  
Author(s):  
Jeremy S. Rabinowitz ◽  
Aaron M. Robitaille ◽  
Yuliang Wang ◽  
Catherine A. Ray ◽  
Ryan Thummel ◽  
...  

Regeneration requires cells to regulate proliferation and patterning according to their spatial position. Positional memory is a property that enables regenerating cells to recall spatial information from the uninjured tissue. Positional memory is hypothesized to rely on gradients of molecules, few of which have been identified. Here, we quantified the global abundance of transcripts, proteins, and metabolites along the proximodistal axis of caudal fins of uninjured and regenerating adult zebrafish. Using this approach, we uncovered complex overlapping expression patterns for hundreds of molecules involved in diverse cellular functions, including development, bioelectric signaling, and amino acid and lipid metabolism. Moreover, 32 genes differentially expressed at the RNA level had concomitant differential expression of the encoded proteins. Thus, the identification of proximodistal differences in levels of RNAs, proteins, and metabolites will facilitate future functional studies of positional memory during appendage regeneration.

