Evaluation and Recall of Valenced Stimuli as a Function of Spatial Positions

2021
Author(s): Gary C. H. Hewson

Meier and Robinson (2004) had subjects identify pleasant and unpleasant words presented individually either at the top or bottom of a computer screen. Subjects identified pleasant words faster when they appeared at the top of the screen and unpleasant words faster when they appeared at the bottom. The authors discussed this finding in terms of metaphors, noting that in language good things are often mapped upwards (e.g., “things are looking up for me”) and bad things downwards (e.g., “I’m down in the dumps”). The aim of the present study was to investigate whether this relationship between affective stimuli and visual space arises automatically (implicitly) or whether explicit processing of affective stimuli is required. A second aim was to investigate whether memory for affective words is influenced by spatial location. In Experiments 1 and 2, subjects were shown pleasant and unpleasant words presented either at the top or bottom of a computer screen. Half the words were coloured green and half purple, and subjects had to identify the colour as quickly as possible. No significant interaction between stimulus valence and spatial position was found, nor did recall interact with spatial position. In Experiment 3, subjects had to explicitly identify the valence of words shown either at the top or bottom of the screen. It was predicted that positive stimuli would be evaluated faster and recalled more accurately when shown at the top of the screen, with the opposite holding for negative stimuli. Participants were indeed quicker to identify positive words at the top of the screen, but recall did not interact with spatial position. Overall, the results were broadly supportive of the hypothesis for explicit evaluation but not for implicit evaluation or recall.
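The key test in this design is the valence × position interaction on reaction times. As a concrete illustration of how such a test could be run, here is a minimal sketch with simulated data (a hypothetical example, not the study's analysis code; it ignores the repeated-measures structure for brevity):

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical reaction-time data for the 2 (valence) x 2 (position) design
rng = np.random.default_rng(42)
trials = pd.DataFrame({
    "valence": np.repeat(["pleasant", "unpleasant"], 200),
    "position": np.tile(["top", "bottom"], 200),
})
# Build in a small crossover: pleasant faster at top, unpleasant at bottom
congruent = ((trials.valence == "pleasant") & (trials.position == "top")) | \
            ((trials.valence == "unpleasant") & (trials.position == "bottom"))
trials["rt_ms"] = rng.normal(600, 50, len(trials)) - 20 * congruent

model = ols("rt_ms ~ valence * position", data=trials).fit()
print(anova_lm(model, typ=2))  # the valence:position row is the key test
```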


2019
Author(s): Chris Robert Harrison Brown

Attention has long been characterised within prominent models as reflecting a competition between goal-driven and stimulus-driven processes. It remains unclear, however, how involuntary attentional capture by affective stimuli, such as threat-laden content, fits into such models. While these effects were traditionally held to reflect stimulus-driven processes, recent research has increasingly implicated a critical role for goal-driven processes. Here we test an alternative goal-driven account of involuntary attentional capture by threat, using an experimental manipulation of goal-driven attention. To this end we combined the classic ‘contingent capture’ and ‘emotion-induced blink’ (EIB) paradigms in an RSVP task with either positive or threatening target search goals. Across six experiments, positive and threat distractors were presented in peripheral, parafoveal, and central locations. Across all distractor locations, we found that involuntary attentional capture by irrelevant threatening distractors could be induced via the adoption of a search goal for a threatening category; adopting a goal for a positive category conversely led to capture only by positive stimuli. Our findings provide direct experimental evidence for a causal role of voluntary goals in involuntary capture by irrelevant threat stimuli and hence demonstrate the plausibility of a top-down account of this phenomenon. We discuss the implications of these findings in relation to current cognitive models of attention and clinical disorders.


2021
pp. 216770262110380
Author(s): Elizabeth C. Wade, Rivka T. Cohen, Paddy Loftus, Ayelet Meron Ruscio

Perseverative thinking (PT), or repetitive negative thinking, has historically been measured using global self-report scales. New methods of assessment are needed to advance understanding of this inherently temporal process. We developed an intensive longitudinal method for assessing PT. A mixed sample of 77 individuals ranging widely in trait PT, including persons with PT-related disorders (generalized anxiety disorder, major depression) and persons without psychopathology, used a joystick to provide continuous ratings of thought valence and intensity following exposure to scenarios of differing valence. Joystick responses were robustly predicted by trait PT, clinical status, and stimulus valence. Higher trait perseverators exhibited more extreme joystick values overall, greater stability in values following threatening and ambiguous stimuli, weaker stability in values following positive stimuli, and greater inertia in values following ambiguous stimuli. The joystick method is a promising measure with the potential to shed new light on the dynamics and precipitants of perseverative thinking.
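To make the dynamics measures concrete: "inertia" is commonly operationalized as lag-1 autocorrelation and "stability" as low moment-to-moment change. A minimal sketch under those assumptions (our own illustration, not the authors' analysis code):

```python
import numpy as np

def inertia(series: np.ndarray) -> float:
    """Lag-1 autocorrelation of a rating time series: how strongly the
    current joystick value predicts the next sample (one common
    operationalization of 'inertia')."""
    x = series - series.mean()
    return float(np.sum(x[:-1] * x[1:]) / np.sum(x * x))

def stability(series: np.ndarray) -> float:
    """Inverse of the mean absolute successive difference; larger values
    mean the rating settles and stays put (one proxy for 'stability')."""
    mssd = np.mean(np.abs(np.diff(series)))
    return float(1.0 / (mssd + 1e-9))

# Hypothetical trial: 10 s of joystick valence ratings sampled at 20 Hz
rng = np.random.default_rng(0)
trial = np.cumsum(rng.normal(0, 0.05, 200))  # random-walk-like ratings
print(f"inertia={inertia(trial):.3f}, stability={stability(trial):.3f}")
```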


2021
Vol 13 (22)
pp. 4533
Author(s): Kai Hu, Dongsheng Zhang, Min Xia

Cloud detection is a key step in the preprocessing of optical satellite remote sensing images. In the existing literature, cloud detection methods fall roughly into threshold methods and deep-learning methods. Most traditional threshold methods are based on the spectral characteristics of clouds, so they easily lose spatial location information in high-reflection areas, resulting in misclassification. In addition, owing to limited generalization, conventional deep-learning networks also tend to lose detail and spatial information when applied directly to cloud detection. To address these problems, we propose a deep-learning model, Cloud Detection UNet (CDUNet), for cloud detection. The network is designed to refine the division boundary of the cloud layer and to capture its spatial position information. In the proposed model, we introduce a High-frequency Feature Extractor (HFE) and a Multiscale Convolution (MSC) to refine the cloud boundary and predict fragmented clouds. Moreover, to improve the accuracy of thin-cloud detection, a Spatial Prior Self-Attention (SPSA) mechanism is introduced to establish cloud spatial position information. Additionally, a dual-attention mechanism is proposed to reduce the proportion of redundant information in the model and improve overall performance. The experimental results showed that the model copes with complex cloud-cover scenes and performs strongly on cloud datasets and the SPARCS dataset; its segmentation accuracy is better than existing methods, which is of great significance for cloud-detection-related work.
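As an illustration of the kind of module the abstract describes, here is a generic multiscale convolution block of the sort an "MSC" component typically denotes (a hedged PyTorch sketch; the branch kernels, layer sizes, and class name are our assumptions, not the published CDUNet code):

```python
import torch
import torch.nn as nn

class MultiscaleConv(nn.Module):
    """Parallel branches with different receptive fields, fused by a
    1x1 convolution. A plausible sketch of a multiscale-convolution
    block for capturing both fine (fragmented) and broad cloud shapes."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
            for k in (1, 3, 5)  # odd kernels preserve spatial size
        ])
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * out_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# Hypothetical usage on a 4-band satellite image patch
x = torch.randn(1, 4, 128, 128)
print(MultiscaleConv(4, 32)(x).shape)  # torch.Size([1, 32, 128, 128])
```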


Author(s): Diana Sarita Hamburger

This chapter discusses the importance of Information Technology to spatial location and city competitiveness. Understanding spatial location in urban networks can be informed by economic geography concepts, especially those with insights into how urban areas form and develop. Relative distance to markets and the flow of goods and services influence the spatial position of cities, shape how urban settlements evolve, and help explain their distribution. Concepts like accessibility and centrality, and strategies for measuring them, can be used to determine a good place to locate a business or transportation hub. This chapter makes a case for treating information utilities, especially telecommunication networks, as an important part of economic geography, and ultimately of the growth and competitiveness of cities.
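Centrality measures of the kind the chapter invokes are straightforward to compute on a network. A toy sketch (the city network and distances are hypothetical; closeness centrality is used here as one simple accessibility proxy):

```python
import networkx as nx

# Toy intercity network: edges weighted by travel distance (hypothetical)
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 120), ("B", "C", 80), ("B", "D", 200),
    ("C", "D", 60),  ("D", "E", 150),
])
# Closeness centrality: nodes with short total distance to all others
# score higher -- one simple proxy for locational accessibility.
centrality = nx.closeness_centrality(G, distance="weight")
print(sorted(centrality.items(), key=lambda kv: -kv[1]))
print("most accessible node:", max(centrality, key=centrality.get))
```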


2017
Vol 34
Author(s): Reece Mazade, Jose Manuel Alonso

Visual information reaches the cerebral cortex through a major thalamocortical pathway that connects the lateral geniculate nucleus (LGN) of the thalamus with the primary visual area of the cortex (area V1). In humans, ∼3.4 million afferents from the LGN are distributed within a V1 surface of ∼2400 mm², an afferent number that is reduced by half in the macaque and by more than two orders of magnitude in the mouse. Thalamocortical afferents are sorted in visual cortex based on the spatial position of their receptive fields to form a map of visual space. The visual resolution within this map is strongly correlated with the total number of thalamic afferents that V1 receives and the area available to sort them. The ∼20,000 afferents of the mouse are sorted only by spatial position because they have to cover a large visual field (∼300 deg) within just 4 mm² of V1 area. By contrast, the ∼500,000 afferents of the cat are also sorted by eye input and light/dark polarity because they cover a smaller visual field (∼200 deg) within a much larger V1 area (∼400 mm²), a sorting principle that is likely to apply also to macaques and humans. The increased precision of thalamic sorting allows building multiple copies of the V1 visual map for left/right eyes and light/dark polarities, which become interlaced to keep neurons representing the same visual point close together. In turn, this interlaced arrangement makes cortical neurons with different preferences for stimulus orientation rotate around single cortical points, forming a pinwheel pattern that allows more efficient processing of objects and visual textures.
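The afferent-density comparison can be made explicit with the figures quoted above (a worked illustration using only those numbers):

```python
# Worked arithmetic from the figures quoted in the abstract above.
data = {  # species: (LGN afferents to V1, V1 surface area in mm^2)
    "human": (3.4e6, 2400),
    "cat":   (5.0e5, 400),
    "mouse": (2.0e4, 4),
}
for species, (afferents, area) in data.items():
    print(f"{species}: {afferents / area:,.0f} afferents per mm^2 of V1")
# human: ~1,417/mm^2; cat: ~1,250/mm^2; mouse: ~5,000/mm^2 -- the mouse
# packs the densest map yet must cover the largest field (~300 deg),
# so its afferents are sorted by spatial position alone.
```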


2021
Author(s): Daniel Birman, Justin L. Gardner

Human observers use cues to guide visual attention to the most behaviorally relevant parts of the visual world. Cues are often separated into two forms: those that rely on spatial location and those that use features, such as motion or color. These forms of cueing are known to rely on different populations of neurons. Despite these differences in neural implementation, attention may rely on shared computational principles, enhancing and selecting sensory representations in a similar manner for all types of cues. Here we examine whether evidence for shared computational mechanisms can be obtained from how attentional cues enhance performance in estimation tasks. In our tasks, observers were cued either by spatial location or by feature to two of four dot patches. They then estimated the color or motion direction of one of the cued patches, or averaged them. In all cases we found that cueing improved performance. We decomposed the effects of the cues on behavior into model parameters that separated sensitivity enhancement from sensory selection and found that both were important to explain improved performance. A model that shared parameters across forms of cueing was favored by our analysis, suggesting that observers have equal sensitivity and likelihood of making selection errors whether cued by location or feature. Our perceptual data support theories in which a shared computational mechanism is re-used by all forms of attention.

Significance Statement: Cues about important features or locations in visual space are similar from the perspective of visual cortex: both allow relevant sensory representations to be enhanced while irrelevant ones are ignored. Here we studied these attentional cues in an estimation task designed to separate different computational mechanisms of attention. Despite cueing observers in three different ways, to spatial locations, colors, or motion directions, we found that all cues led to similar perceptual improvements. Our results provide behavioral evidence supporting the idea that all forms of attention can be reconciled as a single repeated computational motif, re-implemented by the brain in different neural architectures for many different visual features.
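One standard way to decompose estimation performance into sensitivity versus selection, as described above, is a mixture model over circular report errors: a von Mises component for reports of the probed stimulus and a swap component for reports of the other cued stimulus. A minimal sketch under that assumption (simulated data; function and parameter names are ours, not the authors'):

```python
import numpy as np
from scipy.stats import vonmises
from scipy.optimize import minimize

def neg_log_lik(params, errors, swap_offsets):
    """Mixture over circular estimation errors (radians): with prob
    (1 - p_swap) report the target with sensitivity kappa, with prob
    p_swap report the other cued patch (a selection error)."""
    kappa, p_swap = params
    lik = ((1 - p_swap) * vonmises.pdf(errors, kappa)
           + p_swap * vonmises.pdf(errors - swap_offsets, kappa))
    return -np.sum(np.log(lik + 1e-12))

# Hypothetical data: errors relative to the probed patch, plus the
# angular offset of the non-probed (potentially swapped) patch.
rng = np.random.default_rng(1)
n = 500
swap_offsets = rng.uniform(-np.pi, np.pi, n)
is_swap = rng.random(n) < 0.15                  # true selection-error rate
errors = vonmises.rvs(8.0, size=n) + is_swap * swap_offsets
errors = np.angle(np.exp(1j * errors))          # wrap to (-pi, pi]

fit = minimize(neg_log_lik, x0=[2.0, 0.1], args=(errors, swap_offsets),
               bounds=[(0.1, 100), (0.0, 0.5)])
print(f"kappa={fit.x[0]:.2f} (sensitivity), p_swap={fit.x[1]:.3f}")
```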


1974
Vol 26 (3)
pp. 503-513
Author(s): Graham J. Hitch

Two experiments examined the probed recall of visually presented letter sequences in which the items appeared at different spatial locations. Three types of probe were compared: (1) spatial position; (2) temporal association; and (3) combined position and association. In the first experiment, in which the spatial locations of the items were correlated with their temporal order, spatial probes were more effective than temporal association probes. In the second experiment spatial location was uncorrelated with temporal order, and spatial probes were less effective than temporal association probes. Regardless of the probe, errors tended to be items presented close in time to correct responses: spatial proximity was far less important. The results are discussed in terms of a storage system in which items and their spatial locations are organized within a temporal format. Both experiments showed superior combined probe performance, demonstrating that short-term retrieval is not limited to the use of one type of cue at a time. Secondary aspects of the results showed additionally that subjects can “edit” their responses to avoid making obvious mistakes, and that spatial location can be partially forgotten rather than being completely lost.


2011
Vol 17 (2)
pp. 289-294
Author(s): Vincent Dru, Joël Cretenet

In recent work, we showed that the judgment of affective stimuli is influenced by the degree of congruence between apparently innate hemispheric dispositions (left hemisphere positive and approach, right hemisphere negative and avoidance) and the type of movement produced by the contralateral arm (flexion-approach; extension-avoidance). Incongruent movements (e.g., right arm extension) were associated with attenuation of affective valuations. In the present study, we replicated these results. We also assessed confidence in judgments as a function of stimulus valence and congruence and determined that confidence is maximal with congruent movements and highly positive or negative stimuli, suggesting that congruence effects on affective valuation could be mediated by confidence effects. However, in a second experiment, involving judgments regarding segmented lines, congruence effects were observed only for bisected lines, for which confidence was lowest. Thus, confidence does not provide a unifying explanation for congruence effects in the performance of these two tasks. (JINS, 2011, 17, 289–294)


Perception
1997
Vol 26 (1_suppl)
pp. 220-220
Author(s): C Lafosse, M F Westerhuis, E Vandenbussche

Visual attention can be allocated to a location in visual space and/or to a representation of an object in the visual field, independently of their spatial location. In Posner's cueing paradigm, it is assumed that attention is moved to and then engaged at a cued location. If the target appears at an uncued location, attention first has to be disengaged from the cued location before moving to and engaging the target location. On the basis of this paradigm, we designed an experiment to measure the disengaging of attention from objects, independently of location. For this purpose we used the bistable Necker cube, which can be perceived as two different object configurations depending on the position of the front side of the cube (lower left or upper right). The subject was instructed to react on perceiving the Necker cube as a previously presented model configuration, i.e., a stable cube. Each condition started with a bistable cube (with equal luminance of the ribs) that gradually evolved into the model configuration through manipulation of the luminance of the ribs. Prior to this, the subject was cued by a cube similar to the model in the valid condition and by a dissimilar cube in the invalid condition. The results showed a significant difference (F(1,10) = 7.35; p < 0.05) between the valid and invalid cue conditions, indicating a significant cost for the invalid cue condition. This effect is in accord with the well-known set effect found previously with bistable figures. The difference between the valid and invalid cue conditions is interpreted as disengaging attention from an object. Thus, object-based components of attention can be examined with paradigms similar to Posner's paradigm for location-based attention.

