Within- and Between-Hands Simon Effects for Object Locations and Graspable Components

2007 ◽  
Author(s):  
Dongbin Cho ◽  
Robert W. Proctor


Author(s):  
Gabriel H Greve ◽  
Kenneth M Hopkinson ◽  
Gary B Lamont

The increasingly congested exosphere contains ever more satellites and debris, raising the potential for destructive collisions. The Special Perturbations (SP) Tasker algorithm currently assigns tracking tasks to ground sensors; accurate object locations help avoid collisions. However, the SP Tasker ignores priority, the importance factor assigned to each satellite. This article introduces the Evolutionary Algorithm Tasker (EAT) to solve the Satellite Sensor Allocation Problem (SSAP). The EAT is a hybrid of Evolution Strategy and Genetic Algorithm concepts, combining techniques that explore the solution space with techniques that exploit the best solutions found. This approach goes beyond the current method, which does not account for priority, and beyond other methods from the literature, which have only been applied to small-scale simulations. The SSAP model implementation extends Multi-Objective Evolutionary Algorithms (MOEAs) from the literature while accounting for priorities. Multiple real-world factors are considered, including each sensor’s field of view, the orbital opportunities to track a satellite, each sensor’s capacity, and the relative priority of the satellites. The single-objective EAT is statistically compared to the SP Tasker algorithm. Simulations show that both the EAT and MOEA approaches effectively use priority in the core tasking algorithms to ensure that higher-priority satellites are tracked.
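The priority-aware evolutionary tasking described above can be illustrated with a minimal sketch. This is not the authors' EAT; the problem data (priorities, sensor capacities, visibility sets) and the simple truncation-selection-plus-mutation loop are hypothetical stand-ins for the hybrid exploration/exploitation scheme the abstract names:

```python
import random

random.seed(0)

# Hypothetical toy instance: satellite priorities, sensor capacities,
# and which sensors have a given satellite in their field of view.
PRIORITIES = [5, 3, 3, 1, 1, 1]          # higher = more important to track
SENSOR_CAPACITY = {"s1": 2, "s2": 2}     # max satellites per sensor
VISIBLE = {
    0: ["s1"], 1: ["s1", "s2"], 2: ["s2"],
    3: ["s1"], 4: ["s2"], 5: ["s1", "s2"],
}

def random_genome():
    # One gene per satellite: an eligible sensor, or None (left untracked).
    return [random.choice(VISIBLE[i] + [None]) for i in range(len(PRIORITIES))]

def fitness(genome):
    # Priority-weighted count of tracked satellites; capacity violations score 0.
    load = {s: 0 for s in SENSOR_CAPACITY}
    score = 0
    for sat, sensor in enumerate(genome):
        if sensor is not None:
            load[sensor] += 1
            score += PRIORITIES[sat]
    if any(load[s] > SENSOR_CAPACITY[s] for s in load):
        return 0
    return score

def mutate(genome):
    # Reassign one satellite to another eligible sensor (exploration).
    g = genome[:]
    i = random.randrange(len(g))
    g[i] = random.choice(VISIBLE[i] + [None])
    return g

def evolve(generations=200, pop_size=20):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]   # keep the best (exploitation)
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the fitness function weights each tracked satellite by its priority, assignments that cover high-priority satellites dominate the population, mirroring the abstract's claim that priority is used in the core tasking loop.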


2002 ◽  
Vol 5 (1) ◽  
pp. 3-26 ◽  
Author(s):  
Karen Emmorey ◽  
Barbara Tversky

Two studies investigated the ramifications of encoding spatial locations via signing space for perspective choice in American Sign Language. Deaf signers (“speakers”) described the location of one of two identical objects either to a present addressee or to a remote addressee via a video monitor. Unlike what has been found for English speakers, ASL signers did not adopt their addressee’s spatial perspective when describing locations in a jointly viewed present environment; rather, they produced spatial descriptions utilizing shared space, in which classifier and deictic signs were articulated at locations in signing space that schematically mapped to both the speaker’s and the addressee’s view of object locations within the (imagined) environment. When the speaker and addressee were not jointly viewing the environment, speakers either adopted their addressee’s perspective via referential shift (i.e., locations in signing space were described as if the speaker were the addressee) or expressed locations from their own perspective by describing locations from their view of a map of the environment and the addressee’s position within that environment. The results highlight crucial distinctions between the nature of perspective choice in signed languages, in which signing space is used to convey spatial information, and spoken languages, in which spatial information is conveyed by lexical spatial terms. English speakers predominantly reduce their addressee’s cognitive load by adopting the addressee’s perspective; in ASL, by contrast, shared space can be used (there is no distinct addressee or speaker perspective), and in other contexts reversing the speaker’s perspective is common and does not increase the addressee’s cognitive load.


2019 ◽  
Vol 1 ◽  
pp. 1-1
Author(s):  
Lars Kuchinke ◽  
Julian Keil ◽  
Dennis Edler ◽  
Anne-Kathrin Bestgen ◽  
Frank Dickmann

<p><strong>Abstract.</strong> Reading spatial information from topographic maps to form mental representations that guide spatial orientation and navigation is a rather complex cognitive process. Perceptual and knowledge-driven processes interact to support the map reader in building these mental representations. The resulting cognitive maps are not one-to-one mappings of the spatial information and are known to be systematically distorted. It is assumed that spatial information is hierarchically organized in these mental models. We are interested in how map design based on cognitive principles supports memory formation and leads to less distorted mental representations.</p><p>Based on the results of empirical studies, we are able to show that overlaid grids in these maps address the hierarchical nature of these mental representations of map space. When map users are asked to learn object locations in a map, the availability of overlaid grid layers improves object location memory. This effect is independent of the shape of the grid patterns (square or hexagonal) and, moreover, can be shown to be effective even in situations where the grids are interrupted by other map layers (i.e., so-called illusory grids).</p><p>These results seem best explained by the formation of less distorted mental representations based on the availability of superordinate hierarchical information and the application of Gestalt principles by the map user. They thus again point to the interaction between perceptual and knowledge-driven processes in the formation of these mental representations of map space. This assumption receives further support from eye-tracking data, which reveal that grids not only attract attention towards their own locations but also seem to structure the gaze patterns in relation to the relevant object locations, which are not necessarily located close to a grid line.</p>


2021 ◽  
Author(s):  
Vladislava Segen

The current study investigated a systematic bias in spatial memory in which people, following a perspective shift from encoding to recall, indicate the location of an object further in the direction of the shift. In Experiment 1, we documented this bias by asking participants to encode the position of an object in a virtual room and then indicate it from memory following a perspective shift induced by camera translation and rotation. In Experiment 2, we decoupled the influence of camera translations and camera rotations and also examined whether adding more information to the scene would reduce the bias. We also investigated the presence of age-related differences in the precision of object location estimates and in the tendency to display the bias related to the perspective shift. Overall, our results showed that camera translations led to greater systematic bias than camera rotations. Furthermore, the use of additional spatial information improved the precision with which object locations were estimated and reduced the bias associated with camera translation. Finally, we found that although older adults were as precise as younger participants when estimating object locations, they benefited less from additional spatial information, and their responses were more biased in the direction of camera translations. We propose that accurate representation of camera translations requires more demanding mental computations than camera rotations, leading to greater uncertainty about the position of an object in memory. This uncertainty causes people to rely on an egocentric anchor, thereby giving rise to the systematic bias in the direction of camera translation.


Author(s):  
Horst G. Brandes

The effectiveness of electromagnetic (EM) surveying, ground-penetrating radar (GPR), and seismic refraction (SR) was evaluated by surveying a shallow trench in which a number of objects of varying composition and size were buried. The trench was excavated in granular calcareous fill material. An experienced geophysical contractor was asked to provide blind predictions of object locations using each of the techniques in turn. GPR with a 400 MHz antenna was the most successful, followed by SR and EM surveying. GPR and SR surveys were also carried out at the port of Hilo to investigate complex subsurface conditions.


Memory ◽  
2019 ◽  
Vol 27 (10) ◽  
pp. 1371-1380 ◽  
Author(s):  
Inge Scheper ◽  
Ellen R. A. de Bruijn ◽  
Dirk Bertens ◽  
Roy P. C. Kessels ◽  
Inti A. Brazil

2012 ◽  
Vol 25 (0) ◽  
pp. 18
Author(s):  
Achille Pasqualotto

How do people remember the location of objects? Location is always relative, and thus depends on a reference frame. There are two types of reference frames: egocentric (observer-based) and allocentric (environment-based). Here we investigated the reference frame people used to remember object locations in a large room. We also examined whether the choice of a given reference frame is dictated by visual experience. Thus we tested congenitally blind, late blind, and sighted blindfolded participants. Objects were organized in a structured configuration and then explored one by one, with participants walking back and forth from a single point. After the exploration of the locations, a spatial memory test was conducted. The memory test required participants to imagine being inside the array of objects, oriented along a given heading, and then to point towards the required object. Crucially, the headings were either aligned to the allocentric structure of the configuration, that is, its rows and columns, or aligned to the egocentric route walked during the exploration of the objects. The spatial representation used by the participants is revealed by better performance when the imagined heading in the test matches that representation. We found that participants with visual experience, that is, late blind and blindfolded sighted participants, were better with headings aligned to the allocentric structure of the configuration. In contrast, congenitally blind participants were more accurate with headings aligned to the egocentrically walked routes. This suggests that visual experience during early development determines a preference for an allocentric frame of reference.


Author(s):  
Tim Wortmann ◽  
Christian Dahmen ◽  
Sergej Fatikow

This article deals with the exploitation of magnetic susceptibility artifacts in magnetic resonance imaging (MRI) for the recognition of metallic delivery capsules. The targeted application is a closed-loop position control of magnetic objects implemented using the components of a clinical MRI scanner. Actuation can be performed by switching the magnetic gradient fields, whereas object locations are detected by an analysis of the MRI scans. A comprehensive investigation of susceptibility artifacts with a total number of 108 experimental setups has been performed in order to study scaling laws and the impact of object properties and imaging parameters. In addition to solid metal objects, a suspension of superparamagnetic nanoparticles has been examined. All 3D scans have been segmented automatically for artifact quantification and location determination. Analysis showed a characteristic shape for all three base types of sequences, which is invariant to the magnetic object shape and material. Imaging parameters such as echo time and flip angle have a moderate impact on the artifact volume but do not modify the characteristic artifact shape. The nanoparticle agglomerates produce imaging artifacts similar to the solid samples. Based on the results, a two-stage recognition/tracking procedure is proposed.
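The automatic segmentation step above can be sketched in miniature. Susceptibility artifacts commonly appear as signal voids in the scan, so one simple way to quantify and localise an artifact is to threshold low-intensity voxels and take the centroid of the dark region. This is an illustrative assumption, not the authors' pipeline; the threshold value and the synthetic volume are hypothetical:

```python
import numpy as np

def locate_artifact(volume, threshold=0.2):
    """Return (voxel count, centroid) of the region below `threshold`,
    as a crude artifact-volume and artifact-location estimate."""
    mask = volume < threshold
    coords = np.argwhere(mask)
    if coords.size == 0:
        return 0, None
    return int(mask.sum()), coords.mean(axis=0)

# Synthetic 3D scan: uniform tissue signal with a dark spherical
# void (the "artifact") centred at voxel (10, 12, 14), radius 3.
rng = np.random.default_rng(0)
vol = rng.uniform(0.5, 1.0, size=(32, 32, 32))
z, y, x = np.ogrid[:32, :32, :32]
vol[(z - 10) ** 2 + (y - 12) ** 2 + (x - 14) ** 2 <= 9] = 0.05

count, centroid = locate_artifact(vol)
# count is the artifact volume in voxels; centroid recovers (10, 12, 14)
```

In the article's two-stage recognition/tracking idea, an estimate like this centroid could seed the tracking stage; the invariant characteristic artifact shape reported by the authors is what would make such a simple detector viable across objects and materials.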

