A Fuzzy Spatial Region Extraction Model for Object’s Vague Location Description from Observer Perspective

2020 · Vol 9 (12) · pp. 703
Author(s): Jun Xu, Xin Pan

Descriptions of the spatial locations of objects that have since disappeared are often recorded in eyewitness records, travel notes, and historical documents. However, in a geographic information system (GIS), the observer-centered and vague nature of these descriptions makes it difficult to represent the spatial characteristics of such objects. To address this problem, this paper proposes a Fuzzy Spatial Region Extraction Model for Object’s Vague Location Description from Observer Perspective (FSREM-OP). In this model, the spatial relationship between the observer and the object is represented as spatial knowledge, composed of a “phrase” and a “region”. Based on this spatial knowledge, three components of spatial inference are constructed: Spatial Entities (SEs), Fuzzy Spatial Regions (FSRs), and Spatial Actions (SAs). Through the spatial knowledge and components of FSREM-OP, an object’s location can be inferred from an observer’s descriptive text, transforming the vagueness and subjectivity of the location description into fuzzy spatial regions in the GIS. FSREM-OP was tested on a constructed set of observer–object position relationships and vague descriptions. The results show that it is capable of extracting the spatial information and presenting location descriptions in the GIS, despite the vagueness and subjectivity of the spatial relation expressions.
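The abstract does not spell out the fuzzy functions FSREM-OP uses, but the core idea of mapping a vague observer-relative phrase to a fuzzy spatial region can be illustrated with a minimal sketch. Here the phrase “to my left, not far away” is modelled as the fuzzy AND (minimum) of a trapezoidal direction membership and a trapezoidal distance membership; all function names and parameter values below are illustrative assumptions, not the paper’s.

```python
import math

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: rises over [a, b], flat over [b, c], falls over [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def membership(obs_xy, obs_heading_deg, pt_xy):
    """Degree to which point pt_xy matches 'to my left, not far away'
    for an observer at obs_xy facing obs_heading_deg."""
    dx, dy = pt_xy[0] - obs_xy[0], pt_xy[1] - obs_xy[1]
    dist = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx)) - obs_heading_deg
    bearing = (bearing + 180) % 360 - 180           # wrap to [-180, 180)
    mu_dir = trapezoid(bearing, 30, 60, 120, 150)   # "left" peaks near +90 deg
    mu_dist = trapezoid(dist, -1, 0, 30, 60)        # "not far" fades beyond ~30 m
    return min(mu_dir, mu_dist)                     # fuzzy AND
```

Evaluating `membership` over a grid of candidate points yields the kind of fuzzy spatial region that a GIS can store and display as a membership raster.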

2021 · Vol 10 (12) · pp. 833
Author(s): Jun Xu, Xin Pan, Jian Zhao, Haohai Fu

Many documents contain vague location descriptions of observed objects. To represent this location information in geographic information systems (GISs), the vague descriptions must be transformed into representable fuzzy spatial regions, and knowledge about observer-to-object spatial relations must serve as the basis for this transformation. However, a location description from the observer perspective is not a specific fuzzy function; it comes from a subjective viewpoint that differs between individuals, making the corresponding knowledge difficult to represent or obtain. To extract spatial knowledge from such subjective descriptions, this research proposes a virtual reality (VR)-based fuzzy spatial relation knowledge extraction method for observer-centered vague location descriptions (VR-FSRKE). In VR-FSRKE, a VR scene is constructed, and users can interactively mark the fuzzy region corresponding to a location description under a simulated VR observer perspective. A spatial region clustering mechanism then summarizes the fuzzy regions identified by different individuals into fuzzy spatial relation knowledge. Experiments show that, through the interactive scenes provided by VR, VR-FSRKE can efficiently extract spatial relation knowledge from many individuals without being restricted to a particular place or time; furthermore, the knowledge obtained by VR-FSRKE is close to that obtained from a real scene.
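The abstract does not detail VR-FSRKE’s clustering mechanism. As a minimal sketch of summarizing many individuals’ marked regions into shared fuzzy knowledge, the per-user membership grids can be averaged cell-wise, so cells that most users included converge toward 1 while idiosyncratic cells are damped; the function below is a hypothetical illustration, not the paper’s method.

```python
def aggregate_regions(user_grids):
    """Cell-wise mean of per-user fuzzy membership grids (values in [0, 1]).

    user_grids: list of equally sized 2-D grids, one per participant.
    Returns a consensus grid of the same shape.
    """
    n = len(user_grids)
    rows, cols = len(user_grids[0]), len(user_grids[0][0])
    return [[sum(g[r][c] for g in user_grids) / n for c in range(cols)]
            for r in range(rows)]
```

A threshold or alpha-cut on the consensus grid would then give a crisp region for map display, while the full grid preserves the fuzziness across individuals.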


2021 · Vol 13 (3) · pp. 1334
Author(s): Denis Maragno, Carlo Federico dall’Omo, Gianfranco Pozzer, Francesco Musco

Climate change risk reduction requires cities to make urgent decisions. One of the principal obstacles to effective decision making is an insufficient spatial knowledge framework. Cities’ climate adaptation planning must become strategic in order to rethink and transform urban fabrics holistically. Contemporary urban planning should address future threats together with older, unsolved criticalities, such as social inequities, urban conflicts, and “drosscapes”. Retrofitting planning processes and redefining urban objectives require the development of innovative spatial information frameworks. This paper proposes a combination of approaches to overcome the limits of knowledge production and to support climate adaptation planning. The research was undertaken in collaboration with the Metropolitan City of Venice and the Municipality of Venice and produced a multi-risk climate atlas to support their future spatial planning efforts. The developed tool is a Spatial Decision Support System (SDSS), which aids adaptation actions and the coordination of strategies. The model recognises and assesses two climate impacts, urban heat island and flooding, taking the Metropolitan City of Venice (CMVE) as a case study in complexity. The model is composed of multiple assessment methodologies and maps both vulnerability and risk. The atlas links the morphological and functional conditions of urban fabrics to the land uses that trigger climate impacts. It takes the exposure of urban assets into account, using this parameter to describe local economies and social services and to map the uneven distribution of impacts. The resulting tool is therefore a replicable and scalable mapping assessment able to mediate between metropolitan- and local-level planning systems.
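The abstract does not give the atlas’s exact formulas. A common way to combine such layers, sketched below purely as an assumption, is a cell-wise multiplicative index over normalised hazard, vulnerability, and exposure rasters (risk = hazard × vulnerability × exposure); the function and layer names are illustrative, not the SDSS’s actual implementation.

```python
def risk_map(hazard, vulnerability, exposure):
    """Cell-wise product of three normalised [0, 1] rasters of equal shape.

    High risk requires all three factors to be high in the same cell,
    which is what makes the multiplicative index useful for mapping
    the uneven distribution of impacts.
    """
    return [[h * v * e for h, v, e in zip(h_row, v_row, e_row)]
            for h_row, v_row, e_row in zip(hazard, vulnerability, exposure)]
```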


2002 · Vol 5 (1) · pp. 3-26
Author(s): Karen Emmorey, Barbara Tversky

Two studies investigated the ramifications of encoding spatial locations via signing space for perspective choice in American Sign Language. Deaf signers (“speakers”) described the location of one of two identical objects either to a present addressee or to a remote addressee via a video monitor. Unlike what has been found for English speakers, ASL signers did not adopt their addressee’s spatial perspective when describing locations in a jointly viewed present environment; rather, they produced spatial descriptions utilizing shared space, in which classifier and deictic signs were articulated at locations in signing space that schematically mapped to both the speaker’s and the addressee’s view of object locations within the (imagined) environment. When the speaker and addressee were not jointly viewing the environment, speakers either adopted their addressee’s perspective via referential shift (i.e., locations in signing space were described as if the speaker were the addressee) or expressed locations from their own perspective by describing their view of a map of the environment and the addressee’s position within it. The results highlight crucial distinctions between perspective choice in signed languages, in which signing space is used to convey spatial information, and in spoken languages, in which spatial information is conveyed by lexical spatial terms. English speakers predominantly reduce their addressee’s cognitive load by adopting the addressee’s perspective. In ASL, by contrast, shared space can be used (there is no true addressee or speaker perspective), and in other contexts reversing speaker perspective is common and does not increase the addressee’s cognitive load.


2016 · Vol 2 (8) · pp. e1501070
Author(s): Liu Zhou, Teng Leng Ooi, Zijiang J. He

Our sense of vision reliably directs and guides our everyday actions, such as reaching and walking. This ability is especially fascinating because the optical images of natural scenes that project into our eyes are insufficient to adequately form a perceptual space. It has been proposed that the brain makes up for this inadequacy by using its intrinsic spatial knowledge. However, it is unclear what constitutes intrinsic spatial knowledge and how it is acquired. We investigated this question and found evidence of an ecological basis, which uses the statistical spatial relationship between the observer and the terrestrial environment, namely, the ground surface. We found that in dark and reduced-cue environments, where intrinsic knowledge makes a greater contribution, perceived target location is more accurate when referenced to the ground than to the ceiling. Furthermore, taller observers localized the target more accurately. Their superior performance was also observed in the full-cue environment, even when we compensated for the observers’ heights by having taller observers sit on a chair and shorter observers stand on a box. This finding dovetails with the prediction of the ecological hypothesis for intrinsic spatial knowledge. It suggests that an individual’s accumulated lifetime experience of being tall, and his or her constant interactions with ground-based objects, not only shapes intrinsic spatial knowledge but also confers an advantage in spatial ability in the intermediate distance range.


Author(s): Zhizhong Han, Xiyang Wang, Chi Man Vong, Yu-Shen Liu, Matthias Zwicker, ...

Learning global features by aggregating information over multiple views has been shown to be effective for 3D shape analysis. For view aggregation in deep learning models, pooling has been applied extensively. However, pooling leads to a loss of the content within views and of the spatial relationships among views, which limits the discriminability of learned features. We propose 3DViewGraph to resolve this issue, which learns 3D global features by more effectively aggregating unordered views with attention. Specifically, unordered views taken around a shape are regarded as view nodes on a view graph. 3DViewGraph first learns a novel latent semantic mapping to project low-level view features into meaningful latent semantic embeddings in a lower-dimensional space, which is spanned by latent semantic patterns. Then, the content and spatial information of each pair of view nodes are encoded by a novel spatial pattern correlation, where the correlation is computed among latent semantic patterns. Finally, all spatial pattern correlations are integrated with attention weights learned by a novel attention mechanism. This further increases the discriminability of learned features by highlighting the unordered view nodes with distinctive characteristics and suppressing the ones with appearance ambiguity. We show that 3DViewGraph outperforms state-of-the-art methods on three large-scale benchmarks.
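As a minimal sketch of the attention-weighted aggregation described above (not 3DViewGraph’s actual architecture), per-view feature vectors can be combined by a softmax-weighted sum, so that views with distinctive, high-scoring characteristics dominate the pooled global feature while low-scoring, ambiguous views are suppressed; the scores would come from a learned attention module, which is omitted here.

```python
import math

def attention_aggregate(view_feats, scores):
    """Softmax-weighted sum of per-view feature vectors.

    view_feats: list of equal-length feature vectors, one per view.
    scores: one (learned) attention score per view.
    Returns a single aggregated feature vector.
    """
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # numerically stable softmax
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(view_feats[0])
    return [sum(w * f[d] for w, f in zip(weights, view_feats))
            for d in range(dim)]
```

Unlike max- or mean-pooling, the weights here depend on the views themselves, which is what preserves discriminative content across the unordered view set.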


2017
Author(s): M. Murugan, M. Park, J. Taliaferro, H.J. Jang, J. Cox, ...

Social interactions are crucial to the survival and well-being of all mammals, including humans. Although the prelimbic cortex (PL, part of the medial prefrontal cortex) has been implicated in social behavior, it is not clear which neurons are relevant, nor how they contribute. We found that the PL contains anatomically and molecularly distinct subpopulations of neurons that target three downstream regions implicated in social behavior: the nucleus accumbens (NAc), the amygdala, and the ventral tegmental area. Activation of NAc-projecting PL neurons (PL-NAc), but not the other subpopulations, decreased preference for a social target, suggesting a unique contribution of this population to social behavior. To determine what information PL-NAc neurons convey, we recorded selectively from them and found that individual neurons were active during social investigation, but only in specific spatial locations. Spatially specific inhibition of these neurons prevented the formation of a social-spatial association at the inhibited location. In contrast, spatially nonspecific inhibition did not affect social behavior. Thus, the unexpected combination of social and spatial information within the PL-NAc population appears to support socially motivated behavior by enabling the formation of social-spatial associations.


2018 · Vol 84 (3) · pp. 330-343
Author(s): Konstantinos Papadopoulos, Marialena Barouti, Eleni Koustriava

To examine how individuals with visual impairments understand space and develop cognitive maps, we studied the differences in cognitive maps resulting from different methods and tools for spatial coding in large geographical spaces. We examined the ability of 21 blind individuals to create cognitive maps of routes in unfamiliar areas using (a) audiotactile maps, (b) tactile maps, and (c) direct experience of movement along the routes. We also compared the cognitive maps created with each of these three methods with regard to their precision (i.e., whether spatial information was correctly located) and inclusiveness (i.e., the amount of spatial information correctly included in the cognitive map). The results of the experimental trials demonstrated that becoming familiar with an area is easier for blind individuals when they use a tactile aid, such as an audiotactile map, than when they walk along the route.


2007 · Vol 12 (2) · pp. 45-59
Author(s): Heidi Bailey, David Smaldone, Gregory Elmes, Robert Burns

Interpretive centers are well-known sources of geographic information—providing visitors with maps and facts about noteworthy places. Yet research on the effectiveness of interpretation in conveying geographic information is limited. Managing natural and cultural resources creates a need to communicate to the public about these places at both small and large scales. This raises the question of how people perceive different types of spaces and how they learn geographic and spatial information. This paper reviews the literature on spatial cognition, providing a theoretical and empirical basis to suggest strategies for interpretation. The recommendations of this paper are to: 1) design geographic interpretation around the three components of spatial knowledge; 2) create interpretive maps by blending the principles of map and exhibit design; and 3) provide visitors with multiple opportunities to learn about a geographic setting. Maps have considerable potential as tools for connecting visitors to the meaning of places.


2013 · Vol 709 · pp. 567-570
Author(s): Ting Ting Zhang, Ke Yan Xiao

In this paper, three-dimensional modelling technology was applied to construct models of complex geological bodies from borehole data. The resulting 3-D models of rock, orebody, stratum, fault, and other features can be used to display the spatial form of these geological bodies and their spatial relationships, and to predict the trend of deeper ones. Compared with traditional methods, which confine 3-D information to a 2-D plane and thereby lose spatial information, this gives geological work a new, multi-dimensional approach. The paper takes a mine area in China as an example to introduce the methods and workflow of model building; the modelling results are presented using computer visualization technology.


1999 · Vol 8 (6) · pp. 671-685
Author(s): Jui Lin Chen, Kay M. Stanney

This paper proposes a theoretical model of wayfinding that can be used to guide the design of navigational aiding in virtual environments. Based on an evaluation of wayfinding studies in natural environments, this model divides the wayfinding process into three main subprocesses: cognitive mapping, wayfinding plan development, and physical movement or navigation through an environment. While this general subdivision has been proposed before, the current model further delineates the wayfinding process, including the distinct influences of spatial information, spatial orientation, and spatial knowledge. The influences of experience, abilities, search strategies, motivation, and environmental layout on the wayfinding process are also considered. With this specification of the wayfinding process, a taxonomy of navigational tools is then proposed that can be used to systematically aid the specified wayfinding subprocesses. If effectively applied to the design of a virtual environment, the use of such tools should lead to reduced disorientation and enhanced wayfinding in large-scale virtual spaces. It is also suggested that, in some cases, this enhanced wayfinding performance may be at the expense of the acquisition of an accurate cognitive map of the virtual environment being traversed.

