object location
Recently Published Documents


TOTAL DOCUMENTS: 547 (FIVE YEARS: 96)

H-INDEX: 44 (FIVE YEARS: 4)

2021 ◽  
pp. 142-147
Author(s):  
T. V. Savaryn ◽  
I. A. Prokop ◽  
O. A. Makovska

The article addresses the study of noun and adjective synonymy in Latin anatomical terminology. Different views on the problem of noun and adjective synonymic relations in anatomical terminology are considered. The set of synonymous Latin nouns, their features, and their functional specifics are described. Latin terminological units are divided into three groups: 1) absolute synonyms; 2) synonyms of varying compatibility; 3) quasi-synonyms. Group 1 includes absolute synonyms that appeared through revisions of the anatomical nomenclature and have similar semantic meaning. Group 2 consists of nouns, often terminological pairs, that have different compatibility in anatomical terminology. The most numerous, Group 3, includes the so-called quasi-synonyms: terms of similar meaning intended to differentiate various anatomical notions. It has been found that the distinguishing features of Latin quasi-synonyms in anatomical terminology vary greatly and may indicate the shape of an object, the type of tissue, morphological similarity, object location, etc. The most frequently used synonymous adjectives are analysed. They have been found to belong to Group 2 of the classification above, since the choice of an adjectival term most commonly depends on compatibility, that is, on the noun it modifies.


2021 ◽  
Author(s):  
Vladislava Segen

The current study investigated a systematic bias in spatial memory in which people, following a perspective shift from encoding to recall, indicate the location of an object farther in the direction of the shift. In Experiment 1, we documented this bias by asking participants to encode the position of an object in a virtual room and then indicate it from memory following a perspective shift induced by camera translation and rotation. In Experiment 2, we decoupled the influence of camera translations and camera rotations, and also examined whether adding more information to the scene would reduce the bias. We also investigated age-related differences in the precision of object location estimates and in the tendency to display the bias related to the perspective shift. Overall, our results showed that camera translations led to greater systematic bias than camera rotations. Furthermore, the use of additional spatial information improved the precision with which object locations were estimated and reduced the bias associated with camera translation. Finally, we found that although older adults were as precise as younger participants when estimating object locations, they benefited less from additional spatial information, and their responses were more biased in the direction of camera translations. We propose that accurately representing camera translations requires more demanding mental computations than representing camera rotations, leading to greater uncertainty about the position of an object in memory. This uncertainty causes people to rely on an egocentric anchor, thereby giving rise to the systematic bias in the direction of camera translation.


2021 ◽  
Vol 11 (11) ◽  
pp. 1542
Author(s):  
Natalia Ladyka-Wojcik ◽  
Rosanna K. Olsen ◽  
Jennifer D. Ryan ◽  
Morgan D. Barense

In memory, representations of spatial features are stored in different reference frames; features relative to our position are stored egocentrically and features relative to each other are stored allocentrically. Accessing these representations engages many cognitive and neural resources, and so is susceptible to age-related breakdown. Yet, recent findings on the heterogeneity of cognitive function and spatial ability in healthy older adults suggest that aging may not uniformly impact the flexible use of spatial representations. These factors have yet to be explored in a precisely controlled task that explicitly manipulates spatial frames of reference across learning and retrieval. We used a lab-based virtual reality task to investigate the relationship between object–location memory across frames of reference, cognitive status, and self-reported spatial ability. Memory error was measured using Euclidean distance from studied object locations to participants’ responses at testing. Older adults recalled object locations less accurately when they switched between frames of reference from learning to testing, compared with when they remained in the same frame of reference. They also showed an allocentric learning advantage, producing less error when switching from an allocentric to an egocentric frame of reference, compared with the reverse direction of switching. Higher MoCA scores and better self-assessed spatial ability predicted less memory error, especially when learning occurred egocentrically. We suggest that egocentric learning deficits are driven by difficulty in binding multiple viewpoints into a coherent representation. Finally, we highlight the heterogeneity of spatial memory performance in healthy older adults as a potential cognitive marker for neurodegeneration, beyond normal aging.
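The error metric described above — the Euclidean distance from a studied object location to a participant's response — can be sketched in a few lines. The 2D coordinates and trial values below are hypothetical, not from the study.

```python
import math

def memory_error(studied, response):
    """Euclidean distance between a studied object location and a
    participant's recalled location (hypothetical 2D room coordinates)."""
    return math.hypot(studied[0] - response[0], studied[1] - response[1])

# Hypothetical trial: object studied at (2.0, 3.0), recalled at (5.0, 7.0)
print(memory_error((2.0, 3.0), (5.0, 7.0)))  # → 5.0
```

Averaging this distance over trials, separately for same-frame and switched-frame conditions, yields the per-condition error scores the study compares.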


2021 ◽  
Vol 11 (10) ◽  
pp. 1350
Author(s):  
Jackie M. Poos ◽  
Ineke J. M. van der Ham ◽  
Anna E. Leeuwis ◽  
Yolande A. L. Pijnenburg ◽  
Wiesje M. van der Flier ◽  
...  

Background: Impairments in navigation abilities and object location memory are often seen in early-stage Alzheimer’s Disease (AD), yet these constructs are not included in standard neuropsychological assessment. We investigated the ability of a short digital spatial memory test battery to differentiate mild AD dementia from mild cognitive impairment (MCI). Methods: 21 patients with AD dementia (age 66.9 ± 6.9; 47% female), 22 patients with MCI (69.6 ± 8.3; 46% female) and 21 patients with subjective cognitive decline (SCD) (62.2 ± 8.9; 48% female) from the Amsterdam Dementia Cohort performed the Object Location Memory Test (OLMT), consisting of a visual perception and a memory trial, and the Virtual Tübingen (VT) test, consisting of scene recognition, route continuation, route ordering and distance comparison tasks. Correlations with other cognitive domains were examined. Results: Patients with mild AD dementia (Z: −2.51 ± 1.15) and MCI (Z: −1.81 ± 0.92) performed worse than participants with SCD (Z: 0.0 ± 1.0) on the OLMT. Scene recognition and route continuation were equally impaired in patients with AD dementia (Z: −1.14 ± 0.73; Z: −1.44 ± 1.13) and MCI (Z: −1.37 ± 1.25; Z: −1.21 ± 1.07). Route ordering was impaired only in patients with MCI (Z: −0.82 ± 0.78). Weak to moderate correlations were found between route continuation and memory (r(64) = 0.40, p < 0.01), and between route ordering and attention (r(64) = 0.33, p < 0.01), but not for the OLMT. Conclusion: A short digital spatial memory test battery was able to detect object location memory and navigation impairment in patients with mild AD dementia and MCI, highlighting the value of incorporating such a test battery into standard neuropsychological assessment.
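The Z-scores reported above, with the SCD group at 0.0 ± 1.0 by construction, are consistent with standardizing each raw test score against the SCD control group. A minimal sketch of that computation, using made-up scores rather than the study's data:

```python
def standardize(raw_scores, control_scores):
    """Convert raw test scores to Z-scores relative to a control group
    (e.g., SCD), so the control group itself has mean 0 and SD 1."""
    n = len(control_scores)
    mean = sum(control_scores) / n
    # Sample standard deviation of the control group
    sd = (sum((x - mean) ** 2 for x in control_scores) / (n - 1)) ** 0.5
    return [(x - mean) / sd for x in raw_scores]

# Made-up raw scores: a control group of three, one patient score
print(standardize([4.0], [1.0, 2.0, 3.0]))  # → [2.0]
```

A patient group scoring below the control mean then comes out with negative Z-scores, as in the OLMT results above.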


2021 ◽  
Vol 21 (9) ◽  
pp. 2169
Author(s):  
Nikita Mikhalev ◽  
Yuri Markov

Photonics ◽  
2021 ◽  
Vol 8 (9) ◽  
pp. 400
Author(s):  
Zhe Yang ◽  
Yu-Ming Bai ◽  
Li-Da Sun ◽  
Ke-Xin Huang ◽  
Jun Liu ◽  
...  

We propose a concurrent single-pixel imaging, object location, and classification scheme based on deep learning (SP-ILC). We used multitask learning, developed a new loss function, and created a dataset suitable for this project. The dataset consists of scenes that contain different numbers of possibly overlapping objects of various sizes. The results we obtained show that SP-ILC runs concurrent processes to locate objects in a scene with a high degree of precision in order to produce high quality single-pixel images of the objects, and to accurately classify objects, all with a low sampling rate. SP-ILC has potential for effective use in remote sensing, medical diagnosis and treatment, security, and autonomous vehicle control.
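The abstract does not specify the new loss function, but a multitask scheme like SP-ILC typically combines per-task losses into one weighted objective. The sketch below shows one common formulation; the weights `w_img`, `w_loc`, `w_cls` and the individual loss choices are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def multitask_loss(img_pred, img_true, boxes_pred, boxes_true,
                   cls_probs, cls_true, w_img=1.0, w_loc=1.0, w_cls=1.0):
    """Weighted sum of three task losses: image reconstruction (MSE),
    object localization (L1 on box coordinates), and classification
    (cross-entropy). Weights and loss choices are illustrative only."""
    l_img = np.mean((img_pred - img_true) ** 2)
    l_loc = np.mean(np.abs(boxes_pred - boxes_true))
    # Probability assigned to the true class of each object
    picked = cls_probs[np.arange(len(cls_true)), cls_true]
    l_cls = -np.mean(np.log(picked + 1e-12))
    return w_img * l_img + w_loc * l_loc + w_cls * l_cls
```

Training all three heads against one shared encoder is what lets the network image, locate, and classify from the same low-sampling-rate single-pixel measurements.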


2021 ◽  
Vol 11 (18) ◽  
pp. 8354
Author(s):  
Raymond Ian Osolo ◽  
Zhan Yang ◽  
Jun Long

Many vision–language models that output natural language, such as image-captioning models, use image features merely to ground the captions, and most of a model’s good performance can be attributed to the language model, which does the heavy lifting. This phenomenon has persisted even with the emergence of transformer-based architectures as the preferred base of recent state-of-the-art vision–language models. In this paper, we make the images matter more by using fast Fourier transforms to further break down the input features and extract more of their intrinsic salient information, resulting in more detailed yet concise captions. This is achieved by performing a 1D Fourier transformation on the image features, first in the hidden dimension and then in the sequence dimension. These extracted features, alongside the region-proposal image features, yield a richer image representation that can then be queried to produce the associated captions, which showcase a deeper understanding of image–object–location relationships than similar models. Extensive experiments on the MSCOCO benchmark dataset demonstrate CIDEr-D, BLEU-1, and BLEU-4 scores of 130, 80.5, and 39, respectively.
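The two-stage 1D Fourier transformation described above (hidden dimension first, then sequence dimension) can be sketched as a Fourier mixing step over a (sequence, hidden) feature matrix. Keeping only the real part at the end is an assumption borrowed from similar Fourier-mixing designs, not necessarily the paper's exact formulation.

```python
import numpy as np

def fourier_mix(features):
    """Apply a 1D FFT over the hidden dimension, then over the sequence
    dimension, of a (sequence, hidden) feature matrix; keep the real part.
    The real-part projection is an assumption for this sketch."""
    x = np.fft.fft(features, axis=-1)  # 1D FFT along the hidden dimension
    x = np.fft.fft(x, axis=-2)         # then along the sequence dimension
    return x.real

# Toy (sequence=2, hidden=2) feature matrix
print(fourier_mix(np.ones((2, 2))))
```

The mixed features preserve the input shape, so they can be concatenated or fused with the region-proposal features before being queried by the caption decoder.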


2021 ◽  
Vol 70 (9) ◽  
pp. 8682-8691
Author(s):  
Ben Miethig ◽  
Yixin Huangfu ◽  
Jiahong Dong ◽  
Jimi Tjong ◽  
Martin Von Mohrenschildt ◽  
...  
