shape representations
Recently Published Documents

TOTAL DOCUMENTS: 119 (FIVE YEARS: 32)
H-INDEX: 15 (FIVE YEARS: 2)

2021 · Author(s): Lila Caimari

This Element examines urban imaginaries during the expansion of international news between the late nineteenth and the early twentieth centuries, when everyday information about faraway places found its way into newspapers all over the world. Building on the premise that news carried an unprecedented power to shape representations of the world, it follows this development as it made its way to regular readers beyond the dominant information poles, in the great port-cities of the South American Atlantic. Based on five case studies of typical turn-of-the-century foreign news, Lila Caimari shows how current events opened windows onto distant cities, feeding a new world horizon that was at once wider and eminently urban.


2021 · Vol 15 · Author(s): Jarrod Hollis, Glyn W. Humphreys, Peter M. Allen

Evidence is presented for intermediate, wholistic visual representations of objects and non-objects that are computed online and independently of visual attention. Short-term visual priming was examined between visually similar shapes, with targets falling either at the (valid) location cued by primes or at another (invalid) location. Object decision latencies were facilitated when the overall shapes of the stimuli were similar, irrespective of whether the location of the prime was valid or invalid, and the effects were equally large for object and non-object targets. In addition, the effects were based on the overall outlines of the stimuli and their low spatial frequency components, not on local parts. In conclusion, wholistic shape representations based on outline form are rapidly computed online during object recognition. Moreover, activation of common wholistic shape representations primes the processing of subsequent objects and non-objects irrespective of whether they appear at attended or unattended locations. Rapid derivation of wholistic form thus provides a key intermediate stage of object recognition.


2021 · Author(s): Gaurav Malhotra, Marin Dujmovic, John Hummel, Jeffrey S. Bowers

The success of Convolutional Neural Networks (CNNs) in classifying objects has led to a surge of interest in using these systems to understand human vision. Recent studies have argued that when CNNs are trained in the correct learning environment, they can emulate a key property of human vision -- learning to classify objects based on their shape. While showing a shape-bias is indeed a desirable property for any model of human object recognition, it is unclear whether the resulting shape representations learned by these networks are human-like. We explored this question in the context of a well-known observation from psychology showing that humans encode the shape of objects in terms of relations between object features. To check whether this is also true for the representations of CNNs, we ran a series of simulations where we trained CNNs on datasets of novel shapes and tested them on a set of controlled deformations of these shapes. We found that CNNs do not show any enhanced sensitivity to deformations which alter relations between features, even when explicitly trained on such deformations. This behaviour contrasted with human participants in previous studies as well as in a new experiment. We argue that these results are a consequence of a fundamental difference between how humans and CNNs learn to recognise objects: while CNNs select features that allow them to optimally classify the proximal stimulus, humans select features that they infer to be properties of the distal stimulus. This makes human representations more generalisable to novel contexts and tasks.


2021 · Vol 188 · pp. 246-250 · Author(s): K.C. Hartstein, S. Saleki, K. Ziman, P. Cavanagh, P.U. Tse

2021 · Vol 40 (5) · pp. 1-18 · Author(s): Andreas Bærentzen, Eva Rotenberg

We propose a new algorithm for curve skeleton computation that differs from previous algorithms by being based on the notion of local separators. The main benefits of this approach are that it captures relatively fine details and that it works robustly on a range of shape representations. Specifically, our method works on shape representations that can be construed as spatially embedded graphs; such representations include meshes, volumetric shapes, and graphs computed from point clouds. We describe a simple pipeline in which geometric data are first converted to a graph and optionally simplified, local separators are then computed and selected, and finally a skeleton is constructed. We test our pipeline on polygonal meshes, volumetric shapes, and point clouds, and compare our results to other skeletonization methods in terms of performance and quality.
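The separator-based pipeline described above can be sketched in miniature. This is a hypothetical illustration, not the authors' implementation: the paper computes local separators by growing regions on the embedded graph, whereas this toy version simply brute-forces small vertex separators on a hand-built graph to show the core test (does removing the candidate set disconnect the graph?). All names are invented for the example.

```python
# Toy sketch of the separator idea behind the skeletonization pipeline.
# NOT the paper's algorithm: local separators are replaced by a brute-force
# search over small vertex sets on a tiny hand-built graph.
from itertools import combinations

def components(adj, removed=frozenset()):
    """Connected components of the graph after deleting `removed` vertices."""
    seen, comps = set(removed), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            comp.add(v)
            stack.extend(u for u in adj[v] if u not in seen)
        comps.append(comp)
    return comps

def is_separator(adj, cand):
    """`cand` separates the graph if its removal disconnects it."""
    return len(components(adj, frozenset(cand))) > 1

def minimal_separators(adj, max_size=1):
    """Brute-force search for small separators (stand-in for the paper's
    local, region-based computation)."""
    found = []
    for k in range(1, max_size + 1):
        for cand in combinations(list(adj), k):
            if is_separator(adj, cand):
                found.append(set(cand))
    return found

# Two triangles joined at vertex 2 (a "bow-tie"): vertex 2 separates them,
# so a separator-based skeleton would place a branch node there.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3, 4}, 3: {2, 4}, 4: {2, 3}}
print(minimal_separators(adj))  # [{2}]
```

In the paper's setting, selected separators are contracted to skeleton vertices and linked according to the regions they separate; the toy search above only demonstrates the disconnection test at the heart of that step.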


2021 · Author(s): Mirela T. Cazzolato, Lucas C. Scabora, Guilherme F. Zabot, Marco A. Gutierrez, Caetano Traina Jr., ...

In this paper, we present FeatSet, a compilation of visual features extracted from open image datasets reported in the literature. FeatSet collects 11 visual features, consisting of color, texture, and shape representations of images acquired from 13 datasets. We organized the features into a standard collection, including metadata and labels when available, and we describe the domain of each included dataset, with visual analyses using the Multidimensional Scaling (MDS) and Principal Component Analysis (PCA) methods. FeatSet is recommended for supervised and unsupervised learning, and it also supports Content-Based Image Retrieval (CBIR) applications and complex data indexing using Metric Access Methods (MAMs).
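As a small illustration of the kind of 2D visual analysis mentioned above, the following sketch projects a table of per-image feature vectors onto its first two principal components. The feature matrix is random stand-in data, not FeatSet itself, and `pca_2d` is an illustrative helper, not part of any FeatSet distribution.

```python
# Minimal PCA projection of a feature table to 2D, as one might do to
# inspect a collection of per-image descriptors. Stand-in data only.
import numpy as np

def pca_2d(features):
    """Project rows of `features` onto the top-2 principal components."""
    centered = features - features.mean(axis=0)
    # SVD of the centred data: rows of vt are the principal axes,
    # ordered by decreasing singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 11))   # e.g. 100 images x 11 descriptors
proj = pca_2d(feats)
print(proj.shape)  # (100, 2)
```

Each row of `proj` can then be scattered in the plane to eyeball how images from different datasets or classes cluster; MDS would play the analogous role for distance-based features.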


2021 · Vol 13 (1) · Author(s): Sreevani Katabathula, Qinyong Wang, Rong Xu

Abstract Background Alzheimer’s disease (AD) is a progressive and irreversible brain disorder. The hippocampus is one of the affected regions, and its atrophy is a widely used biomarker for AD diagnosis. We have recently developed DenseCNN, a lightweight 3D deep convolutional network model, for AD classification based on hippocampus magnetic resonance imaging (MRI) segments. In addition to the visual features of the hippocampus segments, the global shape representations of the hippocampus are also important for AD diagnosis. In this study, we propose DenseCNN2, a deep convolutional network model for AD classification that incorporates global shape representations along with hippocampus segmentations. Methods The data, obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI 1, 2/GO, and 3), consisted of T1-weighted structural MRI scans from initial screening or baseline. DenseCNN2 was trained and evaluated on hippocampus MRIs of 326 AD and 607 cognitively normal (CN) subjects using a 5-fold cross-validation strategy, and was compared with other state-of-the-art machine learning approaches for the task of AD classification. Results DenseCNN2 with combined visual and global shape features performed better than deep learning models with visual or global shape features alone. It achieved an average accuracy of 0.925, sensitivity of 0.882, specificity of 0.949, and area under the curve (AUC) of 0.978, which are better than or comparable to state-of-the-art methods in AD classification. Data visualization through 2D UMAP embeddings confirmed that global shape features improved class discrimination between AD and CN. Conclusion DenseCNN2, a lightweight 3D deep convolutional network model based on combined hippocampus segmentations and global shape features, achieved high performance and has potential as an efficient diagnostic tool for AD classification.
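The accuracy, sensitivity, and specificity figures reported above can all be derived from a binary confusion matrix. The sketch below is a generic illustration of those definitions, not the authors' evaluation code; `binary_metrics` and the toy label lists are invented for the example.

```python
# Generic binary classification metrics from true/predicted labels.
# Illustrative only; not the DenseCNN2 evaluation pipeline.
def binary_metrics(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate (AD correctly flagged)
        "specificity": tn / (tn + fp),   # true-negative rate (CN correctly cleared)
    }

# Toy labels: 1 = AD, 0 = CN.
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 1]
print(binary_metrics(y_true, y_pred))
```

In a k-fold setup like the paper's, these metrics would be computed per fold and averaged; AUC additionally requires the model's continuous scores rather than hard labels.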


2021 · Author(s): Ye Mei

With the increasing number of available digital images, there is an urgent need for image content description to facilitate content-based image retrieval (CBIR). Besides colour and texture, shape is an important low-level feature for describing image content. An object can be photographed from different distances and angles, yet we often want to classify images of the same object into one class despite the change of perspective, so it is desirable to extract shape features that are invariant to such changes. The shapes of an object seen from two viewpoints can be linked through an affine transformation if the object is viewed from a distance much larger than its size along the line of sight; shape features invariant under these transformations are known as affine invariant shape representations. Because of the change of perspective, affine invariant shape representations are more difficult to develop than ordinary ones. The goal of this work is to develop affine invariant shape descriptors. Through shape retrieval experiments, we find that the performance of existing affine invariant shape representations is not satisfactory; in particular, when the shape boundary is corrupted by noise, their performance degrades quickly. In this work, two new affine invariant contour-based shape descriptors, the ICA Fourier shape descriptor (ICAFSD) and the whitening Fourier shape descriptor (WFSD), have been developed. They perform better than most existing affine invariant shape representations while having a compact feature size and low computational cost. Four region-based affine invariant shape descriptors, the ICA Zernike moment shape descriptor (ICAZMSD), the whitening Zernike moment shape descriptor (WZMSD), the ICA orthogonal Fourier-Mellin moment shape descriptor (ICAOFMMSD), and the whitening orthogonal Fourier-Mellin moment shape descriptor (WOFMMSD), are also proposed in this work. They can be applied to both simple and complex shapes, and achieve close-to-perfect performance in retrieval experiments. The advantage of the newly proposed descriptors is even more apparent in experiments on shapes with added boundary noise: their performance does not deteriorate as much as that of existing ones.
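As a rough illustration of contour-based Fourier shape description, the sketch below computes a descriptor that is invariant to translation, scale, rotation, and starting point via standard FFT-magnitude normalisation. Full affine invariance requires the ICA or whitening step that the ICAFSD/WFSD add on top of this; that step is omitted here, and all names are illustrative.

```python
# Classic contour Fourier descriptor (similarity-invariant, NOT yet
# affine-invariant): FFT of the complex boundary signal, DC term dropped,
# magnitudes taken, and everything scaled by the first harmonic.
import numpy as np

def fourier_descriptor(contour_xy, n_coeffs=8):
    """contour_xy: (N, 2) array of ordered boundary points."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # complex boundary signal
    F = np.fft.fft(z)
    F[0] = 0                       # drop DC term -> translation invariance
    mags = np.abs(F)               # magnitudes -> rotation/start-point invariance
    mags /= mags[1]                # divide by first harmonic -> scale invariance
    return mags[1:n_coeffs + 1]

theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
big_shifted = 3.0 * circle + np.array([5.0, -2.0])   # scaled + translated copy
d1, d2 = fourier_descriptor(circle), fourier_descriptor(big_shifted)
print(np.allclose(d1, d2))  # True
```

A shear or anisotropic scaling would change these magnitudes, which is exactly the gap the thesis's ICA/whitening normalisation of the boundary points is meant to close before the Fourier step.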

