Diagnostic parts are not exclusive in the search template for real-world object categories

2019 ◽  
Vol 196 ◽  
pp. 11-17
Author(s):  
Marcel Wurth ◽  
Reshanne R. Reeder

2015 ◽  
Vol 15 (12) ◽  
pp. 8
Author(s):  
Marius Catalin Iordan ◽  
Michelle Greene ◽  
Diane Beck ◽  
Li Fei-Fei

2017 ◽  
Vol 372 (1711) ◽  
pp. 20160055 ◽  
Author(s):  
Elizabeth M. Clerkin ◽  
Elizabeth Hart ◽  
James M. Rehg ◽  
Chen Yu ◽  
Linda B. Smith

We offer a new solution to the unsolved problem of how infants break into word learning, based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8 1/2 to 10 1/2 month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed such that a very small set of objects was pervasively present—a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences’.
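As a purely illustrative sketch of the frequency analysis described in this abstract, the snippet below counts how often each object category appears across annotated frames. The frame annotations and category labels are invented placeholders, not the authors' head-camera data; the point is only how a right-skewed distribution looks, with a few categories dominating and a long tail of rare ones.

```python
# Sketch: counting object-category frequencies across annotated scenes.
# The "frames" list is hypothetical example data, not the study's corpus.
from collections import Counter

frames = [
    ["spoon", "bowl", "chair", "table", "cup"],
    ["spoon", "bowl", "table", "window", "plant"],
    ["spoon", "cup", "table", "chair", "book"],
    ["bowl", "table", "spoon", "cup", "toy"],
]

counts = Counter(obj for frame in frames for obj in frame)
total = sum(counts.values())

# In a right-skewed distribution, the top few categories account for most
# object appearances, while many categories appear only once or twice.
for category, n in counts.most_common():
    print(f"{category:8s} {n:3d}  ({n / total:.0%} of all object appearances)")
```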


2016 ◽  
Vol 28 (9) ◽  
pp. 1392-1405 ◽  
Author(s):  
Sabrina Fagioli ◽  
Emiliano Macaluso

Individuals are able to split attention between separate locations, but divided spatial attention incurs the additional requirement of monitoring multiple streams of information. Here, we investigated divided attention using photos of natural scenes, where the rapid categorization of familiar objects and prior knowledge about the likely positions of objects in the real world might affect the interplay between these spatial and nonspatial factors. Sixteen participants underwent fMRI during an object detection task. They were presented with scenes containing either a person or a car, located on the left or right side of the photo. Participants monitored either one or both object categories, in one or both visual hemifields. First, we investigated the interplay between spatial and nonspatial attention by comparing conditions of divided attention between categories and/or locations. We then assessed the contribution of top–down processes versus stimulus-driven signals by separately testing the effects of divided attention in target and nontarget trials. The results revealed activation of a bilateral frontoparietal network when dividing attention between the two object categories versus attending to a single category but no main effect of dividing attention between spatial locations. Within this network, the left dorsal premotor cortex and the left intraparietal sulcus were found to combine task- and stimulus-related signals. These regions showed maximal activation when participants monitored two categories at spatially separate locations and the scene included a nontarget object. We conclude that the dorsal frontoparietal cortex integrates top–down and bottom–up signals in the presence of distractors during divided attention in real-world scenes.
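To make the factorial logic of this design concrete, here is a minimal sketch of how a main effect of dividing attention between categories, a main effect of dividing between locations, and their interaction would be computed from the four condition means. The BOLD values are invented placeholders purely for illustration, not the study's data.

```python
# Sketch of the 2 x 2 design: monitored categories (1 vs 2) crossed with
# monitored hemifields (1 vs 2). Values are hypothetical condition means.
import numpy as np

# rows: attend one vs two categories; columns: attend one vs two locations
mean_bold = np.array([
    [0.40, 0.42],   # one category
    [0.55, 0.61],   # two categories
])

main_effect_category = mean_bold[1].mean() - mean_bold[0].mean()
main_effect_location = mean_bold[:, 1].mean() - mean_bold[:, 0].mean()
interaction = (mean_bold[1, 1] - mean_bold[1, 0]) - (mean_bold[0, 1] - mean_bold[0, 0])

print(f"main effect of dividing between categories: {main_effect_category:+.3f}")
print(f"main effect of dividing between locations:  {main_effect_location:+.3f}")
print(f"category x location interaction:            {interaction:+.3f}")
```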


2011 ◽  
Vol 23 (8) ◽  
pp. 2079-2101 ◽  
Author(s):  
James W. Lewis ◽  
William J. Talkington ◽  
Aina Puce ◽  
Lauren R. Engel ◽  
Chris Frum

In contrast to visual object processing, relatively little is known about how the human brain processes everyday real-world sounds, transforming highly complex acoustic signals into representations of meaningful events or auditory objects. We recently reported a fourfold cortical dissociation for representing action (nonvocalization) sounds correctly categorized as having been produced by human, animal, mechanical, or environmental sources. However, it was unclear how consistent those network representations were across individuals, given potential differences between each participant's degree of familiarity with the studied sounds. Moreover, it was unclear what, if any, auditory perceptual attributes might further distinguish the four conceptual sound-source categories, potentially revealing what might drive the cortical network organization for representing acoustic knowledge. Here, we used functional magnetic resonance imaging to test participants before and after extensive listening experience with action sounds, and tested for cortices that might be sensitive to each of three different high-level perceptual attributes relating to how a listener associates or interacts with the sound source. These included the sound's perceived concreteness, effectuality (ability to be affected by the listener), and spatial scale. Despite some variation of networks for environmental sounds, our results verified the stability of a fourfold dissociation of category-specific networks for real-world action sounds both before and after familiarity training. Additionally, we identified cortical regions parametrically modulated by each of the three high-level perceptual sound attributes. We propose that these attributes contribute to the network-level encoding of category-specific acoustic knowledge representations.
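The parametric-modulation analysis mentioned here can be sketched as a regression of a region's per-sound response onto the three attribute ratings. The snippet below is a toy illustration with simulated ratings, a single simulated response, and ordinary least squares; it is not the authors' fMRI pipeline.

```python
# Sketch: regress a (simulated) per-sound response onto ratings of the
# three perceptual attributes named in the abstract. All data are invented.
import numpy as np

rng = np.random.default_rng(0)
n_sounds = 40

# per-sound attribute ratings (assumed z-scored)
concreteness = rng.standard_normal(n_sounds)
effectuality = rng.standard_normal(n_sounds)
spatial_scale = rng.standard_normal(n_sounds)

# simulated response: modulated by concreteness plus noise
response = 0.8 * concreteness + 0.2 * rng.standard_normal(n_sounds)

X = np.column_stack([np.ones(n_sounds), concreteness, effectuality, spatial_scale])
betas, *_ = np.linalg.lstsq(X, response, rcond=None)

for name, b in zip(["intercept", "concreteness", "effectuality", "spatial scale"], betas):
    print(f"{name:14s} beta = {b:+.3f}")
```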


2019 ◽  
Vol 19 (10) ◽  
pp. 309b
Author(s):  
Samantha D Lopez ◽  
Ashley M Ercolino ◽  
Joseph Schmidt

2015 ◽  
Vol 144 (2) ◽  
pp. 264-273 ◽  
Author(s):  
Clayton Hickey ◽  
Daniel Kaiser ◽  
Marius V. Peelen

2021 ◽  
Vol 21 (9) ◽  
pp. 2985
Author(s):  
Alexander N. Minos ◽  
Kayla M. Ferko ◽  
Stefan Köhler

2019 ◽  
Author(s):  
Yuri Markov ◽  
Igor Utochkin ◽  
Timothy F. Brady

When storing multiple objects in visual working memory, observers sometimes misattribute perceived features to incorrect locations or objects. These misattributions are called binding errors (or swaps) and have previously been demonstrated mostly with simple objects whose features are arbitrarily chosen and easy to encode independently, such as colors and orientations. Here, we tested whether similar swaps can occur with real-world objects, where the connection between features is meaningful rather than arbitrary. In Experiments 1 and 2, observers were simultaneously shown four items from two object categories. Within a category, the two exemplars could be presented in either the same or different states (e.g., open/closed; full/empty). After a delay, both exemplars from one of the categories were probed, and participants had to recognize which exemplar went with which state. We found good memory for state information and exemplar information on their own, but a significant memory decrement for exemplar-state combinations, suggesting that binding was difficult for observers and “swap” errors occurred even for meaningful real-world objects. In Experiment 3, we used the same tasks, but on half of the trials, the locations of the exemplars were swapped at test. We found that participants ascribed incorrect states to exemplars more frequently when the locations of exemplars were swapped. We conclude that the internal features of real-world objects are not perfectly bound in working memory, and that location updates impair the object representation. Overall, we provide evidence that even real-world objects are not stored in an entirely unitized format in working memory.
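A minimal scoring sketch for the exemplar-state design described above, using invented trial records: exemplar memory, state memory, and exemplar-state binding accuracy are computed separately, so a binding decrement shows up as combination accuracy falling below the single-feature scores. The field names and data are hypothetical, not the authors' materials.

```python
# Sketch: per-trial records of whether the correct exemplars, the correct
# states, and the correct exemplar-state pairings were recognized.
trials = [
    {"exemplar_correct": True,  "state_correct": True,  "binding_correct": True},
    {"exemplar_correct": True,  "state_correct": True,  "binding_correct": False},  # swap error
    {"exemplar_correct": True,  "state_correct": False, "binding_correct": False},
    {"exemplar_correct": True,  "state_correct": True,  "binding_correct": True},
]

def accuracy(key):
    return sum(t[key] for t in trials) / len(trials)

exemplar_acc = accuracy("exemplar_correct")
state_acc = accuracy("state_correct")
binding_acc = accuracy("binding_correct")

# A binding decrement: combination accuracy falls below what exemplar and
# state memory would predict on their own.
print(f"exemplar memory:        {exemplar_acc:.2f}")
print(f"state memory:           {state_acc:.2f}")
print(f"exemplar-state binding: {binding_acc:.2f}")
```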

