Perceptual Categorization: Recently Published Documents

Total documents: 167 (last five years: 19)
H-index: 26 (last five years: 0)

Author(s): Emma L. Morgan, Mark K. Johansen

Abstract: Making property inferences for category instances is important and has been studied in two largely separate areas: categorical induction and perceptual categorization. Categorical induction has a corpus of well-established effects using complex, real-world categories; however, the representational basis of these effects is unclear. In contrast, the perceptual categorization paradigm has fostered the assessment of well-specified representation models because of its controlled stimuli and categories. In categorical induction, evaluations of premise typicality effects (stronger attribute generalization from typical category instances than from atypical ones) have tried to control the similarity between instances so as to keep them distinct from premise–conclusion similarity effects (stronger generalization with greater similarity). However, the extent to which similarity has actually been controlled is unclear for these complex stimuli. Our research embedded analogues of categorical induction effects, notably premise typicality and premise–conclusion similarity, in perceptual categories in an attempt to clarify the category representation underlying feature inference. These experiments controlled similarity between instances using overlap of a small number of constrained features. Participants made inferences for test cases using displayed sets of category instances. The results showed typicality effects and premise–conclusion similarity effects, but no evidence of premise typicality effects (i.e., no preference for generalizing features from typical over atypical category instances once similarity was controlled), with significant Bayesian support for the null. Given that typicality effects occurred here, and occur widely in the perceptual categorization paradigm, why was premise typicality absent? We discuss possible reasons. For attribute inference, is premise typicality distinct from instance similarity? These initial results suggest not.
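In this context, "Bayesian support for the null" means a Bayes factor favoring no difference between generalization from typical and atypical premise instances. A minimal sketch of one common way to obtain such a factor, using the pingouin package on hypothetical rating vectors (the data, group sizes, and variable names are invented for illustration, not taken from the study):

```python
import numpy as np
import pingouin as pg

rng = np.random.default_rng(0)
# Hypothetical generalization ratings for features inferred from
# typical vs. atypical premise instances (similarity controlled).
typical = rng.normal(5.0, 1.0, size=40)
atypical = rng.normal(5.0, 1.0, size=40)

res = pg.ttest(typical, atypical, paired=False)
bf10 = float(res["BF10"].iloc[0])   # evidence for a difference
print(f"BF10 = {bf10:.3f}, BF01 = {1 / bf10:.3f}")
# By a common convention, BF01 > 3 is taken as substantial support
# for the null (here, no premise typicality effect).
```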


2021
Author(s): Chi Chen, Freddy Trinh, Nicol Harper, Livia de Hoz

Abstract: As we interact with our surroundings, we encounter the same or similar objects from different perspectives and are compelled to generalize. For example, we recognize dog barks as a distinct class of sound despite the variety of individual barks. While we have some understanding of how generalization operates along a single stimulus dimension, such as frequency or color, natural stimuli are identifiable by a combination of dimensions, so measuring the interaction across stimulus dimensions is essential to understanding perception. For example, when identifying a sound, does our brain focus on a specific dimension or on a combination, such as its frequency and duration? Furthermore, does the relative relevance of each dimension reflect its contribution to the natural sensory environment? Using a two-dimensional discrimination task in mice, we tested untrained generalization across several pairs of auditory dimensions. We uncovered a perceptual hierarchy over the tested dimensions that was dominated by the sound's spectral composition. A model tuned to the predictability inherent in natural sounds best explained the behavioral results, suggesting that the perceptual hierarchy parallels the predictive content of natural sounds.
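One standard way to formalize such a dimensional hierarchy is a Shepard-style generalization gradient with unequal attention weights on the dimensions; the sketch below is a toy illustration with invented weights, not the predictability-tuned model the study fitted:

```python
import numpy as np

def generalization(d_freq, d_dur, w_freq=0.8, w_dur=0.2, c=2.0):
    """Similarity decays exponentially with the attention-weighted
    city-block distance between two sounds. A larger weight on the
    frequency dimension encodes spectral dominance in the hierarchy."""
    return np.exp(-c * (w_freq * d_freq + w_dur * d_dur))

# Equal-sized physical changes along each dimension: the frequency
# change hurts generalization far more when frequency dominates.
print(generalization(d_freq=1.0, d_dur=0.0))  # ~0.20
print(generalization(d_freq=0.0, d_dur=1.0))  # ~0.67
```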


PeerJ, 2021, Vol. 9, e10990
Author(s): Jonathan W. M. Engelberg, Jay W. Schwartz, Harold Gouzoules

Screams occur across taxonomically widespread species, typically in antipredator situations, and are strikingly similar acoustically. In nonhuman primates, however, they have taken on acoustically varied forms associated with more contextually complex functions related to agonistic recruitment. Humans scream in an even broader range of contexts, but the extent to which acoustic variation allows listeners to perceive different emotional meanings remains unknown. We investigated how listeners responded to 30 contextually diverse human screams on six different emotion prompts, and how selected acoustic cues predicted these responses. We found that acoustic variation in screams was associated with the perception of different emotions from these calls. Emotion ratings generally fell along two dimensions: one contrasting perceived anger, frustration, and pain with surprise and happiness, roughly associated with call duration and roughness; and one related to perceived fear, associated with call fundamental frequency. Listeners were more likely to rate screams highly on emotion prompts matching the source context, suggesting that some screams conveyed information about emotional context. Notably, however, screams from happiness contexts (n = 11) more often yielded higher ratings of fear. We discuss the implications of these findings for the role and evolution of nonlinguistic vocalizations in human communication, including how the expanded diversity of calls such as human screams might represent a derived function of language.
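The two dimensions described here are the kind of structure a principal component analysis of the rating matrix would expose (whether the authors used PCA or another dimensional technique is not stated in this abstract). A hedged sketch with a random stand-in for a 30-screams-by-6-prompts rating matrix; none of these numbers come from the study:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Stand-in ratings: 30 screams rated 1-7 on six emotion prompts
# (anger, frustration, pain, surprise, happiness, fear).
ratings = rng.uniform(1, 7, size=(30, 6))

pca = PCA(n_components=2)              # PCA centers the data internally
scores = pca.fit_transform(ratings)    # each scream's position on the two axes
print(pca.explained_variance_ratio_)   # variance each dimension captures
print(pca.components_)                 # loadings: which emotions anchor each axis
```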


2020, Vol. 63 (11), pp. 3659–3679
Author(s): Julie D. Anderson, Stacy A. Wagovich, Levi Ofoe

Purpose: The purpose of this study was to examine cognitive flexibility for semantic and perceptual information in preschool children who stutter (CWS) and who do not stutter (CWNS). Method: Participants were 44 CWS and 44 CWNS between the ages of 3;0 and 5;11 (years;months). Cognitive flexibility was measured using semantic and perceptual categorization tasks. In each task, children were required to match a target object with two different semantic or perceptual associates. The main dependent variables were reaction time and accuracy. Results: CWS and CWNS shifted from one semantic or perceptual representation to another with similar accuracy, but the CWS did so significantly more slowly. Both groups of children had more difficulty switching between perceptual representations than between semantic ones. Conclusion: CWS are less efficient (slower), though not less accurate, than CWNS at switching between different representations in both the verbal and nonverbal domains.


2020
Author(s): Marissa Yetter, Sophia Robert, Grace Mammarella, Barry Richmond, Mark A. G. Eldridge, ...

Abstract: The current experiment investigated the extent to which perceptual categorization of animacy, i.e., the ability to discriminate animate and inanimate objects, is facilitated by image-based features that distinguish the two object categories. We show that, with nominal training, naïve macaques could classify a trial-unique set of 1,000 novel images with high accuracy. To test whether image-based features that naturally differ between animate and inanimate objects, such as curvilinear and rectilinear information, contribute to the monkeys' accuracy, we created synthetic images using an algorithm that distorted the global shape of the original animate/inanimate images while maintaining their intermediate features (Portilla & Simoncelli, 2000). Performance on the synthesized images was significantly above chance and was predicted by the amount of curvilinear information in the images. Our results demonstrate that, without training, macaques can use an intermediate image feature, curvilinearity, to facilitate their categorization of animate and inanimate objects.
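The reported link between curvilinear information and accuracy amounts, in effect, to a regression of per-image performance on a curvilinearity score. A minimal sketch under that assumption, with invented scores and responses (the study's actual analysis and effect size are not specified in this abstract):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
# Invented per-image data: a curvilinear-information score for each
# synthesized image and whether it was classified correctly.
curvilinearity = rng.uniform(0.0, 1.0, size=500)
p_correct = 1.0 / (1.0 + np.exp(-(2.5 * curvilinearity - 0.5)))  # assumed link
correct = rng.binomial(1, p_correct)

model = LogisticRegression().fit(curvilinearity.reshape(-1, 1), correct)
print(model.coef_)  # positive weight: more curvilinear info, higher accuracy
```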


2020, pp. 3–13
Author(s): C.E.R. Edmunds, A.B. Inkster, P.M. Jones, F. Milton, A.J. Wills

Analogical transfer has previously been reported to occur between rule-based, but not information-integration, perceptual category structures (Casale, Roeder, & Ashby, 2012). The current study investigated whether a similar pattern of results would be observed in cross-modality transfer. Participants were trained on either a rule-based or an information-integration structure using visual stimuli. They were then tested on auditory stimuli that had the same underlying abstract category structure. Transfer performance was assessed relative to a control group that did not receive training on the visual stimuli. No cross-modality transfer was found, irrespective of the category structure employed.
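For readers new to the distinction, a rule-based (RB) structure is optimally separated by a verbalizable single-dimension rule, while an information-integration (II) structure requires combining both dimensions before the decision. A toy sketch of the two abstract structures (stimulus ranges and boundary placements are illustrative, not the study's stimuli):

```python
import numpy as np

rng = np.random.default_rng(3)

def rule_based(n=100):
    """Rule-based: one dimension alone separates the categories,
    so the optimal rule is verbalizable ('A if x > 0.5')."""
    stims = rng.uniform(0.0, 1.0, size=(n, 2))
    labels = (stims[:, 0] > 0.5).astype(int)
    return stims, labels

def information_integration(n=100):
    """Information-integration: the optimal boundary is diagonal,
    so accurate responding must integrate both dimensions."""
    stims = rng.uniform(0.0, 1.0, size=(n, 2))
    labels = (stims[:, 0] > stims[:, 1]).astype(int)
    return stims, labels
```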


2020, pp. 002383092094324
Author(s): Hyunju Chung, Benjamin Munson, Jan Edwards

The present study examined the center and size of naïve adult listeners' vowel perceptual spaces (VPSs) in relation to listener language (LL) and talker age (TA). Adult listeners of three different first languages (American English, Greek, and Korean) categorized and rated the goodness of vowels produced by 2-year-old, 5-year-old, and adult speakers of those languages, as well as by speakers of Cantonese and Japanese. The center (i.e., the mean first and second formant frequencies, F1 and F2) and size (i.e., the area in the F1/F2 space) of the VPSs for tokens categorized as /a/, /i/, or /u/ were calculated for each LL and TA group. All center and size calculations were weighted by the goodness rating of each stimulus. The F1 and F2 values of the vowel category (VC) centers differed significantly by LL and TA. These effects were qualitatively different for the three vowel categories: English listeners had different /a/ and /u/ centers than Greek and Korean listeners. The size of the VPSs did not differ significantly by LL, but did differ by TA and VC: Greek and Korean listeners had larger vowel spaces when perceiving vowels produced by 2-year-olds than by 5-year-olds or adults, and English listeners had larger vowel spaces for /a/ than for /i/ or /u/. The findings indicate that listeners' vowel perceptual categories varied with the nature of their native vowel system and were sensitive to TA.
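The center and size measures described here reduce to a goodness-weighted centroid and an area in F1/F2 space. A minimal sketch, assuming a convex hull as the area measure (whether the study computed area exactly this way, and how goodness weighted the area term, are assumptions here):

```python
import numpy as np
from scipy.spatial import ConvexHull

def vps_center_and_size(f1, f2, goodness):
    """Goodness-weighted center and hull area of one vowel category's
    perceptual space in the F1/F2 plane."""
    w = np.asarray(goodness, dtype=float)
    center = (np.average(f1, weights=w), np.average(f2, weights=w))
    area = ConvexHull(np.column_stack([f1, f2])).volume  # in 2-D, volume == area
    return center, area

# Illustrative tokens categorized as /a/, with goodness ratings 1-7.
f1 = np.array([850.0, 800.0, 900.0, 780.0])      # Hz
f2 = np.array([1200.0, 1300.0, 1250.0, 1150.0])  # Hz
print(vps_center_and_size(f1, f2, goodness=[6, 5, 7, 4]))
```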

