Comparing unfamiliar voice and face identity perception using identity sorting tasks
Identity sorting tasks, in which participants sort a set of naturally varying stimuli of (usually two) identities into perceived identities, have recently gained popularity in voice and face processing research. For both modalities, the results of these sorting tasks show striking similarities: participants who are unfamiliar with the identities usually struggle to perceive identity accurately from these variable stimuli. They tend to perceive multiple stimuli of the same identity as different people and thus fail to “tell people together”. These similarities in the reported results may suggest that modality-general mechanisms underpin the completion of sorting tasks. In the current study, participants completed a voice sorting and a face sorting task. Taking an individual differences approach, we asked whether participants’ performance on voice and face sorting of unfamiliar identities is related. Participants additionally completed a voice discrimination task (Bangor Voice Matching Test) and a face discrimination task (Glasgow Face Matching Test), allowing us to further test whether performance on sorting tasks is related to performance on explicit identity discrimination tasks. Performance on the voice and face sorting tasks was correlated, suggesting that common modality-general processes underpin both. However, these do not straightforwardly appear to be the same processes that support identity discrimination: no significant correlations were found between sorting and discrimination performance, with the exception of significant relationships between performance on same-identity trials and “telling people together” for both voices and faces. Overall, the reported relationships were relatively weak, suggesting the presence of additional modality-specific and task-specific processes.