similarity ratings
Recently Published Documents

TOTAL DOCUMENTS: 98 (FIVE YEARS: 22)
H-INDEX: 18 (FIVE YEARS: 1)

2021
Author(s): Peter Donhauser, Denise Klein

Here we describe a JavaScript toolbox for running online rating studies with auditory material. Its main feature is that audio samples are associated with visual tokens on the screen that control audio playback and can be manipulated depending on the type of rating. This allows the collection of single- and multi-dimensional feature ratings, as well as categorical and similarity ratings. The toolbox (github.com/pwdonh/audio_tokens) can be used via a plugin for the widely used jsPsych library, or with plain JavaScript for custom applications. We expect the toolbox to be useful in psychological research on speech and music perception, as well as for the curation and annotation of datasets in machine learning.


2021
Author(s): Kira Wegner-Clemens, George Law Malcolm, Sarah Shomstein

Semantic information about objects, events, and scenes influences how humans perceive, interact with, and navigate the world. Most evidence in support of semantic influence on cognition has been garnered from research conducted with an isolated modality (e.g., vision, audition). However, the influence of semantic information has not yet been extensively studied in multisensory environments, potentially because of the difficulty of quantifying semantic relatedness. Past studies have primarily relied either on a simplified binary classification of semantic relatedness based on category, or on algorithmic values based on text corpora rather than human perceptual experience and judgment. With the aim of accelerating research into multisensory semantics, we created a constrained audiovisual stimulus set and derived similarity ratings between items within three categories (animals, instruments, household items). A set of 140 participants provided similarity judgments between sounds and images. Participants either heard a sound (e.g., a meow) and judged which of two pictures of objects (e.g., a picture of a dog and a duck) it was more similar to, or saw a picture (e.g., a picture of a duck) and selected which of two sounds it was more similar to (e.g., a bark or a meow). Judgments were then used to calculate similarity values for any given cross-modal pair. The derived and reported similarity judgments reflect a range of semantic similarities across the three categories and items, and highlight similarities and differences among similarity judgments between modalities. We make the derived similarity values available in a database format to the research community to be used as a measure of semantic relatedness in cognitive psychology experiments, enabling more robust studies of semantics in audiovisual environments.
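The step of turning forced-choice judgments into pairwise similarity values can be illustrated as a choice-proportion estimate: how often an item was picked as "more similar" out of the times it was offered. This is a minimal sketch, not the authors' actual pipeline; the `similarity_from_choices` helper and the toy trials are hypothetical.

```python
from collections import defaultdict

def similarity_from_choices(trials):
    """Derive cross-modal similarity scores from two-alternative
    forced-choice trials.

    Each trial is (probe, chosen, rejected): the probe item (e.g. a
    sound), the image judged more similar, and the image not chosen.
    The similarity of a (probe, item) pair is estimated as the
    proportion of trials in which that item was chosen when offered.
    """
    chosen_counts = defaultdict(int)
    offered_counts = defaultdict(int)
    for probe, chosen, rejected in trials:
        chosen_counts[(probe, chosen)] += 1
        offered_counts[(probe, chosen)] += 1
        offered_counts[(probe, rejected)] += 1
    return {pair: chosen_counts[pair] / n
            for pair, n in offered_counts.items()}

# Toy data: a meow judged against pictures of a cat, dog, and duck
trials = [
    ("meow", "cat", "duck"),
    ("meow", "cat", "dog"),
    ("meow", "dog", "cat"),
]
sims = similarity_from_choices(trials)
```

With more trials per pair, these proportions stabilize into the kind of graded similarity values the database reports.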


2021
Author(s): Zhuohan Jiang, D. Merika W. Sanders, Rosemary Cowell

We collected visual and semantic similarity norms for a set of photographic images comprising 120 recognizable objects/animals and 120 indoor/outdoor scenes. Human observers rated the similarity of pairs of images within four categories of stimulus ‒ inanimate objects, animals, indoor scenes and outdoor scenes ‒ via Amazon's Mechanical Turk. We performed multi-dimensional scaling (MDS) on the collected similarity ratings to visualize the perceived similarity for each image category, for both visual and semantic ratings. The MDS solutions revealed the expected similarity relationships between images within each category, along with intuitively sensible differences between visual and semantic similarity relationships for each category. Stress tests performed on the MDS solutions indicated that the MDS analyses captured meaningful levels of variance in the similarity data. These stimuli, associated norms and naming data are made publicly available, and should provide a useful resource for researchers of vision, memory and conceptual knowledge wishing to run experiments using well-parameterized stimulus sets.
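As an illustration of the scaling step, a minimal classical (Torgerson) MDS can be written in a few lines of NumPy. The exact MDS variant and software the authors used are not stated here, so this is a simplified stand-in, and the toy dissimilarity matrix is hypothetical.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed n items in k dimensions from
    a symmetric n x n dissimilarity matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # keep the top-k dimensions
    L = np.sqrt(np.clip(w[idx], 0, None))
    return V[:, idx] * L                  # n x k coordinates

# Three items whose rated dissimilarities place them on a line: 0-1-2
D = np.array([[0., 1., 2.],
              [1., 0., 1.],
              [2., 1., 0.]])
X = classical_mds(D, k=1)
```

Because this toy matrix is exactly Euclidean in one dimension, the recovered coordinates reproduce the input distances; with real ratings, the stress of the solution measures how much variance the embedding captures.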


2021
Author(s): Louis Martí, Shengyi Wu, Steven T. Piantadosi, Celeste Kidd

Many social and legal conflicts come down to differences in semantics. Yet semantic variation between individuals, and people's awareness of this variation, have been relatively neglected by experimental psychology. Here, across two experiments, we quantify the amount of agreement and disagreement between ordinary semantic concepts in the population, as well as people's meta-cognitive awareness of these differences. We collect similarity ratings and feature judgments, and analyze them using a non-parametric clustering scheme with an ecological statistical estimator to infer the number of different meanings of the same word that are present in the population. We find that typically at least ten to twenty variants of meanings exist for even common nouns, but that people are unaware of this variation. Instead, people exhibit a strong bias to erroneously believe that other people share their particular semantics, pointing to one factor that likely interferes with political and social discourse.
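The ecological estimation step can be illustrated with Chao1, a standard non-parametric species-richness estimator that extrapolates from rarely observed classes; whether this is the exact estimator the authors used is an assumption, and the toy cluster labels are hypothetical.

```python
from collections import Counter

def chao1(cluster_labels):
    """Chao1 richness estimator: a lower-bound estimate of the total
    number of classes (here, word meanings) in the population, given
    the meaning cluster assigned to each sampled participant.

    S_est = S_obs + f1^2 / (2 * f2), where f1 and f2 are the numbers
    of clusters observed exactly once and exactly twice.
    """
    counts = Counter(cluster_labels)
    s_obs = len(counts)
    f1 = sum(1 for c in counts.values() if c == 1)
    f2 = sum(1 for c in counts.values() if c == 2)
    if f2 == 0:
        # bias-corrected variant when no doubletons are observed
        return s_obs + f1 * (f1 - 1) / 2
    return s_obs + f1 ** 2 / (2 * f2)

# Toy sample: 4 observed meaning clusters; two singletons, one doubleton
labels = ["a", "a", "a", "b", "b", "c", "d"]
```

The intuition is that many singleton clusters in a sample imply further unseen meanings in the wider population, which is how a modest sample can support an estimate of ten to twenty variants.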


2021
Vol 25 (1), pp. 22-38
Author(s): David Allen, Trevor Holster

A robust finding in psycholinguistics is that cognates and loanwords, which are words that typically share some degree of form and meaning across languages, provide the second language learner with benefits in language use when compared to words that do not share form and meaning across languages. This cognate effect has been shown to exist for Japanese learners of English; that is, words such as table are processed faster and more accurately in English because they have a loanword equivalent in Japanese (i.e., テーブル /te:buru/ ‘table’). Previous studies have also shown that the degree of phonological and semantic similarity, as measured on a numerical scale from ‘completely different’ to ‘identical’, also influences processing. However, there has been relatively little appraisal of such cross-linguistic similarity ratings themselves. Therefore, the present study investigated the structure of the similarity ratings using Rasch analysis, which is an analytic approach frequently used in the design and validation of language assessments. The findings showed that a 4-point scale may be optimal for phonological similarity ratings of cognates and a 2-point scale may be most appropriate for semantic similarity ratings. Furthermore, this study reveals that while a few raters and items misfitted the Rasch model, there was substantial agreement in ratings, especially for semantic similarity. The results validate the ratings for use in research and demonstrate the utility of Rasch analysis in the design and validation of research instruments in psychology.
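Rasch rating-scale analysis rests on a simple probability model relating rater severity, item difficulty, and shared category thresholds. Below is a sketch of the Andrich rating-scale formulation; the function name and the parameter values in the example are hypothetical, not taken from the study.

```python
import math

def rating_scale_prob(theta, item_difficulty, thresholds, category):
    """Andrich rating-scale model: probability that a rater with
    severity `theta` assigns `category` (0..m) to an item, given the
    item's difficulty and shared category thresholds tau_1..tau_m.
    """
    def numer(k):
        # empty sum for k = 0 gives exp(0) = 1
        return math.exp(sum(theta - item_difficulty - thresholds[j]
                            for j in range(k)))
    denom = sum(numer(k) for k in range(len(thresholds) + 1))
    return numer(category) / denom

# A 3-category (0/1/2) scale, i.e. two thresholds
ps = [rating_scale_prob(0.5, 0.0, [-1.0, 1.0], k) for k in range(3)]
```

Fitting such a model to the similarity ratings is what lets Rasch analysis diagnose whether adjacent scale categories are actually distinguished by raters, which is how the 4-point and 2-point recommendations above arise.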


2021
Author(s): David White, Tanya Wayne, Victor Perrone de Lima Varela

Accurately recognising faces is fundamental to human social interaction. In recent years it has become clear that people's accuracy differs markedly depending on their familiarity with a face and their individual skill, but the cognitive and neural bases of these accuracy differences are not understood. We examined the cognitive representations underlying these accuracy differences by measuring similarity ratings of natural facial image variation. Using image averaging, and inspired by the computation of analysis of variance (ANOVA), we partitioned image variation into differences between faces (between-identity variation) and differences between photos of the same face (within-identity variation). Contrary to prevailing accounts of human face recognition and perceptual learning, we found that modulation of within-identity variation – rather than between-identity variation – was associated with high accuracy. First, similarity of within-identity variation was compressed for familiar faces relative to unfamiliar faces. Second, viewers who are extremely accurate in face recognition – 'super-recognisers' – showed enhanced compression of within-identity variation that was most marked for familiar faces. We also present a computational analysis showing that cognitive transformations of between- and within-identity variation make separable contributions to perceptual expertise in unfamiliar and familiar face identification, respectively. We conclude that inter- and intra-individual accuracy differences primarily arise from differences in the representation of familiar face image variation.
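The ANOVA-inspired partition can be sketched on image feature vectors grouped by identity: total variation splits exactly into a between-identity and a within-identity sum of squares. This is a toy illustration on numeric vectors, not the authors' image-averaging pipeline; `partition_variation` and its inputs are hypothetical.

```python
import numpy as np

def partition_variation(embeddings_by_identity):
    """ANOVA-style partition of image variation into between-identity
    and within-identity sums of squares, computed over per-image
    feature vectors grouped by face identity."""
    all_vecs = np.vstack([v for vs in embeddings_by_identity.values()
                          for v in vs])
    grand_mean = all_vecs.mean(axis=0)
    ss_between = ss_within = 0.0
    for vecs in embeddings_by_identity.values():
        vecs = np.asarray(vecs, dtype=float)
        mean = vecs.mean(axis=0)          # identity "average image"
        ss_between += len(vecs) * np.sum((mean - grand_mean) ** 2)
        ss_within += np.sum((vecs - mean) ** 2)
    return ss_between, ss_within

# Two hypothetical identities, two photos each, 2-D features
sb, sw = partition_variation({
    "A": [[0., 0.], [2., 0.]],
    "B": [[10., 0.], [12., 0.]],
})
```

Compression of within-identity variation for familiar faces corresponds, in this framing, to a shrinking within-identity term relative to the between-identity term.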


Author(s): Eugene Poh, Naser Al-Fawakari, Rachel Tam, Jordan A. Taylor, Samuel D. McDougle

To generate adaptive movements, we must generalize what we have previously learned to novel situations. The generalization of learned movements has typically been framed as a consequence of neural tuning functions that overlap for similar movement kinematics. However, as is true in many domains of human behavior, situations that require generalization can also be framed as inference problems. Here, we attempt to broaden the scope of theories about motor generalization, hypothesizing that part of the typical motor generalization function can be characterized as a consequence of top-down decisions about different movement contexts. We tested this proposal by having participants make explicit similarity ratings over traditional contextual dimensions (movement directions) and abstract contextual dimensions (target shape), and perform a visuomotor adaptation generalization task where trials varied over those dimensions. We found support for our predictions across five experiments, which revealed a tight link between subjective similarity and motor generalization. Our findings suggest that the generalization of learned motor behaviors is influenced by both low-level kinematic features and high-level inferences.
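The tuning-function account mentioned above is commonly modeled as a Gaussian generalization kernel over the angular distance between a probe direction and the trained direction. A minimal sketch of that standard kernel follows; the function name and the width parameter are assumptions, not the authors' fitted model.

```python
import math

def generalization(probe_deg, trained_deg, sigma=30.0):
    """Gaussian generalization kernel over angular distance:
    predicted transfer of adaptation from a trained reach direction
    to a probe direction, with tuning width `sigma` in degrees."""
    # wrap the angular difference into [-180, 180)
    d = (probe_deg - trained_deg + 180) % 360 - 180
    return math.exp(-d ** 2 / (2 * sigma ** 2))
```

Under the inference framing in the abstract, rated contextual similarity would modulate this purely kinematic curve rather than the curve standing alone.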


2021
Vol 9, pp. 1425-1441
Author(s): Juri Opitz, Angel Daza, Anette Frank

Several metrics have been proposed for assessing the similarity of (abstract) meaning representations (AMRs), but little is known about how they relate to human similarity ratings. Moreover, the current metrics have complementary strengths and weaknesses: Some emphasize speed, while others make the alignment of graph structures explicit, at the price of a costly alignment step. In this work we propose new Weisfeiler-Leman AMR similarity metrics that unify the strengths of previous metrics, while mitigating their weaknesses. Specifically, our new metrics are able to match contextualized substructures and induce n:m alignments between their nodes. Furthermore, we introduce a Benchmark for AMR Metrics based on Overt Objectives (Bamboo), the first benchmark to support empirical assessment of graph-based MR similarity metrics. Bamboo maximizes the interpretability of results by defining multiple overt objectives that range from sentence similarity objectives to stress tests that probe a metric’s robustness against meaning-altering and meaning-preserving graph transformations. We show the benefits of Bamboo by profiling previous metrics and our own metrics. Results indicate that our novel metrics may serve as a strong baseline for future work.
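The Weisfeiler-Leman relabeling at the core of such metrics can be sketched on toy graphs: each node's label is repeatedly combined with its neighbours' labels, and the resulting label multisets are compared. This is a generic WL feature-overlap kernel, not the authors' AMR metric; the graph encoding (adjacency dict plus label dict) is an assumption.

```python
from collections import Counter

def wl_features(adj, labels, iterations=2):
    """Weisfeiler-Leman relabeling: repeatedly replace each node's
    label with (own label, sorted multiset of neighbour labels), and
    collect all intermediate labels as a feature multiset."""
    feats = Counter(labels.values())
    for _ in range(iterations):
        labels = {
            v: (labels[v], tuple(sorted(labels[u] for u in adj[v])))
            for v in adj
        }
        feats.update(labels.values())
    return feats

def wl_similarity(g1, g2, iterations=2):
    """Normalized overlap of WL feature multisets: 1.0 for graphs
    that WL cannot distinguish, approaching 0.0 for disjoint ones."""
    f1 = wl_features(*g1, iterations)
    f2 = wl_features(*g2, iterations)
    overlap = sum((f1 & f2).values())
    return 2 * overlap / (sum(f1.values()) + sum(f2.values()))

# Toy graphs: a labeled 3-node path and a labeled 2-node pair
g_path = ({0: [1], 1: [0, 2], 2: [1]}, {0: "a", 1: "b", 2: "a"})
g_pair = ({0: [1], 1: [0]}, {0: "a", 1: "b"})
```

Matching these iteratively contextualized labels is what lets WL-style metrics compare substructures without an expensive explicit graph alignment.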


2020
pp. 030573562097103
Author(s): Matthew Moritz, Matthew Heard, Hyun-Woong Kim, Yune S Lee

Despite the long history of music psychology, rhythm similarity perception remains largely unexplored. Several studies suggest that edit-distance—the minimum number of notational changes required to transform one rhythm into another—predicts similarity judgments. However, the ecological validity of edit-distance remains elusive. We investigated whether the edit-distance model can predict perceptual similarity between rhythms that also differed in a fundamental characteristic of music—tempo. Eighteen participants rated the similarity between a series of rhythms presented in a pairwise fashion. The edit-distance of these rhythms varied from 1 to 4, and tempo was set at either 90 or 150 beats per minute (BPM). A test of congruence among distance matrices (CADM) indicated significant inter-participant reliability of the ratings, and non-metric multidimensional scaling (nMDS) showed that the ratings clustered by both tempo and by whether rhythms shared an identical onset pattern, a novel effect we termed rhythm primacy. Finally, Mantel tests revealed significant correlations of edit-distance with similarity ratings for both within- and between-tempo rhythm pairs. Our findings corroborate that edit-distance predicts rhythm similarity and demonstrate that it accounts for the similarity of rhythms that differ markedly in tempo. This suggests that rhythmic gestalt is invariant to differences in tempo.
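Edit-distance between rhythms is typically operationalized as Levenshtein distance over notated event sequences. A minimal sketch follows; encoding rhythms as strings of onsets ("x") and rests (".") is an illustrative assumption, not the study's exact notation.

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum number of insertions, deletions,
    and substitutions needed to turn sequence a into sequence b,
    computed with a rolling one-row dynamic-programming table."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (x != y),  # substitution (free on match)
            ))
        prev = curr
    return prev[-1]

# Two 8-step rhythms differing by moving one onset: distance 2
d = edit_distance("x.x.x...", "x.xx....")
```

Computing this distance for every rhythm pair yields the model matrix that the Mantel tests above correlate with the rated similarity matrix.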

