stimulus set
Recently Published Documents

TOTAL DOCUMENTS: 151 (five years: 39)
H-INDEX: 21 (five years: 2)

2021, pp. 030573562110506
Author(s): Clémence Nineuil, Delphine Dellacherie, Séverine Samson

The aim of this study was to obtain French affective norms for the film music stimulus set (FMSS). This data set consists of a relatively homogeneous series of musical stimuli made up of film music excerpts known to trigger strong emotions. The 97 musical excerpts were judged by 194 native French participants using a simplified normative procedure to collect valence and arousal ratings. This normalization will (1) provide researchers with standardized affective musical stimuli for use with a French population, (2) enable the investigation of individual listeners' differing emotional judgments, and (3) allow exploration of how cultural differences affect the ratings of musical stimuli. Our results, in line with those obtained in Finland and Spain, demonstrate the FMSS to be robust and interculturally valid within Western Europe. Age, sex, education, and musical training had no detectable effects on emotional judgments. In conclusion, this study provides the scientific community with a standardized stimulus set of musical excerpts whose emotional valence and arousal have been validated with a sample of the French population.
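Normative procedures of this kind collapse many individual valence and arousal judgments into per-excerpt summary statistics. The following is a minimal sketch of that aggregation step, with hypothetical ratings and an assumed 1–9 scale, not the actual FMSS data or pipeline:

```python
from statistics import mean, stdev

# Hypothetical ratings: excerpt id -> list of (valence, arousal) judgments.
# The real FMSS norms come from 194 participants; three are shown here.
ratings = {
    "excerpt_01": [(7, 6), (8, 7), (6, 5)],
    "excerpt_02": [(2, 8), (3, 7), (2, 9)],
}

def normative_stats(judgments):
    """Collapse individual judgments into per-excerpt norms."""
    valence = [v for v, _ in judgments]
    arousal = [a for _, a in judgments]
    return {
        "valence_mean": round(mean(valence), 2),
        "valence_sd": round(stdev(valence), 2),
        "arousal_mean": round(mean(arousal), 2),
        "arousal_sd": round(stdev(arousal), 2),
    }

norms = {name: normative_stats(j) for name, j in ratings.items()}
```

A published norm table then typically lists one such row per excerpt, which is what lets later studies select stimuli by target valence and arousal.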


2021, Vol 20 (4)
Author(s): Dax Ovid, Mallory M. Rice, Joshua Vargas Luna, Karen Tabayoyong, Parinaz Lajevardi, ...

Students were asked to evaluate a stimulus set of previously recorded Instructor Talk quotes as positive or negative, and the study also investigated whether students could recall noncontent instructor language. Overall, students' evaluations of Instructor Talk quotes aligned with the researchers' classifications, and most students could recall memories of Instructor Talk.


2021
Author(s): Jan Stupacher, Markus Wrede, Peter Vuust

The experience of groove is defined as a pleasurable state of wanting to move one's body in relation to the pulse of a musical rhythm. Most individuals feel a strong desire to move when listening to music with a moderate amount of rhythmic complexity, whereas low and high amounts of rhythmic complexity decrease the desire to move (Matthews et al., 2019; Witek et al., 2014). Matthews and colleagues (2019) additionally investigated the influence of harmonic complexity on the sensation of groove and found that "wanting to move" ratings were similar for low and moderately complex harmonies but dropped for highly complex harmonies. The present study tests whether these effects of rhythmic and harmonic complexity can be replicated with a subset of 9 stimuli from the original set of 54 used by Matthews and colleagues (2019). In line with previous research by Matthews et al. (2019) and Witek et al. (2014), groove ratings followed an inverted U-shape when plotted against rhythmic complexity: the strongest sensation of groove was reported for patterns with a moderate amount of rhythmic complexity, followed by low and then high rhythmic complexity. The manipulation of harmonic complexity also yielded results similar to those of Matthews et al. (2019): groove ratings were highest for low harmonic complexity, followed by moderate and high harmonic complexity.


2021
Author(s): Marco Gandolfo, Hendrik Naegele, Marius V. Peelen

Boundary extension (BE) is a classical memory illusion in which observers remember more of a scene than was presented. According to predictive accounts, BE reflects the integration of visual input with expectations of what lies beyond the boundaries of a scene. Alternatively, according to normalization accounts, BE reflects one end of a normalization process towards the typically experienced viewing distance of a scene, such that BE and boundary contraction (BC) are equally common. Here, we show that BE and BC depend on depth of field (DOF), as determined by the aperture settings on a camera. Photographs with naturalistic DOF led to the strongest BE across a large stimulus set, while BC was primarily observed for unnaturalistic DOF. The relationship between DOF and BE was confirmed in three controlled experiments that isolated DOF from co-varying factors. In line with predictive accounts, we propose that BE is strongest for scene images that resemble day-to-day visual experience.


Phonetica, 2021, Vol 0 (0)
Author(s): Jason A. Shaw, Shigeto Kawahara

Abstract: Past work investigating the lingual articulation of devoiced vowels in Tokyo Japanese has revealed optional but categorical deletion. Some devoiced vowels retained a full lingual target, just like their voiced counterparts, whereas others showed trajectories best modelled as targetless, i.e., linear interpolation between the surrounding vowels. The current study explored the hypothesis that this probabilistic deletion is modulated by the identity of the surrounding consonants. A new EMA experiment with an extended stimulus set replicates the core finding of Shaw and Kawahara (2018b, "The lingual gesture of devoiced [u] in Japanese", Journal of Phonetics 66, 100–119, https://doi.org/10.1016/j.wocn.2017.09.007) that Japanese devoiced [u] sometimes lacks a tongue body raising gesture. The current results moreover show that the surrounding consonants do indeed affect the probability of tongue dorsum targetlessness. We found that deletion of devoiced vowels is affected by the place of articulation of the preceding consonant: deletion is more likely following a coronal fricative than a labial fricative. Additionally, the manner combination of the flanking consonants, fricative–fricative versus fricative–stop, also has an effect, at least for some speakers; however, unlike the effect of C1 place, the direction of the manner-combination effect varies across speakers, with some deleting more often in fricative–stop environments and others more often in fricative–fricative environments.


2021, Vol 12
Author(s): Keisuke Irie, Shuo Zhao, Kazuhiro Okamoto, Nan Liang

Introduction: The facilitation of a physical response by a described action is called the action-sentence compatibility effect (ACE). It has been verified that physical responses change depending on the temporal phase and grammatical expression of a sentence. However, it is unclear how adverbs and onomatopoeia change motion simulations and subsequent movements.

Methods: The subjects were 35 healthy adults (11 females; mean age 21.3 years). We prepared 20 sentences each expressing actions related to the hands and feet. These were expanded into 80 sentences (stimulus set A) by adding the speed-related words "Slow" or "Fast", and 80 sentences (stimulus set B) by adding the word "Quick" or the onomatopoeia "Satto". Additionally, 20 unnatural sentences were prepared for each stimulus set as pseudo-sentences. A choice reaction (Go/no-go) task was adopted: subjects pressed a button with the right hand only when the presented sentence was correctly understood. Reaction times (RTs) and the number of errors (NoE) were recorded and compared.

Results: A two-way repeated-measures ANOVA on set A showed an interaction (body part × word) for both RTs and NoE. "Hand and Fast" had significantly faster RTs than "Hand and Slow" and "Foot and Fast." Furthermore, "Hand and Fast" had a significantly higher NoE than the other conditions. In set B, main effects were observed for both RTs and NoE: "Hand" and "Satto" had significantly faster RTs than "Foot" and "Quick," respectively. Additionally, an interaction was observed for NoE, with "Foot and Satto" significantly higher than "Hand and Satto" and "Foot and Quick."

Conclusion: In this study, the word "Fast" promoted hand responses, reaffirming the ACE. The onomatopoeia "Satto" also conveys speed of movement, but its effect on comprehension may depend on the body part involved and the attributes of the subject.
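The unit of analysis in a two-way repeated-measures ANOVA like the one reported above is each subject's mean RT per condition cell (body part × word). A minimal stdlib sketch of that aggregation, using invented trial records rather than the authors' data:

```python
from statistics import mean
from collections import defaultdict

# Hypothetical Go trials: (subject, body_part, word, reaction_time_ms).
trials = [
    (1, "hand", "fast", 520), (1, "hand", "fast", 540),
    (1, "hand", "slow", 600), (1, "foot", "fast", 610),
    (1, "foot", "slow", 650), (2, "hand", "fast", 500),
    (2, "hand", "slow", 590), (2, "foot", "fast", 605),
    (2, "foot", "slow", 640),
]

# Group RTs by (subject, body part, word) cell, then average within
# each cell; these per-subject cell means are what the ANOVA compares.
cells = defaultdict(list)
for subj, part, word, rt in trials:
    cells[(subj, part, word)].append(rt)

cell_means = {key: mean(rts) for key, rts in cells.items()}
```

In practice the ANOVA itself would then be run on these cell means with a statistics package (e.g., a repeated-measures ANOVA routine); the sketch only shows the data shape that such a routine expects.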


2021
Author(s): Patrick E. Savage, Yuto Ozaki, Sandra E. Trehub

The original paper's sampling criteria involved selecting lullabies that adults rated as most likely to soothe a baby and non-lullabies rated as least likely to do so. Our analysis shows that lullabies in the stimulus set had systematically higher recording quality than non-lullabies, and this difference in recording quality was substantially larger in effect size than the paper's primary pre-registered effect comparing infant heart rate during lullabies versus non-lullabies (original effect size: d = 0.23). Accordingly, the authors' conclusion that infants relax more in response to unfamiliar foreign lullabies than to non-lullabies may be an artefact of their sampling methods.


2021
Author(s): Tijl Grootswagers, Ivy Zhou, Amanda K Robinson, Martin N Hebart, Thomas A Carlson

The neural basis of object recognition and semantic knowledge has been the focus of a large body of research, but given the high dimensionality of object space, it is challenging to develop an overarching theory of how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is growing interest in using large-scale image databases for neuroimaging experiments. Traditional image databases are based on manually selected object concepts, often with a single image per concept. In contrast, 'big data' stimulus sets typically consist of images that vary significantly in quality and may be biased in content. To address this issue, recent work developed THINGS: a large stimulus set of 1,854 object concepts and 26,107 associated images. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to all concepts and 22,248 images in the THINGS stimulus set. The THINGS-EEG dataset provides neuroimaging recordings for a systematic collection of objects and concepts and can therefore support a wide array of research into visual object processing in the human brain.


2021
Author(s): Emilie Louise Josephs, Haoyun Zhao, Talia Konkle

Near-scale spaces are a key component of our visual experience: whether for work or for leisure, we spend much of our days immersed in, and acting upon, the world within reach. Here, we present the Reachspace Database (RSDB), a novel stimulus set containing over 10,000 images depicting first-person, reachable-scale, motor-relevant views (hereafter "reachspaces"), which reflect the visual input that an agent would experience while performing a task with her hands. These images are divided into over 350 categories, based on a taxonomy we developed, which captures information relating to the identity of each reachspace, including the broader setting and room it is found in, the locus of interaction (e.g., kitchen counter, desk), and the specific action it affords. Summary analyses of the taxonomy labels in the database suggest a tight connection between activities and the interaction spaces that support them: while a small number of rooms and interaction loci afford many diverse actions (e.g., workshops, tables), most reachspaces were relatively specialized, typically affording only one main activity (e.g., gas station pump, airplane cockpit, kitchen cutting board). Overall, the Reachspace Database represents a large sampling of reachable environments and provides a new resource to support behavioral and neural research into the visual representation of reachable environments.


2021
Author(s): Maxi Becker, Roberto Cabeza

Most creativity measures are either complex or language-dependent, hindering cross-cultural creativity assessment. We therefore developed and tested a simple, language-independent insight task based on pictures, in the style of the widely used verbal remote associates test (RAT). We demonstrate that the language-independent RAT allows assessment of different aspects of insight across large samples speaking different languages. It also correlates with other creativity and general problem-solving tasks. The entire stimulus set, including its normative data, is made freely available. This information can be used to select items based on accuracy, mean solution time, likelihood of producing an insight, or the conceptual and perceptual similarity between the pictures in an item.
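Normative tables like the one released here are typically consumed by filtering items on the listed properties. A minimal sketch of such item selection, with hypothetical rows and invented column values rather than the published norms:

```python
# Hypothetical normative rows:
# (item_id, accuracy, mean_solution_time_s, p_insight).
# Thresholds below are illustrative, not recommendations from the paper.
norms = [
    ("item_01", 0.82, 14.2, 0.61),
    ("item_02", 0.35, 48.9, 0.72),
    ("item_03", 0.66, 25.0, 0.55),
]

# Select moderately difficult items that are solved reasonably quickly:
# accuracy between 40% and 80%, mean solution time under 30 seconds.
selected = [
    item_id for item_id, acc, rt, _ in norms
    if 0.4 <= acc <= 0.8 and rt < 30
]
```

The same pattern extends to the other published measures, e.g., restricting to items with a high likelihood of producing an insight by adding a condition on the insight-probability column.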

