auditory sequence
Recently Published Documents


TOTAL DOCUMENTS: 38 (five years: 5)

H-INDEX: 14 (five years: 1)

2020 · Vol 37 (3) · pp. 196-207
Author(s): Fiona C. Manning, Anna Siminoski, Michael Schutz

We explore the effects of trained musical movements on sensorimotor interactions in order to clarify the interpretation of previously observed expertise differences. Pianists and non-pianists listened to an auditory sequence and identified whether the final event occurred in time with the sequence. In half of the trials participants listened without moving, and in half they synchronized keystrokes while listening. Both pianists and non-pianists identified the timing of the final tone better after synchronizing keystrokes than after listening alone. Curiously, this effect of movement did not differ between pianists and non-pianists, despite their substantial differences in finger-movement training. We also found few group differences in the ability to align keystrokes with events in the auditory sequence; however, movements were less variable (lower coefficient of variation) in pianists than in non-pianists. Consistent with the idea that the benefits of synchronization for rhythm perception are constrained by motor effector kinematics, this work helps clarify previous findings in this paradigm. We discuss these outcomes in light of training and of the kinematics of pianist keystrokes compared with the synchronization movements used by musicians in other studies, and we outline how such differences in motor effector synchronization and training must be accounted for in models of perception and action.
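A minimal sketch of the variability measure mentioned above, the coefficient of variation (CV) of inter-keystroke intervals. This is illustrative only, not the authors' analysis code; the function name and example data are invented:

import numpy as np

def interval_cv(tap_times):
    # CV = SD / mean of the intervals between successive keystrokes;
    # a lower CV means less variable (more consistent) tapping.
    intervals = np.diff(np.sort(np.asarray(tap_times, dtype=float)))
    return intervals.std(ddof=1) / intervals.mean()

# Made-up example: keystrokes near a 500 ms period, one tapper with less jitter
steady = [0.00, 0.50, 1.01, 1.50, 2.00, 2.51]
jittery = [0.00, 0.46, 1.05, 1.48, 2.06, 2.49]
print(interval_cv(steady), interval_cv(jittery))  # the steadier tapper has the lower CV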


Author(s): Julia Simner

Synaesthesia manifests in many different ways, and this poses a challenge in setting out a definition. 'What is synaesthesia?' explains that the triggers (inducers) and the resulting unusual associated sensations (concurrents) can be all manner of sensations, or even intangible concepts of personality, meaning, space, and time. It describes several different types of synaesthesia, including sound–colour, lexical–gustatory, visual–auditory, sequence–space, grapheme–colour, and sequence–personality synaesthesia. People with synaesthesia make up around 4.4% of the population, and the condition appears to affect men and women in equal numbers. Despite being a rare condition, synaesthesia provides intriguing information about how the mind interprets reality and how the brain organises information.


2018 · Vol 11 (2)
Author(s): Hsin-I Liao, Makoto Yoneya, Makio Kashino, Shigeto Furukawa

There are indications that the pupillary dilation response (PDR) reflects surprising moments in an auditory sequence, such as the appearance of a deviant noise against repetitively presented pure tones (Liao, Yoneya, Kidani, Kashino, & Furukawa, 2016), and salient, loud sounds as evaluated subjectively by human participants (Liao, Kidani, Yoneya, Kashino, & Furukawa, 2016). In the current study, we examined whether these surprise-related PDRs also accumulate and emerge in complex yet structured auditory stimuli, i.e., music, and whether they do so when surprise is defined subjectively. Participants listened to 15 excerpts of music while their pupillary responses were recorded. In the surprise-rating session, participants rated how surprising each moment of an excerpt was (rich in variation versus monotonous) while they listened to it. In the passive-listening session, they listened to the same 15 excerpts again but performed no task. The pupil diameter data from both sessions were time-aligned to the ratings obtained in the surprise-rating session. In both sessions, mean pupil diameter was larger at moments rated surprising than at moments rated unsurprising. The results suggest that the PDR reflects surprise in music automatically.
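As a hedged illustration of the time-alignment step described above (not the authors' pipeline; the sampling rates, threshold, and names are all assumptions), one could resample the continuous ratings onto the pupil-trace time base and split samples by rated surprise:

import numpy as np

def mean_pupil_by_surprise(pupil, pupil_hz, ratings, rating_hz, threshold):
    # Resample the rating track onto the pupil time base, then compare mean
    # pupil diameter at samples rated surprising vs. unsurprising.
    pupil = np.asarray(pupil, dtype=float)
    t_pupil = np.arange(len(pupil)) / pupil_hz
    t_rating = np.arange(len(ratings)) / rating_hz
    aligned = np.interp(t_pupil, t_rating, ratings)  # rating at each pupil sample
    surprising = aligned >= threshold
    return pupil[surprising].mean(), pupil[~surprising].mean()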


2018 · Vol 9 (1)
Author(s): Peiqing Jin, Jiajie Zou, Tao Zhou, Nai Ding

2018
Author(s): Lihan Chen, Xiaolin Zhou, Hermann J. Müller, Zhuanghua Shi

Abstract

In our multisensory world, we often rely more on auditory information than on visual input for temporal processing. One typical demonstration of this is that the rate of auditory flutter assimilates the rate of concurrent visual flicker. To date, however, this auditory dominance effect has largely been studied using regular auditory rhythms. It thus remains unclear whether irregular rhythms have a similar impact on visual temporal processing, what information is extracted from the auditory sequence that comes to influence visual timing, and how the auditory and visual temporal rates are integrated in quantitative terms. We investigated these questions by assessing, and modeling, the influence of a task-irrelevant auditory sequence on the perceived type of 'Ternus apparent motion': group motion versus element motion. The type of motion seen critically depends on the time interval between the two Ternus display frames. We found that an irrelevant auditory sequence preceding the Ternus display modulates the visual interval, making observers perceive either more group motion or more element motion. This biasing effect manifests whether the auditory sequence is regular or irregular, and it is based on a summary statistic extracted from the sequential intervals: their geometric mean. However, the audiovisual interaction depends on the discrepancy between the mean auditory and visual intervals: if it becomes too large, no interaction occurs, a pattern that can be quantitatively described by a partial Bayesian integration model. Overall, our findings reveal a crossmodal perceptual averaging principle that may underlie complex audiovisual interactions in many everyday dynamic situations.

Public Significance Statement

The present study shows that auditory rhythms, regardless of their regularity, can influence the way in which the visual system times (subsequently presented) events, thereby altering dynamic visual (motion) perception. This audiovisual temporal interaction is based on a summary statistic derived from the auditory sequence, the geometric mean interval, which is then combined with the visual interval in a process of partial Bayesian integration (where integration is unlikely to occur if the discrepancy between the auditory and visual intervals is too large). We propose that this crossmodal perceptual averaging principle underlies complex audiovisual interactions in many everyday dynamic perception scenarios.

Author Note

This study was supported by grants from the Natural Science Foundation of China (31200760, 61621136008, 61527804), German DFG project SH166 3/1, and "projektbezogener Wissenschaftleraustausch" (proWA). The data and the source code for the statistical analysis and modeling are available at https://github.com/msenselab/temporal_averaging. Part of this study was presented as a talk at the 17th International Multisensory Research Forum (IMRF, June 2016, Suzhou, China).
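The two model ingredients named in the abstract, a geometric-mean summary of the auditory intervals and reliability-weighted fusion with the visual interval, can be sketched as follows. The gating rule and the sigma values below are illustrative assumptions, not the paper's fitted model (the actual code is at https://github.com/msenselab/temporal_averaging):

import numpy as np

def geometric_mean(intervals):
    return float(np.exp(np.mean(np.log(intervals))))

def fused_interval(auditory_intervals_ms, visual_interval_ms,
                   sigma_a=30.0, sigma_v=60.0, max_discrepancy=200.0):
    # Reliability-weighted (Bayesian) average of the auditory summary and the
    # visual interval, applied only when the two are close enough; otherwise
    # the visual interval is left unbiased (the "partial" in partial integration).
    a = geometric_mean(auditory_intervals_ms)
    if abs(a - visual_interval_ms) > max_discrepancy:
        return visual_interval_ms
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)  # weight grows with auditory reliability
    return w_a * a + (1.0 - w_a) * visual_interval_ms

# Made-up example: an irregular auditory sequence biasing a 140 ms Ternus interval
print(fused_interval([90.0, 130.0, 210.0], 140.0))  # pulled toward the auditory geometric mean (~135 ms)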


2018 · Vol 21 · pp. 112-119
Author(s): Jutta L. Mueller, Alice Milne, Claudia Männel

2017
Author(s): Stefano Ghirlanda

Ravignani et al. (2013) habituated squirrel monkeys to sound sequences conforming to an ABnA grammar, then tested their ability to identify novel grammatical sequences as well as non-grammatical ones. Although the authors conclude that the monkeys "consistently recognized and generalized the sequence ABnA," the data indicate very poor generalization: pattern grammaticality accounted for at most 6% of the variance in responding. In addition, the statistical significance of the results depends on specific data-analysis choices (dichotomization of the response variable and omission of certain data points) that appear to have a weak rationale. I also suggest that the task used by Ravignani et al. (2013) may be fruitfully analyzed as an auditory sequence discrimination task that does not require specific proto-linguistic abilities.
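For readers wanting to see what "variance accounted for" means here, a minimal sketch of eta-squared for a two-group (grammatical vs. non-grammatical) split; the function and any data fed to it are invented for illustration and are not Ghirlanda's analysis code:

import numpy as np

def eta_squared(grammatical, nongrammatical):
    # SS_between / SS_total: the share of response variance explained by
    # pattern grammaticality. Values near 0.06 correspond to the ~6% figure.
    g = np.asarray(grammatical, dtype=float)
    n = np.asarray(nongrammatical, dtype=float)
    both = np.concatenate([g, n])
    ss_total = ((both - both.mean()) ** 2).sum()
    ss_between = (len(g) * (g.mean() - both.mean()) ** 2
                  + len(n) * (n.mean() - both.mean()) ** 2)
    return ss_between / ss_total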

