auditory cues: Recently Published Documents

Total documents: 423 (last five years: 123)
H-index: 35 (last five years: 4)

2022, Vol 2 (1)
Author(s): Julia Henriksen, Malin Hornebrant, Adele Berndt

Abstract: Online casinos are one of Sweden's largest gambling sectors. Increased advertising investment and advertising frequency have sought to attract Generation Y consumers to these casinos, yet it has been suggested that advertising can contribute to avoidance behaviours towards products and services, including online casinos and specific gambling brands. The various advertising aspects used in gambling advertising and their impact on behaviour have not been widely researched. The purpose of this study was therefore to explore the creative strategies used in casino advertising and how they contribute to the avoidance of online casinos, specifically among Swedish Generation Y consumers. As an exploratory study, qualitative methods were used. Initially, 13 casino advertisements were analysed to identify the strategies they employed. These were then presented to Generation Y consumers in three focus groups and six in-depth interviews. The analysis of the advertising shows the use of people and characters in presenting the casino brand. Male voice-overs were utilised in addition to music and other casino-related sounds. The advertising also used bright colours to attract attention. The content of the advertisements, their auditory cues (not just the music), the emotional responses they evoked, and the frequency of the advertising were all found to contribute to the avoidance of casino brands. Furthermore, ethical concerns and general attitudes towards the industry influence the decision to avoid these brands. The managerial implication of this research is that advertisements themselves can drive the decision to avoid a brand, specifically a casino brand.


Ethology, 2021
Author(s): Nynke Wemer, Vincent N. Naude, Vincent C. Merwe, Marna Smit, Gerhard Lange, ...

2021
Author(s): Marlies Oostland, Mikhail Kislin, Yuhang Chen, Tiffany Chen, Sarah Jo Venditto, ...

Alongside the impairments associated with autism spectrum disorder (ASD), there are sometimes islands of enhanced function. Although the neuronal mechanisms for enhanced function in ASD are unknown, the cerebellum is a major site of developmental alteration, and early-life perturbation of the cerebellum is more strongly linked to later ASD than perturbation of any other brain region. Here we report that a cerebellum-specific transgenic mouse model of ASD shows faster learning on a sensory evidence-accumulation task. In addition, transgenic mice showed enhanced sensitivity to touch and auditory cues, and prolonged electrophysiological responses in Purkinje-cell complex spikes and associative neocortical regions. These findings were replicated by pairing cues with optogenetic stimulation of Purkinje cells. Computational latent-state analysis of behavior revealed that both groups of mice with cerebellar perturbations exhibited an enhanced focus on current rather than past information, consistent with a role for the cerebellum in retaining information in memory. We conclude that cerebellar perturbation can activate neocortex via complex-spike activity and reduce reliance on prior experience, consistent with a weak-central-coherence account in which ASD traits arise from enhanced detail-oriented processing. This recasts ASD not so much as a disorder but as a variation that, in particular niches, can be adaptive.
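The latent-state result can be illustrated with a toy model. Below is a minimal sketch, not the paper's fitted model, of a leaky evidence accumulator in which a single retention parameter controls how much past information is carried forward; a low retention value corresponds to the reported enhanced focus on current over past information. The functional form and all parameter values are illustrative assumptions.

```python
# Toy leaky evidence accumulator (illustrative assumption, not the
# paper's fitted latent-state model).
import numpy as np

def accumulate(evidence, retention=0.9):
    """Run a leaky accumulator over a sequence of evidence pulses.

    retention near 1.0 -> past evidence persists (strong memory);
    retention near 0.0 -> the belief tracks only the current pulse.
    """
    belief = 0.0
    trace = []
    for e in evidence:
        belief = retention * belief + e  # decay old evidence, add new
        trace.append(belief)
    return np.array(trace)

pulses = np.array([1, 0, 1, 1, -1, 0, 1])
print(accumulate(pulses, retention=0.9))  # control-like: history matters
print(accumulate(pulses, retention=0.2))  # perturbed-like: current pulse dominates
```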


2021, Vol 5 (4), pp. 79
Author(s): Radha Nila Meghanathan, Patrick Ruediger-Flore, Felix Hekele, Jan Spilski, Achim Ebert, ...

Although the focus of Virtual Reality (VR) lies predominantly on the visual world, acoustic components enhance the functionality of a 3D environment. To study the interaction between visual and auditory modalities in a 3D environment, we investigated the effect of auditory cues on visual searches in 3D virtual environments with both visual and auditory noise. In an experiment, we asked participants to detect visual targets in a 360° video in conditions with and without environmental noise. Auditory cues indicating the target location were either absent or presented as simple stereo or binaural audio, both of which support sound localization. To investigate the efficacy of these cues in distracting environments, we measured participant performance using a VR headset with an eye tracker. We found that the binaural cue outperformed both stereo and no auditory cues in terms of target detection, irrespective of the environmental noise. We used two eye movement measures and two physiological measures to evaluate task dynamics and mental effort. We found that the absence of a cue increased target search duration and target search path, measured as time to fixation and gaze trajectory length, respectively. Our physiological measures of blink rate and pupil size showed no difference between the different stadium and cue conditions. Overall, our study provides evidence for the utility of binaural audio in a realistic, noisy and virtual environment for performing a target detection task, which is a crucial part of everyday behaviour: finding someone in a crowd.
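The two eye-movement measures named here, time to fixation and gaze trajectory length, can be computed directly from timestamped gaze samples. The sketch below is a minimal illustration under assumed inputs (2-D gaze coordinates and a circular target region); it is not the authors' analysis code.

```python
# Minimal sketch of the two eye-movement measures, assuming gaze data
# arrives as timestamps t (shape N) and 2-D gaze points (shape N x 2).
import numpy as np

def time_to_fixation(t, gaze, target_center, radius):
    """Time from trial start to the first gaze sample inside the target region."""
    dist = np.linalg.norm(gaze - target_center, axis=1)
    hits = np.flatnonzero(dist < radius)
    return t[hits[0]] - t[0] if hits.size else np.nan  # NaN if target never fixated

def trajectory_length(gaze):
    """Total gaze path length: sum of distances between successive samples."""
    return np.linalg.norm(np.diff(gaze, axis=0), axis=1).sum()
```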


Author(s): R. I. M. Dunbar, Juan-Pablo Robledo, Ignacio Tamarit, Ian Cross, Emma Smith

Abstract: The claim that nonverbal cues provide more information than the linguistic content of a conversational exchange (the Mehrabian Conjecture) has been widely cited and equally widely disputed, mainly on methodological grounds. Most studies that have tested the Conjecture have used individual words or short phrases spoken by actors imitating emotions. While cue recognition is certainly important, speech evolved to manage interactions and relationships rather than simple information exchange. In a cross-cultural design, we tested participants' ability to identify the quality of the interaction (rapport) in naturalistic third-party conversations in their own and a less familiar language, using full auditory content versus audio clips whose verbal content had been digitally altered to differing extents. We found that, using nonverbal content alone, people are 75–90% as accurate as they are with full audio cues in identifying positive vs. negative relationships, and 45–53% as accurate in identifying eight different relationship types. The results broadly support Mehrabian's claim that a significant amount of information about others' social relationships is conveyed in the nonverbal component of speech.


2021, Vol 2021, pp. 1-15
Author(s): Timo Melman, Peter Visser, Xavier Mouton, Joost de Winter

Modern computerized vehicles offer the possibility of changing vehicle parameters with the aim of creating a novel driving experience, such as an increased feeling of sportiness. For example, electric vehicles can be designed to provide an artificial sound, and the throttle mapping can be adjusted to give drivers the illusion that they are driving a sports vehicle (i.e., without altering the vehicle’s performance envelope). However, a fundamental safety-related question is how drivers perceive and respond to vehicle parameter adjustments. As of today, human-subject research on throttle mapping is unavailable, whereas research on sound enhancement is mostly conducted in listening rooms, which provides no insight into how drivers respond to the auditory cues. This study investigated how perceived sportiness and driving behavior are affected by adjustments in vehicle sound and throttle mapping. Through a within-subject simulator-based experiment, we investigated (1) Modified Throttle Mapping (MTM), (2) Artificial Engine Sound (AES) via a virtually elevated rpm, and (3) MTM and AES combined, relative to (4) a Baseline condition and (5) a Sports car that offered increased engine power. Results showed that, compared to Baseline, AES and MTM-AES increased perceived sportiness and yielded a lower speed variability in curves. Furthermore, MTM and MTM-AES caused higher vehicle acceleration than Baseline during the first second of driving away from a standstill. Mean speed and comfort ratings were unaffected by MTM and AES. The highest sportiness ratings and fastest driving speeds were obtained for the Sports car. In conclusion, the sound enhancement not only increased the perception of sportiness but also improved drivers’ speed control performance, suggesting that sound is used by drivers as functional feedback. The fact that MTM did not affect the mean driving speed indicates that drivers adapted their “gain” to the new throttle mapping and were not susceptible to risk compensation.
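As a concrete illustration of the two manipulations, a modified throttle mapping can be implemented as a concave pedal-to-throttle curve (same maximum output, so the performance envelope is unchanged), and an artificial engine sound can be driven by a virtually elevated rpm. The following is a minimal sketch under assumed parameters; the exponent and rpm gain are illustrative, not values from the study.

```python
# Sketch of the MTM and AES manipulations; gamma and gain are assumptions.

def throttle_output(pedal: float, gamma: float = 0.6) -> float:
    """Map pedal position [0, 1] to throttle demand [0, 1].

    gamma < 1 gives a concave curve: small pedal inputs yield
    disproportionately large throttle, so the car feels sportier while
    the maximum output (and thus the performance envelope) is unchanged.
    """
    pedal = min(max(pedal, 0.0), 1.0)
    return pedal ** gamma

def sound_rpm(actual_rpm: float, gain: float = 1.5) -> float:
    """Virtually elevated rpm fed to the engine-sound synthesizer (AES)."""
    return gain * actual_rpm

# Example: at 30% pedal, the modified mapping demands ~49% throttle.
print(round(throttle_output(0.3), 2))  # 0.49
```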


Author(s): Andrew J. Kolarik, Brian C. J. Moore, Silvia Cirstea, Rajiv Raman, Sarika Gopalakrishnan, ...

Abstract: Visual spatial information plays an important role in calibrating auditory space. Blindness results in deficits in a number of auditory abilities, which have been explained in terms of the hypothesis that visual information is needed to calibrate audition. When judging the size of a novel room using only auditory cues, normally sighted participants may use the location of the farthest sound source to infer the nearest possible distance of the far wall. However, for people with partial visual loss (distinct from blindness in that some vision is present), such a strategy may not be reliable if vision is needed to calibrate auditory cues for distance. In the current study, participants were presented with sounds at different distances (ranging from 1.2 to 13.8 m) in a simulated reverberant (T60 = 700 ms) or anechoic room. Farthest-distance judgments and room-size judgments (volume and area) were obtained from blindfolded participants (18 normally sighted, 38 partially sighted) for speech, music, and noise stimuli. For normally sighted participants, judged room volume and farthest sound-source distance were positively correlated (p < 0.05) for all conditions. Participants with visual losses showed no significant correlations for any of the conditions tested. A similar pattern of results was observed for the correlations between farthest-distance and room floor-area estimates. These results demonstrate that partial visual loss disrupts the relationship between judged room size and sound-source distance that is shown by sighted participants.
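The correlational analysis described here can be reproduced in outline with a standard Pearson test between per-participant estimates. The sketch below uses synthetic placeholder data, not data from the study, purely to show the computation.

```python
# Sketch of the reported analysis: Pearson correlation between judged
# farthest sound-source distance and judged room volume. The arrays
# below are synthetic placeholders, not study data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
farthest = rng.uniform(1.2, 13.8, size=18)           # judged farthest distance (m)
volume = 5.0 * farthest + rng.normal(0, 8, size=18)  # judged room volume (m^3)

r, p = pearsonr(farthest, volume)
print(f"r = {r:.2f}, p = {p:.4f}")
```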


Author(s): Subhradeep Roy, Jeremy Lemus

The present study investigates how combined information from audition and vision impacts group-level behavior. We consider a modification of the original Vicsek model that allows individuals to use auditory and visual sensing modalities to gather information from neighbors in order to update their heading directions. Moreover, in this model, the information from visual and auditory cues can be weighed differently. In a simulation study, we examine the sensitivity of the emergent group-level behavior to the weights assigned to each sensing modality in this weighted composite model. Our findings suggest that combining sensory cues may play an important role in collective behavior, and results from the composite model indicate that the group-level features of pure audition predominate.
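A heading update of the kind described can be sketched as follows: each agent averages the headings of neighbors sensed by vision and by audition separately, combines the two averages with modality weights, and adds angular noise. This is a minimal illustration of a weighted two-modality Vicsek update, not the authors' code; the radii, weights, and noise level are assumed values.

```python
# Minimal weighted two-modality Vicsek update; all parameters are
# illustrative assumptions.
import numpy as np

def step(pos, theta, w_vision=0.5, w_audio=0.5,
         r_vision=1.0, r_audio=2.0, speed=0.03, noise=0.1, box=10.0):
    """One heading/position update for N agents on a periodic square."""
    n = len(theta)
    new_theta = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        vis = d < r_vision  # neighbors sensed visually
        aud = d < r_audio   # neighbors sensed acoustically
        # Mean heading vector per modality (each set includes agent i itself).
        v_vis = np.array([np.cos(theta[vis]).mean(), np.sin(theta[vis]).mean()])
        v_aud = np.array([np.cos(theta[aud]).mean(), np.sin(theta[aud]).mean()])
        v = w_vision * v_vis + w_audio * v_aud  # weighted composite cue
        new_theta[i] = np.arctan2(v[1], v[0]) + noise * np.random.uniform(-np.pi, np.pi)
    # Move forward at constant speed with periodic boundaries.
    pos = (pos + speed * np.column_stack([np.cos(new_theta), np.sin(new_theta)])) % box
    return pos, new_theta
```

Sweeping w_vision and w_audio (e.g., with w_vision + w_audio = 1) while tracking the polarization order parameter, the magnitude of the mean heading vector, would reproduce the kind of sensitivity analysis the abstract describes.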

