A Review of Hyperacusis and Future Directions: Part I. Definitions and Manifestations

2014 ◽  
Vol 23 (4) ◽  
pp. 402-419 ◽  
Author(s):  
Richard S. Tyler ◽  
Martin Pienkowski ◽  
Eveling Rojas Roncancio ◽  
Hyung Jin Jun ◽  
Tom Brozoski ◽  
...  

Purpose: Hyperacusis can be extremely debilitating, and at present, there is no cure. We provide an overview of the field, and of possibly related areas, in the hope of facilitating future research. Method: We review and reference the literature on hyperacusis and related areas. We have divided the review into 2 articles. In Part I, we discuss definitions, epidemiology, different etiologies and subgroups, and how hyperacusis affects people. In Part II, we review measurements, models, mechanisms, and treatments, and we finish with some suggestions for further research. Results: Hyperacusis encompasses a wide range of reactions to sound, which can be grouped into the categories of excessive loudness, annoyance, fear, and pain. Many different causes have been proposed, and it will be important to appreciate and quantify different subgroups. Reasonable approaches to assessing the different forms of hyperacusis are emerging, including psychoacoustical measures, questionnaires, and brain imaging. Conclusions: Hyperacusis can make life difficult for many, forcing sufferers to dramatically alter their work and social habits. We believe this is an opportune time to explore approaches to better understand and treat hyperacusis.

2014 ◽  
Vol 23 (4) ◽  
pp. 420-436 ◽  
Author(s):  
Martin Pienkowski ◽  
Richard S. Tyler ◽  
Eveling Rojas Roncancio ◽  
Hyung Jin Jun ◽  
Tom Brozoski ◽  
...  

Purpose: Hyperacusis can be extremely debilitating, and at present, there is no cure. In this detailed review of the field, we consolidate present knowledge in the hope of facilitating future research. Method: We review and reference the literature on hyperacusis and related areas. This is the 2nd of a 2-part review. Results: Hyperacusis encompasses a wide range of reactions to sounds, which can be grouped into the categories of excessive loudness, annoyance, fear, and pain. Reasonable approaches to assessing the different forms of hyperacusis are emerging, including brain-imaging studies. Researchers are only beginning to understand the many mechanisms at play, and valid animal models are still evolving. There are many counseling and sound-therapy approaches that some patients find helpful, but well-controlled studies are needed to measure their long-term efficacy and to test new approaches. Conclusions: Hyperacusis can make life difficult in this increasingly noisy world, forcing sufferers to dramatically alter their work and social habits. We believe this is an opportune time to explore approaches to better understand and treat hyperacusis.


2001 ◽  
Vol 26 (1) ◽  
pp. 37-49 ◽  
Author(s):  
Mark Carter ◽  
Julie Grunsell

This review examines research studies that utilize the behavior chain interruption strategy (BCIS) to teach communication skills to individuals with severe disabilities. The BCIS is a naturalistic teaching procedure that uses an interruption to a behavior chain (i.e., a routine) as the point of instruction. The BCIS has been successfully applied to the teaching of communication skills to individuals across a wide range of ages and levels of disability, including learners with multiple disabilities. It has been employed to teach a range of communication forms, including pictorial communication systems, natural gestures, signing, and a switch-activated communication device. However, a number of questions remain regarding the BCIS. In particular, it is unclear whether the types of interruption employed in the procedure are likely to occur outside a training context and whether communication taught with the procedure generalizes to out-of-routine contexts. Implications for practice are considered and suggestions are offered for future research.


2020 ◽  
Vol 50 (10) ◽  
pp. 1585-1597 ◽  
Author(s):  
Alexandre Haroche ◽  
Jonathan Rogers ◽  
Marion Plaze ◽  
Raphaël Gaillard ◽  
Steve CR Williams ◽  
...  

Background: Catatonia is a frequent, complex and severe identifiable syndrome of motor dysregulation. However, its pathophysiology is poorly understood. Methods: We aimed to provide a systematic review of all brain imaging studies (both structural and functional) in catatonia. Results: We identified 137 case reports and 18 group studies representing 186 individual patients with catatonia. Catatonia is often associated with brain imaging abnormalities (in more than 75% of cases). The majority of case reports show diffuse lesions of white matter in a wide range of brain regions. Most case reports of functional imaging show frontal, temporal, or basal ganglia hypoperfusion. These abnormalities appear to be alleviated after successful treatment of clinical symptoms. Structural brain magnetic resonance imaging studies are very scarce in the catatonia literature, mostly showing diffuse cerebral atrophy. Group studies assessing functional brain imaging after catatonic episodes show that emotional dysregulation is related to the GABAergic system, with hypoactivation of the orbitofrontal cortex, hyperactivation of the median prefrontal cortex, and dysconnectivity between frontal and motor areas. Conclusions: In catatonia, brain imaging is abnormal in the majority of cases, and abnormalities are more frequently diffuse than localised. Brain imaging studies published so far suffer from serious limitations and, for now, the different models presented in the literature do not explain most cases. There is an important need for further studies, including better clinical characterisation of patients with catatonia, functional imaging with concurrent catatonic symptoms, and the use of novel brain imaging techniques.


2017 ◽  
Vol 31 (8) ◽  
pp. 959-966 ◽  
Author(s):  
Andrew C Parrott ◽  
Luke A Downey ◽  
Carl A Roberts ◽  
Cathy Montgomery ◽  
Raimondo Bruno ◽  
...  

Aims: The purpose of this article is to debate current understandings about the psychobiological effects of recreational 3,4-methylenedioxymethamphetamine (MDMA or ‘ecstasy’), and recommend theoretically driven topics for future research. Methods: Recent empirical findings, especially those from novel topic areas, were reviewed. Potential causes for the high variance often found in group findings were also examined. Results and conclusions: The first empirical reports on psychobiological and psychiatric aspects, from the early 1990s, concluded that regular users demonstrated some selective psychobiological deficits, for instance worse declarative memory or heightened depression. More recent research has covered a far wider range of psychobiological functions, and deficits have emerged in aspects of vision, higher cognitive skill, neurohormonal functioning, and foetal developmental outcomes. However, variance levels are often high, indicating that while some recreational users develop problems, others are less affected. Potential reasons for this high variance are debated. An explanatory model based on multi-factorial causation is then proposed. Future directions: A number of theoretically driven research topics are suggested, in order to empirically investigate the potential causes of these diverse psychobiological deficits. Future neuroimaging studies should examine the practical implications of any serotonergic and/or neurohormonal changes, using a wide range of functional measures.


2019 ◽  
Vol 50 (4) ◽  
pp. 693-702 ◽  
Author(s):  
Christine Holyfield ◽  
Sydney Brooks ◽  
Allison Schluterman

Purpose: Augmentative and alternative communication (AAC) is an intervention approach that can promote communication and language in children with multiple disabilities who are beginning communicators. While a wide range of AAC technologies are available, little is known about the comparative effects of specific technology options. Given that engagement can be low for beginning communicators with multiple disabilities, the current study provides initial information about the comparative effects of 2 AAC technology options—high-tech visual scene displays (VSDs) and low-tech isolated picture symbols—on engagement. Method: Three elementary-age beginning communicators with multiple disabilities participated. The study used a single-subject, alternating treatment design with each technology serving as a condition. Participants interacted with their school speech-language pathologists using each of the 2 technologies across 5 sessions in a block randomized order. Results: According to visual analysis and nonoverlap of all pairs calculations, all 3 participants demonstrated more engagement with the high-tech VSDs than with the low-tech isolated picture symbols, as measured by their seconds of gaze toward each technology option. Despite the difference in engagement observed, there was no clear difference across the 2 conditions in engagement toward the communication partner or use of the AAC. Conclusions: Clinicians can consider measuring engagement when evaluating AAC technology options for children with multiple disabilities and should consider evaluating high-tech VSDs as 1 technology option for them. Future research must explore the extent to which differences in engagement with particular AAC technologies result in differences in communication and language learning over time, as might be expected.
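For readers unfamiliar with the nonoverlap of all pairs (NAP) effect size mentioned above, the sketch below shows how NAP is typically computed for an alternating-treatment comparison: the proportion of all cross-condition pairs in which the treatment value exceeds the comparison value, with ties counted as half. The function name and gaze data are hypothetical illustrations, not the study's analysis code or measurements.

```python
from itertools import product

def nonoverlap_of_all_pairs(comparison, treatment):
    """Nonoverlap of All Pairs (NAP): the proportion of all cross-condition
    pairs in which the treatment value exceeds the comparison value, with
    ties counted as half an overlap (hypothetical helper, not the authors'
    analysis code)."""
    pairs = list(product(comparison, treatment))
    score = sum(1.0 if t > c else 0.5 if t == c else 0.0 for c, t in pairs)
    return score / len(pairs)

# Hypothetical seconds of gaze toward each display across 5 alternating sessions.
gaze_low_tech_symbols = [12, 18, 9, 15, 11]
gaze_high_tech_vsd = [34, 29, 41, 27, 38]

nap = nonoverlap_of_all_pairs(gaze_low_tech_symbols, gaze_high_tech_vsd)
print(f"NAP for high-tech VSD vs. low-tech symbols: {nap:.2f}")  # 1.00 = complete nonoverlap
```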


2015 ◽  
Vol 25 (1) ◽  
pp. 15-23 ◽  
Author(s):  
Ryan W. McCreery ◽  
Elizabeth A. Walker ◽  
Meredith Spratford

The effectiveness of amplification for infants and children can be mediated by how much the child uses the device. Existing research suggests that establishing hearing aid use can be challenging. A wide range of factors can influence hearing aid use in children, including the child's age, degree of hearing loss, and socioeconomic status. Audiological interventions, including using validated prescriptive approaches and verification, performing ongoing training and orientation, and communicating with caregivers about hearing aid use, can also increase hearing aid use by infants and children. Case examples are used to highlight the factors that influence hearing aid use. Potential management strategies and future research needs are also discussed.


2009 ◽  
Vol 23 (4) ◽  
pp. 191-198 ◽  
Author(s):  
Suzannah K. Helps ◽  
Samantha J. Broyd ◽  
Christopher J. James ◽  
Anke Karl ◽  
Edmund J. S. Sonuga-Barke

Background: The default mode interference hypothesis (Sonuga-Barke & Castellanos, 2007) predicts (1) the attenuation of very low frequency oscillations (VLFO; e.g., 0.05 Hz) in brain activity within the default mode network (DMN) during the transition from rest to task, and (2) that failures to attenuate in this way will lead to an increased likelihood of periodic attention lapses that are synchronized to the VLFO pattern. Here, we tested these predictions using DC-EEG recordings within and outside of a previously identified network of electrode locations hypothesized to reflect DMN activity (i.e., the S3 network; Helps et al., 2008). Method: 24 young adults (mean age 22.3 years; 8 male), sampled to include a wide range of ADHD symptoms, took part in a study of rest-to-task transitions. Two conditions were compared: 5 min of rest (eyes open) and a 10-min simple 2-choice RT task with a relatively high sampling rate (ISI 1 s). DC-EEG was recorded during both conditions, and the low-frequency spectrum was decomposed and measures of the power within specific bands extracted. Results: The shift from rest to task led to an attenuation of VLFO activity within the S3 network, which was inversely associated with ADHD symptoms. RT during the task also showed a VLFO signature. During the task, there was a small but significant degree of synchronization between EEG and RT in the VLFO band. Attenuators showed a lower degree of synchrony than nonattenuators. Discussion: The results provide some initial EEG-based support for the default mode interference hypothesis and suggest that failure to attenuate VLFO in the S3 network is associated with higher synchrony between low-frequency brain activity and RT fluctuations during a simple RT task. Although significant, the effects were small, and future research should employ tasks with a higher sampling rate to increase the possibility of extracting robust and stable signals.
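As a rough illustration of the kind of band-power measure described above, the following Python sketch estimates very-low-frequency (VLFO) power with Welch's method and compares rest against task. The sampling rate, band edges, window length, and simulated signals are assumptions for demonstration only; the study's actual DC-EEG pipeline is not specified in the abstract.

```python
import numpy as np
from scipy.signal import welch

def vlfo_band_power(signal, fs, band=(0.02, 0.2)):
    """Estimate power in a very-low-frequency band with Welch's method.
    Long windows (~100 s, an assumption) are needed to resolve oscillations
    around 0.05 Hz."""
    nperseg = min(len(signal), int(fs * 100))
    freqs, psd = welch(signal, fs=fs, nperseg=nperseg)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    df = freqs[1] - freqs[0]
    return float(np.sum(psd[mask]) * df)  # approximate integral of the PSD over the band

fs = 256                                  # Hz; assumed sampling rate
rng = np.random.default_rng(0)
rest = rng.standard_normal(fs * 300)      # 5 min of simulated "rest" EEG (white noise)
task = rng.standard_normal(fs * 600)      # 10 min of simulated "task" EEG (white noise)

attenuation = vlfo_band_power(rest, fs) - vlfo_band_power(task, fs)
print(f"VLFO attenuation from rest to task: {attenuation:.5f}")
```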


2020 ◽  
Vol 48 (3-4) ◽  
pp. 13-26
Author(s):  
Brandon W. Hawk

Literature written in England between about 500 and 1100 CE attests to a wide range of traditions, although it is clear that Christian sources were the most influential. Biblical apocrypha feature prominently across this corpus of literature, as early English authors clearly relied on a range of extra-biblical texts and traditions related to works under the umbrella of what have been called “Old Testament Pseudepigrapha” and “New Testament/Christian Apocrypha.” While scholars of pseudepigrapha and apocrypha have long trained their eyes upon literature from the first few centuries of early Judaism and early Christianity, the medieval period has much to offer. This article presents a survey of significant developments and key threads in the history of scholarship on apocrypha in early medieval England. My purpose is not to offer a comprehensive bibliography, but to highlight major studies that have focused on the transmission of specific apocrypha, contributed to knowledge about medieval uses of apocrypha, and shaped the field from the nineteenth century up to the present. Bringing together major publications on the subject presents a striking picture of the state of the field as well as future directions.


2020 ◽  
Author(s):  
Anna Gerlicher ◽  
Merel Kindt

A cue that indicates imminent threat elicits a wide range of physiological, hormonal, autonomic, cognitive, and emotional fear responses in humans and facilitates threat-specific avoidance behavior. The occurrence of a threat cue can, however, also have general motivational effects on behavior. That is, the encounter with a threat cue can increase our tendency to engage in general avoidance behavior that neither terminates nor prevents the threat cue or the threat itself. Furthermore, the encounter with a threat cue can substantially reduce our likelihood of engaging in behavior that leads to rewarding outcomes. Such general motivational effects of threat cues on behavior can be informative about the transition from normal to pathological anxiety and could also explain the development of comorbid disorders, such as depression and substance abuse. Despite the unmistakable relevance of the motivational effects of threat for our understanding of anxiety disorders, their investigation is still in its infancy. Pavlovian-to-instrumental transfer (PIT) is one paradigm that allows us to investigate such motivational effects of threat cues. Here, we review studies investigating aversive transfer in humans and discuss recent results on the neural circuits mediating PIT effects. Finally, we discuss potential limitations of the transfer paradigm and future directions for employing PIT in the investigation of the motivational effects of fear and anxiety.
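A common way to quantify the motivational effect of a threat cue on instrumental behavior in a Pavlovian-to-instrumental transfer (PIT) design is to compare response rates during cue presentation with a cue-free baseline. The minimal sketch below illustrates that generic rate-difference index with hypothetical counts; it is not the measure used in the studies reviewed.

```python
def pit_index(responses_during_cue, cue_seconds, responses_baseline, baseline_seconds):
    """Generic Pavlovian-to-instrumental transfer (PIT) index: the difference in
    instrumental response rate between threat-cue presentations and a cue-free
    baseline. Negative values indicate suppression of responding by the threat
    cue; positive values indicate invigoration (e.g., of avoidance responding)."""
    rate_cue = responses_during_cue / cue_seconds
    rate_baseline = responses_baseline / baseline_seconds
    return rate_cue - rate_baseline

# Hypothetical counts of avoidance button presses from one participant.
print(pit_index(responses_during_cue=42, cue_seconds=60,
                responses_baseline=18, baseline_seconds=60))  # ~0.4 presses per second
```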


2020 ◽  
Author(s):  
Sina Faizollahzadeh Ardabili ◽  
Amir Mosavi ◽  
Pedram Ghamisi ◽  
Filip Ferdinand ◽  
Annamaria R. Varkonyi-Koczy ◽  
...  

Several outbreak prediction models for COVID-19 are being used by officials around the world to make informed decisions and enforce relevant control measures. Among the standard models for COVID-19 global pandemic prediction, simple epidemiological and statistical models have received more attention from authorities, and they are popular in the media. Due to a high level of uncertainty and a lack of essential data, standard models have shown low accuracy for long-term prediction. Although the literature includes several attempts to address this issue, the generalization and robustness of existing models need to be improved. This paper presents a comparative analysis of machine learning and soft computing models to predict the COVID-19 outbreak as an alternative to SIR and SEIR models. Among a wide range of machine learning models investigated, two showed promising results: the multi-layered perceptron (MLP) and the adaptive network-based fuzzy inference system (ANFIS). Based on the results reported here, and due to the highly complex nature of the COVID-19 outbreak and the variation in its behavior from nation to nation, this study suggests machine learning as an effective tool to model the outbreak. This paper provides an initial benchmark to demonstrate the potential of machine learning for future research. It further suggests that real novelty in outbreak prediction can be realized by integrating machine learning and SEIR models.
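For context on the SEIR baseline that the machine learning models are compared against, here is a minimal SEIR simulation using scipy's ODE solver. The transmission, incubation, and recovery parameters and the population size are illustrative assumptions, not values fitted in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma):
    """Classic SEIR compartments: susceptible, exposed, infectious, recovered."""
    s, e, i, r = y
    n = s + e + i + r
    new_infections = beta * s * i / n
    return [-new_infections,             # dS/dt
            new_infections - sigma * e,  # dE/dt
            sigma * e - gamma * i,       # dI/dt
            gamma * i]                   # dR/dt

# Illustrative parameters and population; assumptions, not values from the paper.
beta, sigma, gamma = 0.35, 1 / 5.2, 1 / 10   # transmission, incubation, recovery rates (per day)
y0 = [9_999_990, 0, 10, 0]                   # initial S, E, I, R for a 10-million population
sol = solve_ivp(seir, (0, 180), y0, args=(beta, sigma, gamma), dense_output=True)

days = np.linspace(0, 180, 181)
infectious = sol.sol(days)[2]
print(f"Simulated peak infectious count: {infectious.max():,.0f}")
```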

