Using a Speech Perception Neural Network Computer Simulation to Contrast Neuroanatomic versus Neuromodulatory Models of Auditory Hallucinations

2006 ◽  
Vol 39 ◽  
pp. 54-64 ◽  
Author(s):  
R. E. Hoffman ◽  
T. H. McGlashan


1995 ◽
Vol 7 (4) ◽  
pp. 479-496 ◽  
Author(s):  
Ralph E. Hoffman ◽  
Jill Rapaport ◽  
Rezvan Ameli ◽  
Thomas H. McGlashan ◽  
Diane Harcherik ◽  
...  

The mechanism of hallucinated speech, a symptom commonly reported by schizophrenic patients, is unknown. The hypothesis that these hallucinations arise from pathologically altered working memory underlying speech perception was explored. A neural network computer simulation of contextually guided sequential word detection based on Elman (1990a,b) was studied. Pruning anatomic connections or reducing neuronal activation in working memory caused word “percepts” to emerge spontaneously (i.e., in the absence of external “speech inputs”), thereby providing a model of hallucinated speech. These simulations also demonstrated distinct patterns of word detection impairments when inputs were accompanied by varying levels of noise. In a parallel human study, the ability to shadow noise-contaminated, connected speech was assessed. Schizophrenic patients reporting hallucinated speech demonstrated a pattern of speech perception impairments similar to a simulated neural network with reduced anatomic connectivity and enhanced neuronal activation. Schizophrenic patients not reporting this symptom did not demonstrate these speech perception impairments. Neural network simulations and human empirical data, when considered together, suggested that the primary cause of hallucinated “voices” in schizophrenia is reduced neuroanatomic connectivity in verbal working memory.
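The two lesions contrasted in this abstract can be illustrated with a toy Elman-style recurrent network. The sketch below is not the authors' model: the layer sizes, untrained random weights, noise level, and detection threshold are all illustrative assumptions. It shows only the form of the two manipulations (pruning recurrent "anatomic" connections versus scaling neuronal activation with a gain term) and how spontaneous word "percepts" would be counted while the network receives no external speech input.

```python
# Conceptual sketch (not the authors' code): an Elman-style recurrent network
# driven by silence, used to probe how pruning or reduced activation changes
# the rate of spontaneous word "percepts". All sizes and weights are invented.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_words = 20, 50, 10  # illustrative sizes, not the paper's

W_in  = rng.normal(0, 0.5, (n_hid, n_in))     # input -> hidden
W_rec = rng.normal(0, 0.5, (n_hid, n_hid))    # hidden -> hidden ("working memory")
W_out = rng.normal(0, 0.5, (n_words, n_hid))  # hidden -> word-detector units

def prune(W, fraction):
    """Zero a random fraction of connections (the 'anatomic' manipulation)."""
    return W * (rng.random(W.shape) >= fraction)

def run(W_rec, gain=1.0, steps=200, threshold=0.9):
    """Drive the network with silent input plus low-level intrinsic noise and
    count the time steps on which any word unit crosses the detection
    threshold, i.e. a spontaneous 'percept'."""
    h = np.zeros(n_hid)
    x = np.zeros(n_in)                          # no external "speech input"
    percepts = 0
    for _ in range(steps):
        noise = rng.normal(0, 0.05, n_hid)      # intrinsic neuronal noise
        h = np.tanh(gain * (W_in @ x + W_rec @ h + noise))  # gain ~ neuromodulation
        out = 1 / (1 + np.exp(-(W_out @ h)))    # word-detector activations
        percepts += out.max() > threshold
    return percepts

print("intact network:     ", run(W_rec))
print("50% pruned:         ", run(prune(W_rec, 0.5)))
print("reduced activation: ", run(W_rec, gain=0.5))
```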


2020 ◽  
Vol 75 ◽  
pp. 04018 ◽  
Author(s):  
Serhiy Semerikov ◽  
Illia Teplytskyi ◽  
Yuliia Yechkalo ◽  
Oksana Markova ◽  
Vladimir Soloviev ◽  
...  

The article substantiates the need to develop methods for teaching computer simulation of neural networks in a spreadsheet environment, and performs a systematic review of spreadsheet applications to simulating artificial neural networks. The authors distinguish the basic approaches to spreadsheet-based neural network simulation training: joint application of spreadsheets and dedicated neural network simulation tools; application of third-party add-ins to spreadsheets; development of macros using the embedded languages of spreadsheets; use of standard spreadsheet add-ins for non-linear optimization; and creation of neural networks in the spreadsheet environment without add-ins or macros. The article then considers ways of building neural network models in a cloud-based spreadsheet, Google Sheets. The model is based on the problem of classifying multi-dimensional data presented in “The Use of Multiple Measurements in Taxonomic Problems” by R. A. Fisher. Edgar Anderson’s role in collecting and preparing these data in the 1920s-1930s is discussed, along with some peculiarities of the data selection. The article also describes Anderson’s ideograph, a method of presenting multi-dimensional data that is considered one of the first effective forms of data visualization.
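As a rough illustration of the "no add-ins, no macros" approach the review describes, the sketch below re-expresses in NumPy the kind of arithmetic that can be written directly as spreadsheet cell formulas (weighted sums, EXP-based softmax, and explicit gradient-descent updates) for Fisher's iris classification problem. The specific architecture, learning rate, and iteration count are assumptions, not taken from the article.

```python
# Illustrative Python analogue of a spreadsheet-built classifier for Fisher's
# iris data: every operation below maps to SUMPRODUCT/EXP-style cell formulas.
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize the four measurements
Y = np.eye(3)[y]                          # one-hot targets for the three species

rng = np.random.default_rng(1)
W = rng.normal(0, 0.1, (4, 3))            # weights: 4 features -> 3 classes
b = np.zeros(3)

for _ in range(1000):
    Z = X @ W + b                                         # weighted sums (SUMPRODUCT)
    P = np.exp(Z) / np.exp(Z).sum(axis=1, keepdims=True)  # softmax (EXP / SUM)
    G = (P - Y) / len(X)                                  # cross-entropy gradient
    W -= 0.1 * X.T @ G                                    # plain gradient-descent steps
    b -= 0.1 * G.sum(axis=0)

print(f"training accuracy: {(P.argmax(axis=1) == y).mean():.2%}")
```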


Author(s):  
Christopher-John L. Farrell

Abstract
Objectives: Artificial intelligence (AI) models are increasingly being developed for clinical chemistry applications; however, it is not understood whether human interaction with the models, which may occur once they are implemented, improves or worsens their performance. This study examined the effect of human supervision on an artificial neural network trained to identify wrong blood in tube (WBIT) errors.
Methods: De-identified patient data for current and previous (within seven days) electrolytes, urea and creatinine (EUC) results were used in the computer simulation of WBIT errors at a rate of 50%. Laboratory staff volunteers reviewed the AI model’s predictions, and the EUC results on which they were based, before making a final decision regarding the presence or absence of a WBIT error. The performance of this approach was compared to that of the AI model operating without human supervision.
Results: Laboratory staff supervised the classification of 510 sets of EUC results. This workflow identified WBIT errors with an accuracy of 81.2%, sensitivity of 73.7% and specificity of 88.6%. However, the AI model classifying these samples autonomously was superior on all metrics (p-values < 0.05), including accuracy (92.5%), sensitivity (90.6%) and specificity (94.5%).
Conclusions: Human interaction with AI models can significantly alter their performance. For computationally complex tasks such as WBIT error identification, the best performance may be achieved by autonomously functioning AI models.
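The abstract's simulation design can be sketched as follows. This is an illustrative reconstruction, not the study's code: the EUC value distributions, the exact features given to the classifier, and the network architecture are all assumptions; only the 50% WBIT error rate and the current-plus-previous-results input are taken from the abstract.

```python
# Illustrative sketch: simulate WBIT errors by swapping a patient's current
# EUC panel with another patient's at a 50% rate, then train a small neural
# network on previous + current results. Distributions and sizes are invented.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
n = 2000

# Rough EUC panels (Na mmol/L, K mmol/L, urea mmol/L, creatinine umol/L).
prev = rng.normal([140, 4.2, 5.5, 80], [3, 0.4, 1.5, 15], (n, 4))
curr = prev + rng.normal(0, [2, 0.3, 0.8, 8], (n, 4))  # same-patient drift

# Simulate WBIT errors at a 50% rate: swap in another patient's current panel.
wbit = rng.random(n) < 0.5
curr[wbit] = curr[rng.permutation(n)][wbit]

X = np.hstack([prev, curr])  # model sees previous + current results
X_train, X_test, y_train, y_test = train_test_split(X, wbit, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"autonomous model accuracy: {clf.score(X_test, y_test):.1%}")
```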


2020 ◽  
Vol 14 ◽  
Author(s):  
Stephanie Haro ◽  
Christopher J. Smalt ◽  
Gregory A. Ciccarelli ◽  
Thomas F. Quatieri

Many individuals struggle to understand speech in listening scenarios that include reverberation and background noise. An individual's ability to understand speech arises from a combination of peripheral auditory function, central auditory function, and general cognitive abilities. The interaction of these factors complicates the prescription of treatment or therapy to improve hearing function. Damage to the auditory periphery can be studied in animals; however, this method alone is not enough to understand the impact of hearing loss on speech perception. Computational auditory models bridge the gap between animal studies and human speech perception. Perturbations to the modeled auditory systems can permit mechanism-based investigations into observed human behavior. In this study, we propose a computational model that accounts for the complex interactions between different hearing damage mechanisms and simulates human speech-in-noise perception. The model performs a digit classification task as a human would, with only acoustic sound pressure as input. Thus, we can use the model's performance as a proxy for human performance. This two-stage model consists of a biophysical cochlear-nerve spike generator followed by a deep neural network (DNN) classifier. We hypothesize that sudden damage to the periphery affects speech perception and that central nervous system adaptation over time may compensate for peripheral hearing damage. Our model achieved human-like performance across signal-to-noise ratios (SNRs) under normal-hearing (NH) cochlear settings, reaching 50% digit recognition accuracy at −20.7 dB SNR. Results were comparable to eight NH participants on the same task, who achieved 50% behavioral performance at −22 dB SNR. We also simulated medial olivocochlear reflex (MOCR) and auditory nerve fiber (ANF) loss, which worsened digit-recognition accuracy more at lower SNRs than at higher SNRs. Our simulated performance following ANF loss is consistent with the hypothesis that cochlear synaptopathy impacts communication in background noise more than in quiet. Following the insult of various cochlear degradations, we implemented extreme and conservative adaptation through the DNN. At the lowest SNRs (<0 dB), both adapted models were unable to fully recover NH performance, even with hundreds of thousands of training samples. This implies a limit on performance recovery following peripheral damage in our human-inspired DNN architecture.
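One analysis step implied by the abstract, locating the SNR at which digit-recognition accuracy crosses 50% (−20.7 dB for the NH model, −22 dB for listeners), can be sketched as a psychometric-function fit. The accuracies below are invented placeholder data, and the logistic form with a 10% chance floor for ten digits is an assumption about the analysis, not the authors' published method.

```python
# Sketch: fit a logistic psychometric function to accuracy-vs-SNR data and
# read off the 50%-correct threshold. The data points here are invented.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, midpoint, slope, floor=0.1, ceiling=1.0):
    """Logistic curve rising from chance (10% for digits 0-9) to ceiling."""
    return floor + (ceiling - floor) / (1 + np.exp(-slope * (snr - midpoint)))

snr = np.array([-30.0, -25.0, -20.0, -15.0, -10.0, -5.0, 0.0])  # dB SNR conditions
acc = np.array([0.11, 0.22, 0.55, 0.85, 0.96, 0.99, 1.0])       # placeholder accuracies

# Fit midpoint and slope only; floor and ceiling stay at their defaults.
(midpoint, slope), _ = curve_fit(psychometric, snr, acc, p0=[-20.0, 0.5])

# Invert the logistic to find the SNR where accuracy crosses 50%.
target = 0.5
threshold = midpoint - np.log((1.0 - 0.1) / (target - 0.1) - 1.0) / slope
print(f"50%-correct threshold: {threshold:.1f} dB SNR")
```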

