Double Compressed AMR Audio Detection Using Long-Term Features and Deep Neural Networks

Author(s):  
Aykut Buker ◽  
Cemal Hanilci


Author(s):  
Jessica A. F. Thompson

Much of the controversy evoked by the use of deep neural networks as models of biological neural systems amounts to debates over what constitutes scientific progress in neuroscience. To discuss what constitutes scientific progress, one must have a goal in mind (progress towards what?). One such long-term goal is to produce scientific explanations of intelligent capacities (e.g., object recognition, relational reasoning). I argue that the most pressing philosophical questions at the intersection of neuroscience and artificial intelligence ultimately concern defining the phenomena to be explained and determining what constitutes a valid explanation of such phenomena. I propose that a foundation in the philosophy of scientific explanation and understanding can scaffold future discussions about how an integrated science of intelligence might progress. Towards this vision, I review relevant theories of scientific explanation and discuss strategies for unifying the scientific goals of neuroscience and AI.


2021 ◽  
Author(s):  
Jessica Anne Farrell Thompson

Much of the controversy evoked by the use of deep neural networks (DNNs) as models of biological neural systems amounts to debates over what constitutes scientific progress in neuroscience. To discuss what constitutes scientific progress, one must have a goal in mind (progress towards what?). One such long-term goal is to produce scientific explanations of intelligent capacities (e.g., object recognition, relational reasoning). I argue that the most pressing philosophical questions at the intersection of neuroscience and artificial intelligence ultimately concern defining the phenomena to be explained and determining what constitutes a valid explanation of such phenomena. As such, I propose that a foundation in the philosophy of scientific explanation and understanding can scaffold future discussions about how an integrated science of intelligence might progress. Towards this vision, I review several of the most relevant theories of scientific explanation and begin to outline candidate forms of explanation for neural and cognitive phenomena.


In the first wave of artificial intelligence (AI), rule-based expert systems were developed, with modest success, to help generalists who lacked expertise in a specific domain. The second wave of AI, originally called artificial neural networks but now described as machine learning, began to have an impact with multilayer networks in the 1980s. Deep learning, which enables automated feature discovery, has enjoyed spectacular success in several medical disciplines, including cardiology, from automated image analysis to the identification of the electrocardiographic signature of atrial fibrillation during sinus rhythm. Machine learning is now embedded within the NHS Long-Term Plan in England, but its widespread adoption may be limited by the “black-box” nature of deep neural networks.


2019 ◽  
Vol 9 (4) ◽  
pp. 235-245 ◽  
Author(s):  
Apeksha Shewalkar ◽  
Deepika Nyavanandi ◽  
Simone A. Ludwig

Abstract: Deep Neural Networks (DNNs) are neural networks with many hidden layers. DNNs have become popular in automatic speech recognition tasks, which combine an acoustic model with a language model. Standard feedforward neural networks cannot handle speech data well because they lack feedback connections and thus cannot carry information across time steps. Recurrent Neural Networks (RNNs) were therefore introduced to take temporal dependencies into account. However, RNNs cannot capture long-term dependencies because of the vanishing/exploding gradient problem. Long Short-Term Memory (LSTM) networks, a special case of RNNs, were introduced to take both long-term and short-term dependencies in speech into account. Similarly, Gated Recurrent Unit (GRU) networks are a streamlined variant of LSTM networks that also takes long-term dependencies into consideration. In this paper, we evaluate RNN, LSTM, and GRU networks and compare their performance on a reduced TED-LIUM speech data set. The results show that the LSTM achieves the best word error rates, while the GRU trains faster and achieves word error rates close to those of the LSTM.
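To make the comparison concrete, the minimal PyTorch sketch below builds the three recurrent architectures side by side and contrasts their parameter counts. The feature dimension, hidden size, and output vocabulary here are illustrative assumptions, not the configuration the authors used on TED-LIUM.

```python
# Minimal sketch contrasting the three recurrent architectures compared in
# the paper. Sizes below are illustrative assumptions, not the authors' setup.
import torch
import torch.nn as nn

FEAT_DIM = 40      # e.g., 40-dim filterbank features per frame (assumed)
HIDDEN = 256       # hidden state size (assumed)
NUM_CLASSES = 29   # e.g., character targets for a CTC-style decoder (assumed)

class SpeechRNN(nn.Module):
    def __init__(self, cell: str):
        super().__init__()
        rnn_cls = {"rnn": nn.RNN, "lstm": nn.LSTM, "gru": nn.GRU}[cell]
        self.rnn = rnn_cls(FEAT_DIM, HIDDEN, num_layers=2, batch_first=True)
        self.out = nn.Linear(HIDDEN, NUM_CLASSES)

    def forward(self, x):              # x: (batch, frames, FEAT_DIM)
        h, _ = self.rnn(x)             # hidden state at every frame
        return self.out(h)             # per-frame class logits

# Compare parameter counts: the GRU's two gates versus the LSTM's three
# help explain why GRU training tends to be faster at similar accuracy.
x = torch.randn(8, 100, FEAT_DIM)      # dummy batch of 100-frame utterances
for cell in ("rnn", "lstm", "gru"):
    model = SpeechRNN(cell)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{cell:4s}: {n_params:,} parameters, output {model(x).shape}")
```

Swapping `cell` is the only change needed to move between architectures, which is what makes this three-way comparison cheap to run on any fixed feature pipeline.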


2021 ◽  
Vol 10 (3) ◽  
Author(s):  
Megan Yang ◽  
Leya Joykutty

Under the umbrella of artificial intelligence sits machine learning, which allows a system to improve through experience without being explicitly programmed. It can find patterns in massive amounts of data, from words and images to numbers and statistics. One approach to machine learning is neural networks, in which the computer learns to perform a task by analyzing training samples. Another approach, used in this study, is reinforcement learning, in which an agent interacts with its environment and learns from errors and rewards. This study developed a deep neural network to predict whether case counts would increase or decrease and then used reinforcement learning, informed by those predictions, to determine which actions would most effectively drive cases down while keeping factors such as the economy and education in mind for a better long-term outcome. The models were built on data from eight Florida counties, including mobility, temperature, and the dates of government actions. Based on this information, data exploration and feature engineering were conducted to add features that improved the accuracy of the neural network. The reinforcement learning model's recommended actions began with a shutdown of about two months before reopening schools and allowing things to return to normal. Interestingly, the model then chose to keep schools operating in a hybrid mode, with some students returning to classrooms while others continued to study remotely.
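A toy sketch of the reinforcement-learning component follows: a tabular Q-learning agent choosing among shutdown, hybrid, and open policies in a two-state case-trend environment. The states, transition probabilities, and the reward trade-off between case decline and economic/education cost are invented for illustration; they are not the authors' environment, which used a deep neural network and real county data.

```python
# Toy Q-learning sketch of the intervention-selection idea described above.
# States, dynamics, and rewards are invented for illustration only.
import random

ACTIONS = ["shutdown", "hybrid", "open"]       # candidate policy choices
STATES = ["cases_falling", "cases_rising"]     # simplified case trend

def step(state, action):
    # Assumed toy dynamics: stricter actions curb cases more reliably
    # but carry a higher economic/education cost.
    p_fall = {"shutdown": 0.9, "hybrid": 0.6, "open": 0.3}[action]
    next_state = "cases_falling" if random.random() < p_fall else "cases_rising"
    cost = {"shutdown": 2.0, "hybrid": 0.5, "open": 0.0}[action]
    reward = (1.0 if next_state == "cases_falling" else -3.0) - cost
    return next_state, reward

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.95, 0.1             # learning rate, discount, exploration

state = "cases_rising"
for _ in range(20000):
    # Epsilon-greedy action selection.
    action = (random.choice(ACTIONS) if random.random() < eps
              else max(ACTIONS, key=lambda a: Q[(state, a)]))
    nxt, r = step(state, action)
    # Standard Q-learning update toward the bootstrapped target.
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
    state = nxt

for s in STATES:
    print(s, "->", max(ACTIONS, key=lambda a: Q[(s, a)]))
```

With these assumed costs the agent settles on the hybrid action, which loosely echoes the study's qualitative finding; changing the cost figures shifts the learned policy, which is exactly the trade-off the reward function encodes.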

