Long-term satellite-based estimates of air quality and premature mortality in Equatorial Asia through deep neural networks

2020 ◽  
Vol 15 (10) ◽  
pp. 104088
Author(s):  
N Bruni Zani ◽  
G Lonati ◽  
M I Mead ◽  
M T Latif ◽  
P Crippa
Author(s):  
Jessica A. F. Thompson

Much of the controversy evoked by the use of deep neural networks as models of biological neural systems amounts to debates over what constitutes scientific progress in neuroscience. In order to discuss what constitutes scientific progress, one must have a goal in mind (progress towards what?). One such long-term goal is to produce scientific explanations of intelligent capacities (e.g., object recognition, relational reasoning). I argue that the most pressing philosophical questions at the intersection of neuroscience and artificial intelligence are ultimately concerned with defining the phenomena to be explained and with what constitutes a valid explanation of such phenomena. I propose that a foundation in the philosophy of scientific explanation and understanding can scaffold future discussions about how an integrated science of intelligence might progress. Towards this vision, I review relevant theories of scientific explanation and discuss strategies for unifying the scientific goals of neuroscience and AI.


Author(s):  
Xiuwen Yi ◽  
Zhewen Duan ◽  
Ruiyuan Li ◽  
Junbo Zhang ◽  
Tianrui Li ◽  
...  

2021 ◽  
Author(s):  
Jessica Anne Farrell Thompson

Much of the controversy evoked by the use of deep neural networks (DNNs) as models of biological neural systems amounts to debates over what constitutes scientific progress in neuroscience. In order to discuss what constitutes scientific progress, one must have a goal in mind (progress towards what?). One such long-term goal is to produce scientific explanations of intelligent capacities (e.g., object recognition, relational reasoning). I argue that the most pressing philosophical questions at the intersection of neuroscience and artificial intelligence are ultimately concerned with defining the phenomena to be explained and with what constitutes a valid explanation of such phenomena. As such, I propose that a foundation in the philosophy of scientific explanation and understanding can scaffold future discussions about how an integrated science of intelligence might progress. Towards this vision, I review several of the most relevant theories of scientific explanation and begin to outline candidate forms of explanation for neural and cognitive phenomena.


In the first wave of artificial intelligence (AI), rule-based expert systems were developed, with modest success, to help generalists who lacked expertise in a specific domain. The second wave of AI, originally called artificial neural networks but now described as machine learning, began to have an impact with multilayer networks in the 1980s. Deep learning, which enables automated feature discovery, has enjoyed spectacular success in several medical disciplines, including cardiology, from automated image analysis to the identification of the electrocardiographic signature of atrial fibrillation during sinus rhythm. Machine learning is now embedded within the NHS Long-Term Plan in England, but its widespread adoption may be limited by the “black-box” nature of deep neural networks.

