A Novel Approach to Pre-Impact Measurement from Impact Investing Using Random Forest and Deep Neural Networks

2021, Vol 183 (20), pp. 21-29
Author(s):
Emmanuel Kwesi Baah
James Ben Hayfron-Acquah
Dominic Asamoah
2020, Vol 131, pp. 205-212
Author(s):
Bikash Santra
Angshuman Paul
Dipti Prasad Mukherjee

2019
Author(s):
David Beniaguev
Idan Segev
Michael London

Abstract: We introduce a novel approach to studying neurons as sophisticated I/O information-processing units by utilizing recent advances in machine learning. We trained deep neural networks (DNNs) to mimic the I/O behavior of a detailed nonlinear model of a layer 5 cortical pyramidal cell receiving rich spatio-temporal patterns of input synapse activations. A temporally convolutional DNN (TCN) with seven layers was required to accurately, and very efficiently, capture the I/O of this neuron at millisecond resolution. This complexity primarily arises from local NMDA-based nonlinear dendritic conductances. The weight matrices of the DNN provide new insights into the I/O function of cortical pyramidal neurons, and the approach presented can provide a systematic characterization of the functional complexity of different neuron types. Our results demonstrate that cortical neurons can be conceptualized as multi-layered “deep” processing units, implying that the cortical networks they form have a non-classical architecture and are potentially more computationally powerful than previously assumed.
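Only the abstract is reproduced here; as a purely illustrative aid, the following PyTorch sketch shows what a temporally convolutional network mapping spatio-temporal synaptic activations to a per-millisecond spike probability could look like. The layer count, kernel width, channel sizes, and number of synapses are assumptions for illustration, not the authors' exact architecture.

```python
# Hypothetical sketch of a temporally convolutional network (TCN) that maps
# binary synaptic spike trains (synapses x time) to a spike probability per ms.
# Architecture details (7 layers, kernel width, channels) are assumptions.
import torch
import torch.nn as nn

class NeuronTCN(nn.Module):
    def __init__(self, n_synapses=1278, hidden=128, n_layers=7, kernel=35):
        super().__init__()
        layers = []
        in_ch = n_synapses
        for _ in range(n_layers):
            # Causal 1-D convolution over time: pad only on the left so the
            # prediction at time t depends only on inputs up to time t.
            layers += [nn.ConstantPad1d((kernel - 1, 0), 0.0),
                       nn.Conv1d(in_ch, hidden, kernel_size=kernel),
                       nn.ReLU()]
            in_ch = hidden
        self.backbone = nn.Sequential(*layers)
        self.spike_head = nn.Conv1d(hidden, 1, kernel_size=1)  # spike probability per ms

    def forward(self, synapse_activations):
        # synapse_activations: (batch, n_synapses, time_in_ms), binary spike trains
        h = self.backbone(synapse_activations)
        return torch.sigmoid(self.spike_head(h)).squeeze(1)  # (batch, time_in_ms)

# Example: 8 random input patterns, 1278 synapses, 300 ms windows
x = (torch.rand(8, 1278, 300) < 0.01).float()
spike_prob = NeuronTCN()(x)
print(spike_prob.shape)  # torch.Size([8, 300])
```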


PLoS ONE, 2021, Vol 16 (9), pp. e0239007
Author(s):
Aixia Guo
Sakima Smith
Yosef M. Khan
James R. Langabeer II
Randi E. Foraker

Background: Cardiac dysrhythmias (CD) affect millions of Americans in the United States (US) and are associated with considerable morbidity and mortality. New strategies to combat this growing problem are urgently needed.
Objectives: Predicting CD from electronic health record (EHR) data would allow for earlier diagnosis and treatment of the condition, thus improving overall cardiovascular outcomes. The Guideline Advantage (TGA) is an American Heart Association ambulatory quality clinical data registry of EHR data representing 70 clinics distributed throughout the US; it has been used to monitor outpatient prevention and disease-management outcome measures across populations and for longitudinal research on the impact of preventative care.
Methods: For this study, we represented all time-series cardiovascular health (CVH) measures and the corresponding data-collection time points for each patient by numerical embedding vectors. We then employed a deep learning technique, the long short-term memory (LSTM) model, to predict CD from the vector of time-series CVH measures using 5-fold cross-validation, and compared its performance with deep neural network, logistic regression, random forest, and Naïve Bayes models.
Results: The LSTM model outperformed the other machine learning models and achieved the best prediction performance as measured by the average area under the receiver operating characteristic curve (AUROC): 0.76 for LSTM, 0.71 for deep neural networks, 0.66 for logistic regression, 0.67 for random forest, and 0.59 for Naïve Bayes. The most influential feature in the LSTM model was blood pressure.
Conclusions: These findings may be used to prevent CD in the outpatient setting by encouraging appropriate surveillance and management of CVH.
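TGA registry data are not public, so the sketch below only illustrates the general shape of the pipeline described above: an LSTM over sequences of CVH-measure embedding vectors, evaluated with 5-fold cross-validation and AUROC, here on synthetic stand-in data. All dimensions, hyperparameters, and variable names are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): an LSTM classifier over
# time-series CVH embedding vectors, evaluated with 5-fold CV and AUROC.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

class CDClassifier(nn.Module):
    def __init__(self, embed_dim=16, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):            # x: (batch, time_steps, embed_dim)
        _, (h_n, _) = self.lstm(x)   # h_n: (1, batch, hidden)
        return self.head(h_n[-1]).squeeze(-1)  # logits per patient

# Synthetic stand-in for patient sequences: 500 patients, 20 visits, 16-dim embeddings
X = np.random.randn(500, 20, 16).astype(np.float32)
y = np.random.randint(0, 2, 500)

aucs = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    model = CDClassifier()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    xb, yb = torch.from_numpy(X[train_idx]), torch.from_numpy(y[train_idx]).float()
    for _ in range(20):              # a few epochs for illustration
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
    with torch.no_grad():
        scores = torch.sigmoid(model(torch.from_numpy(X[test_idx]))).numpy()
    aucs.append(roc_auc_score(y[test_idx], scores))
print("mean AUROC:", np.mean(aucs))
```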


2020, Vol 34 (04), pp. 4272-4279
Author(s):
Ayush Jaiswal
Daniel Moyer
Greg Ver Steeg
Wael AbdAlmageed
Premkumar Natarajan

We propose a novel approach to achieving invariance for deep neural networks by inducing amnesia for unwanted factors of the data through a new adversarial forgetting mechanism. We show that the forgetting mechanism serves as an information bottleneck, which is manipulated by adversarial training to learn invariance to unwanted factors. Empirical results show that the proposed framework achieves state-of-the-art performance at learning invariance in both nuisance and bias settings on a diverse collection of datasets and tasks.
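As a loose illustration of the idea summarized above (not the authors' implementation), the PyTorch sketch below masks an encoder's output with a learned forget gate and lets a discriminator try to recover the unwanted factor from the masked code; a gradient-reversal layer stands in for the paper's alternating adversarial training. Module names and dimensions are assumptions.

```python
# Loose sketch of adversarial forgetting: a learned gate masks out the
# information a discriminator would need to recover the unwanted factor.
# Gradient reversal is used here as a simple stand-in for adversarial training.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad  # flip gradients flowing back into the gated code

class AdversarialForgetting(nn.Module):
    def __init__(self, in_dim=32, code_dim=16, n_classes=10, n_nuisance=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, code_dim), nn.ReLU())
        self.forget_gate = nn.Sequential(nn.Linear(in_dim, code_dim), nn.Sigmoid())
        self.task_head = nn.Linear(code_dim, n_classes)
        self.discriminator = nn.Linear(code_dim, n_nuisance)

    def forward(self, x):
        z = self.encoder(x)
        mask = self.forget_gate(x)          # values in (0, 1): what to keep vs. forget
        z_tilde = z * mask                  # information bottleneck on the code
        y_logits = self.task_head(z_tilde)
        s_logits = self.discriminator(GradReverse.apply(z_tilde))
        return y_logits, s_logits

model = AdversarialForgetting()
y_logits, s_logits = model(torch.randn(4, 32))
# Training would minimize the task loss on y_logits plus the discriminator loss
# on s_logits; gradient reversal makes the encoder/gate maximize the latter,
# pushing the masked code toward invariance to the unwanted factor.
```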


Entropy, 2020, Vol 22 (5), pp. 560
Author(s):
Shrihari Vasudevan

This paper demonstrates a novel approach to training deep neural networks using a Mutual Information (MI)-driven, decaying Learning Rate (LR), Stochastic Gradient Descent (SGD) algorithm. The MI between the output of the neural network and the true outcomes is used to adaptively set the LR for the network in every epoch of the training cycle. This idea is extended to a layer-wise setting of the LR, as MI naturally provides a layer-wise performance metric. An LR range test that determines the operating LR range is also proposed. Experiments compared this approach with popular gradient-based adaptive LR algorithms such as Adam, RMSprop, and LARS. Accuracy outcomes that were competitive with or better than those of the alternatives, obtained in comparable or shorter time, demonstrate the feasibility of the metric and the approach.
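The sketch below shows one simple way such an MI-driven learning-rate schedule might be realized: after each epoch, estimate the mutual information between the network's predicted classes and the true labels, and shrink the SGD learning rate as that MI approaches the label entropy. The MI estimator, the scaling rule, and all hyperparameters are assumptions for illustration, not the paper's exact algorithm.

```python
# Illustrative MI-driven learning-rate schedule (not the paper's exact rule):
# the LR decays as the mutual information between predictions and labels
# approaches the label entropy, i.e. as the network "learns" the outputs.
import torch
import torch.nn as nn
from sklearn.metrics import mutual_info_score

def mi_based_lr(base_lr, y_true, y_pred, floor=1e-4):
    """Scale the LR by how much of the label entropy is still unexplained."""
    h_y = mutual_info_score(y_true, y_true)   # entropy of the labels (in nats)
    mi = mutual_info_score(y_true, y_pred)    # MI between labels and predictions
    return max(base_lr * (1.0 - mi / h_y), floor)

# Toy setup: a linear classifier trained with SGD whose LR is reset every epoch.
X = torch.randn(512, 20)
y = (X[:, 0] > 0).long()
model = nn.Linear(20, 2)
loss_fn = nn.CrossEntropyLoss()
base_lr, lr = 0.5, 0.5

for epoch in range(10):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()
    with torch.no_grad():
        preds = model(X).argmax(dim=1).numpy()
    lr = mi_based_lr(base_lr, y.numpy(), preds)  # adapt LR for the next epoch
    print(f"epoch {epoch}: MI-adapted lr for next epoch = {lr:.4f}")
```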


Proceedings, 2020, Vol 54 (1), pp. 25
Author(s):
Álvaro S. Hervella
Lucía Ramos
José Rouco
Jorge Novo
Marcos Ortega

The analysis of the optic disc and cup in retinal images is important for the early diagnosis of glaucoma. In order to improve the joint segmentation of these relevant retinal structures, we propose a novel approach that applies self-supervised multimodal reconstruction of retinal images as pre-training for deep neural networks. The proposed approach is evaluated on different public datasets. The obtained results indicate that the self-supervised multimodal reconstruction pre-training improves the performance of the segmentation. Thus, the proposed approach shows great potential for also improving the interpretable diagnosis of glaucoma.
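A minimal sketch of the pretrain-then-segment idea, assuming paired retinography/angiography images and a toy backbone: the same encoder is first trained for self-supervised multimodal reconstruction and then fine-tuned, with a new head, for optic disc/cup segmentation. Architecture, losses, and class layout are illustrative assumptions, not the authors' exact setup.

```python
# Hedged sketch: pretrain a shared backbone by reconstructing one retinal
# modality from another (self-supervised), then swap the head and fine-tune
# for 3-class optic disc/cup segmentation. The tiny network is illustrative.
import torch
import torch.nn as nn

backbone = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
)
recon_head = nn.Conv2d(32, 3, 1)   # predicts the paired modality (e.g. angiography)
seg_head = nn.Conv2d(32, 3, 1)     # 3 classes: background / optic disc / optic cup

# --- Stage 1: self-supervised multimodal reconstruction (no manual labels) ---
retinography = torch.randn(2, 3, 64, 64)
angiography = torch.randn(2, 3, 64, 64)   # paired image of the other modality
opt = torch.optim.Adam(list(backbone.parameters()) + list(recon_head.parameters()), lr=1e-3)
recon_loss = nn.L1Loss()(recon_head(backbone(retinography)), angiography)
opt.zero_grad()
recon_loss.backward()
opt.step()

# --- Stage 2: fine-tune the pretrained backbone for disc/cup segmentation ---
labels = torch.randint(0, 3, (2, 64, 64))  # per-pixel ground-truth classes
opt = torch.optim.Adam(list(backbone.parameters()) + list(seg_head.parameters()), lr=1e-4)
seg_loss = nn.CrossEntropyLoss()(seg_head(backbone(retinography)), labels)
opt.zero_grad()
seg_loss.backward()
opt.step()
print(recon_loss.item(), seg_loss.item())
```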


Author(s):
Satoru Watanabe
Hayato Yamana

Abstract: The inner representation of deep neural networks (DNNs) is indecipherable, which makes it difficult to tune DNN models, control their training process, and interpret their outputs. In this paper, we propose a novel approach to investigating the inner representation of DNNs through topological data analysis (TDA). Persistent homology (PH), one of the outstanding methods in TDA, was employed to investigate the complexities of trained DNNs. We constructed clique complexes on trained DNNs and calculated their one-dimensional PH. The PH reveals the combinational effects of multiple neurons in DNNs at different resolutions, which are difficult to capture without PH. Evaluations were conducted using fully connected networks (FCNs) and networks combining FCNs and convolutional neural networks (CNNs), trained on the MNIST and CIFAR-10 data sets. The evaluation results demonstrate that the PH of DNNs reflects both the excess of neurons and the problem difficulty, making PH one of the prominent methods for investigating the inner representation of DNNs.
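As a simplified stand-in for the clique-complex construction described above (not the authors' exact method), the sketch below converts the trained weights of a small fully connected network into a neuron-to-neuron distance matrix, where strongly connected neurons are close, and computes its one-dimensional persistent homology with the ripser package. The distance definition is an assumption for illustration.

```python
# Simplified sketch: build a neuron-neuron distance matrix from the trained
# weights of a small fully connected network and compute its 1-dimensional
# persistent homology. The 1/|w| distance is an illustrative assumption.
import numpy as np
import torch.nn as nn
from ripser import ripser  # pip install ripser

model = nn.Sequential(nn.Linear(8, 6), nn.ReLU(), nn.Linear(6, 4))

# Collect per-layer weights and build an all-neurons distance matrix where
# strongly connected neuron pairs are "close" (distance = 1 / (|w| + eps)).
weights = [m.weight.detach().numpy() for m in model if isinstance(m, nn.Linear)]
sizes = [weights[0].shape[1]] + [w.shape[0] for w in weights]
n = sum(sizes)
D = np.full((n, n), 1e6)            # unconnected pairs get a large distance
np.fill_diagonal(D, 0.0)
offsets = np.cumsum([0] + sizes)
for layer, w in enumerate(weights):
    for j in range(w.shape[0]):     # target neuron in layer + 1
        for i in range(w.shape[1]): # source neuron in layer
            a, b = offsets[layer] + i, offsets[layer + 1] + j
            D[a, b] = D[b, a] = 1.0 / (abs(w[j, i]) + 1e-6)

# One-dimensional persistence diagram of the resulting filtration.
dgms = ripser(D, distance_matrix=True, maxdim=1)['dgms']
print("H1 features (birth, death):", dgms[1])
```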

