The predictive skill of neural network models for the large-scale dynamics of the multi-level Lorenz '96 systems

2021 ◽  
Author(s):  
Seoleun Shin
1997 ◽  
pp. 931-935 ◽  
Author(s):  
Anders Lansner ◽  
Örjan Ekeberg ◽  
Erik Fransén ◽  
Per Hammarlund ◽  
Tomas Wilhelmsson

2018 ◽  
Vol 7 (3.15) ◽  
pp. 95 ◽  
Author(s):  
M Zabir ◽  
N Fazira ◽  
Zaidah Ibrahim ◽  
Nurbaity Sabri

This paper evaluates the accuracy of pre-trained Convolutional Neural Network (CNN) models, namely AlexNet and GoogLeNet, alongside one custom CNN. AlexNet and GoogLeNet have proven capabilities, having entered the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and produced strong results. The evaluation in this research is based on the accuracy, loss, and elapsed time of the training and validation processes. The dataset used is Caltech101 from the California Institute of Technology (Caltech), which contains 101 object categories. The results reveal that the custom CNN architecture produces 91.05% accuracy, whereas AlexNet and GoogLeNet achieve a similar accuracy of 99.65%. GoogLeNet converges consistently at an early training stage and yields the minimum error compared to the other two models.
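The comparison above rests on top-1 classification accuracy over the validation split. A minimal sketch of that metric; the logits and labels below are synthetic stand-ins, not Caltech101 data or outputs from the paper's models:

```python
import numpy as np

def top1_accuracy(logits, labels):
    """Fraction of samples whose highest-scoring class matches the label."""
    return float(np.mean(np.argmax(logits, axis=1) == labels))

# synthetic stand-in for a model's validation outputs (3 classes, 4 samples)
logits = np.array([[2.0, 0.1, 0.3],
                   [0.2, 1.5, 0.1],
                   [0.0, 0.2, 0.9],
                   [1.1, 0.9, 0.2]])
labels = np.array([0, 1, 2, 1])  # last sample is misclassified

print(top1_accuracy(logits, labels))  # 0.75
```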


Author(s):  
Ratish Puduppully ◽  
Li Dong ◽  
Mirella Lapata

Recent advances in data-to-text generation have led to the use of large-scale datasets and neural network models which are trained end-to-end, without explicitly modeling what to say and in what order. In this work, we present a neural network architecture which incorporates content selection and planning without sacrificing end-to-end training. We decompose the generation task into two stages. Given a corpus of data records (paired with descriptive documents), we first generate a content plan highlighting which information should be mentioned and in which order, and then generate the document while taking the content plan into account. Automatic and human-based evaluation experiments show that our model outperforms strong baselines, improving the state-of-the-art on the recently released RotoWIRE dataset.
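The two-stage decomposition can be illustrated with a deliberately tiny, non-neural sketch: plain functions stand in for the learned content planner and decoder, and the records are invented (the paper's RotoWIRE records are similar in shape, but this is not its model):

```python
# hypothetical game records; in the paper these come from the RotoWIRE corpus
records = [
    {"type": "points",   "team": "Hawks", "value": 112},
    {"type": "rebounds", "team": "Hawks", "value": 40},
    {"type": "points",   "team": "Bulls", "value": 98},
]

def plan(records):
    """Stage 1: content selection and planning.
    Select the salient records and fix their order (here: points only, high first)."""
    selected = [r for r in records if r["type"] == "points"]
    return sorted(selected, key=lambda r: -r["value"])

def realize(content_plan):
    """Stage 2: surface realization conditioned on the plan.
    Templates stand in for the neural decoder."""
    return " ".join(f"The {r['team']} scored {r['value']} points."
                    for r in content_plan)

summary = realize(plan(records))
```

The point of the decomposition is that the plan is an explicit intermediate object: it can be inspected, evaluated, and supervised separately from the surface text.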


2020 ◽  
Author(s):  
Debanjan Konar ◽  
Siddhartha Bhattacharyya ◽  
Bijaya Ketan Panigrahi

The slow-convergence problem degrades the segmentation performance of the recently proposed Quantum-Inspired Self-supervised Neural Network models owing to a lack of suitable tailoring of the inter-connection weights. Hence, incorporating quantum-inspired meta-heuristics into the Quantum-Inspired Self-supervised Neural Network models optimizes their hyper-parameters and inter-connection weights. This paper proposes an optimized version of a Quantum-Inspired Self-supervised Neural Network (QIS-Net) model for optimal segmentation of brain Magnetic Resonance (MR) imaging. The suggested Optimized Quantum-Inspired Self-supervised Neural Network (Opti-QISNet) model resembles the architecture of QIS-Net, and its operations are leveraged to obtain an optimal segmentation outcome. The optimized activation function employed in the presented model is referred to as the Quantum-Inspired Optimized Multi-Level Sigmoidal (Opti-QSig) activation. The Opti-QSig activation function is optimized by three quantum-inspired meta-heuristics with fitness evaluation using Otsu's multi-level thresholding. Rigorous experiments have been conducted on Dynamic Susceptibility Contrast (DSC) brain MR images from the Nature data repository. The experimental outcomes show that the proposed self-supervised Opti-QISNet model offers a promising alternative to deeply supervised neural-network-based architectures (U-Net and FCNNs) in medical image segmentation and outperforms our recently developed models QIBDS Net and QIS-Net.
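The fitness function driving the meta-heuristics is Otsu's multi-level thresholding criterion. A sketch of that criterion in its standard between-class-variance form (this is the generic formulation, not the paper's exact Opti-QSig pipeline, and the "image" is a synthetic bimodal sample):

```python
import numpy as np

def between_class_variance(values, thresholds):
    """Otsu's between-class variance for a tuple of thresholds.
    Higher is better; the meta-heuristics maximize this as their fitness."""
    edges = [-np.inf] + sorted(thresholds) + [np.inf]
    total_mean = values.mean()
    score = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        cls = values[(values > lo) & (values <= hi)]
        if cls.size:
            weight = cls.size / values.size
            score += weight * (cls.mean() - total_mean) ** 2
    return score

# exhaustive single-threshold search over a toy bimodal "image";
# a meta-heuristic would replace this loop for multi-level thresholds
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.2, 0.02, 500), rng.normal(0.8, 0.02, 500)])
candidates = np.linspace(0.05, 0.95, 19)
best_t = max(candidates, key=lambda t: between_class_variance(img, (t,)))
```

For more than two or three thresholds, exhaustive search becomes expensive, which is precisely why stochastic optimizers are used to search the threshold space.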



Author(s):  
James C Knight ◽  
Thomas Nowotny

Large-scale simulations of spiking neural network models are an important tool for improving our understanding of the dynamics and ultimately the function of brains. However, even small mammals such as mice have on the order of 1 × 10^12 synaptic connections which, in simulations, are each typically characterized by at least one floating-point value. This amounts to several terabytes of data – an unrealistic memory requirement for a single desktop machine. Large models are therefore typically simulated on distributed supercomputers, which is costly and limits large-scale modelling to a few privileged research groups. In this work, we describe extensions to GeNN – our Graphical Processing Unit (GPU) accelerated spiking neural network simulator – that enable it to 'procedurally' generate connectivity and synaptic weights 'on the go' as spikes are triggered, instead of storing and retrieving them from memory. We find that GPUs are well-suited to this approach because of their raw computational power which, due to memory bandwidth limitations, is often under-utilised when simulating spiking neural networks. We demonstrate the value of our approach with a recent model of the macaque visual cortex consisting of 4.13 × 10^6 neurons and 24.2 × 10^9 synapses. Using our new method, it can be simulated on a single GPU – a significant step forward in making large-scale brain modelling accessible to many more researchers. Our results match those obtained on a supercomputer and the simulation runs up to 35% faster on a single high-end GPU than previously on over 1000 supercomputer nodes.
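The core trick – re-deriving synaptic parameters deterministically from connection indices instead of storing them – can be sketched in a few lines. GeNN does this inside CUDA kernels with counter-based RNGs; the NumPy seeding scheme below is a hypothetical stand-in that captures the same memory-for-compute trade:

```python
import numpy as np

def procedural_weight(pre, post, seed=1234):
    """Re-derive a synapse's weight on demand from (pre, post, seed).

    Nothing is stored per synapse: the same indices always reproduce
    the same weight, trading memory for cheap, parallel recomputation.
    """
    stream = np.random.default_rng((seed, pre, post))  # per-pair RNG stream
    return stream.uniform(0.0, 0.5)

w_first = procedural_weight(3, 7)
w_again = procedural_weight(3, 7)  # identical: regenerated, not retrieved
```

With 10^12 synapses, storing one 4-byte float each needs ~4 TB; regenerating weights on demand needs only the seed and the indices already present in the spike-propagation loop.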


2021 ◽  
Vol 24 (3) ◽  
pp. 1-21
Author(s):  
Rafael Veras ◽  
Christopher Collins ◽  
Julie Thorpe

In this article, we present a thorough evaluation of semantic password grammars. We report multifactorial experiments that test the impact of sample size, probability smoothing, and linguistic information on password cracking. The semantic grammars are compared with state-of-the-art probabilistic context-free grammar (PCFG) and neural network models, and tested in cross-validation and A vs. B scenarios. We present results that reveal the contributions of part-of-speech (syntactic) and semantic patterns, and suggest that the former are more consequential to the security of passwords. Our results show that in many cases PCFGs are still competitive models compared to their latest neural network counterparts. In addition, we show that there is little performance gain in training PCFGs with more than 1 million passwords. We present qualitative analyses of four password leaks (Mate1, 000webhost, Comcast, and RockYou) based on trained semantic grammars, and derive graphical models that capture high-level dependencies between token classes. Finally, we confirm the similarity inferences from our qualitative analysis by examining the effectiveness of grammars trained and tested on all pairs of leaks.
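A PCFG password model factorizes a guess's probability into a base-structure probability times the probabilities of the terminals filling each slot, and then emits guesses in decreasing-probability order. A toy sketch with invented probabilities (real grammars are trained on leaked corpora, and semantic grammars refine the token classes):

```python
from itertools import product

# toy PCFG: base structures over token classes, with hypothetical probabilities
structures = {("word", "digits"): 0.7, ("digits", "word"): 0.3}
terminals = {
    "word":   {"love": 0.6, "summer": 0.4},
    "digits": {"123": 0.5, "2020": 0.5},
}

def guesses():
    """Enumerate candidate passwords in decreasing-probability order.
    P(guess) = P(structure) * product of terminal probabilities."""
    cands = []
    for struct, p_struct in structures.items():
        for combo in product(*(terminals[t].items() for t in struct)):
            guess = "".join(tok for tok, _ in combo)
            prob = p_struct
            for _, p_tok in combo:
                prob *= p_tok
            cands.append((guess, prob))
    return sorted(cands, key=lambda c: -c[1])

ranked = guesses()
```

Cracking experiments like those above then measure how many real passwords fall within the first N guesses of such a ranking.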


Author(s):  
I. O. Lymariev ◽  
S. A. Subbotin ◽  
A. A. Oliinyk ◽  
I. V. Drokin

2020 ◽  
Author(s):  
Matthew G. Perich ◽  
Charlotte Arlt ◽  
Sofia Soares ◽  
Megan E. Young ◽  
Clayton P. Mosher ◽  
...  

Behavior arises from the coordinated activity of numerous anatomically and functionally distinct brain regions. Modern experimental tools allow unprecedented access to large neural populations spanning many interacting regions brain-wide. Yet, understanding such large-scale datasets necessitates both scalable computational models to extract meaningful features of inter-region communication and principled theories to interpret those features. Here, we introduce Current-Based Decomposition (CURBD), an approach for inferring brain-wide interactions using data-constrained recurrent neural network models that directly reproduce experimentally obtained neural data. CURBD leverages the functional interactions inferred by such models to reveal directional currents between multiple brain regions. We first show that CURBD accurately isolates inter-region currents in simulated networks with known dynamics. We then apply CURBD to multi-region neural recordings obtained from mice during running, macaques during Pavlovian conditioning, and humans during memory retrieval to demonstrate the widespread applicability of CURBD to untangle brain-wide interactions underlying behavior from a variety of neural datasets.
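Because the trained RNN's input current is linear in its weights, the current arriving at one region decomposes exactly into one term per source region; that decomposition is CURBD's central quantity. A minimal sketch, with random stand-ins for the trained weight matrix and firing rates:

```python
import numpy as np

rng = np.random.default_rng(1)
n_a, n_b, T = 4, 3, 50                               # region sizes, time steps
W = rng.normal(0.0, 0.3, (n_a + n_b, n_a + n_b))     # stand-in for trained weights
r = np.tanh(rng.normal(size=(n_a + n_b, T)))         # firing rates over time

# CURBD: split the total input current into region A by source region
cur_A_to_A = W[:n_a, :n_a] @ r[:n_a]   # recurrent current within A
cur_B_to_A = W[:n_a, n_a:] @ r[n_a:]   # directed current B -> A
total_into_A = W[:n_a, :] @ r          # full input current to A

# the decomposition is exact: the per-source currents sum to the total
```

In practice W comes from fitting the RNN to recorded population activity; the random W here only illustrates the bookkeeping.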

