Modeling Neural Variability in Deep Networks with Dropout

2021
Author(s):  
Xu Pan ◽  
Ruben Coen-Cagli ◽  
Odelia Schwartz

ABSTRACT Convolutional neural networks (CNNs) have been used to model the biological visual system. Compared to other models, CNNs can better capture neural responses to natural stimuli. However, previous successes are limited to modeling mean responses; another fundamental aspect of cortical activity, namely response variability, is ignored. Whether CNN models can capture the properties of neural variability remains unknown. Previous computational neuroscience studies showed that response variability can have a functional role, and found that the correlation structure (especially noise correlation) influences the amount of information in the population code. However, CNN models are typically deterministic, so noise (and correlations) in CNN models have not been studied. In this study, we developed a CNN model of visual cortex that includes neural variability. The model includes Monte Carlo dropout: a random subset of units is silenced at each presentation of the input image, inducing variability in the model. We found that our model captured a wide range of neural variability findings from electrophysiology experiments, including that response mean and variance scale together, noise correlations are small but positive on average, both evoked and spontaneous noise correlations are larger for neurons with similar tuning, and the noise covariance is low-dimensional. Further, we found that removing the correlations can boost trial-by-trial decoding performance in the CNN model.
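As a loose illustration of the Monte Carlo dropout idea described in this abstract (not the authors' trained CNN), the sketch below applies a fresh dropout mask on every presentation of the same input and summarizes the resulting trial-to-trial variability; the layer sizes, weights, and dropout rate are placeholder assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "network": random weights stand in for a trained CNN stage;
# only the Monte Carlo dropout mechanism is the point of the sketch.
W1 = rng.normal(size=(64, 100)) / 10.0   # hidden weights (assumed)
W2 = rng.normal(size=(10, 64)) / 8.0     # readout weights (assumed)
p_drop = 0.5                             # dropout probability (assumed)

def forward(x, rng):
    h = np.maximum(W1 @ x, 0.0)               # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop       # fresh dropout mask per presentation
    return W2 @ (h * mask / (1.0 - p_drop))   # dropout kept active at "test" time

x = rng.normal(size=100)                                  # one fixed input image (flattened)
trials = np.stack([forward(x, rng) for _ in range(500)])  # repeated presentations

print("trial-averaged responses:", trials.mean(axis=0))
print("trial-to-trial variances:", trials.var(axis=0))
print("noise-correlation matrix:\n", np.corrcoef(trials.T))   # 10 x 10 across units
```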

These volumes contain the proceedings of the conference held at Aarhus, Oxford and Madrid in September 2016 to mark the seventieth birthday of Nigel Hitchin, one of the world’s foremost geometers and Savilian Professor of Geometry at Oxford. The proceedings contain twenty-nine articles, including three by Fields medallists (Donaldson, Mori and Yau). The articles cover a wide range of topics in geometry and mathematical physics, including the following: Riemannian geometry, geometric analysis, special holonomy, integrable systems, dynamical systems, generalized complex structures, symplectic and Poisson geometry, low-dimensional topology, algebraic geometry, moduli spaces, Higgs bundles, geometric Langlands programme, mirror symmetry and string theory. These volumes will be of interest to researchers and graduate students both in geometry and mathematical physics.


Entropy
2021
Vol 23 (4)
pp. 421
Author(s):  
Dariusz Puchala ◽  
Kamil Stokfiszewski ◽  
Mykhaylo Yatsymirskyy

In this paper, the authors analyze in more detail an image encryption scheme, proposed in their earlier work, which preserves input image statistics and can be used in connection with the JPEG compression standard. The image encryption process takes advantage of fast linear transforms parametrized with private keys and is carried out prior to the compression stage in a way that does not alter those statistical characteristics of the input image that are crucial from the point of view of the subsequent compression. This feature makes the encryption process transparent to the compression stage and enables the JPEG algorithm to maintain its full compression capabilities even though it operates on the encrypted image data. The main advantage of the considered approach is that the JPEG algorithm can be used without any modifications as part of the encrypt-then-compress image processing framework. The paper includes a detailed mathematical model of the examined scheme, allowing for theoretical analysis of the impact of the image encryption step on the effectiveness of the compression process. A combinatorial and statistical analysis of the encryption process is also included, which allows its cryptographic strength to be evaluated. In addition, the paper considers several practical use-case scenarios with different characteristics of the compression and encryption stages. The final part of the paper contains additional results of the experimental studies regarding the general effectiveness of the presented scheme. The results show that for a wide range of compression ratios the considered scheme performs comparably to the JPEG algorithm alone, that is, without the encryption stage, in terms of the quality measures of reconstructed images. Moreover, the results of statistical analysis, as well as those obtained with generally accepted quality measures of image cryptographic systems, demonstrate the high strength and efficiency of the scheme’s encryption stage.
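As a rough, hypothetical illustration of the encrypt-then-compress idea (not the authors' actual transform or key schedule), the sketch below applies a key-seeded orthonormal linear transform to 8x8 pixel blocks, producing data that could be handed to an unmodified JPEG-style pipeline and later inverted with the same key; the block size and key derivation are assumptions for illustration only.

```python
import numpy as np

def keyed_transform(key: int, n: int = 8) -> np.ndarray:
    """Derive an orthonormal n x n matrix from a private key (illustrative only)."""
    rng = np.random.default_rng(key)
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))   # random orthonormal basis
    return q

def encrypt_blocks(img: np.ndarray, key: int, n: int = 8) -> np.ndarray:
    """Apply the keyed linear transform to each n x n block (separable, invertible)."""
    t = keyed_transform(key, n)
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(0, h, n):
        for j in range(0, w, n):
            out[i:i+n, j:j+n] = t @ img[i:i+n, j:j+n] @ t.T
    return out

def decrypt_blocks(enc: np.ndarray, key: int, n: int = 8) -> np.ndarray:
    """Invert the keyed transform block by block."""
    t = keyed_transform(key, n)
    out = np.empty_like(enc)
    h, w = enc.shape
    for i in range(0, h, n):
        for j in range(0, w, n):
            out[i:i+n, j:j+n] = t.T @ enc[i:i+n, j:j+n] @ t
    return out

img = np.random.default_rng(1).integers(0, 256, size=(64, 64)).astype(float)
enc = encrypt_blocks(img, key=1234)     # this is what a JPEG encoder would receive
rec = decrypt_blocks(enc, key=1234)     # decryption after decompression
print("max reconstruction error:", np.abs(img - rec).max())
```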


Author(s):  
Hung Phuoc Truong ◽  
Thanh Phuong Nguyen ◽  
Yong-Guk Kim

Abstract We present a novel framework for efficient and robust facial feature representation based upon Local Binary Pattern (LBP), called Weighted Statistical Binary Pattern, wherein the descriptors utilize the straight-line topology along different directions. The input image is initially divided into mean and variance moments. A new variance moment, which contains distinctive facial features, is prepared by extracting the k-th root. Then, Sign and Magnitude components are constructed along four different directions using the mean moment, and a weighting approach based on the new variance moment is applied to each component. Finally, the weighted histograms of the Sign and Magnitude components are concatenated to build a novel histogram of Complementary LBP along different directions. A comprehensive evaluation using six public face datasets suggests that the present framework outperforms state-of-the-art methods, achieving accuracies of 98.51% on ORL, 98.72% on YALE, 98.83% on Caltech, 99.52% on AR, 94.78% on FERET, and 99.07% on KDEF. The influence of color spaces and the issue of degraded images are also analyzed with our descriptors. Such a result with theoretical underpinning confirms that our descriptors are robust against noise, illumination variation, diverse facial expressions, and head poses.
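For orientation, the sketch below computes a generic Sign/Magnitude LBP decomposition in the spirit of Complementary LBP; it is not the paper's weighted descriptor, and the 8-neighbour layout, radius, and histogram binning are assumptions.

```python
import numpy as np

def lbp_sign_magnitude(img: np.ndarray):
    """Basic 8-neighbour LBP split into Sign and Magnitude components
    (a generic CLBP-style decomposition, not the paper's weighted descriptor)."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    center = img[1:-1, 1:-1].astype(float)
    sign_code = np.zeros(center.shape, dtype=int)
    mag_stack = []
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(float)
        diff = neigh - center
        sign_code |= (diff >= 0).astype(int) << bit    # Sign component bits
        mag_stack.append(np.abs(diff))                 # Magnitude component
    return sign_code, np.mean(mag_stack, axis=0)

img = np.random.default_rng(0).integers(0, 256, size=(32, 32))
sign, mag = lbp_sign_magnitude(img)
hist_sign, _ = np.histogram(sign, bins=256, range=(0, 256))   # histogram used as a feature
print(hist_sign.sum(), float(mag.mean()))
```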


2014
Vol 1044-1045
pp. 1049-1052
Author(s):  
Chin Chen Chang ◽  
I Ta Lee ◽  
Tsung Ta Ke ◽  
Wen Kai Tai

Common methods for reducing image size include scaling and cropping. However, these two approaches introduce quality problems in the reduced images. In this paper, we propose an image-reduction algorithm that separates the main objects from the background. First, we extract two feature maps, namely an enhanced visual saliency map and an improved gradient map, from the input image. After that, we integrate these two feature maps into an importance map. Finally, we generate the target image using the importance map. The proposed approach obtains the desired results for a wide range of images.
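The paper's enhanced saliency and improved gradient maps are not reproduced here; as a minimal stand-in, the sketch below blends a crude contrast-based saliency proxy with a gradient-magnitude map into a single importance map, with the blending weight as an assumption.

```python
import numpy as np

def gradient_map(img: np.ndarray) -> np.ndarray:
    """Gradient-magnitude feature map (simple finite differences, not the paper's version)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def saliency_map(img: np.ndarray) -> np.ndarray:
    """Crude saliency proxy: distance of each pixel from the global mean intensity."""
    img = img.astype(float)
    return np.abs(img - img.mean())

def importance_map(img: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend the two normalized feature maps; the weight alpha is an assumption."""
    def norm(m):
        return (m - m.min()) / (m.max() - m.min() + 1e-12)
    return alpha * norm(saliency_map(img)) + (1.0 - alpha) * norm(gradient_map(img))

img = np.random.default_rng(0).integers(0, 256, size=(48, 64))
imp = importance_map(img)
print(imp.shape, float(imp.min()), float(imp.max()))    # importance values in [0, 1]
```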


Author(s):  
Fenxiao Chen ◽  
Yun-Cheng Wang ◽  
Bin Wang ◽  
C.-C. Jay Kuo

Abstract Research on graph representation learning has received great attention in recent years, since most data in real-world applications come in the form of graphs. High-dimensional graph data are often in irregular forms. They are more difficult to analyze than image/video/audio data defined on regular lattices. Various graph embedding techniques have been developed to convert raw graph data into a low-dimensional vector representation while preserving the intrinsic graph properties. In this review, we first explain the graph embedding task and its challenges. Next, we review a wide range of graph embedding techniques with insights. Then, we evaluate several state-of-the-art methods on small and large data sets and compare their performance. Finally, potential applications and future directions are presented.
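To make the embedding task concrete, here is a minimal sketch of one classical technique of the kind such a review covers (a Laplacian-eigenmap-style spectral embedding); the toy graph is an assumption, and this is not the survey's benchmark setup.

```python
import numpy as np

def spectral_embedding(adj: np.ndarray, dim: int = 2) -> np.ndarray:
    """Laplacian-eigenmap embedding: map graph nodes to low-dimensional vectors
    while preserving neighbourhood structure."""
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj                 # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(lap)         # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]                # skip the trivial constant eigenvector

# Toy graph: two triangles joined by a single edge (assumed example data).
adj = np.zeros((6, 6))
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0

emb = spectral_embedding(adj, dim=2)
print(emb)    # nodes in the same triangle land near each other
```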


2020
Vol 32 (8)
pp. 1448-1498
Author(s):  
Alexandre René ◽  
André Longtin ◽  
Jakob H. Macke

Understanding how rich dynamics emerge in neural populations requires models exhibiting a wide range of behaviors while remaining interpretable in terms of connectivity and single-neuron dynamics. However, it has been challenging to fit such mechanistic spiking networks at the single-neuron scale to empirical population data. To close this gap, we propose to fit such data at a mesoscale, using a mechanistic but low-dimensional and, hence, statistically tractable model. The mesoscopic representation is obtained by approximating a population of neurons as multiple homogeneous pools of neurons and modeling the dynamics of the aggregate population activity within each pool. We derive the likelihood of both single-neuron and connectivity parameters given this activity, which can then be used to optimize parameters by gradient ascent on the log likelihood or to perform Bayesian inference using Markov chain Monte Carlo (MCMC) sampling. We illustrate this approach using a model of generalized integrate-and-fire neurons for which mesoscopic dynamics have been previously derived and show that both single-neuron and connectivity parameters can be recovered from simulated data. In particular, our inference method extracts posterior correlations between model parameters, which define parameter subsets able to reproduce the data. We compute the Bayesian posterior for combinations of parameters using MCMC sampling and investigate how the approximations inherent in a mesoscopic population model affect the accuracy of the inferred single-neuron parameters.
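The mesoscopic population likelihood itself is not reproduced here; as a generic sketch of the MCMC step mentioned above, the code below runs a random-walk Metropolis sampler against a stand-in log-likelihood with correlated parameters, illustrating how posterior correlations between parameters are estimated. All parameter names and the stand-in likelihood are assumptions.

```python
import numpy as np

def metropolis_hastings(log_lik, theta0, n_steps=5000, step=0.1, seed=0):
    """Generic random-walk Metropolis sampler over model parameters."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    ll = log_lik(theta)
    samples = []
    for _ in range(n_steps):
        prop = theta + step * rng.normal(size=theta.shape)   # symmetric proposal
        ll_prop = log_lik(prop)
        if np.log(rng.random()) < ll_prop - ll:              # accept/reject step
            theta, ll = prop, ll_prop
        samples.append(theta.copy())
    return np.array(samples)

# Stand-in likelihood: a correlated Gaussian over two "parameters", mimicking
# the posterior correlations discussed in the abstract (assumed, illustrative).
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
prec = np.linalg.inv(cov)
log_lik = lambda th: -0.5 * th @ prec @ th

samples = metropolis_hastings(log_lik, theta0=[0.0, 0.0])
print("posterior correlation estimate:", np.corrcoef(samples.T)[0, 1])
```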


1992
Vol 6 (4)
pp. 561-580
Author(s):  
C. H. Hesse

This paper deals with the two-dimensional stochastic process (X(t), V(t)) where dX(t) = V(t)dt, V(t) = W(t) + ν for some constant ν and W(t) is a one-dimensional Wiener process with zero mean and variance parameter σ² = 1. We are interested in the first-passage time of (X(t), V(t)) to the plane X = 0 for a process starting from (X(0) = −x, V(0) = ν) with x > 0. The partial differential equation for the Laplace transform of the first-passage time density is transformed into a Schrödinger-type equation and, using methods of global analysis, such as the method of dominant balance, an approximation to the first-passage density is obtained. In a series of simulations, the quality of this approximation is checked. Over a wide range of x and ν it is found to perform well, globally in t. Some applications are mentioned.
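A brute-force numerical check of this setup (not the paper's analytical approximation) is straightforward: simulate the process with an Euler scheme and record the first time each path reaches the plane X = 0. The step size, horizon, and starting values below are assumptions.

```python
import numpy as np

def first_passage_times(x=1.0, nu=0.5, dt=1e-3, t_max=30.0, n_paths=500, seed=0):
    """Simulate (X(t), V(t)) with dX = V dt, V(t) = W(t) + nu, starting from
    X(0) = -x, and record each path's first hitting time of X = 0."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    X = np.full(n_paths, -x)
    W = np.zeros(n_paths)
    hit = np.full(n_paths, np.nan)
    for k in range(n_steps):
        W += np.sqrt(dt) * rng.normal(size=n_paths)   # Wiener increments (sigma^2 = 1)
        X += (W + nu) * dt                            # integrate the velocity
        newly = np.isnan(hit) & (X >= 0.0)            # first crossing of the plane X = 0
        hit[newly] = (k + 1) * dt
    return hit

fpt = first_passage_times()
print("fraction of paths absorbed by t_max:", np.mean(~np.isnan(fpt)))
print("mean first-passage time (absorbed paths):", np.nanmean(fpt))
```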


Author(s):  
Amitabha Mukerjee ◽  
Madan Mohan Dabbeeru

In the widespread endeavour to standardize a vocabulary for design, the semantics of the terms, especially at the detailed levels, are often defined based on the exigencies of the implementation. In human usage, each symbol has a wide range of associations, and any attempt at definition will miss many of these, resulting in brittleness. Human flexibility in symbol usage is possible because our symbols are learned from a vast experience of the world. Here we propose the very first steps towards a process by which CAD systems may acquire symbols by learning usage patterns or image schemas grounded in experience. Subsequently, more abstract symbols may be derived based on these grounded symbols, which thereby retain the flexibility inherent in a learning system. In many design tasks, the “good designs” lie along regions that can be mapped to lower-dimensional surfaces or manifolds, owing to latent interdependencies between the variables. These low-dimensional structures (sometimes called chunks) may constitute the intermediate step between the raw experience and the eventual symbol that arises after these patterns become stabilized through communication. In a multi-functional design scenario, we use a locally linear embedding (LLE) to discover these manifolds, which are compact descriptions for the space of “good designs”. We illustrate the approach with a simple 2-parameter latch-and-bolt design, and with an 8-parameter universal motor.
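A minimal sketch of the LLE step, using scikit-learn's LocallyLinearEmbedding on toy data rather than the latch-and-bolt or motor models: points sampled near a one-dimensional curve in a three-dimensional "design-parameter" space stand in for the set of good designs, and LLE recovers a compact manifold coordinate for each.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)

# Toy stand-in for a space of "good designs": samples near a 1-D curve embedded
# in a 3-D design-parameter space (assumed data, not the paper's design models).
t = rng.uniform(0, 2 * np.pi, size=400)
designs = np.column_stack([np.cos(t), np.sin(t), 0.3 * t])
designs += 0.02 * rng.normal(size=designs.shape)      # small spread around the manifold

# LLE recovers a low-dimensional coordinate (a "chunk") for each design.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=1)
coords = lle.fit_transform(designs)

print(coords.shape)                                   # (400, 1) manifold coordinates
print("reconstruction error:", lle.reconstruction_error_)
```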


2015
Vol 3 (2)
pp. T93-T107
Author(s):  
Richard S. Bishop

A fundamental aspect of prospect evaluation is whether the trap volume or the charge volume limits the volume of trapped hydrocarbons. Traps filled to a leak point are full traps, although I rarely describe them as such. I commonly say “full to spill” but rarely do I hear “full to a leak point.” Why not? A summary of literature from fault leakage, seeps, field studies, and theoretical source-yield calculations illustrates the implication that source overcharge (i.e., the charge exceeding the trap volume) occurs in basins that vary widely in age and tectonic setting. Perhaps surprisingly, this is true for oil and gas fields and for a wide range of source rock quality from rich to lean. The most obvious implication from source overcharge is that the volume of trapped hydrocarbons is limited by the absolute volume of the trap. Less obvious is the recognition that if oil and free gas are available to a trap, gas will displace the oil. Thus, if there are no gas leaks, the trap will contain only gas. If there is preferential leakage of gas, then the trap may contain a gas cap and an oil leg. Furthermore, the occurrence of oils saturated with gas likely indicates selective leakage of free gas. Hydrocarbon contacts (whether oil-water, gas-oil, or gas-water) are interpreted to define the leak or spill point or seal capacity. Thus, instead of using continuous statistical distributions to describe all elements of traps, some elements such as area are more appropriately described as discrete values and a full assessment may be a combination of discrete plus continuous statistical distributions. Overcharge may also lead to different interpretations of risk. Interpreting the trap volume, particularly with leak points, leads to the notion that risk evaluation might consider the number and quality of potential leak points.
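The closing point, that a full assessment may combine discrete trap-volume values (set by specific leak or spill points) with continuous distributions for charge, can be illustrated with a small Monte Carlo sketch; all volumes and probabilities below are purely illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Trap capacity as a discrete element: a few candidate leak/spill points,
# each fixing a specific trap volume (illustrative numbers only).
trap_volumes = np.array([20.0, 55.0, 90.0])            # e.g. volumes at each leak point
trap = rng.choice(trap_volumes, size=n, p=[0.5, 0.3, 0.2])

# Charge as a continuous element: a lognormal volume of hydrocarbons
# delivered to the trap (assumed parameters).
charge = rng.lognormal(mean=np.log(120.0), sigma=0.8, size=n)

# The trapped volume is limited by whichever is smaller; overcharge means the
# trap volume, not the charge, is the binding constraint.
trapped = np.minimum(trap, charge)
print("fraction of trials that are charge-limited:", np.mean(charge < trap))
print("mean trapped volume:", trapped.mean())
```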


2006
Vol 18 (3)
pp. 634-659
Author(s):  
Alexander Lerchner ◽  
Cristina Ursta ◽  
John Hertz ◽  
Mandana Ahmadi ◽  
Pauline Ruffiot ◽  
...  

We study the spike statistics of neurons in a network with dynamically balanced excitation and inhibition. Our model, intended to represent a generic cortical column, comprises randomly connected excitatory and inhibitory leaky integrate-and-fire neurons, driven by excitatory input from an external population. The high connectivity permits a mean field description in which synaptic currents can be treated as gaussian noise, the mean and autocorrelation function of which are calculated self-consistently from the firing statistics of single model neurons. Within this description, a wide range of Fano factors is possible. We find that the irregularity of spike trains is controlled mainly by the strength of the synapses relative to the difference between the firing threshold and the postfiring reset level of the membrane potential. For moderately strong synapses, we find spike statistics very similar to those observed in primary visual cortex.
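As a much-reduced sketch of the quantity at issue, the code below drives a single leaky integrate-and-fire neuron with Gaussian current noise (a stand-in for the self-consistent mean-field input, not the full balanced network) and computes the Fano factor of its spike counts across trials; all parameter values are assumptions.

```python
import numpy as np

def lif_spike_counts(mu=0.95, sigma=0.5, v_thresh=1.0, v_reset=0.0, tau=0.02,
                     dt=1e-4, t_sim=2.0, n_trials=200, seed=0):
    """Leaky integrate-and-fire neuron driven by Gaussian current noise;
    returns the spike count on each of n_trials repeated trials."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_sim / dt)
    v = np.full(n_trials, v_reset)
    counts = np.zeros(n_trials, dtype=int)
    for _ in range(n_steps):
        # Euler step of the membrane potential with white-noise input
        v += dt * (mu - v) / tau + sigma * np.sqrt(dt) * rng.normal(size=n_trials)
        spiked = v >= v_thresh
        counts[spiked] += 1
        v[spiked] = v_reset            # reset after each spike
    return counts

counts = lif_spike_counts()
fano = counts.var() / counts.mean()    # Fano factor of the spike counts
print("mean count:", counts.mean(), "Fano factor:", round(float(fano), 3))
```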

