weight scaling
Recently Published Documents

TOTAL DOCUMENTS: 35 (five years: 6)
H-INDEX: 9 (five years: 1)

Author(s):  
Srinivasu Polinati ◽  
Durga Prasad Bavirisetti ◽  
Kandala N V P S Rajesh ◽  
Ravindra Dhuli

Objective: The objective of any multimodal medical image fusion algorithm is to assist radiologists in making better decisions during diagnosis and therapy by integrating anatomical (magnetic resonance imaging) and functional (positron emission tomography/single-photon emission computed tomography) information. Methods: We propose a new medical image fusion method based on content-based decomposition, principal component analysis (PCA), and the sigmoid function. We use the empirical wavelet transform (EWT) for content-based decomposition because it preserves crucial medical image information such as edges and corners. PCA provides the initial weights for each detail layer. Results: In our experiments, we found that using PCA weights directly for detail-layer fusion introduces severe artifacts into the fused image due to weight-scaling issues. To tackle this, we apply the sigmoid function for better weight scaling. We fused 24 pairs of MRI-PET and 24 pairs of MRI-SPECT images and evaluated the results using four significant quantitative metrics. Conclusion: We compared the proposed method with other state-of-the-art transform-based fusion approaches using traditional and recent performance measures, and observed an appreciable improvement in both qualitative and quantitative results.
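The sigmoid rescaling step can be sketched compactly. The Python/NumPy code below is a minimal illustration under our own assumptions: a standard eigenvector-based PCA fusion rule for the raw weights and an illustrative sigmoid mapping with a hypothetical gain parameter. The paper's exact formulation may differ.

```python
import numpy as np

def pca_weights(d1, d2):
    """Initial fusion weights: components of the dominant eigenvector of the
    2x2 covariance of the two flattened detail layers (standard PCA fusion)."""
    cov = np.cov(np.stack([d1.ravel(), d2.ravel()]))
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])
    return v / v.sum()                     # normalize so the weights sum to 1

def fuse_detail_layers(d1, d2, gain=5.0):
    """Fuse two detail layers. Raw PCA weights can be badly scaled (one near 1,
    the other near 0, which amplifies artifacts); a sigmoid softens the contrast
    between them. Gain value and exact mapping are illustrative assumptions."""
    w1, w2 = pca_weights(d1, d2)
    s1 = 1.0 / (1.0 + np.exp(-gain * (w1 - w2)))   # sigmoid rescaling
    return s1 * d1 + (1.0 - s1) * d2
```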


2020 ◽  
Vol 68 (10) ◽  
pp. 6101-6113
Author(s):  
Abdullahi Mohammad ◽  
Christos Masouros ◽  
Yiannis Andreopoulos

2020 ◽  
Vol 32 (1) ◽  
pp. 153-181
Author(s):  
Anthony Strock ◽  
Xavier Hinaut ◽  
Nicolas P. Rougier

Gated working memory is defined as the capacity to hold arbitrary information at any time so that it can be used at a later time. Based on electrophysiological recordings, several computational models have tackled the problem using dedicated and explicit mechanisms. We propose instead to consider an implicit mechanism based on a random recurrent neural network. We introduce a robust yet simple reservoir model of gated working memory with instantaneous updates. The model is able to store an arbitrary real value at a random time and hold it over an extended period. The dynamics of the model form a line attractor that learns to exploit reentry and a nonlinearity during the training phase, using only a few representative values. A deeper study of the model shows that the results hold over a large range of hyperparameters (e.g., number of neurons, sparsity, global weight scaling), such that any large enough population mixing excitatory and inhibitory neurons can quickly learn to realize such gated working memory. In a nutshell, we show that, with a minimal set of hypotheses, we can build a robust model of working memory. This suggests that gated working memory could be an implicit property of any random population, acquired through learning. Furthermore, considering working memory to be a physically open but functionally closed system, we account for some counterintuitive electrophysiological recordings.
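The core of such a reservoir model can be sketched in a few lines. The Python/NumPy code below is a minimal echo-state-style illustration under our own assumptions (dense Gaussian recurrent weights, a tanh reservoir, teacher-forced output feedback as the "reentry" signal, a ridge-regression readout); the sizes, gate probability, and global weight scaling (spectral radius) are illustrative values, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, T = 500, 10_000        # reservoir size and number of steps (illustrative)
sr, ridge = 1.0, 1e-6         # global weight scaling (spectral radius), readout regularizer

# Random recurrent, input, and feedback (reentry) weights.
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= sr / np.max(np.abs(np.linalg.eigvals(W)))  # rescale to the chosen spectral radius
W_in = rng.uniform(-1, 1, (n_res, 2))           # two inputs: value v(t) and gate g(t)
W_fb = rng.uniform(-1, 1, n_res)                # output fed back into the reservoir

# Task: output the value that was presented the last time the gate was up.
v = rng.uniform(-1, 1, T)
g = (rng.random(T) < 0.01).astype(float)        # sparse random gate openings
target = np.zeros(T)
held = 0.0
for t in range(T):
    if g[t]:
        held = v[t]
    target[t] = held

# Collect states with teacher forcing, then fit a ridge-regression readout.
X = np.zeros((T, n_res))
x = np.zeros(n_res)
out = 0.0
for t in range(T):
    x = np.tanh(W @ x + W_in @ np.array([v[t], g[t]]) + W_fb * out)
    X[t] = x
    out = target[t]                             # teacher-forced reentry signal
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ target)

# Free run: the trained readout now supplies its own reentry signal.
x = np.zeros(n_res)
out = 0.0
errs = []
for t in range(T):
    x = np.tanh(W @ x + W_in @ np.array([v[t], g[t]]) + W_fb * out)
    out = x @ W_out
    errs.append(abs(out - target[t]))
print("mean |error| in free run:", np.mean(errs))
```

The feedback loop is the point of the exercise: once the readout learns to reproduce the held value, the output-to-reservoir projection closes a line attractor that maintains it between gate openings.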


2019 ◽  
Author(s):  
Anthony Strock ◽  
Xavier Hinaut ◽  
Nicolas P. Rougier

Abstract: Gated working memory is defined as the capacity to hold arbitrary information at any time so that it can be used at a later time. Based on electrophysiological recordings, several computational models have tackled the problem using dedicated and explicit mechanisms. We propose instead to consider an implicit mechanism based on a random recurrent neural network. We introduce a robust yet simple reservoir model of gated working memory with instantaneous updates. The model is able to store an arbitrary real value at a random time and hold it over an extended period. The dynamics of the model form a line attractor that learns to exploit reentry and a nonlinearity during the training phase, using only a few representative values. A deeper study of the model shows that the results hold over a large range of hyperparameters (number of neurons, sparsity, global weight scaling, etc.), such that any large enough population mixing excitatory and inhibitory neurons can quickly learn to realize such gated working memory. In a nutshell, we show that, with a minimal set of hypotheses, we can build a robust model of working memory. This suggests that gated working memory could be an implicit property of any random population, acquired through learning. Furthermore, considering working memory to be a physically open but functionally closed system, we account for some counterintuitive electrophysiological recordings.


2017 ◽  
Vol 26 (6) ◽  
pp. 826-838
Author(s):  
A. DAVIDSON ◽  
A. GANESH

Consider the complete graph on n vertices, with edge weights drawn independently from the exponential distribution with unit mean. Janson showed that the typical distance between two vertices scales as log n/n, whereas the diameter (the maximum distance between any two vertices) scales as 3 log n/n. Bollobás, Gamarnik, Riordan and Sudakov showed that, for any fixed k, the weight of the Steiner tree connecting k typical vertices scales as (k − 1)log n/n, which recovers Janson's result for k = 2. We extend this to show that the worst-case k-Steiner tree, over all choices of k vertices, has weight scaling as (2k − 1)log n/n. Finally, we generalize this result to Steiner trees with a mixture of typical and worst-case vertices.
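The k = 2 case of these scalings is easy to probe numerically. The Python/NumPy sketch below is our own illustration, with an illustrative graph size: draw exponential(1) weights on the complete graph, run single-source Dijkstra, and compare the resulting distances against the log n/n scale. Per Janson, the typical-distance ratio tends to 1 and the one-to-all maximum from a fixed vertex tends to 2 as n grows.

```python
import heapq
import numpy as np

def dijkstra(w, source):
    """Single-source shortest paths on a dense weight matrix."""
    n = len(w)
    dist = np.full(n, np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for t in range(n):
            nd = d + w[u, t]
            if nd < dist[t]:
                dist[t] = nd
                heapq.heappush(heap, (nd, t))
    return dist

rng = np.random.default_rng(1)
n = 400                                    # illustrative size
w = np.triu(rng.exponential(1.0, (n, n)), 1)
w = w + w.T                                # one exp(1) weight per unordered pair
np.fill_diagonal(w, np.inf)                # no self-loops

d = dijkstra(w, 0)
scale = np.log(n) / n
print("typical distance / (log n / n):   ", d[1] / scale)         # -> 1 as n grows
print("one-to-all maximum / (log n / n): ", d[1:].max() / scale)  # -> 2 as n grows
```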


Forests ◽  
2014 ◽  
Vol 5 (9) ◽  
pp. 2289-2306 ◽  
Author(s):  
Jarred Saralecos ◽  
Robert Keefe ◽  
Wade Tinkham ◽  
Randall Brooks ◽  
Alistair Smith ◽  
...  
