How Neuronal Noises Influence the Spiking Neural Networks’ Cognitive Learning Process: A Preliminary Study

2021 ◽  
Vol 11 (2) ◽  
pp. 153
Author(s):  
Jing Liu ◽  
Xu Yang ◽  
Yimeng Zhu ◽  
Yunlin Lei ◽  
Jian Cai ◽  
...  

In neuroscience, the Default Mode Network (DMN), also known as the default network or the default-state network, is a large-scale brain network whose highly correlated activity is distinct from that of other brain networks. Many studies have shown that the DMN can influence other cognitive functions to some extent. Motivated by this idea, this paper experimentally explores how a DMN could help Spiking Neural Networks (SNNs) on image classification problems. The approach emphasizes biological plausibility in model selection and parameter settings. For modeling, we select the Leaky Integrate-and-Fire (LIF) neuron model, use Additive White Gaussian Noise (AWGN) as the input DMN, and design the learning algorithm based on Spike-Timing-Dependent Plasticity (STDP). We then experiment on a two-layer SNN to evaluate the influence of the DMN on classification accuracy, and on a three-layer SNN to examine its influence on structure evolution; both results appear positive. Finally, we discuss possible directions for future work.
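As a rough illustration of the modeling choices above (the function name and all parameter values here are hypothetical, not taken from the paper), a discrete-time LIF neuron driven by an input current plus AWGN can be sketched as:

```python
import random

def lif_simulate(input_current, noise_std=0.1, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0, dt=1.0, seed=0):
    """Simulate a Leaky Integrate-and-Fire neuron driven by an input
    current plus additive white Gaussian noise; return spike times."""
    rng = random.Random(seed)
    v = v_rest
    spikes = []
    for t, i_in in enumerate(input_current):
        noise = rng.gauss(0.0, noise_std)          # AWGN "default-mode" input
        v += (-(v - v_rest) + i_in + noise) * (dt / tau)
        if v >= v_thresh:                          # threshold crossing -> spike
            spikes.append(t)
            v = v_reset                            # reset after spiking
    return spikes
```

With a sub-threshold input current, the AWGN term occasionally pushes the membrane potential over threshold, which is one way noise can shape the spike trains that STDP learns from.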


Author(s):  
BOGDAN RADUCANU ◽  
JORDI VITRIÀ

Cognitive development refers to the ability of a system to gradually acquire knowledge through experience over its lifetime. The learning strategy should therefore be an integrated, online process that builds a model of the "world" and continuously updates it. Taking as reference the Modal Model of Memory introduced by Atkinson and Shiffrin, we propose an online learning algorithm for the design of cognitive systems. The incremental part of the algorithm is responsible for updating existing information or creating new data categories; the decremental part efficiently evaluates the system's performance in the face of partial or total loss of data. The proposed algorithm has been applied to the face recognition problem. More generally, the approach can be extended to large-scale classification problems, to limit the memory requirements for optimal data representation and storage.
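The incremental/decremental split described above can be illustrated with a toy nearest-prototype learner. This is a deliberately simplified stand-in for the paper's algorithm, with hypothetical names throughout:

```python
class OnlinePrototypeLearner:
    """Toy incremental/decremental learner: each category is a running
    mean (prototype); new samples update it online, and categories can
    be dropped to model loss of stored data."""

    def __init__(self):
        self.prototypes = {}   # label -> (mean vector, sample count)

    def learn(self, x, label):
        if label not in self.prototypes:             # create a new category
            self.prototypes[label] = (list(x), 1)
        else:                                        # update an existing one
            mean, n = self.prototypes[label]
            n += 1
            mean = [m + (xi - m) / n for m, xi in zip(mean, x)]
            self.prototypes[label] = (mean, n)

    def forget(self, label):                         # decremental step
        self.prototypes.pop(label, None)

    def predict(self, x):
        def dist(label):
            mean, _ = self.prototypes[label]
            return sum((xi - m) ** 2 for xi, m in zip(x, mean))
        return min(self.prototypes, key=dist)
```

Calling `forget` and then measuring accuracy on held-out data is one simple way to probe how the system degrades under partial loss of stored categories.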


2017 ◽  
Author(s):  
Stephanie R Debats ◽  
Lyndon D Estes ◽  
David R Thompson ◽  
Kelly K Caylor

Sub-Saharan Africa and other developing regions of the world are dominated by smallholder farms, which are characterized by small, heterogeneous, and often indistinct field patterns. In previous work, we developed an algorithm for mapping both smallholder and commercial agricultural fields that combines efficient extraction of a vast set of simple, highly correlated, and interdependent features with a random forest classifier. In this paper, we demonstrated how active learning can be incorporated into the algorithm to create smaller, more efficient training data sets, which reduced computational resources, minimized the need for humans to hand-label data, and boosted performance. We designed a patch-based uncertainty metric to drive the active learning framework, based on the regular grid of a crowdsourcing platform, and demonstrated how subject-matter experts can be replaced with fleets of crowdsourcing workers. Our active learning algorithm achieved performance similar to that of an algorithm trained with randomly selected data, but with 62% fewer data samples.
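A generic uncertainty-sampling loop of the kind described might look like the following sketch, where a hypothetical 1-D pool and a toy threshold "classifier" stand in for the image patches, the random forest, and the crowdsourced labels:

```python
import math
import random

def patch_uncertainty(probs):
    """Least-confidence uncertainty for one patch: 1 - max class probability."""
    return 1.0 - max(probs)

def active_learning_loop(pool, oracle, train, predict_proba, rounds=5, batch=2):
    """Repeatedly query labels for the patches the current model is least
    confident about, mimicking an uncertainty-driven crowdsourcing loop."""
    random.seed(0)
    labeled, unlabeled = [], list(pool)
    for p in random.sample(unlabeled, 2):      # small random seed set
        labeled.append((p, oracle(p)))
        unlabeled.remove(p)
    for _ in range(rounds):
        model = train(labeled)
        # rank the remaining patches, most uncertain first
        unlabeled.sort(key=lambda p: patch_uncertainty(predict_proba(model, p)),
                       reverse=True)
        for p in unlabeled[:batch]:
            labeled.append((p, oracle(p)))     # ask the "crowd" for a label
        unlabeled = unlabeled[batch:]
    return train(labeled), labeled

# Hypothetical stand-ins: a 1-D threshold model instead of a random forest.
def train(labeled):
    zeros = [p for p, y in labeled if y == 0]
    ones = [p for p, y in labeled if y == 1]
    if not zeros or not ones:
        return 0.5                             # fallback decision threshold
    return (max(zeros) + min(ones)) / 2

def predict_proba(thresh, x):
    p1 = 1 / (1 + math.exp(-(x - thresh) * 4))
    return [1 - p1, p1]
```

The loop labels only a fraction of the pool, concentrating annotation effort near the decision boundary, which is the intuition behind the reported 62% reduction in labeled samples.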


2020 ◽  
Vol 31 (6) ◽  
pp. 681-689
Author(s):  
Jalal Mirakhorli ◽  
Hamidreza Amindavar ◽  
Mojgan Mirakhorli

Functional magnetic resonance imaging (fMRI), a neuroimaging technique used in studies of brain disorders and dysfunction, has been improved in recent years by mapping the topology of brain connections, known as connectopic mapping. Because healthy and unhealthy brain regions and functions differ only slightly, studying the complex topology of the functional and structural networks of the human brain is highly complicated, given the growing number of evaluation measures. One application of deep learning on irregular graphs is the analysis of human cognitive functions related to gene expression and the associated distributed spatial patterns. Since a variety of brain solutions can be held dynamically in the neuronal networks of the brain, with different activity patterns and functional connectivity, both node-centric and graph-centric tasks are involved in this application. In this study, we used an individual generative model and high-order graph analysis to recognize regions of interest with abnormal connections during task performance and resting state, and to decompose irregular observations. Accordingly, we propose a high-order Variational Graph Autoencoder framework with a Gaussian distribution to analyze functional data in brain imaging studies, in which a Generative Adversarial Network is employed to optimize the latent space while learning strong non-rigid graphs from large-scale data. Furthermore, the possible modes of correlation among abnormal brain connections were distinguished. Our goal was to find the degree of correlation between the affected regions and their simultaneous occurrence over time. This can be exploited to diagnose brain diseases, or to characterize the ability of the nervous system to modify brain topology and exhibit plasticity in response to input stimuli. In this study, we focused in particular on Alzheimer’s disease.
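Two core operations of a Gaussian Variational Graph Autoencoder, sampling the latent code via the reparameterization trick and decoding edge probabilities with an inner product, can be sketched in a few lines. This is a generic illustration of the technique, not the paper's implementation:

```python
import math
import random

def reparameterize(mu, log_sigma, rng):
    """z = mu + sigma * eps, eps ~ N(0, 1): the sampling step that keeps
    a VGAE's Gaussian latent space differentiable."""
    return [m + math.exp(ls) * rng.gauss(0, 1) for m, ls in zip(mu, log_sigma)]

def decode_edge(z_i, z_j):
    """Inner-product decoder: sigmoid of <z_i, z_j> gives the probability
    that nodes i and j are connected."""
    dot = sum(a * b for a, b in zip(z_i, z_j))
    return 1 / (1 + math.exp(-dot))
```

In a full model, the per-node `mu` and `log_sigma` come from a graph encoder, and the decoded edge probabilities are compared against the observed functional-connectivity graph during training.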


Author(s):  
Marta B. Silva ◽  
Ely D. Kovetz ◽  
Garrett K. Keating ◽  
Azadeh Moradinezhad Dizgah ◽  
Matthieu Bethermin ◽  
...  

This paper outlines the science case for line-intensity mapping with a space-borne instrument targeting the sub-millimeter (microwave) to far-infrared (FIR) wavelength range. Our goal is to observe and characterize the large-scale structure of the Universe from the present day back to the high-redshift Epoch of Reionization. This is essential to constrain the cosmology of our Universe and to develop a better understanding of the various mechanisms that drive galaxy formation and evolution. The proposed frequency range would make it possible to probe important metal cooling lines such as [CII] up to very high redshift, as well as a large number of rotational lines of the CO molecule. These can be used to trace molecular gas and dust evolution and to constrain the buildup of both the cosmic star formation rate density and the cosmic infrared background (CIB). Moreover, surveys at the highest frequencies will detect FIR lines, which are used as diagnostics of galaxies and AGN. Tomography of these lines over a wide redshift range will enable invaluable measurements of the cosmic expansion history at epochs inaccessible to other methods, competitive constraints on the parameters of the standard model of cosmology, and numerous tests of dark matter, dark energy, modified gravity, and inflation. To reach these goals, the large-scale structure must be mapped over a wide range in frequency to trace its time evolution, and the surveyed area needs to be very large to beat cosmic variance. Only a space-borne mission can properly meet these requirements.


2021 ◽  
Vol 11 (7) ◽  
pp. 679
Author(s):  
Vincenzo Alfano ◽  
Mariachiara Longarzo ◽  
Giulia Mele ◽  
Marcello Esposito ◽  
Marco Aiello ◽  
...  

Apathy is a neuropsychiatric condition characterized by reduced motivation, initiative, and interest in daily life activities, and it is commonly reported in several neurodegenerative disorders. This study investigates the large-scale brain networks involved in apathy in patients with frontotemporal dementia (FTD) and Parkinson’s disease (PD), compared to a group of healthy controls (HC). The sample includes 60 subjects in total: 20 apathetic FTD and PD patients, 20 non-apathetic FTD and PD patients, and 20 age-matched HC. Two disease-specific apathy evaluation scales were used to measure the presence of apathy in FTD and PD patients; on the same day, 3T brain magnetic resonance imaging (MRI) with structural and resting-state functional (fMRI) sequences was acquired. Differences in functional connectivity (FC) between apathetic and non-apathetic patients, regardless of primary clinical diagnosis, were assessed using a whole-brain, seed-to-seed approach. Significant hypoconnectivity was detected in apathetic patients (both FTD and PD) relative to HC between the left planum polare and both the right pre- and post-central gyri. Finally, to investigate whether these neural alterations were due to the underlying neurodegenerative pathology, we replicated the analysis in two independent patient samples (i.e., non-apathetic PD and FTD). In these groups, the functional differences were no longer detected. These alterations may reflect the involvement of neural pathways implicated in a specific reduction of information processing and motor output in apathetic patients.
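Seed-to-seed FC analyses of this kind rest on correlating regional time series. A minimal sketch, with made-up region names and data, is:

```python
import math

def pearson(x, y):
    """Pearson correlation between two ROI time series: the basic
    ingredient of a seed-to-seed functional-connectivity analysis."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def fc_matrix(series):
    """Seed-to-seed FC: correlate every pair of regional time series."""
    labels = list(series)
    return {(a, b): pearson(series[a], series[b])
            for i, a in enumerate(labels) for b in labels[i + 1:]}
```

Group differences such as the reported hypoconnectivity then correspond to statistically comparing these pairwise correlations between cohorts.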


Liver Cancer ◽  
2020 ◽  
Vol 9 (6) ◽  
pp. 734-743
Author(s):  
Kazuya Kariyama ◽  
Kazuhiro Nouso ◽  
Atsushi Hiraoka ◽  
Akiko Wakuta ◽  
Ayano Oonishi ◽  
...  

Introduction: The ALBI score is acknowledged as the gold standard for the assessment of liver function in patients with hepatocellular carcinoma (HCC). Unlike the Child-Pugh score, the ALBI score uses only objective parameters, albumin (Alb) and total bilirubin (T.Bil), enabling a better evaluation. However, the complex calculation of the ALBI score limits its applicability. Therefore, we developed a simplified ALBI score based on data from a large-scale HCC database. We used the data of 5,249 naïve HCC cases registered in eight collaborating hospitals. Methods: We developed a new score, the EZ (Easy)-ALBI score, based on the regression coefficients of Alb and T.Bil for survival risk in a multivariate Cox proportional hazards model. We also developed the EZ-ALBI grade and EZ-ALBI-T grade as alternatives to the ALBI grade and ALBI-T grade, and evaluated their stratifying ability. Results: The equation used to calculate the EZ-ALBI score is simple, {[T.Bil (mg/dL)] – [9 × Alb (g/dL)]}; this value is highly correlated with the ALBI score (correlation coefficient, 0.981; p < 0.0001). The correlation was preserved across different Barcelona Clinic Liver Cancer grades (regression coefficient, 0.93–0.98) and across different hospitals (regression coefficient, 0.98–0.99), indicating good generalizability. Although good agreement was observed between ALBI and EZ-ALBI, discrepancies were observed in patients with poor liver function (T.Bil ≥3 mg/dL; regression coefficient, 0.877). The stratifying ability of the EZ-ALBI grade and EZ-ALBI-T grade was good, and their Akaike information criterion values (35,897 and 34,812, respectively) were comparable with those of the ALBI grade and ALBI-T grade (35,914 and 34,816, respectively). Conclusions: The EZ-ALBI score, EZ-ALBI grade, and EZ-ALBI-T grade are useful, simple scores that might replace the conventional ALBI score in the future.
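Given the equation reported in the abstract, computing the EZ-ALBI score is a one-liner (the grade cut-offs are not reproduced in the abstract, so only the raw score is shown):

```python
def ez_albi_score(t_bil_mg_dl, alb_g_dl):
    """EZ-ALBI score as defined in the abstract:
    T.Bil (mg/dL) - 9 * Alb (g/dL)."""
    return t_bil_mg_dl - 9.0 * alb_g_dl
```

For example, a patient with T.Bil 1.0 mg/dL and Alb 4.0 g/dL scores 1.0 − 36.0 = −35.0; lower (more negative) values reflect better liver function, mirroring the ALBI score it approximates.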


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Rieke Fruengel ◽  
Timo Bröhl ◽  
Thorsten Rings ◽  
Klaus Lehnertz

Previous research has indicated that temporal changes in the centrality of specific nodes in evolving large-scale epileptic brain networks carry information predictive of impending seizures. Centrality is a fundamental network-theoretical concept that allows one to assess the role a node plays in a network. It admits various interpretations, which is reflected in the number of centrality indices. Here we aim at a more general understanding of the local and global network reconfigurations during the pre-seizure period, as indicated by changes in different node centrality indices. To this end, we investigate, in a time-resolved manner, evolving large-scale epileptic brain networks derived from multi-day, multi-electrode intracranial electroencephalographic recordings from a large but inhomogeneous group of subjects with pharmacoresistant epilepsies of different anatomical origins. We estimate multiple centrality indices to assess the various roles the nodes play as the networks transit from the seizure-free to the pre-seizure period. Our findings allow us to formulate several major scenarios for the reconfiguration of an evolving epileptic brain network prior to seizures, which indicate that there is likely not a single network mechanism underlying seizure generation. Rather, local and global aspects of the pre-seizure network reconfiguration affect virtually all network constituents, from the various brain regions to the functional connections between them.
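Two of the node centrality indices commonly used in such analyses, degree and closeness, can be computed from an adjacency list as follows (a generic sketch, not the authors' code):

```python
from collections import deque

def degree_centrality(adj):
    """Fraction of other nodes each node is directly connected to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    """Inverse of the mean shortest-path distance to all other nodes
    (assumes a connected, unweighted graph)."""
    n = len(adj)
    result = {}
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:                                   # BFS shortest paths
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total = sum(dist.values())
        result[src] = (n - 1) / total if total else 0.0
    return result
```

Because the two indices weight local and global structure differently, tracking both over sliding analysis windows is one way a node's changing role can be made visible before seizures.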


2021 ◽  
Vol 15 (3) ◽  
pp. 1-28
Author(s):  
Xueyan Liu ◽  
Bo Yang ◽  
Hechang Chen ◽  
Katarzyna Musial ◽  
Hongxu Chen ◽  
...  

The stochastic blockmodel (SBM) is a widely used statistical network representation model with good interpretability, expressiveness, generalization, and flexibility, which has become prevalent and important in the field of network science in recent years. However, learning an optimal SBM for a given network is an NP-hard problem. This significantly limits the application of SBMs to large-scale networks, because of the substantial computational overhead of existing SBM models and their learning methods. Reducing the cost of SBM learning and making it scalable to large-scale networks, while maintaining the good theoretical properties of the SBM, remains an unresolved problem. In this work, we address this challenging task from the novel perspective of model redefinition. We propose a redefined SBM with Poisson distribution, together with a block-wise learning algorithm, that can efficiently analyse large-scale networks. Extensive validation conducted on both artificial and real-world data shows that our proposed method significantly outperforms state-of-the-art methods in terms of a reasonable trade-off between accuracy and scalability.
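To make the idea concrete, the likelihood at the heart of a Poisson SBM, where edge counts between blocks are modeled as Poisson with per-block-pair rates, can be sketched in plain Python. This is a generic illustration of the model family, not the authors' block-wise algorithm:

```python
import math
from collections import Counter

def poisson_sbm_loglik(edges, blocks):
    """Log-likelihood of a binary undirected network under a simple
    Poisson SBM: each block pair (r, s) has an MLE rate equal to its
    observed edge count divided by the number of possible node pairs."""
    sizes = Counter(blocks.values())
    m = Counter()                                  # edge count per block pair
    for u, v in edges:
        r, s = sorted((blocks[u], blocks[v]))
        m[(r, s)] += 1
    loglik = 0.0
    labels = sorted(sizes)
    for i, r in enumerate(labels):
        for s in labels[i:]:
            # number of possible node pairs between blocks r and s
            pairs = (sizes[r] * (sizes[r] - 1) // 2 if r == s
                     else sizes[r] * sizes[s])
            if pairs == 0:
                continue
            lam = m[(r, s)] / pairs                # MLE block-pair rate
            if lam > 0:
                loglik += m[(r, s)] * math.log(lam) - lam * pairs
            # lam == 0 (empty block pair) contributes nothing
    return loglik
```

Learning then amounts to searching over block assignments for the one maximizing this quantity; a partition matching the true community structure scores higher than a mixed one.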

