Human brain atlasing: past, present and future

2017
Vol 30 (6)
pp. 504-519
Author(s):
Wieslaw L Nowinski

We have recently witnessed an explosion of large-scale initiatives and projects addressing mapping, modeling, simulation and atlasing of the human brain, including the BRAIN Initiative, the Human Brain Project, the Human Connectome Project (HCP), the Big Brain, the Blue Brain Project, the Allen Brain Atlas and the Brainnetome, among others. Besides these large international initiatives, there are numerous mid-size and small brain atlas-related projects. My contribution to these global efforts has been to create adult human brain atlases in health and disease, and to develop atlas-based applications. Over more than two decades, my R&D lab and I developed 35 brain atlases, licensed them to 67 companies and made them available in about 100 countries. This paper has two objectives. First, it provides an overview of the state of the art in brain atlasing. Second, as 20 years have now passed since the release of our first brain atlas, I summarise my past and present efforts, share my experience in atlas creation, validation and commercialisation, compare it with the state of the art, and propose future directions.

Author(s):
Siva Reddy
Mirella Lapata
Mark Steedman

In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs. Our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with Freebase. Given this representation, we conceptualize semantic parsing as a graph matching problem. Our model converts sentences to semantic graphs using CCG and subsequently grounds them to Freebase guided by denotations as a form of weak supervision. Evaluation experiments on a subset of the Free917 and WebQuestions benchmark datasets show our semantic parser improves over the state of the art.
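The graph-matching view of semantic parsing can be caricatured in a few lines. The sketch below is a toy under loose assumptions, not the authors' CCG-based system: natural-language relations are grounded to knowledge-graph predicates via a candidate mapping, and candidates are scored by edge overlap with the knowledge graph (in the paper, denotations serve as weak supervision to prefer mappings whose answers are correct). All entity and predicate names below are invented for illustration.

```python
def ground(edges, mapping):
    """Ground natural-language relations to KB predicates via a mapping."""
    return {(s, mapping.get(r, r), t) for (s, r, t) in edges}

def score(edges, kb, mapping):
    """Graph-matching score: fraction of grounded edges found in the KB."""
    grounded = ground(edges, mapping)
    return len(grounded & kb) / len(grounded)

# A tiny Freebase-style graph and an ungrounded edge from a sentence.
kb = {("James Cameron", "film.director.film", "Avatar")}
nl_edges = {("James Cameron", "directed", "Avatar")}

# Two candidate groundings; only one aligns with the knowledge graph.
good = {"directed": "film.director.film"}
bad = {"directed": "film.producer.film"}
assert score(nl_edges, kb, good) == 1.0
assert score(nl_edges, kb, bad) == 0.0
```

In the real system the candidate space is generated from CCG-derived semantic graphs and the choice among groundings is learned from question denotations rather than enumerated by hand.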


2021
Vol 376 (1821)
pp. 20190765
Author(s):
Giovanni Pezzulo
Joshua LaPalme
Fallon Durant
Michael Levin

Nervous systems’ computational abilities are an evolutionary innovation, specializing and speed-optimizing ancient biophysical dynamics. Bioelectric signalling originated in cells' communication with the outside world and with each other, enabling cooperation towards adaptive construction and repair of multicellular bodies. Here, we review the emerging field of developmental bioelectricity, which links the field of basal cognition to state-of-the-art questions in regenerative medicine, synthetic bioengineering and even artificial intelligence. One of the predictions of this view is that regeneration and regulative development can restore correct large-scale anatomies from diverse starting states because, like the brain, they exploit bioelectric encoding of distributed goal states—in this case, pattern memories. We propose a new interpretation of recent stochastic regenerative phenotypes in planaria, by appealing to computational models of memory representation and processing in the brain. Moreover, we discuss novel findings showing that bioelectric changes induced in planaria can be stored in tissue for over a week, thus revealing that somatic bioelectric circuits in vivo can implement a long-term, re-writable memory medium. A consideration of the mechanisms, evolution and functionality of basal cognition makes novel predictions and provides an integrative perspective on the evolution, physiology and biomedicine of information processing in vivo. This article is part of the theme issue ‘Basal cognition: multicellularity, neurons and the cognitive lens’.


2020
Vol 34 (05)
pp. 7554-7561
Author(s):
Pengxiang Cheng
Katrin Erk

Recent progress in NLP has witnessed the development of large-scale pre-trained language models (GPT, BERT, XLNet, etc.) based on the Transformer (Vaswani et al. 2017), and on a range of end tasks such models have achieved state-of-the-art results, approaching human performance. This clearly demonstrates the power of the stacked self-attention architecture when paired with a sufficient number of layers and a large amount of pre-training data. However, on tasks that require complex and long-distance reasoning, where surface-level cues are not enough, there is still a large gap between pre-trained models and human performance. Strubell et al. (2018) recently showed that it is possible to inject knowledge of syntactic structure into a model through supervised self-attention. We conjecture that a similar injection of semantic knowledge, in particular coreference information, into an existing model would improve performance on such complex problems. On the LAMBADA (Paperno et al. 2016) task, we show that a model trained from scratch with coreference as auxiliary supervision for self-attention outperforms the largest GPT-2 model, setting a new state of the art, while containing only a tiny fraction of GPT-2's parameters. We also conduct a thorough analysis of different variants of model architectures and supervision configurations, suggesting future directions for applying similar techniques to other problems.
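The idea of coreference as auxiliary supervision for self-attention can be sketched minimally. This is an illustrative toy, not the paper's architecture: one attention head's distribution for an anaphoric token is nudged, via an auxiliary cross-entropy term, toward the token's annotated antecedent positions. All scores and positions below are invented for illustration.

```python
import math

def softmax(scores):
    """Attention distribution for one query token from raw scores."""
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def coref_aux_loss(attn, antecedents):
    """Auxiliary cross-entropy pushing attention mass onto the annotated
    antecedent positions (target is uniform over those positions)."""
    p = 1.0 / len(antecedents)
    return -sum(p * math.log(attn[i]) for i in antecedents)

# Suppose token 5 ("she") corefers with token 1 ("Mary"): supervise this
# head so that token 5's attention concentrates on position 1.
raw = [0.1, 2.0, 0.0, -1.0, 0.3, 0.0]   # invented raw scores for token 5
attn = softmax(raw)
loss = coref_aux_loss(attn, antecedents=[1])
# Training would minimize: task_loss + lambda * coref_aux_loss.
```

Minimizing the auxiliary term drives the supervised head toward attending to antecedents, mirroring how Strubell et al. (2018) supervised attention with syntactic heads.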


2018
Author(s):
RL van den Brink
S Nieuwenhuis
TH Donner

ABSTRACT
The widely projecting catecholaminergic (norepinephrine and dopamine) neurotransmitter systems profoundly shape the state of neuronal networks in the forebrain. Current models posit that the effects of catecholaminergic modulation on network dynamics are homogeneous across the brain. However, the brain is equipped with a variety of catecholamine receptors with distinct functional effects and heterogeneous density across brain regions. Consequently, catecholaminergic effects on brain-wide network dynamics might be more spatially specific than assumed. We tested this idea through the analysis of functional magnetic resonance imaging (fMRI) measurements performed in humans (19 females, 5 males) at ‘rest’ under pharmacological (atomoxetine-induced) elevation of catecholamine levels. We used a linear decomposition technique to identify spatial patterns of correlated fMRI signal fluctuations that were either increased or decreased by atomoxetine. This yielded two distinct spatial patterns, each expressing reliable and specific drug effects. The spatial structure of both fluctuation patterns resembled the spatial distribution of the expression of catecholamine receptor genes: α1 norepinephrine receptors (for the fluctuation pattern: placebo > atomoxetine), ‘D2-like’ dopamine receptors (pattern: atomoxetine > placebo), and β norepinephrine receptors (for both patterns, with correlations of opposite sign). We conclude that catecholaminergic effects on the forebrain are spatially more structured than traditionally assumed and at least in part explained by the heterogeneous distribution of various catecholamine receptors. Our findings link catecholaminergic effects on large-scale brain networks to low-level characteristics of the underlying neurotransmitter systems. They also provide key constraints for the development of realistic models of neuromodulatory effects on large-scale brain network dynamics.

SIGNIFICANCE STATEMENT
The catecholamines norepinephrine and dopamine are an important class of modulatory neurotransmitters. Because of the widespread and diffuse release of these neuromodulators, it has commonly been assumed that their effects on neural interactions are homogeneous across the brain. Here, we present results from the human brain that challenge this view. We pharmacologically increased catecholamine levels and imaged the effects on the spontaneous covariations between brain-wide fMRI signals at ‘rest’. We identified two distinct spatial patterns of covariations: one that was amplified and another that was suppressed by catecholamines. Each pattern was associated with the heterogeneous spatial distribution of the expression of distinct catecholamine receptor genes. Our results provide novel insights into the catecholaminergic modulation of large-scale human brain dynamics.
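The logic of a decomposition that separates drug-amplified from drug-suppressed covariation patterns can be caricatured with two "voxels" and a closed-form 2×2 eigendecomposition. This is a toy sketch with invented numbers, not the authors' actual analysis pipeline: positive eigenvalues of the covariance-difference matrix flag patterns amplified by the drug, negative eigenvalues flag suppressed patterns.

```python
import math

def cov(xs, ys):
    """Sample covariance (population normalization) of two signals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

def eig2(a, b, c):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]."""
    mean, d = (a + c) / 2, math.hypot((a - c) / 2, b)
    return mean + d, mean - d

# Two voxels' time series under placebo (correlated) and drug
# (anticorrelated); values are invented for illustration.
placebo = ([1.0, -1.0, 2.0, -2.0], [1.0, -1.0, 2.0, -2.0])
drug    = ([1.0, -1.0, 2.0, -2.0], [-1.0, 1.0, -2.0, 2.0])

# Covariance-difference matrix (drug minus placebo), then its spectrum.
a = cov(drug[0], drug[0]) - cov(placebo[0], placebo[0])
b = cov(drug[0], drug[1]) - cov(placebo[0], placebo[1])
c = cov(drug[1], drug[1]) - cov(placebo[1], placebo[1])
amplified, suppressed = eig2(a, b, c)
```

Here the drug flips the two voxels from correlated to anticorrelated, so one eigenvalue is positive (the anticorrelated pattern is amplified) and one is negative (the correlated pattern is suppressed); the eigenvectors play the role of the spatial patterns recovered in the study.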


2023
Vol 55 (1)
pp. 1-39
Author(s):
Thanh Tuan Nguyen
Thanh Phuong Nguyen

Representing dynamic textures (DTs) plays an important role in many real-world applications in the computer vision community. Due to the turbulent and non-directional motions of DTs, along with the negative impacts of various factors (e.g., environmental changes, noise, illumination), efficiently analyzing DTs has raised considerable challenges for state-of-the-art approaches. Over the past 20 years, many different techniques have been introduced to handle these well-known issues and enhance performance. Those methods have made valuable contributions, but the problems have been only partially solved, particularly for recognizing DTs on large-scale datasets. In this article, we present a comprehensive taxonomy of DT representation in order to give a thorough overview of the existing methods along with an overall evaluation of their performance. Accordingly, we arrange the methods into six canonical categories. Each category is then given a brief presentation of its principal methodological stream and its related variants. The effectiveness of the state-of-the-art methods is then investigated and thoroughly discussed with respect to quantitative and qualitative evaluations of DT classification on benchmark datasets. Finally, we point out several potential applications and the remaining challenges that should be addressed in future work. In comparison with the two existing shallow DT surveys (the first now out of date, having appeared in 2005, and the newer one, published in 2016, offering only a limited overview), we believe that our comprehensive taxonomy not only provides a better view of DT representation for the target readers but also stimulates future research activities.


Author(s):  
Chenggang Yan ◽  
Tong Teng ◽  
Yutao Liu ◽  
Yongbing Zhang ◽  
Haoqian Wang ◽  
...  

The difficulty of no-reference image quality assessment (NR IQA) often lies in the lack of knowledge about the distortion in the image, which makes quality assessment blind and thus inefficient. To tackle this issue, in this article we propose a novel scheme for precise NR IQA, comprising two successive steps: distortion identification and targeted quality evaluation. In the first step, we employ the well-known Inception-ResNet-v2 neural network to train a classifier that assigns the possible distortion in the image to one of the four most common distortion types, i.e., Gaussian white noise (WN), Gaussian blur (GB), JPEG compression, and JPEG2000 compression (JP2K). Specifically, the deep neural network is trained on the large-scale Waterloo Exploration database, which ensures the robustness and high performance of distortion classification. In the second step, having determined the distortion type of the image, we design a distortion-specific approach to quantify the distortion level, which estimates the image quality more precisely. Extensive experiments performed on the LIVE, TID2013, CSIQ, and Waterloo Exploration databases demonstrate that (1) the accuracy of our distortion classification is higher than that of state-of-the-art distortion classification methods, and (2) the proposed NR IQA method outperforms state-of-the-art NR IQA methods in quantifying image quality.
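The two-step scheme (classify the distortion, then apply a distortion-specific quality estimator) reduces to a simple dispatch pattern. The sketch below is purely illustrative: the classifier is a stub standing in for Inception-ResNet-v2, and the per-distortion scoring formulas are invented placeholders, not the paper's estimators.

```python
def classify_distortion(features):
    """Step 1 (stub): the paper uses Inception-ResNet-v2; here we simply
    pick the distortion type whose classifier score is highest."""
    return max(features, key=features.get)

# Step 2: distortion-specific quality estimators, mapping an estimated
# distortion level in [0, 1] to a quality score (invented formulas).
ESTIMATORS = {
    "WN":   lambda level: max(0.0, 100.0 - 80.0 * level),
    "GB":   lambda level: max(0.0, 100.0 - 60.0 * level),
    "JPEG": lambda level: max(0.0, 100.0 - 70.0 * level),
    "JP2K": lambda level: max(0.0, 100.0 - 65.0 * level),
}

def assess_quality(features, distortion_level):
    """Dispatch to the estimator matching the identified distortion."""
    dtype = classify_distortion(features)
    return dtype, ESTIMATORS[dtype](distortion_level)

dtype, q = assess_quality({"WN": 0.1, "GB": 0.7, "JPEG": 0.1, "JP2K": 0.1}, 0.5)
print(dtype, q)  # prints: GB 70.0
```

The benefit the abstract claims follows from this structure: once the distortion type is known, the second stage can use a model tuned to that distortion rather than a single blind estimator for all of them.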


Author(s):  
Vít Bukač ◽  
Vashek Matyáš

In this chapter, the reader explores both the founding ideas and the state-of-the-art research on host-based intrusion detection systems. HIDSs are categorized by their intrusion detection method. Each category is thoroughly investigated, and its limitations and benefits are discussed. Seminal research findings and ideas are presented and supplied with comments. Separate sections are devoted to the protection against tampering and to the HIDS evasion techniques that are employed by attackers. Existing research trends are highlighted, and possible future directions are suggested.

