Neural mechanisms of context-dependent segmentation tested on large-scale recording data

2021 ◽  
Author(s):  
Toshitake Asabuki ◽  
Tomoki Fukai

The brain performs various cognitive functions by learning the spatiotemporal salient features of the environment. This learning likely requires unsupervised segmentation of hierarchically organized spike sequences, but the underlying neural mechanism is only poorly understood. Here, we show that a recurrent gated network of neurons with dendrites can context-dependently solve difficult segmentation tasks. Dendrites in this model learn to predict somatic responses in a self-supervising manner while recurrent connections learn a context-dependent gating of dendro-somatic current flows to minimize a prediction error. These connections select particular information suitable for the given context from input features redundantly learned by the dendrites. The model selectively learned salient segments in complex synthetic sequences. Furthermore, the model was also effective for detecting multiple cell assemblies repeating in large-scale calcium imaging data of more than 6,500 cortical neurons. Our results suggest that recurrent gating and dendrites are crucial for cortical learning of context-dependent segmentation tasks.
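
To make the self-supervising principle concrete, here is a minimal toy sketch (not the authors' model: the network size, learning rate, and nonlinearity are illustrative assumptions) in which a dendritic compartment learns, by gradient descent on the squared prediction error, to predict a somatic response driven by the same inputs:

```python
# Toy sketch of dendritic self-supervised learning: the dendrite adjusts its
# weights so that its local prediction matches the somatic response.
# All sizes and constants below are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_steps, lr = 50, 5000, 1e-2

w_soma = rng.normal(scale=0.5, size=n_inputs)   # fixed drive producing the "target" somatic response
w_dend = rng.normal(scale=0.1, size=n_inputs)   # dendritic weights, learned

for _ in range(n_steps):
    x = rng.poisson(0.2, size=n_inputs)         # presynaptic spike counts in one time bin
    v_soma = np.tanh(w_soma @ x)                # somatic response (target)
    v_dend = np.tanh(w_dend @ x)                # dendritic prediction
    err = v_soma - v_dend                       # prediction error to be minimized
    w_dend += lr * err * (1.0 - v_dend**2) * x  # delta rule through the tanh nonlinearity

print("final |prediction error|:", abs(err))
```

In the full model described above, recurrent connections additionally gate which dendritic predictions reach the soma in a context-dependent way; that gating is omitted from this sketch.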

2020 ◽  
Author(s):  
Ashwini G. Naik ◽  
Robert V. Kenyon ◽  
Aynaz Taheri ◽  
Tanya Berger-Wolf ◽  
Baher Ibrahim ◽  
...  

Abstract. Background: Understanding functional correlations between the activities of neuron populations is vital for the analysis of neuronal networks. Analyzing large-scale neuroimaging data obtained from hundreds of neurons simultaneously poses significant visualization challenges. We developed V-NeuroStack, a novel network visualization tool to visualize data obtained using calcium imaging of spontaneous activity of cortical neurons in a mouse brain slice. New Method: V-NeuroStack creates 3D time stacks by stacking 2D time frames for a period of 600 seconds. It provides a web interface that enables exploration and analysis of the data using a combination of 3D and 2D visualization techniques. Comparison with Existing Methods: Previous attempts to analyze such data have been limited by the tools available to visualize large numbers of correlated activity traces. V-NeuroStack can scale to datasets with at least a few thousand temporal snapshots. Results: V-NeuroStack's 3D view is used to explore patterns in the dynamic large-scale correlations between neurons over time. The 2D view is used to examine any timestep of interest in greater detail. Furthermore, a dual-line graph provides the ability to explore the raw and first-derivative values of a single neuron or a functional cluster of neurons. Conclusions: V-NeuroStack enables easy exploration and analysis of large spatio-temporal datasets using two visualization paradigms, (a) a space-time cube and (b) two-dimensional networks, via a web interface. It will support future advancements in in vitro and in vivo data-capturing techniques and can bring forth novel hypotheses by permitting unambiguous visualization of large-scale patterns in neuronal activity data.
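
As a minimal illustration of the space-time-cube layout (array bookkeeping only; the frame size and sampling values are assumptions, and V-NeuroStack itself is a web-based tool rather than a script), per-timestep 2D frames can be stacked into a 3D array and sliced back out for a detailed 2D view:

```python
# Minimal sketch: stack 2D calcium-imaging frames into a 3D space-time cube
# and slice out a single timestep for closer inspection.
# The frame size and 1 Hz / 600 s sampling here are assumptions for illustration.
import numpy as np

n_frames, height, width = 600, 128, 128
frames = [np.random.rand(height, width) for _ in range(n_frames)]  # placeholder frames

time_stack = np.stack(frames, axis=0)    # shape: (time, y, x) -- the 3D time stack
print(time_stack.shape)                  # (600, 128, 128)

t_of_interest = 42
frame_2d = time_stack[t_of_interest]     # a single 2D time frame for the 2D view
```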


GigaScience ◽  
2020 ◽  
Vol 9 (12) ◽  
Author(s):  
Ariel Rokem ◽  
Kendrick Kay

Abstract. Background: Ridge regression is a regularization technique that penalizes the L2-norm of the coefficients in linear regression. One of the challenges of using ridge regression is the need to set a hyperparameter (α) that controls the amount of regularization. Cross-validation is typically used to select the best α from a set of candidates. However, efficient and appropriate selection of α can be challenging, and becomes prohibitive when large amounts of data are analyzed. Because the selected α depends on the scale of the data and the correlations across predictors, it is also not straightforwardly interpretable. Results: The present work addresses these challenges through a novel approach to ridge regression. We propose to reparameterize ridge regression in terms of the ratio γ between the L2-norms of the regularized and unregularized coefficients. We provide an algorithm that efficiently implements this approach, called fractional ridge regression, as well as open-source software implementations in Python and MATLAB (https://github.com/nrdg/fracridge). We show that the proposed method is fast and scalable for large-scale data problems. In brain imaging data, we demonstrate that this approach delivers results that are straightforward to interpret and compare across models and datasets. Conclusion: Fractional ridge regression has several benefits: the solutions obtained for different γ are guaranteed to vary, guarding against wasted calculations, and they automatically span the relevant range of regularization, avoiding the need for arduous manual exploration. These properties make fractional ridge regression particularly suitable for analysis of large, complex datasets.
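
The reparameterization can be illustrated with a short, self-contained sketch (the concept only, implemented by brute-force grid search, not the fracridge package's own algorithm or API): for a requested fraction γ, find the α whose ridge solution has an L2-norm equal to γ times the norm of the unregularized solution.

```python
# Conceptual sketch of fractional ridge regression (not the fracridge API):
# pick the alpha whose ridge coefficient norm is gamma times the OLS norm.
import numpy as np

def ridge(X, y, alpha):
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

def fractional_ridge(X, y, gamma, alphas=np.logspace(-6, 6, 200)):
    beta_ols = ridge(X, y, 0.0)                      # unregularized solution
    norms = np.array([np.linalg.norm(ridge(X, y, a)) for a in alphas])
    fractions = norms / np.linalg.norm(beta_ols)     # shrinks toward 0 as alpha grows
    best = np.argmin(np.abs(fractions - gamma))      # alpha whose fraction matches gamma
    return ridge(X, y, alphas[best]), alphas[best]

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
y = X @ rng.normal(size=10) + rng.normal(size=100)
beta, alpha = fractional_ridge(X, y, gamma=0.5)
print(alpha, np.linalg.norm(beta) / np.linalg.norm(ridge(X, y, 0.0)))
```

Unlike α, the fraction γ has the same meaning regardless of the scale of the data, which is what makes the solutions comparable across models and datasets.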


Author(s):  
Aniket Bhattacharya ◽  
Vineet Jha ◽  
Khushboo Singhal ◽  
Mahar Fatima ◽  
Dayanidhi Singh ◽  
...  

Abstract. Alu repeats contribute to phylogenetic novelties in conserved regulatory networks in primates. Our study highlights how exonized Alus could nucleate large-scale mRNA-miRNA interactions. Using a functional genomics approach, we characterize a transcript isoform of an orphan gene, CYP20A1 (CYP20A1_Alu-LT), with 23 exonized Alus in its 3'UTR. CYP20A1_Alu-LT, confirmed by 3'RACE, is an outlier in length (9 kb 3'UTR) and is widely expressed. Using publicly available datasets, we demonstrate its expression in higher primates and its presence in single-nucleus RNA-seq of 15,928 human cortical neurons. miRanda predicts ∼4,700 miRNA recognition elements (MREs) for ∼1,000 miRNAs, primarily originating within these 3'UTR-Alus. CYP20A1_Alu-LT could be a potential multi-miRNA sponge, as it harbors ≥10 MREs for 140 miRNAs and localizes to the cytosol. We further tested whether the expression of CYP20A1_Alu-LT correlates with that of mRNAs harboring similar MRE targets. RNA-seq with conjoint miRNA-seq analysis was performed in primary human neurons, where we observed CYP20A1_Alu-LT to be downregulated during heat-shock response and upregulated upon HIV1-Tat treatment. 380 genes were positively correlated with its expression (significantly downregulated in heat shock and upregulated in Tat treatment), and they harbored MREs for nine expressed miRNAs that were also enriched in CYP20A1_Alu-LT. MREs were significantly enriched in these 380 genes compared with random sets of differentially expressed genes (p = 8.134e-12). Gene ontology suggested involvement of these genes in neuronal development and hemostasis pathways, proposing a novel component of Alu-miRNA-mediated transcriptional modulation that could govern specific physiological outcomes in higher primates.
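
The enrichment comparison against random gene sets can be sketched as a simple permutation test (the counts and gene sets below are placeholders, not the study's data, and the study's exact statistic may differ):

```python
# Hedged sketch of an MRE enrichment test: compare the total MRE count in the
# correlated gene set against random draws of equal size from the pool of
# differentially expressed genes. All numbers here are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_de_genes = 5000
mre_counts = rng.poisson(3, size=n_de_genes)                     # placeholder: MREs per DE gene
correlated_idx = rng.choice(n_de_genes, size=380, replace=False) # the 380 correlated genes

observed = mre_counts[correlated_idx].sum()
null = np.array([
    mre_counts[rng.choice(n_de_genes, size=380, replace=False)].sum()
    for _ in range(10000)
])
p_empirical = (np.sum(null >= observed) + 1) / (len(null) + 1)   # one-sided empirical p-value
print("empirical p-value:", p_empirical)
```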


Geophysics ◽  
2008 ◽  
Vol 73 (2) ◽  
pp. S47-S61 ◽  
Author(s):  
Paul Sava ◽  
Oleg Poliannikov

The fidelity of depth seismic imaging depends on the accuracy of the velocity models used for wavefield reconstruction. Models can be decomposed into two components, corresponding to large-scale and small-scale variations. In practice, the large-scale velocity model component can be estimated with high accuracy using repeated migration/tomography cycles, but the small-scale component cannot. When the earth has significant small-scale velocity components, wavefield reconstruction does not completely describe the recorded data, and migrated images are perturbed by artifacts. There are two possible ways to address this problem: (1) improve wavefield reconstruction by estimating more accurate velocity models and image using conventional techniques (e.g., wavefield crosscorrelation), or (2) reconstruct wavefields with conventional methods using the known background velocity model but improve the imaging condition to alleviate the artifacts caused by the imprecise reconstruction. We describe the unknown component of the velocity model as a random function with local spatial correlations. Imaging data perturbed by such random variations is characterized by statistical instability, i.e., various wavefield components image at wrong locations that depend on the actual realization of the random model. Statistical stability can be achieved by preprocessing the reconstructed wavefields prior to the imaging condition. We use Wigner distribution functions to attenuate the random noise present in the reconstructed wavefields, parameterized as a function of image coordinates. Wavefield filtering using Wigner distribution functions and conventional imaging can be lumped together into a new form of imaging condition that we call an interferometric imaging condition because of its similarity to concepts from recent work on interferometry. The interferometric imaging condition can be formulated both for zero-offset and for multioffset data, leading to robust, efficient imaging procedures that effectively attenuate imaging artifacts caused by unknown velocity models.
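
For readers unfamiliar with Wigner distribution functions, the sketch below computes a discrete pseudo-Wigner distribution of a noisy 1D trace. This is an illustrative stand-in only: the paper applies WDFs to reconstructed wavefields parameterized by image coordinates, not to a toy chirp, and the window length here is an arbitrary assumption.

```python
# Illustrative sketch: discrete pseudo-Wigner distribution of a 1D signal,
# W[t, k] = FFT over lag tau of x[t + tau] * conj(x[t - tau]).
import numpy as np

def pseudo_wigner(x, half_window=32):
    n = len(x)
    taus = np.arange(-half_window, half_window)
    W = np.zeros((n, 2 * half_window), dtype=complex)
    for t in range(n):
        idx_plus = np.clip(t + taus, 0, n - 1)     # crude edge handling by clipping
        idx_minus = np.clip(t - taus, 0, n - 1)
        kernel = x[idx_plus] * np.conj(x[idx_minus])
        W[t] = np.fft.fft(kernel)
    return W.real

# A noisy chirp standing in for one trace of a reconstructed wavefield.
t = np.linspace(0, 1, 512)
trace = np.cos(2 * np.pi * (10 + 40 * t) * t) + 0.3 * np.random.randn(512)
W = pseudo_wigner(trace)
print(W.shape)   # (time samples, lag-frequency bins)
```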


Author(s):  
Sepehr Fathizadan ◽  
Feng Ju ◽  
Kyle Rowe ◽  
Alex Fiechter ◽  
Nils Hofmann

Abstract. Production efficiency and product quality need to be addressed simultaneously to ensure the reliability of large-scale additive manufacturing. Specifically, print surface temperature plays a critical role in determining the quality characteristics of the product. Moreover, heat transfer via conduction, arising from the spatial correlation between locations on the surface of large and complex geometries, necessitates more robust methodologies to extract and monitor the data. In this paper, we propose a framework for real-time data extraction from thermal images as well as a novel method for controlling layer time during the printing process. A FLIR™ thermal camera captures and stores a stream of images of the print surface temperature while the Thermwood Large Scale Additive Manufacturing (LSAM™) machine is printing components. A set of digital image processing tasks is performed to extract the thermal data. Separate regression models based on real-time thermal imaging data are built for each location on the surface to predict the associated temperatures. Subsequently, a control method is proposed to find the best time for printing the next layer given the predictions. Finally, several scenarios based on the cooling dynamics of the surface structure were defined and analyzed, and the results were compared to the current fixed-layer-time policy. We conclude that the proposed method can significantly increase efficiency by reducing the overall printing time while preserving quality.
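
A hedged sketch of the control idea follows (the exponential cooling model, temperature threshold, and sampling values are assumptions for illustration, not the paper's models): fit a simple cooling curve to each monitored surface location from recent thermal samples, then choose the earliest time at which every predicted temperature falls into an acceptable deposition window.

```python
# Sketch of per-location cooling regression plus a layer-time decision rule.
# Model form, threshold, and data below are illustrative assumptions.
import numpy as np

def fit_exponential_cooling(times, temps, t_ambient=25.0):
    """Fit T(t) = T_amb + A * exp(-k t) via linear regression on log(T - T_amb)."""
    y = np.log(np.maximum(temps - t_ambient, 1e-6))
    slope, intercept = np.polyfit(times, y, 1)
    return np.exp(intercept), -slope, t_ambient        # (A, k, T_amb)

def next_layer_time(models, t_target=110.0, horizon=600.0, step=1.0):
    """Earliest time at which all locations are predicted at or below t_target."""
    for t in np.arange(0.0, horizon, step):
        preds = [amb + a * np.exp(-k * t) for a, k, amb in models]
        if max(preds) <= t_target:
            return t
    return horizon

# Placeholder temperature histories for three surface locations.
times = np.arange(0, 60, 5.0)
histories = [200 * np.exp(-0.02 * times) + 25,
             180 * np.exp(-0.03 * times) + 25,
             220 * np.exp(-0.015 * times) + 25]
models = [fit_exponential_cooling(times, h) for h in histories]
print("print next layer after ~%.0f s" % next_layer_time(models))
```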


Author(s):  
Arsenii Shirokov ◽  
Denis Kuplyakov ◽  
Anton Konushin

The article deals with the problem of counting cars in large-scale video surveillance systems. The proposed method is based on tracking cars and counting the number of tracks that intersect a given signal line. We use a distributed tracking algorithm that reduces the amount of necessary computational resources and raises performance to real time by detecting vehicles only in a sparse set of frames. We adapted and modified an approach previously proposed for tracking people. The proposed improvement of the speed-estimation module and refinement of the motion model reduced the required detection frequency threefold. The experimental evaluation shows that the proposed algorithm achieves acceptable counting quality with a detection frequency of 3 Hz.
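
The counting step can be illustrated with a small geometric sketch (not the authors' code): a track contributes to the count when two consecutive track points lie on opposite sides of the signal line, which a signed-area segment intersection test detects.

```python
# Minimal sketch: count tracks whose consecutive points cross the signal line.
def side(p, a, b):
    """Sign of the cross product (b - a) x (p - a)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def segments_intersect(p1, p2, a, b):
    """Proper intersection of segment p1-p2 with segment a-b."""
    return (side(p1, a, b) * side(p2, a, b) < 0 and
            side(a, p1, p2) * side(b, p1, p2) < 0)

def count_crossings(tracks, line_a, line_b):
    count = 0
    for track in tracks:                      # track = list of (x, y) positions over time
        for p1, p2 in zip(track, track[1:]):
            if segments_intersect(p1, p2, line_a, line_b):
                count += 1
                break                         # count each track at most once
    return count

tracks = [[(0, 0), (2, 2), (4, 5)], [(0, 5), (1, 4)]]
print(count_crossings(tracks, line_a=(0, 3), line_b=(5, 3)))   # -> 1
```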


Author(s):  
Josef Los ◽  
Jiří Fryč ◽  
Zdeněk Konrád

The drying of maize for grain has recently been employed on a large scale in the Czech Republic, thanks not only to new maize hybrids but also to new models of drying plants. One of the new post-harvest lines is a plant in Lipoltice (mobile dryer installed in 2010, storage base in 2012), where basic operational measurements were made of the energy intensiveness of drying and the operating parameters of the maize dryer were evaluated. The process of maize drying had two stages: pre-drying from the initial average grain moisture of 28.55% to 19.6% in the first stage, and additional drying from 16.7% to a final storage grain moisture of 13.7%. The mean volumes of natural gas consumed per 1 t% for drying in the first and second stages amounted to 1.275 m3 and 1.56 m3, respectively. The total mean consumption of electric energy per 1 t% was calculated to be 1.372 kWh for the given configuration of the post-harvest line.
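
As a back-of-the-envelope check using only the figures quoted above (not additional data from the paper), the gas consumption per tonne of grain in each stage is the per-tonne-per-percentage-point figure multiplied by the moisture points removed:

```python
# Worked arithmetic from the quoted figures: m3 of gas per tonne per stage.
stage1_gas_per_t_pct = 1.275   # m3 per tonne per percentage point of moisture removed
stage2_gas_per_t_pct = 1.56

stage1_points = 28.55 - 19.6   # percentage points removed in pre-drying
stage2_points = 16.7 - 13.7    # percentage points removed in additional drying

stage1_gas_per_t = stage1_gas_per_t_pct * stage1_points   # ~11.4 m3 per tonne
stage2_gas_per_t = stage2_gas_per_t_pct * stage2_points   # ~4.7 m3 per tonne
print(round(stage1_gas_per_t, 2), round(stage2_gas_per_t, 2))
```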


2019 ◽  
Author(s):  
Daniel A Llano ◽  
Chihua Ma ◽  
Umberto Di Fabrizio ◽  
Aynaz Taheri ◽  
Kevin A. Stebbings ◽  
...  

Abstract. Network analysis of large-scale neuroimaging data has proven to be a particularly challenging computational problem. In this study, we adapt a novel analytical tool, known as the community dynamic inference method (CommDy), which was inspired by social network theory, for the study of brain imaging data from an aging mouse model. CommDy has been successfully used in other domains in biology; this report represents its first use in neuroscience. We used CommDy to investigate aging-related changes in network parameters in the auditory and motor cortices using flavoprotein autofluorescence imaging in brain slices and in vivo. Analysis of spontaneous activations in the auditory cortex of slices taken from young and aged animals demonstrated that cortical networks in aged brains were highly fragmented compared to networks observed in young animals. Specifically, the degree of connectivity of each activated node in the aged brains was significantly lower than that seen in the young brain, and multivariate analyses of all derived network metrics showed distinct clusters of these metrics in young vs. aged brains. CommDy network metrics were then used to build a random-forests classifier based on NMDA-receptor blockade data, which successfully recapitulated the aging findings, suggesting that the excitatory synaptic substructure of the auditory cortex may be altered during aging. A similar aging-related decline in network connectivity was also observed in spontaneous activity obtained from the awake motor cortex, suggesting that the findings in the auditory cortex reflect general mechanisms that occur during aging. CommDy therefore provides a new dynamic network analytical tool to study the brain and provides links between network-level and synaptic-level dysfunction in the aging brain.
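
The classifier step can be sketched generically (placeholder features and labels, not the study's data or pipeline): a random-forests model is trained on per-recording network metrics and evaluated by cross-validation.

```python
# Hedged sketch: random-forests classification of recordings from network
# metrics, in the spirit of the CommDy-metric classifier described above.
# Feature values and labels are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_recordings = 60
X = rng.normal(size=(n_recordings, 5))          # e.g. node degree, community count, ...
y = rng.integers(0, 2, size=n_recordings)       # 0 = control, 1 = NMDA-receptor blockade

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```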


2020 ◽  
Author(s):  
Fayyaz Minhas ◽  
Dimitris Grammatopoulos ◽  
Lawrence Young ◽  
Imran Amin ◽  
David Snead ◽  
...  

Abstract. One of the challenges in the current COVID-19 crisis is the time and cost of performing tests, especially for large-scale population surveillance. Since the probability of testing positive in large population studies is expected to be small (<15%), most of the test outcomes will be negative. Here, we propose the use of agglomerative sampling, which can prune out multiple negative cases in a single test by intelligently combining samples from different individuals. The proposed scheme builds on the assumption that samples from the population may not be independent of each other. Our simulation results show that the proposed sampling strategy can significantly increase testing capacity under resource constraints: on average, a saving of ~40% of tests can be expected assuming a positive test probability of 10% across the given samples. The proposed scheme can also be used in conjunction with heuristic or machine-learning-guided clustering to further improve the efficiency of large-scale testing. The code for generating the simulation results for this work is available at https://github.com/foxtrotmike/AS.
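
The saving from pooling can be illustrated with a simplified recursive pooled-testing simulation (a generic binary-splitting scheme under an independence assumption, not necessarily the exact agglomerative strategy or the code in the linked repository):

```python
# Simplified pooled-testing sketch: when positives are rare, one negative
# pool test clears many samples at once; positive pools are split and retested.
import numpy as np

def recursive_pool_test(samples):
    """Return the number of tests needed to label every sample in `samples`."""
    tests = 1                                   # one test of the whole pool
    if not samples.any() or len(samples) == 1:
        return tests                            # pool negative, or single sample resolved
    mid = len(samples) // 2
    return tests + recursive_pool_test(samples[:mid]) + recursive_pool_test(samples[mid:])

rng = np.random.default_rng(0)
population = rng.random(10000) < 0.10           # 10% positive probability
pool_size = 16
tests = sum(recursive_pool_test(population[i:i + pool_size])
            for i in range(0, len(population), pool_size))
print("tests used:", tests, "vs individual tests:", len(population))
```

The manuscript's agglomerative scheme goes further by exploiting dependence between samples when forming pools, which this independence-based sketch does not capture.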


2021 ◽  
Author(s):  
Anita Bandrowski ◽  
Jeffrey S. Grethe ◽  
Anna Pilko ◽  
Tom Gillespie ◽  
Gabi Pine ◽  
...  

Abstract. The NIH Common Fund's Stimulating Peripheral Activity to Relieve Conditions (SPARC) initiative is a large-scale program that seeks to accelerate the development of therapeutic devices that modulate electrical activity in nerves to improve organ function. Integral to the SPARC program are the rich anatomical and functional datasets produced by investigators across the SPARC consortium, which provide key details about organ-specific circuitry, including structural and functional connectivity, mapping of cell types, and molecular profiling. These datasets are provided to the research community through an open data platform, the SPARC Portal. To ensure SPARC datasets are Findable, Accessible, Interoperable and Reusable (FAIR), they are all submitted to the SPARC Portal following a standard scheme established by the SPARC Curation Team, called the SPARC Data Structure (SDS). Inspired by the Brain Imaging Data Structure (BIDS), the SDS has been designed to capture the large variety of data generated by SPARC investigators, who come from all fields of biomedical research. Here we present the rationale and design of the SDS, including a description of the SPARC curation process and the automated tools for complying with the SDS, namely the SDS validator and the Software to Organize Data Automatically (SODA) for SPARC. The objective is to provide detailed guidelines for anyone desiring to comply with the SDS. Since the SDS is suitable for any type of biomedical research data, it can be adopted by any group desiring to follow the FAIR data principles for managing their data, even outside of the SPARC consortium. Finally, this manuscript provides a foundational framework that can be used by any organization desiring either to adapt the SDS to suit the specific needs of their data or simply to design their own FAIR data-sharing scheme from scratch.
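
As a purely illustrative example of the kind of structural check such tooling performs (the folder and file names below are assumptions for illustration, not the authoritative SDS specification or the actual SDS validator), a dataset root could be scanned for required metadata items:

```python
# Hypothetical sketch of a FAIR-style structural check on a dataset folder.
# The required-item names are assumptions, not the official SDS layout.
from pathlib import Path

ASSUMED_REQUIRED = [
    "dataset_description.xlsx",   # high-level dataset metadata (assumed name)
    "subjects.xlsx",              # per-subject metadata (assumed name)
    "primary",                    # primary data folder (assumed name)
]

def check_dataset(root):
    """Return the list of required items missing from the dataset root."""
    root = Path(root)
    return [name for name in ASSUMED_REQUIRED if not (root / name).exists()]

missing = check_dataset("my_dataset")
print("missing items:", missing or "none")
```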

