A Scalable and Modular Automated Pipeline for Stitching of Large Electron Microscopy Datasets

2021 ◽  
Author(s):  
Gayathri Mahalingam ◽  
Russel Torres ◽  
Daniel Kapner ◽  
Eric T Trautman ◽  
Tim Fliss ◽  
...  

Serial section electron microscopy (ssEM) can produce high-throughput imaging of large biological specimen volumes. The high-resolution images are necessary to reconstruct dense neural wiring diagrams in the brain, so-called connectomes. A high-fidelity volume assembly is required to correctly reconstruct neural anatomy and synaptic connections. It involves seamless 2D stitching of the images within a serial section followed by 3D alignment of the stitched sections. The high throughput of ssEM necessitates 2D stitching to be done at the pace of imaging, which currently produces tens of terabytes per day. To achieve this, we present a modular volume assembly software pipeline, ASAP (Assembly Stitching and Alignment Pipeline), that is scalable and parallelized to work with distributed systems. The pipeline is built on top of the Render [18] services used in the volume assembly of the brain of adult Drosophila melanogaster [2]. It achieves high throughput by operating on the metadata and transformations of each image stored in a database, thus eliminating the need to render intermediate output. The modularity of ASAP allows for easy adaptation to new algorithms without significant changes to the workflow. The software pipeline includes a complete set of tools for stitching, automated quality control, 3D section alignment, and rendering of the assembled volume to disk. We also implemented a workflow engine that executes the volume assembly workflow in an automated fashion, triggered following the transfer of raw data. ASAP has been successfully utilized for continuous processing of several large-scale datasets of the mouse visual cortex and human brain samples, including one cubic millimeter of mouse visual cortex [1, 25]. The pipeline also has multi-channel processing capabilities and can be applied to fluorescence and multi-modal datasets such as array tomography.
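
The key design point is that stitching operates on per-tile metadata and transformations held in a database rather than on rendered pixels. Below is a minimal sketch of that idea in Python, with hypothetical names (TileSpec, MetadataStore, apply_montage_correction); it is not ASAP or Render code. Stitching a section amounts to composing correction transforms with the stored per-tile transforms, so no intermediate images are written.

```python
# Minimal sketch (not ASAP/Render code): stitching by updating per-tile
# transforms in a metadata store instead of rendering pixels.
# All names (TileSpec, MetadataStore, apply_montage_correction) are hypothetical.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class TileSpec:
    tile_id: str
    section: int
    affine: np.ndarray = field(default_factory=lambda: np.eye(3))  # 3x3 homogeneous 2D transform

class MetadataStore:
    """Stand-in for a database of tile metadata and transformations."""
    def __init__(self):
        self.tiles = {}

    def put(self, spec: TileSpec):
        self.tiles[spec.tile_id] = spec

    def section_tiles(self, section: int):
        return [t for t in self.tiles.values() if t.section == section]

def apply_montage_correction(store: MetadataStore, section: int, corrections: dict):
    """Compose per-tile stitching corrections with existing transforms.

    `corrections` maps tile_id -> 3x3 affine estimated from point matches;
    only metadata changes, so no intermediate images are written.
    """
    for tile in store.section_tiles(section):
        corr = corrections.get(tile.tile_id, np.eye(3))
        tile.affine = corr @ tile.affine  # new transform = correction composed with old

# Usage sketch: register two tiles of section 0, then shift the second by 12 px in x.
store = MetadataStore()
store.put(TileSpec("s0_t0", 0))
store.put(TileSpec("s0_t1", 0))
shift = np.array([[1, 0, 12.0], [0, 1, 0.0], [0, 0, 1.0]])
apply_montage_correction(store, 0, {"s0_t1": shift})
print(store.tiles["s0_t1"].affine)
```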

2015 ◽  
Vol 113 (9) ◽  
pp. 3159-3171 ◽  
Author(s):  
Caroline D. B. Luft ◽  
Alan Meeson ◽  
Andrew E. Welchman ◽  
Zoe Kourtzi

Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex.
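
As a rough illustration of the decoding approach described here, the sketch below trains a cross-validated linear classifier on simulated voxel patterns; it is not the authors' analysis. In the real study the inputs would be early-visual-cortex responses, and the key comparison is decoding accuracy on trials where the orientation is only predicted rather than shown.

```python
# Minimal sketch of orientation decoding from voxel patterns (not the authors'
# analysis pipeline). Voxel responses are simulated; in the real study X would
# hold early-visual-cortex activity patterns and y the predicted orientation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 120
y = rng.integers(0, 2, n_trials)                      # 0 = leftward, 1 = rightward
pattern = rng.normal(size=n_voxels)                   # orientation-selective voxel pattern
X = np.outer(y - 0.5, pattern) + rng.normal(scale=1.5, size=(n_trials, n_voxels))

# Cross-validated decoding accuracy; in the study, above-chance decoding of the
# *predicted* (unseen) orientation after training on structured sequences is the
# signature of reactivated sensory representations.
clf = LogisticRegression(max_iter=1000)
print("decoding accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
```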


2019 ◽  
Author(s):  
Leyla Tarhan ◽  
Talia Konkle

Humans observe a wide range of actions in their surroundings. How is the visual cortex organized to process this diverse input? Using functional neuroimaging, we measured brain responses while participants viewed short videos of everyday actions, then probed the structure in these responses using voxel-wise encoding modeling. Responses were well fit by feature spaces that capture the body parts involved in an action and the action’s targets (i.e., whether the action was directed at an object, another person, the actor, or space). Clustering analyses revealed five large-scale networks that summarized the voxel tuning: one related to social aspects of an action, and four related to the scale of the interaction envelope, ranging from fine-scale manipulations directed at objects to large-scale whole-body movements directed at distant locations. We propose that these networks reveal the major representational joints in how actions are processed by visual regions of the brain.

Significance Statement: How does the brain perceive other people’s actions? Prior work has established that much of the visual cortex is active when observing others’ actions. However, this activity reflects a wide range of processes, from identifying a movement’s direction to recognizing its social content. We investigated how these diverse processes are organized within the visual cortex. We found that five networks respond during action observation: one that is involved in processing actions’ social content, and four that are involved in processing agent-object interactions and the scale of the effect that these actions have on the world (its “interaction envelope”). Based on these findings, we propose that sociality and interaction envelope size are two of the major features that organize action perception in the visual cortex.
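
To make the two analysis stages concrete, here is a minimal sketch (simulated data, illustrative feature names, not the authors' code): a voxel-wise encoding model fit by ridge regression from an action-feature space to voxel responses, followed by clustering of the resulting voxel tuning profiles into a small number of networks.

```python
# Minimal sketch of the two analysis steps described above (not the authors' code):
# (1) voxel-wise encoding with a body-part / action-target feature space via ridge
# regression, (2) clustering of voxel tuning profiles into large-scale networks.
# Data are simulated; feature names are illustrative.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_videos, n_voxels = 60, 500
features = ["hands", "legs", "object-directed", "person-directed", "space-directed"]
F = rng.random((n_videos, len(features)))             # feature ratings per video
W_true = rng.normal(size=(len(features), n_voxels))   # latent voxel tuning
Y = F @ W_true + rng.normal(scale=0.5, size=(n_videos, n_voxels))  # voxel responses

# Fit one encoding model per voxel (Ridge handles all voxels at once).
enc = Ridge(alpha=1.0).fit(F, Y)
tuning = enc.coef_                                    # n_voxels x n_features tuning profiles

# Cluster voxel tuning profiles; k=5 mirrors the five networks reported above.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(tuning)
print(np.bincount(labels))
```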


2021 ◽  
Author(s):  
Tarek Jabri ◽  
Jason N MacLean

Complex systems can be defined by "sloppy" dimensions, meaning that their behavior is unmodified by large changes to specific parameter combinations, and "stiff" dimensions, whose changes result in considerable modifications. In the case of the neocortex, sloppiness in synaptic architectures would be crucial to allow for the maintenance of spiking dynamics in the normal range despite a diversity of inputs and both short- and long-term changes to connectivity. Using simulations on neural networks with spiking dynamics matched to murine visual cortex, we determined the stiff and sloppy parameters of synaptic architectures across three classes of input (brief, continuous, and cyclical). Large-scale sweeps of algorithmically generated connectivity parameter values revealed that specific combinations of excitatory and inhibitory connectivity are stiff and that all other architectural details are sloppy. Stiff dimensions are consistent across a range of different input classes, with self-sustaining synaptic architectures occupying a smaller subspace compared with the other input classes. We also find that experimentally estimated connectivity probabilities from mouse visual cortex are similarly stiff and sloppy when compared to the architectures that we identified algorithmically. This suggests that simple statistical descriptions of spiking dynamics are a sufficient and parsimonious description of neocortical activity when examining structure-function relationships at the mesoscopic scale. Moreover, this study provides further evidence of the importance of the interrelationship of excitatory and inhibitory connectivity to establish and maintain stable spiking dynamical regimes in neocortex.
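
The stiff/sloppy distinction can be made concrete with a toy sensitivity analysis (an illustration only, not the authors' spiking simulations): approximate the Hessian of a cost function over connectivity parameters and inspect its eigenvalue spectrum; directions with large eigenvalues are stiff, directions with near-zero eigenvalues are sloppy.

```python
# Toy sketch of the stiff/sloppy distinction (not the authors' spiking model):
# approximate the Hessian of a cost over connectivity parameters by finite
# differences; large eigenvalues mark stiff parameter combinations, small
# eigenvalues mark sloppy ones. The cost stands in for a summary statistic of
# simulated spiking dynamics (e.g., deviation of firing rate from a target).
import numpy as np

def cost(theta):
    # theta = [p_EE, p_EI, p_IE, p_II] connection probabilities (toy model):
    # only a specific E/I combination matters here, mimicking stiffness.
    p_ee, p_ei, p_ie, p_ii = theta
    balance = p_ee - 0.8 * p_ei
    return (balance - 0.05) ** 2 + 1e-4 * (p_ie + p_ii)

def hessian(f, theta, eps=1e-4):
    n = len(theta)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * eps, np.eye(n)[j] * eps
            H[i, j] = (f(theta + e_i + e_j) - f(theta + e_i - e_j)
                       - f(theta - e_i + e_j) + f(theta - e_i - e_j)) / (4 * eps**2)
    return H

theta0 = np.array([0.2, 0.2, 0.3, 0.4])
eigvals, eigvecs = np.linalg.eigh(hessian(cost, theta0))
print("eigenvalue spectrum (stiff >> sloppy):", np.round(eigvals, 4))
print("stiffest combination of [p_EE, p_EI, p_IE, p_II]:", np.round(eigvecs[:, -1], 2))
```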


2019 ◽  
Author(s):  
Erik C. Johnson ◽  
Miller Wilt ◽  
Luis M. Rodriguez ◽  
Raphael Norman-Tenazas ◽  
Corban Rivera ◽  
...  

Emerging neuroimaging datasets (collected through modalities such as electron microscopy, calcium imaging, or X-ray microtomography) describe the location and properties of neurons and their connections at unprecedented scale, promising new ways of understanding the brain. These modern imaging techniques used to interrogate the brain can quickly accumulate gigabytes to petabytes of structural brain imaging data. Unfortunately, many neuroscience laboratories lack the computational expertise or resources to work with datasets of this size: computer vision tools are often not portable or scalable, and there is considerable difficulty in reproducing results or extending methods. We developed an ecosystem of neuroimaging data analysis pipelines that utilize open-source algorithms to create standardized modules and end-to-end optimized approaches. As exemplars, we apply our tools to estimate synapse-level connectomes from electron microscopy data and cell distributions from X-ray microtomography data. To facilitate scientific discovery, we propose a generalized processing framework that connects and extends existing open-source projects to provide large-scale data storage, reproducible algorithms, and workflow execution engines. Our accessible methods and pipelines demonstrate that approaches across multiple neuroimaging experiments can be standardized and applied to diverse datasets. The techniques developed are demonstrated on neuroimaging datasets, but may be applied to similar problems in other domains.
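
A minimal sketch of the "standardized module" idea follows (hypothetical interface and names, not the framework's actual API): each processing step exposes a uniform run signature over named arrays so that a workflow engine can chain modules and track what each step produced.

```python
# Minimal sketch of a standardized-module interface (hypothetical, not the
# framework's actual API): each step declares its inputs/outputs so a workflow
# engine can chain modules reproducibly over large image volumes.
from dataclasses import dataclass
from typing import Callable, Dict
import numpy as np

@dataclass
class Module:
    name: str
    run: Callable[[Dict[str, np.ndarray]], Dict[str, np.ndarray]]

def detect_synapses(data):
    # Placeholder detector: threshold the EM volume; real modules would wrap
    # open-source algorithms behind the same interface.
    return {"synapse_mask": data["em_volume"] > 0.9}

def count_detections(data):
    return {"count": np.array([int(data["synapse_mask"].sum())])}

pipeline = [Module("detect", detect_synapses), Module("count", count_detections)]

state = {"em_volume": np.random.default_rng(2).random((64, 64, 64))}
for step in pipeline:
    state.update(step.run(state))   # an engine would also log provenance per step
print("detections:", state["count"][0])
```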


2016 ◽  
Vol 21 (8) ◽  
pp. 832-841 ◽  
Author(s):  
Yufeng Zhai ◽  
Kaisheng Chen ◽  
Yang Zhong ◽  
Bin Zhou ◽  
Edward Ainscow ◽  
...  

The correction or removal of signal errors in high-throughput screening (HTS) data is critical to the identification of high-quality lead candidates. Although a number of strategies have been previously developed to correct systematic errors and to remove screening artifacts, they are not universally effective and still require a fair amount of human intervention. We introduce a fully automated quality control (QC) pipeline that can correct generic interplate systematic errors and remove intraplate random artifacts. The new pipeline was first applied to ~100 large-scale historical HTS assays; in silico analysis showed that auto-QC led to a noticeably stronger structure-activity relationship. The method was further tested in several independent HTS runs, where QC results were sampled for experimental validation. Significantly increased hit confirmation rates were obtained after the QC steps, confirming that the proposed method was effective in enriching true-positive hits. An implementation of the algorithm is available to the screening community.
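
One widely used way to correct plate-wise systematic errors is a B-score-style median polish; the sketch below shows that approach on simulated plate data as an illustration only, since the paper's auto-QC pipeline addresses both interplate systematic errors and intraplate random artifacts and is not limited to this method.

```python
# Sketch of one common correction for plate-wise systematic errors in HTS data
# (a B-score-style median polish); treat this as an illustration, not the
# paper's auto-QC algorithm.
import numpy as np

def median_polish(plate, n_iter=10):
    """Remove row and column effects from a plate of raw signals."""
    resid = plate.astype(float).copy()
    for _ in range(n_iter):
        resid -= np.median(resid, axis=1, keepdims=True)  # row effects
        resid -= np.median(resid, axis=0, keepdims=True)  # column effects
    return resid

def b_score(plate):
    resid = median_polish(plate)
    mad = np.median(np.abs(resid - np.median(resid)))
    return resid / (1.4826 * mad)  # robust z-score of residuals

rng = np.random.default_rng(3)
plate = rng.normal(size=(16, 24)) + np.linspace(0, 2, 24)  # simulated column drift artifact
scores = b_score(plate)
print("column drift before/after correction:",
      round(float(np.ptp(plate.mean(axis=0))), 2),
      round(float(np.ptp(scores.mean(axis=0))), 2))
```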


2019 ◽  
Vol 25 (S2) ◽  
pp. 1038-1039
Author(s):  
Ryan Lane ◽  
Pascal de Boer ◽  
Ben N.G. Giepmans ◽  
Jacob P. Hoogenboom

GigaScience ◽  
2020 ◽  
Vol 9 (12) ◽  
Author(s):  
Erik C Johnson ◽  
Miller Wilt ◽  
Luis M Rodriguez ◽  
Raphael Norman-Tenazas ◽  
Corban Rivera ◽  
...  

Background: Emerging neuroimaging datasets (collected with imaging techniques such as electron microscopy, optical microscopy, or X-ray microtomography) describe the location and properties of neurons and their connections at unprecedented scale, promising new ways of understanding the brain. These modern imaging techniques used to interrogate the brain can quickly accumulate gigabytes to petabytes of structural brain imaging data. Unfortunately, many neuroscience laboratories lack the computational resources to work with datasets of this size: computer vision tools are often not portable or scalable, and there is considerable difficulty in reproducing results or extending methods.

Results: We developed an ecosystem of neuroimaging data analysis pipelines that use open-source algorithms to create standardized modules and end-to-end optimized approaches. As exemplars we apply our tools to estimate synapse-level connectomes from electron microscopy data and cell distributions from X-ray microtomography data. To facilitate scientific discovery, we propose a generalized processing framework, which connects and extends existing open-source projects to provide large-scale data storage, reproducible algorithms, and workflow execution engines.

Conclusions: Our accessible methods and pipelines demonstrate that approaches across multiple neuroimaging experiments can be standardized and applied to diverse datasets. The techniques developed are demonstrated on neuroimaging datasets but may be applied to similar problems in other domains.
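
Complementing the module-interface sketch given after the earlier version of this abstract, the sketch below illustrates the second exemplar (cell distributions from X-ray microtomography) as a simple thresholding plus connected-components step on simulated data; it is illustrative only and does not reproduce the pipeline's actual detection modules.

```python
# Illustrative sketch for the X-ray microtomography exemplar: threshold a
# volume and count connected components as a stand-in for a cell-detection
# module. Simulated data; not the pipeline's actual algorithm.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
volume = rng.random((64, 64, 64))
# Plant a few bright "cell bodies" in the noise (uniform noise stays below 1.0).
for z, y, x in rng.integers(4, 60, size=(20, 3)):
    volume[z-2:z+2, y-2:y+2, x-2:x+2] = 1.0

mask = volume >= 1.0                        # crude intensity threshold
labels, n_cells = ndimage.label(mask)       # connected-component analysis
centroids = ndimage.center_of_mass(volume, labels, range(1, n_cells + 1))
print(f"detected {n_cells} candidate cells")
print("first centroid (z, y, x):", tuple(round(float(c), 1) for c in centroids[0]))
```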


2021 ◽  
Vol 15 ◽  
Author(s):  
Daisuke Koga ◽  
Satoshi Kusumi ◽  
Masahiro Shibata ◽  
Tsuyoshi Watanabe

Scanning electron microscopy (SEM) has contributed to elucidating the ultrastructure of bio-specimens in three dimensions. SEM detects several kinds of signals, of which secondary electrons (SEs) and backscattered electrons (BSEs) are the main signals used in biological and biomedical research. SE and BSE signals provide three-dimensional (3D) surface topography and information on the composition of specimens, respectively. Among the various sample preparation techniques for SE-mode SEM, the osmium maceration method is the only approach for examining subcellular structure that does not require any reconstruction processes. The 3D ultrastructure of organelles, such as the Golgi apparatus, mitochondria, and endoplasmic reticulum, has been uncovered using high-resolution SEM of osmium-macerated tissues. Recent instrumental advances in scanning electron microscopes have broadened the applications of SEM for examining bio-specimens and enabled imaging of resin-embedded tissue blocks and sections using BSE-mode SEM under low accelerating voltages; such techniques are fundamental to the 3D-SEM methods that are now known as focused ion-beam SEM, serial block-face SEM, and array tomography (i.e., serial section SEM). This technical breakthrough has allowed us to establish an innovative BSE imaging technique called section-face imaging to acquire ultrathin information from resin-embedded tissue sections. In contrast, serial section SEM is a modern 3D imaging technique for creating 3D surface rendering models of cells and organelles from tomographic BSE images of consecutive ultrathin sections embedded in resin. In this article, we introduce our related SEM techniques that use SE and BSE signals, such as the osmium maceration method, semithin section SEM (section-face imaging of resin-embedded semithin sections), section-face imaging for correlative light and scanning electron microscopy, and serial section SEM, to summarize their applications to neural structure and discuss the future possibilities and directions for these methods.


Author(s):  
Sadra Sadeh ◽  
Claudia Clopath

To unravel the functional properties of the brain, we need to untangle how neurons interact with each other and coordinate in large-scale recurrent networks. One way to address this question is to measure the functional influence of individual neurons on each other by perturbing them in vivo. Application of such single-neuron perturbations in mouse visual cortex has recently revealed feature-specific suppression between excitatory neurons, despite the presence of highly specific excitatory connectivity, which was deemed to underlie feature-specific amplification. Here, we studied which connectivity profiles are consistent with these seemingly contradictory observations by modelling the effect of single-neuron perturbations in large-scale neuronal networks. Our numerical simulations and mathematical analysis revealed that, contrary to the prima facie assumption, neither inhibition dominance nor broad inhibition alone was sufficient to explain the experimental findings; instead, strong and functionally specific excitatory-inhibitory connectivity was necessary, consistent with recent findings in the primary visual cortex of rodents. Such networks, in turn, had a higher capacity to encode and decode natural images, which was accompanied by the emergence of response gain nonlinearities at the population level. Our study provides a general computational framework to investigate how single-neuron perturbations are linked to cortical connectivity and sensory coding, and paves the way to mapping the perturbome of neuronal networks in future studies.
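
To illustrate the modelling logic, here is a toy linearized rate-network sketch (not the authors' model): in a stable linear network, the steady-state influence of a small input to one neuron on all others is given by the corresponding column of (I - W)^-1, so one can compare the effect of perturbing an excitatory neuron on similarly versus differently tuned neurons under a given excitatory-inhibitory connectivity profile.

```python
# Toy sketch of single-neuron perturbation in a linearized rate network (not the
# authors' model): for dr/dt = -r + W r + h with a stable W, the influence of a
# small input to neuron j on the steady state is column j of (I - W)^-1.
import numpy as np

rng = np.random.default_rng(5)
n_exc, n_inh = 80, 20
n = n_exc + n_inh
prefs = rng.uniform(0, np.pi, n)                              # preferred orientation per neuron
similarity = np.cos(2 * (prefs[:, None] - prefs[None, :]))    # tuning similarity

# Connectivity scales with tuning similarity (functionally specific E and I);
# columns from inhibitory neurons are negative and stronger.
W = 0.02 * (1 + similarity)
W[:, n_exc:] *= -4.0
np.fill_diagonal(W, 0)

influence = np.linalg.inv(np.eye(n) - W)    # steady-state response to unit inputs
j = 0                                       # perturb one excitatory neuron
effect = influence[:n_exc, j]               # effect on the other excitatory neurons
like = similarity[:n_exc, j] > 0.5
others = np.arange(n_exc) != j
print("mean effect on similarly tuned E cells:", effect[like & others].mean().round(3))
print("mean effect on differently tuned E cells:", effect[~like].mean().round(3))
```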

