Extracellular space preservation aids the connectomic analysis of neural circuits

eLife ◽  
2015 ◽  
Vol 4 ◽  
Author(s):  
Marta Pallotto ◽  
Paul V Watkins ◽  
Boma Fubara ◽  
Joshua H Singer ◽  
Kevin L Briggman

Dense connectomic mapping of neuronal circuits is limited by the time and effort required to analyze 3D electron microscopy (EM) datasets. Algorithms designed to automate image segmentation suffer from substantial error rates and require significant manual error correction. Any improvement in segmentation error rates would therefore directly reduce the time required to analyze 3D EM data. We explored preserving extracellular space (ECS) during chemical tissue fixation to improve the ability to segment neurites and to identify synaptic contacts. ECS preserved tissue is easier to segment using machine learning algorithms, leading to significantly reduced error rates. In addition, we observed that electrical synapses are readily identified in ECS preserved tissue. Finally, we determined that antibodies penetrate deep into ECS preserved tissue with only minimal permeabilization, thereby enabling correlated light microscopy (LM) and EM studies. We conclude that preservation of ECS benefits multiple aspects of the connectomic analysis of neural circuits.
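The abstract does not reproduce the error metrics used in the study. As a generic illustration only, an automated segmentation can be scored against a manual ground truth with the adjusted Rand index over voxel labels; the toy label arrays below are assumptions for the example, not data from the paper.

```python
# Minimal sketch: scoring a predicted segmentation against a ground-truth
# labeling with the adjusted Rand index (one common proxy for segmentation
# error; not necessarily the metric used in the cited study).
import numpy as np
from sklearn.metrics import adjusted_rand_score

ground_truth = np.array([1, 1, 1, 2, 2, 3, 3, 3])   # toy voxel labels from a manual annotation
prediction   = np.array([1, 1, 2, 2, 2, 3, 3, 1])   # an automated segmentation with two mislabeled voxels

print("adjusted Rand index:", adjusted_rand_score(ground_truth, prediction))
```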

Author(s):  
Anthony S-Y Leong ◽  
David W Gove

Microwaves (MW) are electromagnetic waves, commonly generated at a frequency of 2.45 GHz. When dipolar molecules such as water, the polar side chains of proteins, and other molecules with an uneven distribution of electrical charge are exposed to such non-ionizing radiation, they oscillate through 180° at a rate of 2,450 million cycles/s. This rapid kinetic movement accelerates chemical reactions and produces instantaneous heat. MWs have recently been applied to a wide range of procedures for light microscopy. MWs generated by domestic ovens have been used as a primary method of tissue fixation and have been applied to the various stages of tissue processing as well as to a wide variety of staining procedures. This use of MWs has not only drastically reduced the time required for tissue fixation, processing, and staining, but has also produced better cytologic images in cryostat sections and, more importantly, better preservation of cellular antigens.
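A quick back-of-the-envelope check of the "non-ionizing" description above: the energy of a 2.45 GHz photon (E = h·f) is several orders of magnitude below the few electronvolts needed to ionize molecules. The short sketch below only illustrates this arithmetic; it is not part of the cited work.

```python
# Photon energy of 2.45 GHz microwave radiation, E = h * f.
PLANCK_H = 6.626e-34      # Planck constant, J*s
FREQ = 2.45e9             # 2.45 GHz = 2,450 million cycles per second
EV = 1.602e-19            # joules per electronvolt

energy_joules = PLANCK_H * FREQ
energy_ev = energy_joules / EV
# Roughly 1e-5 eV per photon, far below the ~10 eV scale of molecular ionization,
# which is why microwave heating is non-ionizing.
print(f"photon energy: {energy_ev:.2e} eV")
```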


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Alexey A. Polilov ◽  
Anastasia A. Makarova ◽  
Song Pang ◽  
C. Shan Xu ◽  
Harald Hess

Modern morphological and structural studies are reaching a new level by incorporating the latest methods of three-dimensional electron microscopy (3D-EM). One of the key obstacles to the wide use of these methods is sample preparation: the methods work poorly with heterogeneous samples (consisting of tissues that differ in structure and chemical composition) and require expensive equipment and, usually, considerable time. We have developed a simple protocol that allows heterogeneous biological samples suitable for 3D-EM to be prepared in a laboratory with a standard supply of equipment and reagents for electron microscopy. This protocol, combined with focused ion-beam scanning electron microscopy, makes it possible to study the 3D ultrastructure of complex biological samples, e.g., whole insect heads, over their entire volume at the cellular and subcellular levels. The protocol provides new opportunities for many areas of study, including connectomics.


2019 ◽  
Vol 141 (9) ◽  
Author(s):  
Daniel M. Probst ◽  
Mandhapati Raju ◽  
Peter K. Senecal ◽  
Janardhan Kodavasal ◽  
Pinaki Pal ◽  
...  

This work evaluates different optimization algorithms for computational fluid dynamics (CFD) simulations of engine combustion. Due to the computational expense of CFD simulations, emulators built with machine learning algorithms were used as surrogates for the optimizers. Two types of emulators were used: a Gaussian process (GP) and a weighted combination of machine learning methods called SuperLearner (SL). The emulators were trained using a dataset of 2048 CFD simulations that were run concurrently on a supercomputer. The design of experiments (DOE) for the CFD runs was obtained by perturbing nine input parameters using a Monte Carlo method. The CFD simulations were of a heavy-duty engine running with a low-octane gasoline-like fuel in a partially premixed compression ignition mode. Ten optimization algorithms were tested, including types typically used in research applications. Each optimizer was allowed 800 function evaluations and was randomly tested 100 times. The optimizers were evaluated for the median, minimum, and maximum merits obtained in the 100 attempts. Some optimizers required more sequential evaluations, thereby resulting in longer wall-clock times to reach an optimum. The best-performing optimization methods were particle swarm optimization (PSO), differential evolution (DE), GENOUD (an evolutionary algorithm), and the micro-genetic algorithm (GA). These methods found a high median optimum as well as a reasonable minimum optimum over the 100 trials. Moreover, all of these methods were able to operate with fewer than 100 successive iterations, which reduced the wall-clock time required in practice. Two methods were found to be effective but required a much larger number of successive iterations: the DIRECT and MALSCHAINS algorithms. A random search method that completed in a single iteration performed poorly in finding optimum designs but was included to illustrate the limitation of highly concurrent search methods. The last three methods, Nelder–Mead, bound optimization by quadratic approximation (BOBYQA), and constrained optimization by linear approximation (COBYLA), did not perform as well.
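A minimal sketch of the surrogate-based optimization loop described above: a Gaussian process emulator is fitted to a Monte Carlo design of experiments, and a global optimizer (differential evolution, one of the best performers reported) searches the cheap emulator instead of the expensive CFD model. The merit function, bounds, and DOE size below are placeholders, not the engine-combustion setup of the paper.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
n_inputs = 9                                       # nine perturbed input parameters, as in the DOE above
X = rng.uniform(0.0, 1.0, size=(256, n_inputs))    # Monte Carlo design (smaller than the paper's 2048 runs)

def toy_merit(x):
    # Stand-in for the merit value returned by one CFD simulation.
    return -np.sum((x - 0.3) ** 2, axis=-1)

y = toy_merit(X)

# Fit the Gaussian process emulator to the DOE results.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

# Let the optimizer search the cheap emulator instead of the CFD model
# (maximizing merit = minimizing its negative).
result = differential_evolution(
    lambda x: -gp.predict(x.reshape(1, -1))[0],
    bounds=[(0.0, 1.0)] * n_inputs,
    maxiter=100,
    seed=0,
)
print("emulator optimum:", result.x.round(3), "merit:", -result.fun)
```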


Images are the fastest-growing type of content and contribute significantly to the amount of data generated on the internet every day. Image classification is a challenging problem that social media companies work on vigorously to enhance the user's experience with their interfaces. Recent advances in machine learning and computer vision enable personalized suggestions and automatic tagging of images. The convolutional neural network (CNN) is currently a hot research topic in machine learning. With the help of the immense amount of labelled data available on the internet, these networks can be trained to recognize the features that differentiate images under the same label. New neural network algorithms are developed frequently that outperform state-of-the-art machine learning algorithms; recent algorithms have achieved error rates as low as 3.1%. In this paper, the architectures of important CNN algorithms that have gained attention are discussed, analyzed, and compared, and the concept of transfer learning is used to classify different breeds of dogs.
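As a hedged illustration of the transfer-learning approach mentioned above, the sketch below reuses an ImageNet-pretrained CNN as a frozen feature extractor and trains only a new classification head for dog breeds. The backbone choice, dataset directory, and breed count are assumptions for the example, not details taken from the paper.

```python
import tensorflow as tf

num_breeds = 120                            # e.g., the Stanford Dogs dataset has 120 breeds

# Pretrained convolutional backbone, used as a fixed feature extractor.
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

# New classification head trained for the dog-breed labels.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(num_breeds, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical usage with an image folder laid out one subdirectory per breed:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "dog_breeds/train", image_size=(224, 224))
# model.fit(train_ds, epochs=5)
```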


2017 ◽  
Vol 7 (5) ◽  
pp. 2073-2082 ◽  
Author(s):  
A. G. Armaki ◽  
M. F. Fallah ◽  
M. Alborzi ◽  
A. Mohammadzadeh

Financial institutions are exposed to credit risk due to the issuance of consumer loans, so developing reliable credit scoring systems is crucial for them. Since machine learning techniques have demonstrated their applicability and merit, they have been used extensively in the credit scoring literature. Recent studies concentrating on hybrid models that merge various machine learning algorithms have reported compelling results. There are two types of hybridization methods, namely traditional and ensemble methods. This study combines both and proposes a hybrid meta-learner model. The structure of the model is based on the traditional hybrid model of 'classification + clustering', in which the stacking ensemble method is employed in the classification part. Moreover, this paper compares several versions of the proposed hybrid model using various combinations of classification and clustering algorithms, which helps to identify which hybrid model achieves the best performance for credit scoring purposes. Using four real-life credit datasets, the experimental results show that the (KNN-NN-SVMPSO)-(DL)-(DBSCAN) model delivers the highest prediction accuracy and the lowest error rates.
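A minimal sketch of the 'classification + clustering' idea with a stacking ensemble, assuming scikit-learn: KNN, a neural network, and an SVM as base classifiers with a logistic-regression meta-learner, fitted per DBSCAN cluster. The PSO-tuned SVM and the deep-learning meta-learner of the best-performing model are simplified away, and all hyperparameters are illustrative only.

```python
from sklearn.cluster import DBSCAN
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def make_stacker():
    # Stacking ensemble for the classification part of the hybrid model.
    base = [
        ("knn", KNeighborsClassifier()),
        ("nn", MLPClassifier(max_iter=500)),
        ("svm", SVC(probability=True)),
    ]
    return make_pipeline(
        StandardScaler(),
        StackingClassifier(estimators=base, final_estimator=LogisticRegression()),
    )

# Hypothetical usage: cluster the training applicants first, then fit one
# stacker per cluster (X_train, y_train would be a credit dataset's features
# and default labels as numpy arrays).
# clusters = DBSCAN(eps=0.5, min_samples=5).fit_predict(
#     StandardScaler().fit_transform(X_train))
# models = {c: make_stacker().fit(X_train[clusters == c], y_train[clusters == c])
#           for c in set(clusters) if c != -1}
```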


10.2196/24418 ◽  
2021 ◽  
Vol 7 (2) ◽  
pp. e24418 ◽
Author(s):  
Justin Clark ◽  
Catherine McFarlane ◽  
Gina Cleo ◽  
Christiane Ishikawa Ramos ◽  
Skye Marshall

Background: Systematic reviews (SRs) are considered the highest level of evidence to answer research questions; however, they are time and resource intensive. Objective: When comparing SR tasks done manually, using standard methods, versus the same SR tasks done using automated tools, (1) what is the difference in time to complete the SR task and (2) what is the impact on the error rate of the SR task? Methods: A case study compared specific tasks done during the conduct of an SR on prebiotic, probiotic, and synbiotic supplementation in chronic kidney disease. Two participants (manual team) conducted the SR using current methods, comprising a total of 16 tasks. Another two participants (automation team) conducted the tasks where a systematic review automation (SRA) tool was available, comprising a total of six tasks. The time taken and error rate of the six tasks completed by both teams were compared. Results: The approximate time for the manual team to produce a draft of the background, methods, and results sections of the SR was 126 hours. For the six tasks in which times were compared, the manual team spent 2493 minutes (42 hours) on the tasks, compared to 708 minutes (12 hours) spent by the automation team. The manual team had a higher error rate in two of the six tasks: in Task 5 (run the systematic search), the manual team made eight errors versus three errors made by the automation team; in Task 12 (assess the risk of bias), 25 of the manual team's assessments differed from a reference standard, compared to 20 differences for the automation team. The manual team had a lower error rate in one of the six tasks: in Task 6 (deduplicate search results), the manual team removed one unique study and missed zero duplicates, whereas the automation team removed two unique studies and missed seven duplicates. Error rates were similar for the two remaining compared tasks, Task 7 (screen the titles and abstracts) and Task 9 (screen the full text), in which zero relevant studies were excluded by either team. One task, Task 8 (find the full text), could not be compared between groups. Conclusions: For the majority of SR tasks where an SRA tool was used, the time required to complete the task was reduced for novice researchers while methodological quality was maintained.
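For readers unfamiliar with Task 6 (deduplicating search results), the sketch below shows one simple way such a step can be automated, matching records on DOI or on a normalized title. It is only an illustration, not the SRA tool evaluated in the case study, and the record fields are assumptions.

```python
import re

def normalize_title(title: str) -> str:
    """Lower-case a title and strip punctuation so near-identical titles match."""
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def deduplicate(records):
    """Keep the first record seen for each DOI or normalized title."""
    seen_dois, seen_titles, unique = set(), set(), []
    for rec in records:
        doi = (rec.get("doi") or "").lower()
        title = normalize_title(rec.get("title", ""))
        if (doi and doi in seen_dois) or (title and title in seen_titles):
            continue                      # duplicate: skip it
        if doi:
            seen_dois.add(doi)
        if title:
            seen_titles.add(title)
        unique.append(rec)
    return unique

# Hypothetical records retrieved from two bibliographic databases:
records = [
    {"doi": "10.1000/abc", "title": "Probiotics in CKD"},
    {"doi": None, "title": "Probiotics in CKD."},   # same study, no DOI recorded
]
print(len(deduplicate(records)))          # -> 1
```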


2017 ◽  
Vol 23 (2) ◽  
pp. 537-559
Author(s):  
Péter Gyimesi

Identifying fault-prone code parts is useful for developers because it helps reduce the time required to locate bugs. It is usually done by characterizing the already known bugs with certain kinds of metrics and building a predictive model from the data. For the characterization of bugs, software product and process metrics are the most popular ones. The calculation of product metrics is supported by many free and commercial software products. However, tools that are capable of computing process metrics are quite rare. In this study, we present a method of computing software process metrics in a graph database. We describe the schema of the database created and we present a way to readily obtain the process metrics from it. With this technique, process metrics can be calculated at the file, class, and method levels. We used GitHub as the source of the change history and selected 5 open-source Java projects for processing. To retrieve positional information about the classes and methods, we used SourceMeter, a static source code analyzer. We used Neo4j as the graph database engine and its query language, Cypher, to retrieve the process metrics. We published the tools we created as open-source projects on GitHub. To demonstrate the utility of our tools, we selected 25 release versions of the 5 Java projects and calculated the process metrics for all of the source code elements (files, classes, and methods) in these versions. Using our previously published bug database, we built bug databases for the selected projects that contain the computed process metrics and the corresponding bug numbers for files and classes. (We published these databases as an online appendix.) Then we applied 13 machine learning algorithms to the database we created to find out whether it is suitable for bug prediction purposes. We achieved F-measure values of around 0.7 on average at the class level, and slightly better values of between 0.7 and 0.75 at the file level. The best-performing algorithm was the RandomForest method in both cases.
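A sketch of how such process metrics can be read back from a commit graph with the Neo4j Python driver. The node labels, relationship types, and properties in the Cypher query (Commit, MODIFIES, File, author, path) are illustrative assumptions, not the exact schema published with the authors' tools.

```python
from neo4j import GraphDatabase

# Example process metrics for one file: number of commits touching it and
# number of distinct committers (assumed schema, for illustration only).
QUERY = """
MATCH (c:Commit)-[:MODIFIES]->(f:File {path: $path})
RETURN count(DISTINCT c)        AS commit_count,
       count(DISTINCT c.author) AS distinct_authors
"""

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
with driver.session() as session:
    record = session.run(QUERY, path="src/main/java/Foo.java").single()
    print(record["commit_count"], record["distinct_authors"])
driver.close()
```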


2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Kazuo Katoh

Microwave irradiation of tissue during fixation and subsequent histochemical staining procedures significantly reduces the time required for incubation in fixation and staining solutions. Minimizing the incubation time in fixative reduces disruption of tissue morphology, and reducing the incubation time in staining solution or antibody solution decreases nonspecific labeling. Reduction of incubation time in staining solution also decreases the level of background noise. Microwave-assisted tissue preparation is applicable for tissue fixation, decalcification of bone tissues, treatment of adipose tissues, antigen retrieval, and other special staining of tissues. Microwave-assisted tissue fixation and staining are useful tools for histological analyses. This review describes the protocols using microwave irradiation for several essential procedures in histochemical studies, and these techniques are applicable to other protocols for tissue fixation and immunostaining in the field of cell biology.


2021 ◽  
Author(s):  
Andrea Morger ◽  
Fredrik Svensson ◽  
Staffan Arvidsson McShane ◽  
Niharika Gauraha ◽  
Ulf Norinder ◽  
...  

Machine learning methods are widely used in drug discovery and toxicity prediction. While showing overall good performance in cross-validation studies, their predictive power often drops when the query samples have drifted from the training data's descriptor space. Thus, the assumption behind applying machine learning algorithms, that training and test data stem from the same distribution, might not always be fulfilled. In this work, conformal prediction is used to assess the calibration of the models. Deviations from the expected error may indicate that training and test data originate from different distributions. Exemplified on the Tox21 datasets, composed of the chronologically released Tox21Train, Tox21Test, and Tox21Score subsets, we observed that while internally valid models could be trained using cross-validation on Tox21Train, predictions on the external Tox21Score data resulted in higher error rates than expected. To improve prediction on the external sets, a strategy of exchanging the calibration set with more recent data, such as Tox21Test, was successfully introduced. We conclude that conformal prediction can be used to diagnose data drifts and other issues relating to model calibration. The proposed improvement strategy, exchanging only the calibration data, is convenient because it does not require retraining of the underlying model.
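A minimal sketch of the calibration check described above: an inductive conformal predictor is built from a proper training set and a calibration set, and the empirical error rate on a held-out set is compared with the chosen significance level. Random data stands in for the Tox21 descriptors, and the nonconformity measure is a simple one-minus-probability score.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1200, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=1200) > 0).astype(int)
X_train, y_train = X[:800], y[:800]          # proper training set
X_cal, y_cal = X[800:1000], y[800:1000]      # calibration set
X_test, y_test = X[1000:], y[1000:]          # stand-in for an external set

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Nonconformity score: 1 - predicted probability of the (candidate) class.
cal_nc = 1.0 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]

def p_value(nc_score):
    # Fraction of calibration scores at least as nonconforming as this one.
    return (np.sum(cal_nc >= nc_score) + 1) / (len(cal_nc) + 1)

eps = 0.2                                     # chosen significance level
test_proba = clf.predict_proba(X_test)
errors = 0
for i, true_label in enumerate(y_test):
    # Prediction set: all labels whose p-value exceeds the significance level.
    pred_set = [lbl for lbl in (0, 1)
                if p_value(1.0 - test_proba[i, lbl]) > eps]
    errors += true_label not in pred_set

# If calibration and test data come from the same distribution, the observed
# error rate should not exceed eps (up to statistical fluctuation); a clearly
# higher rate signals a data drift of the kind discussed above.
print("expected error <=", eps, "observed:", errors / len(y_test))
```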

