A Systematic Evaluation of Interneuron Morphology Representations for Cell Type Discrimination

2020 ◽  
Vol 18 (4) ◽  
pp. 591-609 ◽  
Author(s):  
Sophie Laturnus ◽  
Dmitry Kobak ◽  
Philipp Berens

Abstract Quantitative analysis of neuronal morphologies usually begins with choosing a particular feature representation in order to make individual morphologies amenable to standard statistical tools and machine learning algorithms. Many different feature representations have been suggested in the literature, ranging from density maps to intersection profiles, but they have never been compared side by side. Here we performed a systematic comparison of various representations, measuring how well they were able to capture the difference between known morphological cell types. For our benchmarking effort, we used several curated data sets consisting of mouse retinal bipolar cells and cortical inhibitory neurons. We found that the best performing feature representations were two-dimensional density maps, two-dimensional persistence images and morphometric statistics, which continued to perform well even when neurons were only partially traced. Combining these feature representations led to further performance increases, suggesting that they captured non-redundant information. The same representations performed well in an unsupervised setting, implying that they can be suitable for dimensionality reduction or clustering.
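
As a rough illustration of the kind of pipeline evaluated here, the sketch below uses toy coordinate arrays and labels standing in for the curated data sets: traced node coordinates are binned into a two-dimensional density map, and a standard classifier measures how well the representation separates two cell types.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def density_map(coords, bins=20, extent=((-200, 200), (-200, 200))):
    """Bin projected node coordinates into a normalized 2D histogram
    (a simple density-map representation of one morphology)."""
    hist, _, _ = np.histogram2d(coords[:, 0], coords[:, 1],
                                bins=bins, range=extent)
    return (hist / hist.sum()).ravel()   # flatten to a feature vector

# toy morphologies: two "cell types" differing only in spatial spread
rng = np.random.default_rng(0)
morphologies = [rng.normal(scale=40 + 40 * (i % 2), size=(300, 2)) for i in range(40)]
labels = np.array([i % 2 for i in range(40)])

X = np.stack([density_map(m) for m in morphologies])
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, labels, cv=5).mean())   # discriminability of the representation
```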

2019 ◽  
Author(s):  
Sophie Laturnus ◽  
Dmitry Kobak ◽  
Philipp Berens

Abstract Quantitative analysis of neuronal morphologies usually begins with choosing a particular feature representation in order to make individual morphologies amenable to standard statistical tools and machine learning algorithms. Many different feature representations have been suggested in the literature, ranging from density maps to intersection profiles, but they have never been compared side by side. Here we performed a systematic comparison of various representations, measuring how well they were able to capture the difference between known morphological cell types. For our benchmarking effort, we used several curated data sets consisting of mouse retinal bipolar cells and cortical inhibitory neurons. We found that the best performing feature representations were two-dimensional density maps closely followed by morphometric statistics, which both continued to perform well even when neurons were only partially traced. The same representations performed well in an unsupervised setting, implying that they can be suitable for dimensionality reduction or clustering.
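
For the unsupervised setting, a representation can likewise be assessed by clustering the feature vectors and comparing the clusters against the known types, e.g. with the adjusted Rand index; a minimal sketch on synthetic feature vectors:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# X: per-neuron feature vectors (e.g. flattened density maps); y: known cell types
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (30, 50)), rng.normal(2.0, 1.0, (30, 50))])
y = np.array([0] * 30 + [1] * 30)

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("adjusted Rand index:", adjusted_rand_score(y, clusters))
```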


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-12 ◽  
Author(s):  
Yun Yang ◽  
Xiaofang Liu ◽  
Qiongwei Ye ◽  
Dapeng Tao

As an important application in video surveillance, person reidentification enables automatic tracking of a pedestrian across disjoint camera views. It essentially consists of extracting or learning feature representations followed by a matching model based on a distance metric. Person reidentification is a difficult task because, first, no universal feature representation can perfectly distinguish the many pedestrians in a gallery obtained by a multi-camera system. Although different features can be fused into a composite representation, such fusion still does not fully exploit the differences, complementarity, and relative importance of the individual features. Second, a matching model typically has only a limited number of training samples from which to learn a distance metric for matching probe images against the gallery, which results in an unstable learning process and poor matching results. In this paper, we address these issues with an ensemble approach that accounts for the importance of different feature representations and reconciles several matching models, each built on a different feature representation, into an optimal one via our proposed weighting scheme. We carried out experiments on two well-recognized person reidentification benchmark datasets, VIPeR and ETHZ. The experimental results demonstrate that our approach achieves state-of-the-art performance.
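
A toy sketch of the general idea, not the authors' exact weighting scheme: distance matrices computed from different feature representations are fused with a weight, and rank-1 matching accuracy is used to compare the ensembles. The feature dimensions and data below are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import cdist

def rank1_accuracy(dist, true_idx):
    """Fraction of probes whose nearest gallery entry is the correct identity."""
    return np.mean(np.argmin(dist, axis=1) == true_idx)

rng = np.random.default_rng(0)
n = 50                                            # identities in the toy gallery
gallery_color = rng.normal(size=(n, 64))          # e.g. color-histogram features
gallery_texture = rng.normal(size=(n, 32))        # e.g. texture features
probe_color = gallery_color + rng.normal(scale=0.5, size=(n, 64))
probe_texture = gallery_texture + rng.normal(scale=1.0, size=(n, 32))

# one matching model (distance matrix) per feature representation
d_color = cdist(probe_color, gallery_color)
d_texture = cdist(probe_texture, gallery_texture)

# weighted fusion of the matching models; the weight reflects each feature's importance
for w in (0.0, 0.5, 0.8, 1.0):
    fused = w * d_color + (1 - w) * d_texture
    print(f"weight on color = {w:.1f}, rank-1 = {rank1_accuracy(fused, np.arange(n)):.2f}")
```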


Author(s):  
H.A. Cohen ◽  
T.W. Jeng ◽  
W. Chiu

This tutorial will discuss the methodology of low-dose electron diffraction and imaging of crystalline biological objects, the problems of data interpretation for two-dimensional projected density maps of glucose-embedded protein crystals, the factors to be considered in combining tilt data from three-dimensional crystals, and finally, the prospects of achieving a high resolution three-dimensional density map of a biological crystal. This methodology will be illustrated using two proteins under investigation in our laboratory, the T4 DNA helix-destabilizing protein gp32*I and the crotoxin complex crystal.


2019 ◽  
Vol 14 (5) ◽  
pp. 406-421 ◽  
Author(s):  
Ting-He Zhang ◽  
Shao-Wu Zhang

Background: Revealing the subcellular location of a newly discovered protein can bring insight into its function and guide research at the cellular level. The experimental methods currently used to identify protein subcellular locations are both time-consuming and expensive, so it is highly desirable to develop computational methods that identify protein subcellular locations efficiently and effectively. In particular, the rapidly increasing number of protein sequences entering genome databases calls for automated analysis methods. Methods: In this review, we describe recent advances in predicting protein subcellular locations with machine learning from the following aspects: i) construction of protein subcellular location benchmark datasets, ii) protein feature representations and feature descriptors, iii) common machine learning algorithms, iv) cross-validation test methods and assessment metrics, and v) web servers. Result & Conclusion: Given the large number of protein sequences generated by high-throughput technologies, four future directions for predicting protein subcellular locations with machine learning deserve attention. One is the selection of novel and effective features (e.g., statistical, physicochemical, evolutionary) from protein sequences and structures. Another is the feature fusion strategy. The third is the design of a powerful predictor, and the fourth is the prediction of proteins with multiple location sites.
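
A minimal sketch of such a pipeline, using a simple amino-acid-composition feature descriptor, a standard classifier, and cross-validation; the sequences and location labels below are placeholders, not a real benchmark.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aac_features(seq):
    """Amino acid composition: relative frequency of each of the 20 residues."""
    seq = seq.upper()
    return np.array([seq.count(a) / len(seq) for a in AMINO_ACIDS])

# placeholder benchmark: sequences paired with subcellular-location labels
sequences = ["MKTLLLTLVVVTIVCLDLGYT", "MMKRSLLALALLLAAAGAQA",
             "MSDNGPQNQRNAPRITFGGP", "MGSSHHHHHHSSGLVPRGSH"] * 10
labels = ["membrane", "secreted", "nucleus", "cytoplasm"] * 10

X = np.stack([aac_features(s) for s in sequences])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, labels, cv=5).mean())   # cross-validated accuracy
```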


AI ◽  
2021 ◽  
Vol 2 (2) ◽  
pp. 195-208
Author(s):  
Gabriel Dahia ◽  
Maurício Pamplona Segundo

We propose a method that can perform one-class classification given only a small number of examples from the target class and none from the others. We formulate the learning of meaningful features for one-class classification as a meta-learning problem in which the meta-training stage repeatedly simulates one-class classification, using the classification loss of the chosen algorithm to learn a feature representation. To learn these representations, we require only multiclass data from similar tasks. We show how the Support Vector Data Description method can be used with our method, and also propose a simpler variant based on Prototypical Networks that obtains comparable performance, indicating that learning feature representations directly from data may be more important than the choice of one-class algorithm. We validate our approach by adapting few-shot classification datasets to the few-shot one-class classification scenario, obtaining results comparable to the state of the art in traditional one-class classification and improving upon one-class classification baselines employed in the few-shot setting.
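
A simplified, numpy-only illustration of the Prototypical-Networks-style variant at test time, assuming the meta-learned encoder has already produced embeddings (the encoder itself is omitted): the few target-class examples define a prototype, and queries are scored by their distance to it.

```python
import numpy as np

def one_class_scores(support, queries):
    """Prototype = mean of the few target-class embeddings; score = negative
    squared distance to the prototype (higher means more likely in-class)."""
    prototype = support.mean(axis=0)
    return -np.sum((queries - prototype) ** 2, axis=1)

rng = np.random.default_rng(0)
support = rng.normal(loc=0.0, size=(5, 16))      # few embedded examples of the target class
in_class = rng.normal(loc=0.0, size=(20, 16))    # unseen target-class queries
out_class = rng.normal(loc=3.0, size=(20, 16))   # queries from other classes

scores = one_class_scores(support, np.vstack([in_class, out_class]))
threshold = np.median(scores)                     # naive operating point for this sketch
print("accepted in-class fraction:", (scores[:20] > threshold).mean())
print("rejected out-of-class fraction:", (scores[20:] <= threshold).mean())
```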


Author(s):  
Jianping Fan ◽  
Jing Wang ◽  
Meiqin Wu

The two-dimensional belief function (TDBF = (mA, mB)) uses a pair of ordered basic probability assignment functions to describe and process uncertain information. Here, mB encodes the support degree, non-support degree, and unmeasured reliability degree of mA, so it is richer and more reasonable than a traditional discount coefficient and better expresses experts' evaluations. However, relying on the experts' assessments alone is one-sided; the mutual influence between the belief functions themselves must also be considered. A divergence measure between belief functions quantifies how much two belief functions differ, and from it the support degree, non-support degree, and unmeasured reliability degree of the evidence can be calculated. Based on such a divergence measure of belief functions, this paper proposes an extended two-dimensional belief function, which resolves some evidence conflict problems, is more objective, and handles a class of problems that the original TDBF cannot. Finally, numerical examples illustrate its effectiveness and rationality.
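
The extended TDBF itself is not reproduced here, but the classical building block it extends can be sketched; below is Dempster's rule of combination for two basic probability assignments over a small frame of discernment, with illustrative masses.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts: frozenset -> mass)
    with Dempster's rule; the conflicting mass K is renormalized away."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {s: v / (1.0 - conflict) for s, v in combined.items()}, conflict

A, B, AB = frozenset("A"), frozenset("B"), frozenset("AB")
m1 = {A: 0.6, B: 0.1, AB: 0.3}   # evidence from expert 1 (toy masses)
m2 = {A: 0.5, B: 0.2, AB: 0.3}   # evidence from expert 2
print(dempster_combine(m1, m2))  # fused masses and the conflict coefficient K
```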


2021 ◽  
Vol 11 (9) ◽  
pp. 4251
Author(s):  
Jinsong Zhang ◽  
Shuai Zhang ◽  
Jianhua Zhang ◽  
Zhiliang Wang

In digital microfluidic experiments, droplet characteristics and flow patterns are generally identified and predicted by empirical methods, which make it difficult to mine large amounts of data. In addition, because some degree of human intervention is inevitable, inconsistent judgment standards make comparisons between different experiments cumbersome and almost impossible. In this paper, we used machine learning to build algorithms that automatically identify, judge, and predict flow patterns and droplet characteristics, turning the empirical judgment into an intelligent process. Unlike the usual machine learning setups, a generalized variable system was introduced to describe the different geometric configurations of digital microfluidic devices. Specifically, the Buckingham π theorem was adopted to obtain several groups of dimensionless numbers as input variables for the machine learning algorithms. In our validation, the SVM and BPNN algorithms successfully classified and predicted the different flow patterns and droplet characteristics (length and frequency). Compared with the primitive parameter system, the dimensionless-number system showed superior predictive capability. The dimensionless numbers selected for the machine learning algorithms should have strong physical rather than merely mathematical meaning. Using dimensionless numbers reduced the dimensionality of the system and the amount of computation without losing the information carried by the primitive parameters.
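
A rough sketch of the idea of feeding dimensionless groups rather than raw parameters to a classifier; the groups below (a capillary number and a flow-rate ratio) and the toy labels are illustrative assumptions, not the paper's exact variable system.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
mu = rng.uniform(1e-3, 5e-3, n)       # continuous-phase viscosity [Pa s]
sigma = rng.uniform(0.01, 0.05, n)    # interfacial tension [N/m]
u = rng.uniform(0.01, 0.5, n)         # characteristic velocity [m/s]
q_ratio = rng.uniform(0.1, 10.0, n)   # dispersed-to-continuous flow-rate ratio

# dimensionless groups in the spirit of the Buckingham pi theorem
Ca = mu * u / sigma                    # capillary number
X = np.column_stack([Ca, q_ratio])

# toy flow-pattern labels constructed from the dimensionless groups
y = (Ca * q_ratio > np.median(Ca * q_ratio)).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())   # flow-pattern classification accuracy
```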


Author(s):  
Francesco Galofaro

Abstract The paper presents a semiotic interpretation of the phenomenological debate on the notion of person, focusing in particular on Edmund Husserl, Max Scheler, and Edith Stein. The semiotic interpretation lets us identify the categories that orient the debate: collective/individual and subject/object. As we will see, the phenomenological analysis of the relation between person and social units such as the community, the association, and the mass shows similarities to contemporary socio-semiotic models. The difference between community, association, and mass provides an explanation for the establishment of legal systems. The notion of person we inherit from phenomenology can also be useful in facing juridical problems raised by the use of non-human decision-makers such as machine learning algorithms and artificial intelligence applications.


2021 ◽  
Vol 14 (1) ◽  
Author(s):  
Jung Eun Huh ◽  
Seunghee Han ◽  
Taeseon Yoon

Abstract Objective: In this study we compare the amino acid and codon sequences of SARS-CoV-2, SARS-CoV and MERS-CoV using different statistical programs to understand their characteristics. Specifically, we are interested in how differences in the amino acid and codon sequences can lead to different incubation periods and outbreak periods. Our initial question was to compare SARS-CoV-2 to different viruses in the coronavirus family using the NCBI BLAST program and machine learning algorithms. Results: The results of experiments using BLAST, Apriori and Decision Tree showed that SARS-CoV-2 had high similarity with SARS-CoV while having comparatively low similarity with MERS-CoV. We then compared the codons of SARS-CoV-2 and MERS-CoV to examine the difference. Although the viruses are very alike according to the BLAST and Apriori experiments, SVM showed that they can be effectively classified using non-linear kernels. The Decision Tree experiment revealed several remarkable properties of the SARS-CoV-2 amino acid sequence that are not found in the MERS-CoV amino acid sequence. The ultimate purpose of this paper is to minimize the damage to humanity from SARS-CoV-2. Hence, further studies can focus on comparing SARS-CoV-2 with other viruses that can also be transmitted during latent periods.
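
A small sketch of how such sequence classification could look, using codon-frequency features and an SVM with a non-linear kernel; the synthetic sequences stand in for the actual viral genomes.

```python
import numpy as np
from itertools import product
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

CODONS = ["".join(c) for c in product("ACGT", repeat=3)]

def codon_frequencies(seq):
    """Relative frequency of each of the 64 codons in a coding sequence."""
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    counts = np.array([codons.count(c) for c in CODONS], dtype=float)
    return counts / counts.sum()

rng = np.random.default_rng(0)

def random_cds(bias, n_codons=300):
    # synthetic coding sequence drawn with a given codon-usage bias
    return "".join(rng.choice(CODONS, size=n_codons, p=bias))

bias_a = np.full(64, 1 / 64)
bias_b = bias_a.copy()
bias_b[:8] *= 3
bias_b /= bias_b.sum()

seqs = [random_cds(bias_a) for _ in range(30)] + [random_cds(bias_b) for _ in range(30)]
y = [0] * 30 + [1] * 30                 # e.g. virus family A vs. virus family B

X = np.stack([codon_frequencies(s) for s in seqs])
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```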


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Shao-Zhen Lin ◽  
Wu-Yang Zhang ◽  
Dapeng Bi ◽  
Bo Li ◽  
Xi-Qiao Feng

Abstract Investigation of energy mechanisms at the collective cell scale is a challenge for understanding various biological processes, such as embryonic development and tumor metastasis. Here we investigate the energetics of self-sustained mesoscale turbulence in confluent two-dimensional (2D) cell monolayers. We find that the kinetic energy and enstrophy of collective cell flows in both epithelial and non-epithelial cell monolayers collapse to a family of probability density functions, which follow the q-Gaussian distribution rather than the Maxwell–Boltzmann distribution. The enstrophy scales linearly with the kinetic energy as the monolayer matures. The energy spectra exhibit a power-law decay at large wavenumbers, with a scaling exponent markedly different from that in classical 2D Kolmogorov–Kraichnan turbulence. These energetic features are demonstrated to be common to all cell types on various substrates with a wide range of stiffness. This study provides unique clues for understanding the active nature of cell populations and tissues.
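
A bare-bones illustration of the two quantities involved: given a two-dimensional velocity field on a regular grid (synthetic here, rather than measured cell flows), the mean kinetic energy and enstrophy can be computed as follows.

```python
import numpy as np

def energy_and_enstrophy(u, v, dx=1.0):
    """Mean kinetic energy 0.5*<|v|^2> and mean enstrophy 0.5*<omega^2>
    of a 2D velocity field sampled on a regular grid with spacing dx."""
    kinetic = 0.5 * np.mean(u ** 2 + v ** 2)
    omega = np.gradient(v, dx, axis=1) - np.gradient(u, dx, axis=0)   # vorticity
    enstrophy = 0.5 * np.mean(omega ** 2)
    return kinetic, enstrophy

# synthetic swirling field standing in for measured cell-monolayer velocities
x, y = np.meshgrid(np.linspace(0, 2 * np.pi, 64), np.linspace(0, 2 * np.pi, 64))
u = np.sin(x) * np.cos(y)
v = -np.cos(x) * np.sin(y)
print(energy_and_enstrophy(u, v, dx=2 * np.pi / 63))
```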

