A Double-Layer, Multi-Resolution Classification Model for Decoding Spatiotemporal Patterns of Spikes with Small Sample Size

2021 ◽  
pp. 1-36
Author(s):  
Xiwei She ◽  
Theodore W. Berger ◽  
Dong Song

Abstract: We build a double-layer, multi-resolution classification model for decoding single-trial spatiotemporal patterns of spikes. The model takes spiking activities as input signals and binary behavioral or cognitive variables as output signals, and represents the input-output mapping with a double-layer ensemble classifier. In the first layer, to solve the underdetermined problem caused by the small sample size and the very high dimensionality of the input signals, B-spline functional expansion and L1-regularized logistic classifiers are used to reduce dimensionality and yield sparse model estimations. A wide range of temporal resolutions of neural features is covered by using a large number of classifiers with different numbers of B-spline knots. Each classifier serves as a base learner that classifies spatiotemporal patterns into the probability of the output label at a single temporal resolution. A bootstrap aggregating (bagging) strategy is used to reduce the estimation variances of these classifiers. In the second layer, another L1-regularized logistic classifier takes the outputs of the first-layer classifiers as inputs to generate the final output predictions. This classifier serves as a meta-learner that fuses multiple temporal resolutions to classify spatiotemporal patterns of spikes into binary output labels. We test this decoding model with both synthetic data and experimental data recorded from rats and human subjects performing memory-dependent behavioral tasks. Results show that this method effectively avoids overfitting and yields accurate predictions of output labels with small sample sizes. The double-layer, multi-resolution classifier consistently outperforms the best single-layer, single-resolution classifier by extracting and utilizing multi-resolution spatiotemporal features of spike patterns in the classification.
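The stacking architecture described above can be sketched compactly. The following Python sketch (assuming NumPy and scikit-learn) approximates the B-spline functional expansion by simply re-binning spike trains at several temporal resolutions; the helper `bin_spikes` and all parameter values are illustrative stand-ins, not the authors' implementation.

```python
# Minimal sketch of the double-layer, multi-resolution stacking idea.
# The B-spline expansion is approximated by coarse re-binning; bagged
# L1-logistic base learners feed an L1-logistic meta-learner.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_predict

def bin_spikes(spike_trains, n_bins):
    """Re-bin spike trains (trials x neurons x time) at a coarser resolution."""
    n_trials, n_neurons, n_time = spike_trains.shape
    edges = np.linspace(0, n_time, n_bins + 1, dtype=int)
    binned = np.stack([spike_trains[:, :, a:b].sum(axis=2)
                       for a, b in zip(edges[:-1], edges[1:])], axis=2)
    return binned.reshape(n_trials, -1)  # flatten to trials x features

def fit_two_layer(spike_trains, y, resolutions=(5, 10, 20, 40)):
    """First layer: one bagged L1-logistic base learner per temporal resolution.
    Second layer: an L1-logistic meta-learner on the base probabilities."""
    base_learners, meta_features = [], []
    for n_bins in resolutions:
        X = bin_spikes(spike_trains, n_bins)
        clf = BaggingClassifier(
            LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
            n_estimators=25, random_state=0)
        # Out-of-fold probabilities avoid leaking training labels upward.
        p = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
        clf.fit(X, y)
        base_learners.append((n_bins, clf))
        meta_features.append(p)
    Z = np.column_stack(meta_features)
    meta = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(Z, y)
    return base_learners, meta
```

Using out-of-fold probabilities as meta-features is a standard safeguard in stacking; it keeps the meta-learner from simply memorizing the base learners' training fit.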

2020 ◽  
Vol 57 (2) ◽  
pp. 237-251
Author(s):  
Achilleas Anastasiou ◽  
Alex Karagrigoriou ◽  
Anastasios Katsileros

Summary: The normal distribution is considered one of the most important distributions, with numerous applications in various fields, including the agricultural sciences. The purpose of this study is to evaluate the most popular normality tests, comparing their performance in terms of size (type I error) and power against a large spectrum of alternative distributions, using simulations with various sample sizes and significance levels as well as empirical data from agricultural experiments. The simulation results show that the power of all normality tests is low for small sample sizes, but as the sample size increases, the power increases as well. The results also show that the Shapiro–Wilk test is powerful over a wide range of alternative distributions and sample sizes, especially for asymmetric distributions. Moreover, the D'Agostino–Pearson omnibus test is powerful for small sample sizes against symmetric alternative distributions, while the same is true of the kurtosis test for moderate and large sample sizes.
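A Monte Carlo power comparison of the kind reported here is straightforward to sketch. The snippet below assumes SciPy; the exponential alternative, replication count, and sample sizes are arbitrary illustrative choices, not the authors' simulation design.

```python
# Estimate the power of two normality tests against an asymmetric
# alternative (exponential) at several sample sizes.
import numpy as np
from scipy import stats

def power(test, sampler, n, alpha=0.05, reps=2000, rng=None):
    """Fraction of replications in which `test` rejects normality."""
    rng = rng or np.random.default_rng(0)
    rejections = 0
    for _ in range(reps):
        x = sampler(rng, n)
        rejections += (test(x) < alpha)
    return rejections / reps

shapiro_wilk = lambda x: stats.shapiro(x).pvalue
dagostino    = lambda x: stats.normaltest(x).pvalue  # D'Agostino-Pearson omnibus

skewed = lambda rng, n: rng.exponential(size=n)      # asymmetric alternative

for n in (20, 50, 100):
    print(n,
          round(power(shapiro_wilk, skewed, n), 3),
          round(power(dagostino, skewed, n), 3))
```

Swapping in a symmetric alternative (e.g., a t-distribution) would reproduce the other comparison discussed above.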


Algorithms ◽  
2019 ◽  
Vol 12 (8) ◽  
pp. 160 ◽  
Author(s):  
Mohammad Wedyan ◽  
Alessandro Crippa ◽  
Adel Al-Jumaily

Deep neural networks are successful learning tools for building nonlinear models. However, a robust deep-learning-based classification model needs a large dataset; such models are often unstable when trained on small datasets. To solve this issue, which is particularly critical in light of the possible clinical applications of these predictive models, researchers have developed approaches such as virtual sample generation. Virtual sample generation significantly improves learning and classification performance when working with small samples. The main objective of this study is to evaluate the ability of the proposed virtual sample generation to overcome the small-sample-size problem, which characterizes the automated detection of a neurodevelopmental disorder, namely autism spectrum disorder. Results show that our method improves diagnostic accuracy from 84% to 95% using virtual samples generated on the basis of five actual clinical samples. The present findings show the feasibility of using the proposed technique to improve classification performance even with clinical samples of limited size. Given widespread concerns about small sample sizes, our technique represents a meaningful step forward in pattern recognition methodology, particularly when applied to the diagnostic classification of neurodevelopmental disorders. The proposed technique was also tested on other available benchmark datasets. The experimental outcomes showed that classification accuracy with virtual samples was superior to that obtained using the original training data alone.
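The abstract does not specify the generation scheme, so the Python sketch below (assuming NumPy and scikit-learn) illustrates only the general idea of augmenting a small training set with perturbed copies of real samples; the Gaussian noise model, the `noise_scale` parameter, and the helper names are hypothetical, not the study's method.

```python
# Generic virtual sample generation: jitter real samples with small
# feature-wise Gaussian noise, then train on real + virtual data.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def virtual_samples(X, y, n_virtual, noise_scale=0.1, rng=None):
    """Create n_virtual jittered copies of randomly chosen real samples."""
    rng = rng or np.random.default_rng(0)
    idx = rng.integers(0, len(X), size=n_virtual)
    noise = rng.normal(0.0, noise_scale * X.std(axis=0),
                       size=(n_virtual, X.shape[1]))
    return X[idx] + noise, y[idx]

def fit_with_virtual(X_train, y_train, X_test, y_test, n_virtual=100):
    """Train a classifier on the augmented set and report test accuracy."""
    Xv, yv = virtual_samples(X_train, y_train, n_virtual)
    clf = SVC().fit(np.vstack([X_train, Xv]), np.concatenate([y_train, yv]))
    return accuracy_score(y_test, clf.predict(X_test))
```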


2012 ◽  
Vol 3 (2) ◽  
pp. 78-106 ◽  
Author(s):  
David Rajakovich

Background: This paper presents the outcome of a study conducted at the Royal Devon and Exeter Hospital, in which a prediction market was established to forecast demand for services. To the researcher's knowledge, prediction markets have not previously been utilized in a healthcare environment. Purpose: The purpose of this study is to provide evidence for the effective use of prediction markets in a healthcare environment. Methodology and Approach: The study was conducted over a period of one week and involved sixty-five participants. Each was asked to provide an estimate of demand for services at the Royal Devon and Exeter Hospital. Characteristics gathered for each participant included level of education, occupation, directorate, number of years worked for the hospital, and number of years worked for the National Health Service. Findings: The study confirms the effectiveness of prediction markets in forecasting future events: overall hospital demand was forecast with an error of only 0.3%. The prediction market was less successful in predicting demand for services in each department, which the researcher attributes to the small sample size and the lack of diversity among participants. Additionally, only a very small percentage of the characteristics captured showed a statistically significant correlation with the accuracy of the estimate. Further studies should focus on different characteristics and/or use a larger sample size to confirm or refute the existence of such characteristics. Practical Implications: The findings of this work could potentially be used as an innovative way to augment the forecasting function for a wide range of healthcare facilities. Given the preliminary success of this study in forecasting demand, further research in the field is warranted.
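The abstract does not detail the market mechanism, so the sketch below (assuming NumPy and SciPy; all data and variable names are hypothetical) merely illustrates the kind of consensus-forecast error and trait-accuracy correlation calculations the findings describe.

```python
# Consensus error of pooled participant estimates, and correlation of a
# participant trait with estimate accuracy. Purely illustrative data.
import numpy as np
from scipy import stats

def consensus_error(estimates, actual):
    """Percent error of the median participant estimate vs. realized demand."""
    consensus = np.median(estimates)
    return abs(consensus - actual) / actual * 100

def characteristic_correlation(estimates, actual, characteristic):
    """Spearman correlation between a participant trait and estimate accuracy."""
    accuracy = -np.abs(np.asarray(estimates) - actual)  # higher = more accurate
    return stats.spearmanr(characteristic, accuracy)

# Hypothetical data: 65 participants estimating weekly demand.
rng = np.random.default_rng(0)
estimates = rng.normal(1000, 80, size=65)
years_in_nhs = rng.integers(1, 30, size=65)
print(consensus_error(estimates, actual=1000))
print(characteristic_correlation(estimates, 1000, years_in_nhs))
```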


Author(s):  
Mohammad Sultan Mahmud ◽  
Joshua Zhexue Huang ◽  
Xianghua Fu

Classification problems in which the number of features (dimensions) greatly exceeds the number of samples (observations) constitute an essential research and application area in a variety of domains, especially computational biology. This is also known as the high-dimensional, small-sample-size (HDSSS) problem. Various dimensionality reduction methods have been developed, but they are not effective on small-sample-size, high-dimensional datasets and suffer from overfitting and high-variance gradients. To overcome the pitfalls of sample size and dimensionality, this study employed the variational autoencoder (VAE), a framework for unsupervised learning that has gained prominence in recent years. The objective of this study is to investigate a reliable classification model for high-dimensional, small-sample-size datasets with minimal error. It also evaluated the strength of different VAE architectures on HDSSS datasets. In the experiments, six genomic microarray datasets from the Kent Ridge Biomedical Dataset Repository were selected, and several choices of dimensionality (number of features) were applied during data preprocessing. To evaluate classification accuracy and to find a stable and suitable classifier, nine state-of-the-art classifiers that have been successful for classification tasks in high-dimensional settings were selected. The experimental results demonstrate that the VAE can provide superior performance compared with traditional methods such as PCA, fastICA, FA, NMF, and LDA in terms of accuracy and AUROC.
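A minimal sketch of this pipeline, compressing the data with a VAE and then classifying in the latent space, is shown below, assuming PyTorch and scikit-learn; the layer sizes, latent dimension, and training schedule are illustrative choices, not the architectures evaluated in the study.

```python
# Compress high-dimensional samples with a VAE, then classify on the
# latent means. Full-batch training for brevity (HDSSS data is small).
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

class VAE(nn.Module):
    def __init__(self, n_features, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, n_features))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.decoder(z), mu, logvar

def train_vae(X, latent_dim=32, epochs=200, lr=1e-3):
    x = torch.as_tensor(X, dtype=torch.float32)
    vae = VAE(x.shape[1], latent_dim)
    opt = torch.optim.Adam(vae.parameters(), lr=lr)
    for _ in range(epochs):
        recon, mu, logvar = vae(x)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        loss = nn.functional.mse_loss(recon, x, reduction="sum") + kl
        opt.zero_grad()
        loss.backward()
        opt.step()
    return vae

def encode_and_classify(X_train, y_train, X_test):
    """Fit the VAE, project both sets to latent means, classify there."""
    vae = train_vae(X_train)
    with torch.no_grad():
        z_tr = vae.mu(vae.encoder(torch.as_tensor(X_train, dtype=torch.float32)))
        z_te = vae.mu(vae.encoder(torch.as_tensor(X_test, dtype=torch.float32)))
    clf = LogisticRegression(max_iter=1000).fit(z_tr.numpy(), y_train)
    return clf.predict(z_te.numpy())
```

Classifying on the latent means (rather than sampled codes) is a common deterministic choice when the VAE is used purely as a dimensionality reducer.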


2019 ◽  
pp. 40-46 ◽  
Author(s):  
V.V. Savchenko ◽  
A.V. Savchenko

We consider the task of automated quality control of sound recordings containing voice samples of individuals. It is shown that the most acute problem in this task is the small sample size. To overcome this problem, we propose a novel method of acoustic measurement based on the relative stability of the pitch frequency within a voice sample of short duration. An example of its practical implementation using an inter-periodic accumulation of a speech signal is considered. An experimental study with specially developed software provides statistical estimates of the effectiveness of the proposed method in noisy environments. It is shown that this method rejects an audio recording as unsuitable for voice biometric identification with a probability of 0.95 or more when the signal-to-noise ratio is below 15 dB. The obtained results are intended for use in developing new, and modifying existing, systems for the collection and automated quality control of biometric personal data. The article is intended for a wide range of specialists in the field of acoustic measurements and digital processing of speech signals, as well as for practitioners who organize the work of authorized organizations in preparing biometric personal data samples for registration.
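The pitch-stability criterion lends itself to a compact illustration. The Python sketch below assumes NumPy; the autocorrelation pitch tracker, frame length, and rejection threshold are hypothetical stand-ins, not the authors' measurement procedure.

```python
# Accept a recording only if the per-frame pitch estimate is stable.
import numpy as np

def pitch_per_frame(signal, fs, frame_ms=40, fmin=60, fmax=400):
    """Crude per-frame pitch estimate from the autocorrelation peak."""
    frame = int(fs * frame_ms / 1000)
    lags = np.arange(int(fs / fmax), int(fs / fmin))
    pitches = []
    for start in range(0, len(signal) - frame, frame):
        x = signal[start:start + frame] - signal[start:start + frame].mean()
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
        pitches.append(fs / lags[np.argmax(ac[lags])])
    return np.array(pitches)

def is_acceptable(signal, fs, max_rel_spread=0.05):
    """Reject the recording if pitch varies too much across frames."""
    f0 = pitch_per_frame(signal, fs)
    return np.std(f0) / np.mean(f0) <= max_rel_spread
```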


2020 ◽  
Vol 21 ◽  
Author(s):  
Roberto Gabbiadini ◽  
Eirini Zacharopoulou ◽  
Federica Furfaro ◽  
Vincenzo Craviotto ◽  
Alessandra Zilli ◽  
...  

Background: Intestinal fibrosis and subsequent strictures represent an important burden in inflammatory bowel disease (IBD). Detecting and evaluating the degree of fibrosis in stricturing Crohn's disease (CD) is important for choosing the best therapeutic strategy (medical anti-inflammatory therapy, endoscopic dilation, or surgery). Ultrasound elastography (USE) is a non-invasive technique that has been proposed in the field of IBD for evaluating intestinal stiffness as a biomarker of intestinal fibrosis. Objective: The aim of this review is to discuss the ability and current role of ultrasound elastography in the assessment of intestinal fibrosis. Results and Conclusion: Data on USE in IBD come from pilot and proof-of-concept studies with small sample sizes. The first type of USE investigated was strain elastography, while shear wave elastography has been introduced more recently. Despite the methodological heterogeneity of these studies, USE has been shown to be able to assess intestinal fibrosis in patients with stricturing CD. However, before this technique is introduced into current practice, further studies with larger sample sizes and homogeneous parameters, reproducibility testing, and identification of validated cut-off values are needed.

