Distributional Transformation Improves Decoding Accuracy When Predicting Chronological Age From Structural MRI

2020 ◽  
Vol 11 ◽  
Author(s):  
Joram Soch

When predicting a certain subject-level variable (e.g., age in years) from measured biological data (e.g., structural MRI scans), the decoding algorithm does not always preserve the distribution of the variable to predict. In such a situation, distributional transformation (DT), i.e., mapping the predicted values to the variable's distribution in the training data, might improve decoding accuracy. Here, we tested the potential of DT within the 2019 Predictive Analytics Competition (PAC), which aimed at predicting the chronological age of adult human subjects from structural MRI data. In a low-dimensional setting, i.e., with fewer features than observations, we applied multiple linear regression, support vector regression and deep neural networks for out-of-sample prediction of subject age. We found that (i) when the number of features is low, no method outperforms linear regression; and (ii) except when using deep regression, distributional transformation increases decoding performance, reducing the mean absolute error (MAE) by about half a year. We conclude that DT can be advantageous when predicting variables that are not controlled but have an underlying distribution in healthy or diseased populations.
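
A minimal sketch of distributional transformation as described above, implemented here as quantile mapping of the predicted values onto the empirical distribution of the training labels; the function name and the NumPy/SciPy implementation are illustrative choices, not taken from the paper's code:

```python
import numpy as np
from scipy import stats

def distributional_transformation(y_pred, y_train):
    """Map predicted values onto the empirical distribution of the training labels."""
    # Empirical CDF position of each prediction within the prediction sample
    ranks = stats.rankdata(y_pred) / (len(y_pred) + 1)
    # Read off the corresponding quantiles of the training-label distribution
    return np.quantile(y_train, ranks)

# Usage: stretch overly narrow raw age predictions toward the training-age distribution
rng = np.random.default_rng(0)
y_train = rng.normal(45, 15, size=500)      # ages observed in the training set
y_pred_raw = rng.normal(45, 8, size=100)    # raw decoder output (too concentrated)
y_pred_dt = distributional_transformation(y_pred_raw, y_train)
```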


2021 ◽  
Vol 30 (4) ◽  
Author(s):  
Heather Kennedy ◽  
Thilo Kunkel ◽  
Daniel Funk

As social media becomes an increasingly dominant and important component of sport organizations' marketing and communication strategies, effective marketing measurement techniques are required. Using social media data from a Division I football team, this research demonstrates how predictive analytics can be used as a marketing measurement tool. A support vector machine model was compared to a standard linear regression with respect to accurately predicting Facebook posts' total interactions. The predictive model was used as (i) a planning tool to forecast future post engagement based on a variety of post characteristics and (ii) an evaluation tool for a marketing campaign, providing benchmarks against which achieved engagement metrics could be compared. Results indicated that the support vector machine model outperformed the standard linear regression and that the marketing campaign was unsuccessful in achieving its goals. This research provides a foundation for the future use of predictive analytics in social media and sport management scholarship.
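
A minimal sketch of the model comparison described above, assuming the post characteristics have already been encoded as numeric features; the synthetic data and scikit-learn settings are illustrative, not details taken from the study:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))            # e.g., post length, media type, time of day, ...
y = 50 + 20 * X[:, 0] + rng.normal(scale=10, size=300)   # total interactions (synthetic)

svm_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
lin_model = LinearRegression()

# Compare out-of-sample MAE of the two candidate models
for name, model in [("SVR", svm_model), ("Linear regression", lin_model)]:
    mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE = {mae:.1f} interactions")
```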


machine in mathematical pendulum experiments to find the value of gravity. Four data points were obtained from the mathematical pendulum experiments and then interpolated to produce a larger set (13 points), which was used as training data for each model. Each model was then tested on 26 data points, including the training data, to estimate the gravity value, which was compared with reference gravity values [17,18,19]. The neural network model proved to be the most accurate, with an error of 2.53%, while the support vector machine model was the most consistent, with a standard deviation of 0.03 and an error deviation of 0.058, the smallest values among the three models in this paper.
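
A minimal sketch of this workflow, under the assumption that each observation is a (pendulum length, period) pair and that gravity is recovered from the predicted periods via T = 2π√(L/g); the interpolation scheme, model settings, and synthetic measurements are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.interpolate import interp1d
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Four measured (length [m], period [s]) pairs with a little measurement noise
L_meas = np.array([0.20, 0.40, 0.60, 0.80])
T_meas = 2 * np.pi * np.sqrt(L_meas / 9.81) + rng.normal(0.0, 0.01, 4)

# Interpolate the four measurements to a denser training set (13 points)
L_train = np.linspace(0.20, 0.80, 13)
T_train = interp1d(L_meas, T_meas, kind="cubic")(L_train)

# Fit each model to predict the period from the length, then recover g = 4*pi^2*L/T^2
for name, model in [("SVR", SVR(C=100.0, epsilon=0.001)),
                    ("Neural network", MLPRegressor(hidden_layer_sizes=(20,),
                                                    max_iter=10000, random_state=0))]:
    model.fit(L_train.reshape(-1, 1), T_train)
    T_pred = model.predict(L_train.reshape(-1, 1))
    g_est = np.mean(4 * np.pi**2 * L_train / T_pred**2)
    print(f"{name}: g ≈ {g_est:.2f} m/s^2, error = {abs(g_est - 9.81) / 9.81 * 100:.2f}%")
```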


2016 ◽  
Vol 136 (12) ◽  
pp. 898-907 ◽  
Author(s):  
Joao Gari da Silva Fonseca Junior ◽  
Hideaki Ohtake ◽  
Takashi Oozeki ◽  
Kazuhiko Ogimoto

Author(s):  
Jianfeng Jiang

Objective: To diagnose analog circuit faults correctly, this paper presents an analog circuit fault diagnosis approach based on wavelet-based fractal analysis and a multiple kernel support vector machine (MKSVM). Methods: Time responses of the circuit under different faults are measured, and wavelet-based fractal analysis is then used to process the collected time responses and generate features for the signals. Kernel principal component analysis (KPCA) is applied to reduce the features' dimensionality. Afterwards, the features are divided into training data and testing data. An MKSVM, with its multiple parameters optimized by a chaos particle swarm optimization (CPSO) algorithm, is used to construct an analog circuit fault diagnosis model, which is then evaluated on the testing data. Results: The proposed diagnosis approach is demonstrated on a fault diagnosis simulation of a four-opamp biquad high-pass filter. Conclusion: The approach outperforms other commonly used methods in the comparisons.
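
A minimal sketch of the processing chain described above, with wavelet sub-band energies standing in for the wavelet-based fractal features and a grid-searched RBF SVM standing in for the CPSO-tuned MKSVM; the synthetic fault signals and all settings are illustrative assumptions:

```python
import numpy as np
import pywt
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split

def wavelet_features(signal, wavelet="db4", level=4):
    """Energy of each wavelet sub-band: a simple proxy for wavelet-based fractal features."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Synthetic time responses for a handful of fault classes
rng = np.random.default_rng(1)
n_per_class, n_classes, n_samples = 40, 4, 256
X = np.vstack([
    np.array([wavelet_features(np.sin(2 * np.pi * (k + 1) * np.linspace(0, 1, n_samples))
                               + 0.1 * rng.normal(size=n_samples))
              for _ in range(n_per_class)])
    for k in range(n_classes)
])
y = np.repeat(np.arange(n_classes), n_per_class)

# Dimensionality reduction with KPCA, then SVM-based fault classification
X_red = KernelPCA(n_components=3, kernel="rbf").fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X_red, y, test_size=0.3, random_state=0)
clf = GridSearchCV(SVC(kernel="rbf"), {"C": [1, 10, 100], "gamma": ["scale", 0.1]})
clf.fit(X_tr, y_tr)
print("Test accuracy:", clf.score(X_te, y_te))
```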


2020 ◽  
Vol 10 (11) ◽  
pp. 3817
Author(s):  
Soheil Keshmiri ◽  
Masahiro Shiomi ◽  
Kodai Shatani ◽  
Takashi Minato ◽  
Hiroshi Ishiguro

A prevailing assumption in many behavioral studies is the underlying normal distribution of the data under investigation. In this regard, although it appears plausible to presume a certain degree of similarity among individuals, this presumption does not necessarily warrant such simplifying assumptions as average or normally distributed human behavioral responses. In the present study, we examine the extent of such assumptions by considering the case of human–human touch interaction, in which individuals signal their face-area pre-touch distance boundaries. We then use these pre-touch distances, along with their respective azimuth and elevation angles around the face area, and perform three types of regression-based analyses to estimate a generalized facial pre-touch distance boundary. First, we use Gaussian process regression to evaluate whether the assumption of a normal distribution in participants' reactions warrants a reliable estimate of this boundary. Second, we apply support vector regression (SVR) to determine whether estimating this space by minimizing the orthogonal distance between participants' pre-touch data and the corresponding pre-touch boundary yields a better result. Third, we use ordinary regression to validate the utility of a non-parametric regressor with a simple regularization criterion in estimating such a pre-touch space. In addition, we compare these models with scenarios in which a fixed boundary distance (i.e., a spherical boundary) is adopted. We show that, within the context of facial pre-touch interaction, a normal distribution does not capture the variability exhibited by human subjects during such non-verbal interaction. We also provide evidence that such interactions can be estimated more adequately by accounting for individuals' variable behavior and preferences through estimation strategies such as ordinary regression, which relies solely on the distribution of the observed behavior and does not require it to follow a parametric distribution.
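
A minimal sketch of the three regression strategies on synthetic pre-touch data, with ridge regression standing in for the ordinary-regression estimator; the feature encoding (azimuth and elevation angles predicting pre-touch distance) follows the abstract, while the kernels, hyperparameters, and synthetic data are assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.svm import SVR
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
azimuth = rng.uniform(-np.pi, np.pi, 200)
elevation = rng.uniform(-np.pi / 4, np.pi / 4, 200)
X = np.column_stack([azimuth, elevation])
# Synthetic, subject-dependent pre-touch distances around the face (meters)
y = 0.35 + 0.05 * np.cos(azimuth) + rng.normal(0, 0.04, 200)

models = {
    "Gaussian process regression": GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3),
    "Support vector regression": SVR(kernel="rbf", C=1.0, epsilon=0.01),
    "Ridge (ordinary regression stand-in)": Ridge(alpha=1.0),
}
for name, model in models.items():
    mae = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE = {mae:.3f} m")
```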


Author(s):  
M. Tanveer ◽  
Tarun Gupta ◽  
Miten Shah ◽  

Twin Support Vector Clustering (TWSVC) is a clustering algorithm inspired by the principles of the Twin Support Vector Machine (TWSVM). TWSVC has already outperformed other traditional plane-based clustering algorithms. However, TWSVC uses the hinge loss, which maximizes the shortest distance between clusters and hence suffers from noise sensitivity and low re-sampling stability. In this article, we propose Pinball loss Twin Support Vector Clustering (pinTSVC) as a clustering algorithm. The proposed pinTSVC model incorporates the pinball loss function in the plane clustering formulation. The pinball loss function introduces favorable properties such as noise insensitivity and re-sampling stability. The time complexity of the proposed pinTSVC remains equivalent to that of TWSVC. Extensive numerical experiments on noise-corrupted benchmark UCI and artificial datasets have been performed. Results of the proposed pinTSVC model are compared with TWSVC, Twin Bounded Support Vector Clustering (TBSVC) and fuzzy c-means clustering (FCM). Detailed and exhaustive comparisons demonstrate the better performance and generalization of the proposed pinTSVC on noise-corrupted datasets. Further experiments and analysis of the performance of the above-mentioned clustering algorithms on structural MRI (sMRI) images taken from the ADNI database, face clustering, and facial expression clustering demonstrate the effectiveness and feasibility of the proposed pinTSVC model.
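
A minimal sketch contrasting the hinge loss used by TWSVC with the pinball loss adopted by pinTSVC, written as functions of the margin shortfall u = 1 - y*f(x); only the losses themselves are shown, not the full twin-plane clustering formulation:

```python
import numpy as np

def hinge_loss(u):
    """Hinge loss on the margin shortfall u: only violations (u > 0) are penalized,
    so the solution is driven by the nearest (possibly noisy) points."""
    return np.maximum(0.0, u)

def pinball_loss(u, tau=0.5):
    """Pinball loss on the same quantity: points on the correct side (u < 0) also
    contribute, tying the solution to quantiles of the data rather than the nearest
    points, which yields noise insensitivity and re-sampling stability."""
    return np.where(u >= 0.0, u, -tau * u)

u = np.linspace(-2.0, 2.0, 9)
print("hinge  :", hinge_loss(u))
print("pinball:", pinball_loss(u, tau=0.5))
```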


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2503
Author(s):  
Taro Suzuki ◽  
Yoshiharu Amano

This paper proposes a method for detecting non-line-of-sight (NLOS) multipath, which causes large positioning errors in a global navigation satellite system (GNSS). We use the GNSS signal correlation output, the most primitive GNSS signal processing output, to detect NLOS multipath based on machine learning. The shape of the multi-correlator output is distorted by NLOS multipath, and features describing this shape are used to discriminate NLOS signals. We implement two supervised learning methods, a support vector machine (SVM) and a neural network (NN), and compare their performance. In addition, we propose an automated method of collecting LOS and NLOS signal training data for machine learning. An evaluation of the proposed NLOS detection method in an urban environment confirmed that the NN outperformed the SVM and that 97.7% of NLOS signals were correctly discriminated.
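
A minimal sketch of the classifier comparison, assuming the multi-correlator shape has already been summarized into a fixed-length feature vector per signal; the synthetic features and classifier settings are illustrative, not those used in the paper:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Features derived from the correlator shape, e.g. peak asymmetry, width, slope ratios
X_los = rng.normal(loc=0.0, scale=1.0, size=(500, 6))
X_nlos = rng.normal(loc=1.0, scale=1.2, size=(500, 6))   # distorted-shape statistics
X = np.vstack([X_los, X_nlos])
y = np.array([0] * 500 + [1] * 500)                      # 0 = LOS, 1 = NLOS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
for name, clf in [
    ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))),
    ("NN", make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32, 16),
                                                         max_iter=2000, random_state=0))),
]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: NLOS detection accuracy = {clf.score(X_te, y_te):.3f}")
```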


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Prasanna Date ◽  
Davis Arthur ◽  
Lauren Pusey-Nazzaro

Training machine learning models on classical computers is usually a time- and compute-intensive process. With Moore's law nearing its inevitable end and an ever-increasing demand for large-scale data analysis using machine learning, we must leverage non-conventional computing paradigms like quantum computing to train machine learning models efficiently. Adiabatic quantum computers can approximately solve NP-hard problems, such as quadratic unconstrained binary optimization (QUBO), faster than classical computers. Since many machine learning problems are also NP-hard, we believe adiabatic quantum computers might be instrumental in training machine learning models efficiently in the post-Moore's-law era. To be solved on adiabatic quantum computers, problems must first be formulated as QUBO problems, which is itself very challenging. In this paper, we formulate the training problems of three machine learning models—linear regression, support vector machine (SVM) and balanced k-means clustering—as QUBO problems, making them amenable to training on adiabatic quantum computers. We also analyze the computational complexities of our formulations and compare them to corresponding state-of-the-art classical approaches. We show that the time and space complexities of our formulations are better than (in the case of SVM and balanced k-means clustering) or equivalent to (in the case of linear regression) those of their classical counterparts.
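
A minimal sketch of casting least-squares linear regression as a QUBO, following the general recipe of encoding each real weight with a fixed binary precision vector; the precision vector and the brute-force solver standing in for an adiabatic quantum annealer are illustrative assumptions, not the paper's implementation:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
X = np.hstack([rng.normal(size=(20, 1)), np.ones((20, 1))])   # one feature + bias column
w_true = np.array([1.5, -0.5])
y = X @ w_true + rng.normal(scale=0.05, size=20)

# Each real weight is encoded as p @ bits, with bits in {0, 1}
p = np.array([-2.0, 1.0, 0.5, 0.25])          # assumed precision vector (allows negatives)
P = np.kron(np.eye(X.shape[1]), p)            # w = P @ z for a binary vector z

# QUBO: minimize z^T Q z; the linear term is folded onto the diagonal since z_i^2 = z_i
A = P.T @ X.T @ X @ P
b = -2.0 * P.T @ X.T @ y
Q = A + np.diag(b)

# Brute-force "annealer" over all binary assignments (feasible only for tiny problems)
n = Q.shape[0]
best = min(itertools.product([0, 1], repeat=n),
           key=lambda z: np.array(z) @ Q @ np.array(z))
print("Recovered weights:", P @ np.array(best))
```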

