On Combining Clusterwise Linear Regression and K-Means with Automatic Weighting of the Explanatory Variables
Author(s): Ricardo A. M. da Silva, Francisco de A. T. de Carvalho

1988, Vol 5 (2), pp. 249-282
Author(s): Wayne S. DeSarbo, William L. Cron

2020, Vol 1 (4), pp. 140-147
Author(s): Dastan Maulud, Adnan M. Abdulazeez

Linear regression is perhaps one of the most common and comprehensive statistical and machine learning algorithms. It is used to find a linear relationship between a response and one or more predictors. Linear regression comes in two forms: simple linear regression and multiple linear regression (MLR). This paper discusses work by different researchers on linear regression and polynomial regression and compares their performance, using the best approach to optimize prediction and precision. Almost all of the articles analyzed in this review are focused on datasets; in order to determine a model's efficiency, its output must be compared with the actual values obtained for the explanatory variables.
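The distinction the abstract draws between simple and multiple linear regression can be made concrete with a small sketch. The data below are simulated for illustration only (they are not from any of the reviewed studies); the fit uses ordinary least squares via `numpy.linalg.lstsq`.

```python
import numpy as np

# Synthetic data: y depends linearly on two predictors plus noise.
# Coefficients and noise scale are illustrative assumptions.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 + 1.5 * x1 + 0.8 * x2 + rng.normal(scale=0.5, size=n)

def fit_ols(X, y):
    """Ordinary least squares with an intercept; returns coefficients and R^2."""
    Xd = np.column_stack([np.ones(len(y)), X])       # prepend intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return beta, r2

_, r2_simple = fit_ols(x1, y)                           # simple regression: x1 only
_, r2_multiple = fit_ols(np.column_stack([x1, x2]), y)  # MLR: x1 and x2

print(f"simple R^2 = {r2_simple:.3f}, multiple R^2 = {r2_multiple:.3f}")
```

Because the simple model omits a predictor that genuinely drives `y`, the multiple regression explains more variance, which is the basic motivation for MLR in the reviewed work.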


Author(s): Paolo Giudici

Several classes of computational and statistical methods for data mining are available. Each class can be parameterised so that models within the class differ in terms of such parameters (see, for instance, Giudici, 2003; Hastie et al., 2001; Han & Kamber, 2000; Hand et al., 2001; Witten & Frank, 1999): for example, the class of linear regression models, which differ in the number of explanatory variables; the class of Bayesian networks, which differ in the number of conditional dependencies (links in the graph); the class of tree models, which differ in the number of leaves; and the class of multi-layer perceptrons, which differ in the number of hidden strata and nodes. Once a class of models has been established, the problem is to choose the "best" model from it.
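One common way to choose the "best" model within such a parameterised class is an information criterion. The sketch below scores nested linear regressions, differing only in the number of explanatory variables, with BIC; the criterion, the toy data, and the true two-variable structure are illustrative assumptions, not a method prescribed by the cited texts.

```python
import numpy as np

# Toy data: five candidate explanatory variables, of which only the
# first two actually influence the response.
rng = np.random.default_rng(1)
n, p = 150, 5
X = rng.normal(size=(n, p))
y = 1.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.7, size=n)

def bic(X, y):
    """BIC for a Gaussian linear model with intercept (up to additive constants)."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    k = Xd.shape[1]                       # number of fitted parameters
    return len(y) * np.log(rss / len(y)) + k * np.log(len(y))

# Nested models within the class: use the first j variables, j = 1..p.
scores = {j: bic(X[:, :j], y) for j in range(1, p + 1)}
best = min(scores, key=scores.get)
print("BIC by model size:", scores, "-> chosen size:", best)
```

BIC rewards fit (lower residual sum of squares) but penalises each extra parameter by log n, so irrelevant variables are filtered out and the criterion settles near the true model size.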


Author(s): Napsu Karmitsa, Sona Taheri, Adil Bagirov, Pauliina Makinen

2018, Vol 7 (2), p. 146
Author(s): Silvi Qemo, Eahab Elsaid

The purpose of this study is to derive a multiple linear regression model of the CAPM; more specifically, to test for other potential explanatory variables that can be added to the basic linear regression model for the expected returns on Apple Inc. The following explanatory variables were examined: share volume, outstanding shares, closing bid/ask spread, high/low spread, and average spread. Using daily returns of Apple Inc. stock from 2007 to 2014, we were able to create a multiple linear regression model of the CAPM that increases the R² value over the basic linear regression model and explains more of the variability in the returns on an asset. This is an important modification that can help better forecast returns on assets.

Keywords: CAPM; multiple linear regression model; average spread; variability in the returns
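The modelling step the abstract describes, augmenting the single-factor CAPM regression with an additional explanatory variable and comparing R², can be sketched as follows. The returns and the "volume" series below are simulated stand-ins (not the actual Apple Inc. data used in the study), and volume here represents any of the paper's candidate regressors.

```python
import numpy as np

# Simulated daily series: market excess returns, a standardised volume
# proxy, and asset returns that load on both. All parameters are
# illustrative assumptions.
rng = np.random.default_rng(2)
n = 2000                                   # roughly eight years of trading days
mkt = rng.normal(0.0004, 0.01, size=n)     # market excess return
vol = rng.normal(0.0, 1.0, size=n)         # standardised volume proxy
ret = 0.0002 + 1.1 * mkt + 0.002 * vol + rng.normal(0.0, 0.008, size=n)

def r_squared(X, y):
    """R^2 of an OLS fit with intercept."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_capm = r_squared(mkt, ret)                          # basic CAPM: market factor only
r2_ext = r_squared(np.column_stack([mkt, vol]), ret)   # CAPM + extra regressor
print(f"CAPM R^2 = {r2_capm:.3f}, extended R^2 = {r2_ext:.3f}")
```

For nested OLS models, R² can only rise when a regressor is added; the study's contribution is identifying which additional variables raise it meaningfully.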


Computing, 1979, Vol 22 (4), pp. 367-373
Author(s): H. Späth
