nonparametric method
Recently Published Documents


TOTAL DOCUMENTS

245
(FIVE YEARS 55)

H-INDEX

22
(FIVE YEARS 2)

2021 ◽  
Vol 15 (1) ◽  
pp. 280-288
Author(s):  
Mahdi Rezapour ◽  
Khaled Ksaibati

Background: Kernel-based methods have gained popularity because the distribution of a model’s residuals may not follow any classical parametric distribution. Kernel-based methods have been extended to estimate conditional densities, rather than conditional distributions, when the data incorporate both discrete and continuous attributes. These methods typically rely on smoothing parameters, with optimal values chosen for the various attributes. Thus, if an explanatory variable is independent of the dependent variable, the nonparametric method effectively drops that attribute by assigning it a large smoothing parameter, smoothing its kernel toward a uniform distribution so that its contribution to the model’s variance is minimal. Objectives: The objective of this study was to identify factors contributing to the severity of pedestrian crashes using an unbiased method. In particular, this study evaluated the applicability of semi- and nonparametric kernel-based techniques to the crash dataset by means of confusion matrices. Methods: In this study, two kernel-based methods, one nonparametric and one semi-parametric, were implemented to model the severity of pedestrian crashes. Estimation of the semi-parametric densities is based on adaptive local smoothing and maximization of the quasi-likelihood function, which is somewhat similar to the likelihood of the binary logit model. The nonparametric method, on the other hand, is based on selecting optimal smoothing parameters for estimating the conditional probability density function by minimizing the mean integrated squared error (MISE). The performance of these models was evaluated by their predictive power. As a benchmark for comparison, standard logistic regression was also employed. Although these methods have been used in other fields, this is one of the earliest studies to apply them in the context of traffic safety.
Results: The results highlighted that the nonparametric kernel-based method outperforms both the semi-parametric (single-index) model and the standard logit model based on the confusion matrices. To examine how the bandwidth selection method removes irrelevant attributes in the nonparametric approach, we added noisy predictors to the models and compared the results. The methodological approach of the models is discussed extensively in this study. Conclusion: In summary, alcohol and drug involvement, driving on a non-level grade, and poor lighting conditions are among the factors that increase the severity of pedestrian crashes. This is one of the earliest studies to apply these methods to transportation problems. The nonparametric method is especially recommended in the field of traffic safety when there is uncertainty about the importance of predictors, as the technique automatically drops unimportant ones.
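The bandwidth-based variable dropping described above can be sketched with a product Gaussian kernel: a very large bandwidth on one attribute makes its kernel essentially flat, so that attribute cancels out of the conditional estimate. This is a simplified, illustrative stand-in for the paper's MISE-optimal bandwidth selection; the data and bandwidth values are synthetic assumptions, not from the study.

```python
# Nadaraya-Watson estimate of P(y=1 | x) with per-variable bandwidths,
# showing that a huge bandwidth on an irrelevant predictor removes it.
import numpy as np

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def conditional_prob(y_train, x_train, x_query, bandwidths):
    """Kernel-weighted average of y at x_query (product kernel)."""
    u = (x_train - x_query) / bandwidths           # (n, d)
    w = np.prod(gaussian_kernel(u), axis=1)        # (n,)
    return float(np.sum(w * y_train) / np.sum(w))

rng = np.random.default_rng(0)
n = 2000
x1 = rng.normal(size=n)                # relevant predictor
x2 = rng.normal(size=n)                # irrelevant noise predictor
y = (x1 + 0.3 * rng.normal(size=n) > 0).astype(float)
X = np.column_stack([x1, x2])

query = np.array([1.0, 0.0])
# Moderate bandwidths on both variables vs. a huge bandwidth on x2:
# as h2 grows, the kernel over x2 flattens and x2 drops out.
p_both = conditional_prob(y, X, query, np.array([0.3, 0.3]))
p_drop = conditional_prob(y, X, query, np.array([0.3, 1e6]))
p_x1_only = conditional_prob(y, x1[:, None], query[:1], np.array([0.3]))
```

With the huge bandwidth, the estimate matches the one computed without x2 at all, which is the mechanism the abstract describes for discarding predictors independent of the response.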


Author(s):  
Chady Ghnatios ◽  
Anais Barasinski

A nonparametric method for assessing the error and variability margins of solutions expressed in separated form, using experimental results, is illustrated in this work. The method assesses the total variability of the solution, including the modeling error and the truncation error, when experimental results are available. It is based on PGD separated-form solutions, enriched by transforming part of the PGD basis vectors into probabilistic ones. The constructed probabilistic vectors are restricted to the Stiefel manifold of the physical solution. The result is a real-time parametric PGD solution enhanced with the solution’s variability and confidence intervals.
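A "separated form" writes a multidimensional field as a finite sum of products of one-dimensional modes. The sketch below builds such a representation from a truncated SVD of a synthetic space-time field; the orthonormal spatial modes form a point on a Stiefel manifold, the structure the abstract refers to. This is an illustrative stand-in under assumed data, not the authors' PGD solver.

```python
# Separated-form approximation u(x, t) ~ sum_i X_i(x) T_i(t) via SVD.
import numpy as np

x = np.linspace(0.0, 1.0, 60)
t = np.linspace(0.0, 1.0, 50)
# A smooth synthetic field that is exactly a sum of two separated terms.
U = np.outer(np.sin(np.pi * x), np.exp(-t)) + 0.1 * np.outer(x ** 2, t)

# Truncated SVD supplies the separated modes.
P, s, Qt = np.linalg.svd(U, full_matrices=False)
rank = 2
U_sep = (P[:, :rank] * s[:rank]) @ Qt[:rank, :]

# The spatial modes are orthonormal columns: P[:, :rank] is a point on
# the Stiefel manifold St(60, 2).
gram = P[:, :rank].T @ P[:, :rank]
trunc_error = np.linalg.norm(U - U_sep) / np.linalg.norm(U)
```

Perturbing such modes while keeping the columns orthonormal is the kind of constrained (manifold-restricted) randomization the abstract describes for building probabilistic basis vectors.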


PLoS Genetics ◽  
2021 ◽  
Vol 17 (7) ◽  
pp. e1009697
Author(s):  
Geyu Zhou ◽  
Hongyu Zhao

Genetic prediction of complex traits has great promise for disease prevention, monitoring, and treatment. The development of accurate risk prediction models is hindered by the wide diversity of genetic architecture across traits, limited access to individual-level data for training and parameter tuning, and the demand for computational resources. To overcome the limitations of most existing methods, which make explicit assumptions about the underlying genetic architecture and need a separate validation dataset for parameter tuning, we developed a summary-statistics-based nonparametric method that does not rely on validation datasets to tune parameters. In our implementation, we refine the commonly used likelihood assumption to deal with the discrepancy between the summary statistics and the external reference panel. We also leverage the block structure of the reference linkage disequilibrium matrix to implement a parallel algorithm. Through simulations and applications to twelve traits, we show that our method is adaptive to different genetic architectures, statistically robust, and computationally efficient. Our method is available at https://github.com/eldronzhou/SDPR.
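The parallelism the abstract mentions comes from the linkage disequilibrium (LD) matrix being block-diagonal: computations on one block do not touch any other, so blocks can be processed independently. The sketch below demonstrates that property with a toy ridge-type per-block solve on random data; SDPR's actual nonparametric update is not reproduced here, and all matrices are synthetic assumptions.

```python
# Independent per-block solves on a block-diagonal LD matrix agree with
# the solve on the full matrix, which is what enables parallelism.
import numpy as np

rng = np.random.default_rng(1)

def random_ld_block(m):
    """A synthetic correlation-like LD block."""
    A = rng.normal(size=(m, m))
    R = A @ A.T / m
    d = np.sqrt(np.diag(R))
    return R / np.outer(d, d)

blocks = [random_ld_block(m) for m in (4, 6, 5)]
z = [rng.normal(size=b.shape[0]) for b in blocks]   # toy z-scores
lam = 0.1

# Per-block solves (each could run in its own worker process).
beta_blocks = [np.linalg.solve(R + lam * np.eye(R.shape[0]), zi)
               for R, zi in zip(blocks, z)]

# Assemble the full block-diagonal system for comparison.
n = sum(b.shape[0] for b in blocks)
R_full = np.zeros((n, n))
off = 0
for b in blocks:
    m = b.shape[0]
    R_full[off:off + m, off:off + m] = b
    off += m
beta_full = np.linalg.solve(R_full + lam * np.eye(n), np.concatenate(z))
beta_cat = np.concatenate(beta_blocks)
```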


2021 ◽  
Vol 3 (1) ◽  
pp. 47-54
Author(s):  
Nor Adilah Mohamad Nor Azman ◽  
Nor Aishah Ahad ◽  
Friday Zinzendoff Okwonu

The Moses test is a nonparametric method for testing the equality of two dispersion parameters. It does not assume equality of location parameters, which gives the test wider applicability. However, the test is inefficient in that different people applying it will obtain different values because of its random sub-division step: one sub-division may lead to significant results where another does not. To overcome this lack of uniqueness, this study proposes replacing the random selection of observations for the subsamples with a ranking procedure, yielding a unique result for each data set. The original and modified Moses tests were applied to the same data set. The findings show that both tests give similar results in terms of decision and conclusion. The analysis also revealed that the modified Moses test based on the ranking approach has a smaller sum of squared values than the original Moses test, so the variability of the data within each subsample is reduced as well. The ranking approach can thus replace the random procedure for selecting subsample observations and overcome the uniqueness problem in the test statistic.
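The contrast between the two sub-division schemes can be sketched as follows: subsamples of size k are formed either by random permutation (the classical Moses test, which changes from run to run) or by rank order (the deterministic modification), and a rank test is then applied to the within-subsample dispersion scores. The group data, subsample size, and use of the Mann-Whitney U test on the scores are illustrative assumptions, not the paper's exact data or procedure.

```python
# Moses-style dispersion scores: sum of squared deviations within
# subsamples of size k, formed randomly or by ranking.
import numpy as np
from scipy.stats import mannwhitneyu

def subsample_ss(values, k, rng=None):
    """Within-subsample sums of squared deviations."""
    v = np.asarray(values, dtype=float)
    if rng is not None:
        v = rng.permutation(v)       # classical: random sub-division
    else:
        v = np.sort(v)               # modified: deterministic ranking
    n_groups = len(v) // k
    v = v[: n_groups * k].reshape(n_groups, k)
    return ((v - v.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, size=30)    # smaller dispersion
b = rng.normal(0.0, 3.0, size=30)    # larger dispersion
k = 3

# Modified (ranking) version: identical scores on every run.
ss_a = subsample_ss(a, k)
ss_b = subsample_ss(b, k)
stat, p_value = mannwhitneyu(ss_a, ss_b, alternative="two-sided")

# Classical version for comparison: depends on the random permutation.
ss_a_random = subsample_ss(a, k, rng)
```

Because sorting is deterministic, repeated applications of the modified version produce the same statistic, which is exactly the uniqueness property the abstract argues for.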


2021 ◽  
Vol 28 (1) ◽  
pp. 121-134
Author(s):  
HIRLAV COSTIN

The Căliman Mountains are the highest volcanic mountains in Romania, positioned on the western side of the Eastern Carpathians and bordered by the central strip of the Eastern Carpathians to the north and east, the Harghita Mountains to the south, and the Transylvanian Depression to the west. This positioning gives the water drainage special features, with both spatial and temporal differentiation. This paper analyzed the trend of average drainage from rivers in the studied group for the period 1950-2010, multi-annually, seasonally, and for the extreme months: the month with the lowest flows (January) and the month with the highest (May). To evaluate these parameters, we used the Excel MAKESENS template (Mann-Kendall test for trend and Sen’s slope estimates), which identifies the type of drainage trend (positive or negative), together with the nonparametric Sen method to estimate the slope of the trend. Based on the type of trend, nine trend classes were defined, and the slope yielded the net change rate.
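The two statistics named above are simple to state: the Mann-Kendall S statistic sums the signs of all pairwise differences (positive S indicates an increasing trend), and Sen's slope is the median of all pairwise slopes. A minimal sketch on a synthetic series (no tie correction, illustrative data only):

```python
# Mann-Kendall trend statistic and Sen's slope estimator.
import numpy as np

def mann_kendall_s(series):
    """Mann-Kendall S: sum of sign(x_j - x_i) over all pairs j > i."""
    x = np.asarray(series, dtype=float)
    s = 0.0
    for i in range(len(x) - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    return int(s)

def sens_slope(series):
    """Sen's slope: median of all pairwise slopes (x_j - x_i)/(j - i)."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    slopes = [(x[j] - x[i]) / (j - i)
              for i in range(n - 1) for j in range(i + 1, n)]
    return float(np.median(slopes))

# A series with a known positive trend (0.5 per step) plus noise.
rng = np.random.default_rng(3)
y = 0.5 * np.arange(20) + rng.normal(0.0, 0.5, size=20)
s_stat = mann_kendall_s(y)
slope = sens_slope(y)
```

A positive S with a slope near the true 0.5 is what MAKESENS would report as a positive trend with its estimated rate of change.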


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Chao Lu ◽  
Haifang Cheng

Data envelopment analysis (DEA) is a nonparametric method for evaluating the relative efficiency of a set of decision-making units (DMUs) with multiple inputs and outputs. As an extension of DEA, the multiplicative two-stage DEA model has been widely used to measure the efficiencies of two-stage systems, where the first stage uses inputs to produce outputs, and the second stage then uses the first-stage outputs as inputs to generate its own outputs. The main deficiency of the multiplicative two-stage DEA model is that the decomposition of the overall efficiency may not be unique because of the presence of alternate optima. To remove this flexibility in the decomposition, in this paper we introduce two secondary goals into the multiplicative two-stage DEA model: one maximizes the sum of the two stage efficiencies, and the other simultaneously maximizes both stage efficiencies, each selecting a decomposition of the overall efficiency from among the flexible decompositions. The proposed models are applied to evaluate the performance of 10 branches of China Construction Bank, and the results are compared with those of existing models.
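The building block the two-stage model chains together is the classic input-oriented CCR model, which scores each DMU by a linear program in multiplier form: maximize the weighted output of the DMU subject to its weighted input equaling one and no DMU exceeding efficiency one. A minimal sketch with SciPy on toy inputs/outputs (not the bank-branch data):

```python
# Input-oriented CCR DEA model in multiplier form, one LP per DMU.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Efficiency of DMU o. X: (n_dmu, n_in) inputs, Y: (n_dmu, n_out) outputs."""
    n, n_in = X.shape
    n_out = Y.shape[1]
    # Decision variables: output weights u (n_out), then input weights v (n_in).
    c = np.concatenate([-Y[o], np.zeros(n_in)])              # maximize u.y_o
    A_eq = np.concatenate([np.zeros(n_out), X[o]])[None, :]  # v.x_o = 1
    A_ub = np.hstack([Y, -X])                                # u.y_j - v.x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    return -res.fun

# Four DMUs, two inputs, one unit of output each (illustrative).
X = np.array([[2.0, 3.0], [4.0, 2.0], [3.0, 5.0], [6.0, 6.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
scores = [ccr_efficiency(X, Y, o) for o in range(len(X))]
```

In the two-stage multiplicative setting, one such score is computed for each stage and the overall efficiency is their product; the paper's secondary goals then pick a particular split of that product when it is not unique.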


Author(s):  
Ying Long ◽  
Yimeng Song ◽  
Long Chen

Urban spatial structure, which is primarily defined as the spatial distribution of employment and residences, has been of lasting interest to urban economists, geographers, and planners for good reason. This paper proposes a nonparametric method that combines the Jenks natural break method and Moran’s I to identify a city’s polycentric structure using point-of-interest density. Specifically, a polycentric city consists of one main center and at least one subcenter. A qualified (sub)center should have a significantly higher density of human activity than its immediate surroundings (locally high) and a relatively higher density than all the other subareas in the city (globally high). Treating Chinese cities as the subject, we ultimately identified 70 cities with polycentric structures among 284 prefecture-level cities in China. In addition, regression analyses were conducted to reveal the predictors of polycentricity among the subjects. The regression results indicate that the total population, GDP, average wage, and urban land area of a city all significantly predict polycentricity. As a whole, this paper provides an alternative and transferable method for identifying main centers and subcenters across cities and for revealing common predictors of polycentricity. The proposed method avoids some of the potential problems of the conventional approach, such as the arbitrariness of threshold setting and sensitivity to spatial scales. It can also be replicated rather conveniently, as its input data, such as point-of-interest data, are widely available to the public, and the data’s validity can be efficiently checked by field trips or other traditional data sources, such as land-use maps or censuses.
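Global Moran's I, the spatial-autocorrelation statistic used above to test whether high-density subareas cluster, is I = (n / W) * sum_ij w_ij (x_i - x_bar)(x_j - x_bar) / sum_i (x_i - x_bar)^2. The sketch below computes it on a toy one-dimensional strip of subareas with contiguity weights; the grid, weights, and density values are illustrative assumptions, not the paper's POI data.

```python
# Global Moran's I for a vector of subarea densities.
import numpy as np

def morans_i(values, W):
    """Moran's I with spatial weight matrix W (zero diagonal)."""
    x = np.asarray(values, dtype=float)
    z = x - x.mean()
    n = len(x)
    return float(n * (z @ W @ z) / (W.sum() * (z @ z)))

# A 1-D strip of 12 subareas with neighbor (contiguity) weights.
n = 12
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0

# Clustered high densities (a center) vs. a checkerboard pattern.
clustered = np.array([9, 8, 9, 7, 1, 1, 0, 1, 1, 0, 1, 0], dtype=float)
alternating = np.array([9, 0, 9, 0, 9, 0, 9, 0, 9, 0, 9, 0], dtype=float)
i_clustered = morans_i(clustered, W)
i_alternating = morans_i(alternating, W)
```

A strongly positive I for the clustered pattern (high values adjacent to high values) is the "locally high" signal a candidate center must show, while the checkerboard yields a negative I.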


2021 ◽  
Vol 120 (3) ◽  
pp. 187a
Author(s):  
Sina Jazani ◽  
Ioannis Sgouralis ◽  
Douglas P. Shepherd ◽  
Steve Pressé
