Improvement of low-rank representation denoising model based on nonconvex penalty function

Author(s):  
Liu Chengshi ◽  
Zhao Zhigang ◽  
Li Qiang
2017 ◽  
Vol 14 (5) ◽  
pp. 1154-1164 ◽  
Author(s):  
Lin Yuan ◽  
Lin Zhu ◽  
Wei-Li Guo ◽  
Xiaobo Zhou ◽  
Youhua Zhang ◽  
...  

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Gui Yuan ◽  
Shali Huang ◽  
Jing Fu ◽  
Xinwei Jiang

Purpose
This study aims to assess the default risk of borrowers on peer-to-peer (P2P) online lending platforms. The authors propose a novel default risk classification model based on data cleaning and feature extraction, which improves risk assessment accuracy.

Design/methodology/approach
The authors use borrower data from the Lending Club and propose a risk assessment model based on low-rank representation (LRR) and discriminant analysis. First, three LRR models clean the high-dimensional borrower data by removing outliers and noise; a discriminant analysis algorithm then reduces the dimension of the cleaned data. In the dimension-reduced feature space, machine learning classifiers including the k-nearest neighbour, support vector machine and artificial neural network are used to assess and classify default risks.

Findings
The results reveal significant noise and redundancy in the borrower data. The LRR models can effectively clean such data, particularly the two LRR models with local manifold regularisation. In addition, the supervised discriminant analysis model, termed the local Fisher discriminant analysis model, extracts low-dimensional and discriminative features, which further increases the accuracy of the final risk assessment models.

Originality/value
This study proposes a novel default risk assessment model, based on data cleaning and feature extraction, for P2P online lending platforms. The proposed approach is innovative and efficient in the P2P online lending field.
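A minimal sketch of the three-step pipeline described in the abstract, under stated assumptions: singular value thresholding stands in for the LRR cleaning step, scikit-learn's LinearDiscriminantAnalysis stands in for local Fisher discriminant analysis, and the borrower data are synthetic. The classifier choices (kNN, SVM, ANN) follow the abstract, but this is not the authors' published code.

```python
# Sketch of the assessment pipeline: low-rank cleaning, supervised dimension
# reduction, then classification. Assumption: singular value thresholding is used
# here as a simplified proxy for the LRR models, and LinearDiscriminantAnalysis
# as a proxy for local Fisher discriminant analysis.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def low_rank_clean(X, tau=1.0):
    """Denoise the sample matrix by soft-thresholding its singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_clean = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_clean) @ Vt

# Synthetic stand-in for the borrower feature matrix (rows = borrowers).
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 40))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=600) > 0).astype(int)  # default label

X_clean = low_rank_clean(X, tau=2.0)                      # step 1: data cleaning
X_train, X_test, y_train, y_test = train_test_split(
    X_clean, y, test_size=0.3, random_state=0)

lda = LinearDiscriminantAnalysis(n_components=1).fit(X_train, y_train)  # step 2: reduction
Z_train, Z_test = lda.transform(X_train), lda.transform(X_test)

# Step 3: classify default risk in the reduced feature space.
for name, clf in [("kNN", KNeighborsClassifier(5)),
                  ("SVM", SVC(kernel="rbf")),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000))]:
    clf.fit(Z_train, y_train)
    print(name, accuracy_score(y_test, clf.predict(Z_test)))
```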


2020 ◽  
Vol 57 (22) ◽  
pp. 221012
Author(s):  
吕卫 Lü Wei ◽  
李德盛 Li Desheng ◽  
谭浪 Tan Lang ◽  
井佩光 Jing Peiguang ◽  
苏育挺 Su Yuting

2018 ◽  
Vol 55 (7) ◽  
pp. 071002
Author(s):  
褚晶辉 Chu Jinghui ◽  
顾慧敏 Gu Huimin ◽  
苏育挺 Su Yuting

2020 ◽  
Vol 10 ◽  
Author(s):  
Conghai Lu ◽  
Juan Wang ◽  
Jinxing Liu ◽  
Chunhou Zheng ◽  
Xiangzhen Kong ◽  
...  

2018 ◽  
Vol 27 (07) ◽  
pp. 1860013 ◽  
Author(s):  
Swair Shah ◽  
Baokun He ◽  
Crystal Maung ◽  
Haim Schweitzer

Principal Component Analysis (PCA) is a classical dimensionality reduction technique that computes a low-rank representation of the data. Recent studies have shown how to compute this low-rank representation from most of the data, excluding a small amount of outlier data. We show how to convert this problem into a graph search and describe an algorithm that solves it optimally by applying a variant of the A* algorithm to search for the outliers. The results obtained by our algorithm are optimal in terms of accuracy and are more accurate than those of the current state-of-the-art algorithms, which we show are not optimal. This comes at the cost of running time, which is typically slower than the current state of the art. We also describe a related variant of the A* algorithm that runs much faster than the optimal variant and produces a solution that is guaranteed to be near-optimal. Experiments show this variant to be more accurate than the current state of the art, with a comparable running time.
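The graph-search formulation can be sketched as follows, under stated assumptions: each state is a set of points marked as outliers, and a state is scored by the rank-k PCA reconstruction error of the remaining points. The heuristic used here is the trivial zero heuristic, so this is a plain best-first search rather than the paper's optimal A* variant; the function names pca_residual and search_outliers are illustrative, not from the paper.

```python
# Illustrative best-first search over outlier sets, scored by rank-k PCA residual.
# Assumption: zero heuristic (not the paper's tuned A* variant), toy random data.
import heapq
import itertools
import numpy as np

def pca_residual(X, k):
    """Sum of squared residuals after projecting X onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, s, _ = np.linalg.svd(Xc, full_matrices=False)
    return float(np.sum(s[k:] ** 2))

def search_outliers(X, k, m):
    """Search for m outliers whose removal minimises the rank-k residual of the rest."""
    n = X.shape[0]
    counter = itertools.count()                # tie-breaker so frozensets are never compared
    start = frozenset()
    heap = [(pca_residual(X, k), next(counter), start)]
    seen = {start}
    while heap:
        cost, _, removed = heapq.heappop(heap)
        if len(removed) == m:
            return sorted(removed), cost       # goal state: m outliers chosen
        for i in range(n):
            if i in removed:
                continue
            child = removed | {i}
            if child in seen:
                continue
            seen.add(child)
            keep = [j for j in range(n) if j not in child]
            g = pca_residual(X[keep], k)       # cost of the state; heuristic assumed zero
            heapq.heappush(heap, (g, next(counter), child))
    return [], float("inf")

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))
X[[3, 17]] += 8.0                              # plant two obvious outliers
print(search_outliers(X, k=2, m=2))            # expected to recover indices 3 and 17
```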

