Towards Repayment Prediction in Peer-to-Peer Social Lending Using Deep Learning

Mathematics ◽  
2019 ◽  
Vol 7 (11) ◽  
pp. 1041 ◽  
Author(s):  
Kim ◽  
Cho

Peer-to-Peer (P2P) lending transactions take place when lenders choose a borrower and lend money. It is important to predict whether a borrower can repay, because the lenders bear the credit risk when the borrower defaults, but it is difficult to design feature extractors for the very complex information about borrowers and loan products. In this paper, we present a deep convolutional neural network (CNN) architecture for predicting repayment in P2P social lending that extracts features automatically and improves performance. CNN is a deep learning model for classifying complex data, which extracts discriminative features automatically through convolution operations on the lending data. We classify a borrower's loan status by capturing robust features and learning the patterns. Experimental results with 5-fold cross-validation show that our method automatically extracts complex features and is effective for repayment prediction on Lending Club data. In comparison with other machine learning methods, the standard CNN achieved the highest performance at 75.86%. Exploiting various CNN models such as Inception, ResNet, and Inception-ResNet yields a state-of-the-art performance of 77.78%. We also demonstrate the quality of the features extracted by our model by projecting the samples into the feature space.
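The abstract describes running convolution operations over borrower and loan features rather than hand-designing extractors. A minimal NumPy sketch of that idea, assuming a hypothetical normalized feature vector (loan amount, interest rate, income, etc.) and randomly initialized filters standing in for learned weights:

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Valid 1-D convolution: slide each kernel over the feature vector."""
    k = kernels.shape[1]
    windows = np.stack([x[i:i + k] for i in range(len(x) - k + 1)])  # (L-k+1, k)
    return np.maximum(windows @ kernels.T + bias, 0.0)               # ReLU feature maps

rng = np.random.default_rng(0)
loan_features = rng.normal(size=16)       # hypothetical normalized borrower/loan vector
kernels = rng.normal(size=(8, 3)) * 0.1   # 8 filters of width 3 (weights would be learned)
bias = np.zeros(8)

feature_maps = conv1d(loan_features, kernels, bias)
print(feature_maps.shape)  # (14, 8): 14 positions x 8 feature detectors
```

In a full model, these feature maps would feed pooling and fully connected layers that output the repaid/default classification.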

Author(s):  
Harjanto Prabowo ◽  
Tjeng Wawan Cenggoro ◽  
Arif Budiarto ◽  
Anzaludin Samsinga Perbangsa ◽  
Hery Harjono Muljo ◽  
...  

<p>Knowledge Management (KM) systems are a core feature in facilitating intellectual growth in an organization. However, there are numerous difficulties in maintaining a reliable KM system. One of the challenges is managing knowledge materials in video format. A video file contains complex data, which makes such materials difficult to manage; without an intelligent system, managing videos for KM requires laborious effort. In this paper, an intelligent framework for a KM system, embedded with a deep learning model, is proposed. The deep learning model alleviates the heavy burden of managing video materials in the KM system. To enhance the agility of the system, a mobile-based deep learning model is utilized in the framework.</p>


The rapid rise of glaucoma, an irreversible eye disease that deteriorates human vision, has prompted academia and industry to develop novel and robust Computer-Aided Diagnosis (CAD) systems for early detection of glaucomatous eyes. Glaucoma progression is rooted in structural alterations of the retina, and it is essential for ophthalmologists to identify these at an early stage to stop the disease's progression. Fundoscopy is one of the biomedical imaging techniques used to analyze the internal structure of the retina. Recently, numerous efforts have exploited spatial-temporal features, including morphological values of the Optic Disk (OD), Optic Cup (OC), and Neuro-Retinal Rim (NRR), to perform glaucoma detection in fundus images. However, issues such as suitable pre-processing, precise Region-of-Interest segmentation, post-segmentation, and the lack of a generalized threshold limit the efficacy of most existing approaches. Furthermore, optimal segmentation of the OD and OC and removal of nerves from the OD or OC are often tedious and demand a more efficient solution, and these approaches are cumulatively computationally complex and time-consuming. As a potential alternative, deep learning techniques have gained widespread attention, especially for image analysis and vision technologies. With this motivation, the authors propose GlaucoNet, a novel Convolutional Stacked Auto-Encoder (CSAE)-assisted deep learning model for glaucoma detection and classification. Unlike classical methods, GlaucoNet applies a stacked auto-encoder with a hierarchical CNN structure to perform deep feature extraction and learning.
To adapt to the complex nature of the data and its large number of features, GlaucoNet was designed with a convolutional (CONV) layer, a max-pooling (MP) layer, and two fully connected (FC) layers: the convolutional layer performs feature extraction and learning, while max-pooling performs feature selection and reduces the spatial resolution of each feature map to avoid a large number of parameters and high computational complexity. To avoid saturation, a dropout of 0.5 is applied. MATLAB-based simulation results on the DRISHTI-GS and DRION-DB datasets confirm that the proposed GlaucoNet model outperforms other state-of-the-art neural-network-based approaches in terms of accuracy, recall, precision, F-measure, and balanced accuracy, with better overall parametric values for the GlaucoNet model.
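The abstract's MP layer and 0.5 dropout can be illustrated concretely. A minimal NumPy sketch (not the authors' MATLAB implementation; the toy 4x4 feature map is an assumption for illustration) showing how max-pooling quarters the spatial resolution and how inverted dropout zeroes half the activations during training:

```python
import numpy as np

def max_pool2d(fmap, size=2):
    """Reduce spatial resolution by keeping the max of each size x size block."""
    h, w = fmap.shape
    fmap = fmap[:h - h % size, :w - w % size]            # crop to a multiple of size
    blocks = fmap.reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

def dropout(x, rate=0.5, rng=None):
    """Training-time dropout: zero activations with probability `rate`,
    rescaling survivors so expected activation is unchanged."""
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

fmap = np.arange(16.0).reshape(4, 4)   # toy 4x4 feature map from the CONV layer
pooled = max_pool2d(fmap)              # -> 2x2: far fewer parameters downstream
print(pooled)                          # [[ 5.  7.] [13. 15.]]
dropped = dropout(pooled, rate=0.5)    # roughly half the units silenced per pass
```

Stacking CONV, MP, and the two FC layers in this order is what keeps the parameter count of the FC layers manageable.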


Author(s):  
Eric Taylor ◽  
Shashank Shekhar ◽  
Graham Taylor

How would you describe the features that a deep learning model composes if you were restricted to measuring observable behaviours? Explainable artificial intelligence (XAI) methods rely on privileged access to model architecture and parameters that is not always feasible for most users, practitioners, and regulators. Inspired by cognitive psychology research on humans, we present a case for measuring response times (RTs) of a forward pass using only the system clock as a technique for XAI. Our method applies to the growing class of models that use input-adaptive dynamic inference and we also extend our approach to standard models that are converted to dynamic inference post hoc. The experimental logic is simple: If the researcher can contrive a stimulus set where variability among input features is tightly controlled, differences in response time for those inputs can be attributed to the way the model composes those features. First, we show that RT is sensitive to difficult, complex features by comparing RTs from ObjectNet and ImageNet. Next, we make specific a priori predictions about RT for abstract features present in the SCEGRAM dataset, where object recognition in humans depends on complex intra-scene object-object relationships. Finally, we show that RT profiles bear specificity for class identity, and therefore the features that define classes. These results cast light on the model’s feature space without opening the black box.
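The experimental logic above can be sketched with a toy input-adaptive model. This is a hedged illustration, not the authors' setup: the "layers" are stand-ins that grow more confident with depth, and a scalar difficulty score replaces a real image, but it shows how the system clock alone separates easy from hard inputs when the model can exit early:

```python
import time

def early_exit_forward(x, layers, threshold=0.9):
    """Input-adaptive inference: stop at the first layer whose confidence
    exceeds `threshold`. Returns (prediction, layers_used, wall_time)."""
    start = time.perf_counter()
    label, used = None, 0
    for layer in layers:
        confidence, label = layer(x)
        used += 1
        if confidence >= threshold:
            break                       # early exit -> shorter response time
    return label, used, time.perf_counter() - start

# Toy "layers": confidence grows with depth, faster for easy (low-difficulty) inputs.
layers = [lambda x, d=depth: (min(1.0, (1.0 - x) * d / 3.0), "cat")
          for depth in (1, 2, 3, 4, 5)]

_, easy_used, easy_rt = early_exit_forward(0.1, layers)   # easy input exits at layer 3
_, hard_used, hard_rt = early_exit_forward(0.8, layers)   # hard input runs all 5 layers
print(easy_used, hard_used)
```

With stimuli whose feature variability is tightly controlled, the difference in `used` (and hence wall-clock RT) is attributable to how the model composes those features, with no access to weights or architecture.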


Author(s):  
Canyi Du ◽  
Xinyu Zhang ◽  
Rui Zhong ◽  
Feng Li ◽  
Feifei Yu ◽  
...  

Abstract Aiming at possible mechanical faults of UAV rotors during operation, this paper proposes a UAV rotor fault identification method based on interval-sampling reconstruction of vibration signals and one-dimensional convolutional neural network (1D-CNN) deep learning. First, experiments were designed to collect vibration acceleration signals of a UAV working at high speed under three states (normal, rotor damage of varying degrees, and rotor cracks of different degrees). Then, considering the powerful feature extraction and complex data analysis abilities of the 1D-CNN, an effective deep learning model for fault identification was established. During analysis, it was found that when conventional sequential sampling is used to construct the learning samples, the recognition of minor faults is not ideal: all states are identified as normal, which reduces the overall identification accuracy. To make the sample data cover the whole data-collection process as much as possible, a learning-sample processing method based on interval-sampling reconstruction of the vibration signal is therefore proposed, and it is verified that the reconstructed sample set readily reflects the global information of mechanical operation. Finally, comparison of the analysis results shows that with this method the recognition rate of the deep learning model for faults of different degrees is greatly improved, and minor faults can also be accurately identified. The results show that the 1D-CNN deep learning model, combined with the proposed interval-sampling reconstruction method, can accurately diagnose and identify UAV rotor damage faults.
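The contrast between conventional sequential sampling and interval-sampling reconstruction can be sketched directly. A minimal NumPy illustration, assuming a toy 40-point record in place of a real vibration acceleration signal (the exact reconstruction details of the paper may differ):

```python
import numpy as np

def sequential_samples(signal, length, n):
    """Conventional approach: n contiguous windows from the start of the record."""
    return np.stack([signal[i * length:(i + 1) * length] for i in range(n)])

def interval_samples(signal, length, n):
    """Interval-sampling reconstruction: each sample takes every n-th point,
    starting at a different offset, so every sample spans the whole record."""
    return np.stack([signal[offset::n][:length] for offset in range(n)])

signal = np.arange(40.0)          # stand-in for a vibration acceleration record
seq = sequential_samples(signal, length=10, n=4)
itv = interval_samples(signal, length=10, n=4)

print(seq[0])  # covers only the first quarter of the record
print(itv[0])  # strides across the entire record: 0, 4, 8, ..., 36
```

Because every reconstructed sample reaches from the start to the end of the record, each one carries global information about the machine's operation, which is why short-lived signatures of minor faults are less likely to be missed.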


2020 ◽  
Vol 27 (5) ◽  
pp. 359-369 ◽  
Author(s):  
Cheng Shi ◽  
Jiaxing Chen ◽  
Xinyue Kang ◽  
Guiling Zhao ◽  
Xingzhen Lao ◽  
...  

Protein-related interaction prediction is critical to understanding life processes, biological functions, and mechanisms of drug action. Experimental methods used to determine protein-related interactions have always been costly and inefficient. In recent years, advances in biological and medical technology have provided explosive amounts of biological and physiological data, and deep learning-based algorithms have shown great promise in extracting features and learning patterns from complex data. Deep learning has now emerged in protein research. In this review, we provide an introductory overview of deep neural network theory and its unique properties, focusing on the application of this technology to protein-related interaction prediction over the past five years, including protein-protein, protein-RNA/DNA, and protein-drug interaction prediction, among others. Finally, we discuss some of the challenges that deep learning currently faces.


2020 ◽  
Vol 13 (4) ◽  
pp. 627-640 ◽  
Author(s):  
Avinash Chandra Pandey ◽  
Dharmveer Singh Rajpoot

Background: Sentiment analysis is contextual mining of text which determines the viewpoint of users with respect to sentimental topics commonly present on social networking websites. Twitter is one of the social sites where people express their opinion about any topic in the form of tweets. These tweets can be examined using various sentiment classification methods to find the opinion of users. Traditional sentiment analysis methods use manually extracted features for opinion classification. The manual feature extraction process is a complicated task since it requires predefined sentiment lexicons. On the other hand, deep learning methods automatically extract relevant features from data; hence, they provide better performance and richer representation competency than the traditional methods. Objective: The main aim of this paper is to enhance sentiment classification accuracy and to reduce computational cost. Method: To achieve the objective, a hybrid deep learning model, based on a convolutional neural network and a bi-directional long short-term memory neural network, has been introduced. Results: The proposed sentiment classification method achieves the highest accuracy for most of the datasets. Further, the efficacy of the proposed method has been validated by statistical analysis. Conclusion: Sentiment classification accuracy can be improved by creating veracious hybrid models. Moreover, performance can also be enhanced by tuning the hyperparameters of deep learning models.
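The hybrid architecture described (convolution for local n-gram features, a bidirectional recurrence over those features) can be sketched at a small scale. A hedged NumPy illustration with random weights: the dimensions are arbitrary, and a simple tanh recurrence stands in for the LSTM cell to keep the sketch short, so this shows the data flow rather than the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(1)
T, E, F, H = 12, 8, 6, 4          # tokens, embed dim, conv filters, hidden size

x = rng.normal(size=(T, E))       # toy tweet: 12 token embedding vectors
W_conv = rng.normal(size=(F, 3, E)) * 0.1

# Convolution layer: width-3 filters over the token sequence (local n-gram features)
conv = np.stack([np.maximum(
    np.einsum('ke,fke->f', x[t:t + 3], W_conv), 0.0) for t in range(T - 2)])

# Bidirectional recurrence over the conv features (tanh RNN as an LSTM stand-in)
W_in = rng.normal(size=(H, F)) * 0.1
W_h = rng.normal(size=(H, H)) * 0.1
def run(seq):
    h = np.zeros(H)
    for f in seq:
        h = np.tanh(W_in @ f + W_h @ h)
    return h

state = np.concatenate([run(conv), run(conv[::-1])])  # forward + backward states
logits = rng.normal(size=(2, 2 * H)) @ state          # 2-class sentiment head
print(state.shape, logits.shape)
```

The design intuition is that convolution captures short phrase patterns cheaply, while the bidirectional pass lets the final representation see context on both sides of each phrase.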

