Digital inclusive finance risk prevention based on machine learning and neural network algorithms

2021 ◽  
pp. 1-11
Author(s):  
Yangyang Hao

To improve the effectiveness of digital inclusive finance risk prevention, this paper constructs a risk prevention system based on machine learning and neural network algorithms and applies dedicated data pre-processing for convolutional neural networks. The data set is processed by combining qualitative and quantitative analysis, with the index data arranged according to the shortest double Euclidean distance. Drawing on the characteristics of the different links in the overall digital inclusive finance process, a preliminary assessment of the risk-influencing factors is formed. In addition, a digital inclusive finance risk prevention model is constructed to meet the requirements of risk prevention methods, with functional modules designed on the basis of process analysis. Finally, experiments are designed to verify the performance of the constructed model, and the results show that it is effective.
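
As an illustration of the pre-processing step described above, the sketch below greedily reorders index columns so that adjacent columns have the smallest pairwise Euclidean distance before the data are passed to a CNN. The greedy ordering, the use of plain (rather than the paper's "double") Euclidean distance, and the synthetic data are all assumptions, since the paper does not publish its exact procedure.

```python
import numpy as np

def order_by_euclidean_distance(X: np.ndarray) -> np.ndarray:
    """Greedily reorder the columns of X (samples x indicators) so that
    each column is followed by its nearest unvisited neighbour."""
    # Pairwise Euclidean distances between indicator columns.
    dists = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)
    order, remaining = [0], set(range(1, X.shape[1]))
    while remaining:
        nxt = min(remaining, key=lambda j: dists[order[-1], j])
        order.append(nxt)
        remaining.remove(nxt)
    return X[:, order]

# Hypothetical example: 100 observations of 8 risk indicators.
X = np.random.rand(100, 8)
X_ordered = order_by_euclidean_distance(X)  # columns ready for CNN input
```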

2019 ◽  
Author(s):  
Longxiang Su ◽  
Chun Liu ◽  
Dongkai Li ◽  
Jie He ◽  
Fanglan Zheng ◽  
...  

BACKGROUND Heparin is one of the most commonly used medications in intensive care units. In clinical practice, the use of a weight-based heparin dosing nomogram is standard practice for the treatment of thrombosis. Recently, machine learning techniques have dramatically improved the ability of computers to provide clinical decision support and have made computer-generated, algorithm-based heparin dosing recommendations possible. OBJECTIVE The objective of this study was to predict the effects of heparin treatment using machine learning methods and to optimize heparin dosing in intensive care units based on the predictions. Patient state predictions were based on activated partial thromboplastin time in 3 ranges: subtherapeutic, normal therapeutic, and supratherapeutic. METHODS Retrospective data from 2 intensive care unit research databases (Multiparameter Intelligent Monitoring in Intensive Care III, MIMIC-III; e–Intensive Care Unit Collaborative Research Database, eICU) were used for the analysis. Candidate machine learning models (random forest, support vector machine, adaptive boosting, extreme gradient boosting, and shallow neural network) were compared in 3 patient groups to evaluate the classification performance for predicting the subtherapeutic, normal therapeutic, and supratherapeutic patient states. The model results were evaluated using precision, recall, F1 score, and accuracy. RESULTS Data from the MIMIC-III database (n=2789 patients) and from the eICU database (n=575 patients) were used. In 3-class classification, the shallow neural network algorithm performed best (F1 scores of 87.26%, 85.98%, and 87.55% for data sets 1, 2, and 3, respectively). The shallow neural network algorithm also achieved the highest F1 scores within the patient therapeutic state groups: subtherapeutic (data set 1: 79.35%; data set 2: 83.67%; data set 3: 83.33%), normal therapeutic (data set 1: 93.15%; data set 2: 87.76%; data set 3: 84.62%), and supratherapeutic (data set 1: 88.00%; data set 2: 86.54%; data set 3: 95.45%). CONCLUSIONS The most appropriate model for predicting the effects of heparin treatment was identified by comparing multiple machine learning models and can be used to further guide optimal heparin dosing. Using multicenter intensive care unit data, our study demonstrates the feasibility of predicting the outcomes of heparin treatment using data-driven methods and, thus, how machine learning–based models can be used to optimize and personalize heparin dosing to improve patient safety. Manual analysis and validation suggested that the model outperformed standard-practice heparin dosing.
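
As a rough illustration of the 3-class patient-state classification described above, the following sketch trains a shallow neural network and reports the precision, recall, and F1 metrics used in the study. The synthetic features, hidden-layer size, and scikit-learn implementation are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))    # stand-ins for dose, weight, labs, vitals
y = rng.integers(0, 3, size=1000)  # 0 = subtherapeutic, 1 = therapeutic, 2 = supratherapeutic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
      target_names=["subtherapeutic", "therapeutic", "supratherapeutic"]))
```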


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7709
Author(s):  
Serena Cerfoglio ◽  
Manuela Galli ◽  
Marco Tarabini ◽  
Filippo Bertozzi ◽  
Chiarella Sforza ◽  
...  

Nowadays, the use of wearable inertial-based systems together with machine learning methods opens new pathways to assess athletes' performance. In this paper, we developed a neural network-based approach for the estimation of the Ground Reaction Forces (GRFs) and the three-dimensional knee joint moments during the first landing phase of the Vertical Drop Jump. Data were simultaneously recorded from three commercial inertial units and an optoelectronic system during the execution of 112 jumps performed by 11 healthy participants. Data were processed and sorted to obtain a time-matched dataset, and a non-linear autoregressive neural network with external input (NARX) was implemented in Matlab. The network was trained through a train-test split technique, and its performance was evaluated in terms of Root Mean Square Error (RMSE). The network was able to estimate the time course of GRFs and joint moments with a mean RMSE of 0.02 N/kg and 0.04 N·m/kg, respectively. Despite the comparatively restricted data set and slight boundary errors, the results supported the use of the developed method to estimate joint kinetics, opening a new perspective for the development of an in-field analysis method.
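
The NARX idea can be approximated by feeding time-lagged input samples to a feedforward regressor and scoring the estimates with RMSE, as in the sketch below. The lag depth, network size, and synthetic signals are illustrative assumptions (the study implemented its network in Matlab).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(u: np.ndarray, lags: int) -> np.ndarray:
    """Stack u[t-1] ... u[t-lags] as the feature vector for each step t."""
    return np.column_stack([u[lags - k - 1 : len(u) - k - 1] for k in range(lags)])

t = np.linspace(0, 1, 500)
u = np.sin(2 * np.pi * 5 * t) + 0.05 * np.random.randn(500)  # stand-in IMU channel
y = np.roll(u, 3) ** 2                                       # stand-in GRF target

lags = 10
X, target = make_lagged(u, lags), y[lags:]
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                     random_state=0).fit(X, target)
rmse = np.sqrt(np.mean((model.predict(X) - target) ** 2))
print(f"training RMSE: {rmse:.3f}")
```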


Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. WA41-WA52 ◽  
Author(s):  
Dario Grana ◽  
Leonardo Azevedo ◽  
Mingliang Liu

Among the large variety of mathematical and computational methods for estimating reservoir properties such as facies and petrophysical variables from geophysical data, deep machine-learning algorithms have gained significant popularity for their ability to obtain accurate solutions for geophysical inverse problems in which the physical models are partially unknown. Solutions of classification and inversion problems are generally not unique, and uncertainty quantification studies are required to quantify the uncertainty in the model predictions and determine the precision of the results. Probabilistic methods, such as Monte Carlo approaches, provide a reliable approach for capturing the variability of the set of possible models that match the measured data. Here, we focused on the classification of facies from seismic data and benchmarked the performance of three different algorithms: recurrent neural network, Monte Carlo acceptance/rejection sampling, and Markov chain Monte Carlo. We tested and validated these approaches at the well locations by comparing classification predictions to the reference facies profile. The accuracy of the classification results is measured by the mismatch between the predictions and the log facies profile. Our study found that when the training data set of the neural network is large enough and the prior information about the transition probabilities of the facies in the Monte Carlo approach is not informative, machine-learning methods lead to more accurate solutions; however, the uncertainty of the solution might be underestimated. When some prior knowledge of the facies model is available, for example, from nearby wells, Monte Carlo methods provide solutions with similar accuracy to the neural network and allow a more robust quantification of the uncertainty of the solution.
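
The Monte Carlo acceptance/rejection approach benchmarked above can be illustrated as follows: candidate facies profiles are drawn from a prior Markov chain of transition probabilities and accepted when they match the reference data closely enough. The transition matrix, acceptance criterion, and threshold below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_facies, n_samples, depth = 3, 5000, 50
P = np.array([[0.80, 0.15, 0.05],   # assumed prior transition probabilities
              [0.10, 0.80, 0.10],
              [0.05, 0.15, 0.80]])

observed = rng.integers(0, n_facies, size=depth)  # stand-in reference facies log

accepted = []
for _ in range(n_samples):
    profile = [int(rng.integers(0, n_facies))]
    for _ in range(depth - 1):
        profile.append(int(rng.choice(n_facies, p=P[profile[-1]])))
    profile = np.asarray(profile)
    if np.mean(profile == observed) > 0.4:        # crude acceptance criterion
        accepted.append(profile)

print(f"accepted {len(accepted)} of {n_samples} sampled profiles")
```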


2019 ◽  
Vol 8 (6) ◽  
pp. 799 ◽  
Author(s):  
Cheng-Shyuan Rau ◽  
Shao-Chun Wu ◽  
Jung-Fang Chuang ◽  
Chun-Ying Huang ◽  
Hang-Tsung Liu ◽  
...  

Background: We aimed to build a machine learning model for the prediction of survival in trauma patients and compared its predictions with those of the most commonly used algorithm, the Trauma and Injury Severity Score (TRISS). Methods: Enrolled hospitalized trauma patients from 2009 to 2016 were divided into a training dataset (70% of the original data set) for generation of a plausible model under supervised classification and a test dataset (30% of the original data set) to test the performance of the model. The training and test datasets comprised 13,208 (12,871 survival and 337 mortality) and 5603 (5473 survival and 130 mortality) patients, respectively. With the provision of additional information such as pre-existing comorbidity status and laboratory data, logistic regression (LR), support vector machine (SVM), and neural network (NN; with the Stuttgart Neural Network Simulator, RSNNS) models were built for survival prediction and compared with the predictive performance of TRISS. Predictive performance was evaluated by accuracy, sensitivity, and specificity, as well as by the area under the curve (AUC) of receiver operating characteristic curves. Results: In the validation dataset, NN and TRISS presented the highest balanced accuracy (82.0%), followed by the SVM (75.2%) and LR (71.8%) models. In the test dataset, NN had the highest balanced accuracy (75.1%), followed by the SVM (70.6%), TRISS (70.2%), and LR (68.9%) models. All four models (LR, SVM, NN, and TRISS) exhibited a high accuracy of more than 97.5% and a sensitivity of more than 98.6%. However, NN exhibited the highest specificity (51.5%), followed by the TRISS (41.5%), SVM (40.8%), and LR (38.5%) models. Conclusions: The four models exhibited similarly high accuracy and sensitivity in predicting the survival of trauma patients. In the test dataset, the NN model had the highest balanced accuracy and predictive specificity.
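
For reference, the sketch below shows how the reported metrics relate on a class-imbalanced test set such as this one (5473 survivors, 130 deaths): high overall accuracy and sensitivity can coexist with modest specificity, which is why balanced accuracy is informative here. The misclassification counts are made up for illustration.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1] * 5473 + [0] * 130)  # 1 = survival, 0 = mortality
y_pred = y_true.copy()
y_pred[:60] = 0                            # 60 survivors misclassified (made up)
y_pred[-63:] = 1                           # 63 deaths misclassified (made up)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)               # survival correctly predicted
specificity = tn / (tn + fp)               # mortality correctly predicted
print(f"accuracy          {(tp + tn) / len(y_true):.3f}")
print(f"sensitivity       {sensitivity:.3f}")
print(f"specificity       {specificity:.3f}")
print(f"balanced accuracy {(sensitivity + specificity) / 2:.3f}")
```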


2020 ◽  
Vol 10 (6) ◽  
pp. 1999 ◽  
Author(s):  
Milica M. Badža ◽  
Marko Č. Barjaktarović

The classification of brain tumors is performed by biopsy, which is not usually conducted before definitive brain surgery. Improvements in technology and machine learning can help radiologists in tumor diagnostics without invasive measures. A machine-learning algorithm that has achieved substantial results in image segmentation and classification is the convolutional neural network (CNN). We present a new CNN architecture for the classification of three brain tumor types. The developed network is simpler than existing pre-trained networks, and it was tested on T1-weighted contrast-enhanced magnetic resonance images. The performance of the network was evaluated using four approaches: combinations of two 10-fold cross-validation methods and two databases. The generalization capability of the network was tested with one of the 10-fold methods, subject-wise cross-validation, and the improvement from using an augmented image database was also tested. The best 10-fold cross-validation result was obtained with record-wise cross-validation on the augmented data set, with an accuracy of 96.56%. With its good generalization capability and execution speed, the newly developed CNN architecture could be used as an effective decision-support tool for radiologists in medical diagnostics.
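
The distinction between the two 10-fold protocols mentioned above can be made concrete: record-wise cross-validation may place images from the same patient in both the training and test folds, whereas subject-wise cross-validation keeps each patient's images together, giving a stricter test of generalization. A minimal sketch with synthetic group labels:

```python
import numpy as np
from sklearn.model_selection import KFold, GroupKFold

n_images = 200
X = np.random.rand(n_images, 64)          # stand-in image features
y = np.random.randint(0, 3, n_images)     # three tumor classes
patients = np.repeat(np.arange(20), 10)   # 20 patients, 10 images each

for name, splitter in [("record-wise", KFold(n_splits=10, shuffle=True, random_state=0)),
                       ("subject-wise", GroupKFold(n_splits=10))]:
    overlaps = sum(len(set(patients[tr]) & set(patients[te]))
                   for tr, te in splitter.split(X, y, groups=patients))
    print(f"{name}: {overlaps} patients shared between train and test folds")
```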


Author(s):  
Christoph Böhm ◽  
Jan H. Schween ◽  
Mark Reyers ◽  
Benedikt Maier ◽  
Ulrich Löhnert ◽  
...  

In many hyper-arid ecosystems, such as the Atacama Desert, fog is the most important fresh water source. To study biological and geological processes in such water-limited regions, knowledge about the spatio-temporal distribution and variability of fog presence is necessary. In this study, in-situ measurements provided by a network of climate stations equipped, inter alia, with leaf wetness sensors are utilized to create a reference fog data set which enables the validation of satellite-based fog retrieval methods. Further, a new satellite-based fog detection approach is introduced which uses brightness temperatures measured by the Moderate Resolution Imaging Spectroradiometer (MODIS) as input for a neural network. Such a machine learning technique can exploit all spectral information of the satellite data and represent potential non-linear relationships. Compared to a second fog detection approach based on MODIS cloud top height retrievals, the neural network reaches a higher detection skill (Heidke skill score of 0.56 compared to 0.49). The neural network also provides a suitable representation of temporal variability on subseasonal time scales, with correlations mostly greater than 0.7 between its fog occurrence time series and the reference data for individual climate stations. Furthermore, the approach shows suitable spatial representativity for expanding the application to the whole region: three-year averages of fog frequencies reveal similar spatial patterns for the austral winter season for both approaches. However, differences are found for the summer season, and potential reasons are discussed.
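
For reference, the Heidke skill score quoted above can be computed from a 2x2 contingency table of fog forecasts against the station reference, as in the sketch below; the counts are illustrative, not the study's.

```python
def heidke_skill_score(hits: int, misses: int,
                       false_alarms: int, correct_neg: int) -> float:
    """HSS = (correct - expected_by_chance) / (n - expected_by_chance)."""
    n = hits + misses + false_alarms + correct_neg
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_neg + misses) * (correct_neg + false_alarms)) / n
    return (hits + correct_neg - expected) / (n - expected)

# Illustrative counts for fog detected vs. observed at the stations.
print(f"HSS: {heidke_skill_score(120, 40, 50, 790):.2f}")
```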


In the modeling of complex systems, the manual creation and maintenance of appropriate behavior models is a key problem, and behavior modeling using machine learning has proven successful in modeling and simulation. This paper presents artificial neural network (ANN) modeling of a transmission line carrying a frequency-varying signal using machine learning. The work uses proper orthogonal decomposition (POD) based reduced-order modeling. Snapshot sets of a complex mathematical model of the nonlinear transmission line, as well as of a linear model, are obtained at different time intervals. These snapshot sets are arranged in matrix form separately for the nonlinear and linear models, and the POD method is applied to each matrix. This reduces the order of the matrix, which is then used as the input and output data set for neural network training. The trained neural network model has been verified using different untrained data sets. The proposed algorithm determines the dimension of the interpolation space, leading to a considerable decrease in computational expense. It does not impose any constraints on the topology of the circuit or the kind of nonlinear components and is therefore applicable to general nonlinear systems.
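
The POD step described above amounts to a truncated SVD of the snapshot matrix: the leading left singular vectors form a reduced basis, and the projected coefficients serve as training data for the neural network. A minimal sketch, with the sizes and energy threshold chosen for illustration:

```python
import numpy as np

n_states, n_snapshots = 200, 60
snapshots = np.random.rand(n_states, n_snapshots)  # stand-in node-voltage snapshots

# Truncated SVD of the snapshot matrix gives the POD basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1        # modes capturing 99.9% energy

basis = U[:, :r]                  # reduced-order basis (n_states x r)
reduced = basis.T @ snapshots     # r x n_snapshots coefficients for NN training
print(f"reduced order: {r} of {n_states} states")
```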


Images are the fastest-growing content on the internet and contribute significantly to the amount of data generated every day. Image classification is a challenging problem that social media companies work on vigorously to enhance the user's experience with the interface. Recent advances in machine learning and computer vision enable personalized suggestions and automatic tagging of images. Convolutional neural networks (CNNs) are a hot research topic in the field of machine learning. With the help of the immense amounts of labelled data available on the internet, the networks can be trained to recognize the differentiating features among images under the same label. New neural network algorithms are developed frequently and outperform state-of-the-art machine learning algorithms; recent algorithms have managed to produce error rates as low as 3.1%. In this paper, the architectures of important CNN algorithms that have gained attention are discussed, analyzed, and compared, and the concept of transfer learning is used to classify different breeds of dogs.
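
A typical transfer-learning setup of the kind discussed above freezes a pre-trained convolutional base and trains only a new classification head for the target classes. The sketch below uses MobileNetV2 and 120 dog breeds purely as assumptions for illustration:

```python
import tensorflow as tf

num_breeds = 120
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                     # freeze the pre-trained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(num_breeds, activation="softmax"),  # new head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```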


2017 ◽  
Author(s):  
Luís Dias ◽  
Rosalvo Neto

In November 2015, Google released TensorFlow, an open-source machine learning framework that can be used to implement deep neural network algorithms, a class of algorithms that shows great potential for solving complex problems. Considering the importance of usability to software success, this research performs a usability analysis of TensorFlow and compares it with another widely used framework, R. The evaluation was performed through usability tests with university students. The study indicates that TensorFlow's usability is equal to or better than that of traditional frameworks used by the scientific community.


2019 ◽  
Vol 12 (3) ◽  
pp. 26-35
Author(s):  
Ali Sharifi

Introduction: Breast cancer is the most prevalent cause of cancer mortality among women, and early diagnosis gives patients greater survival time. The present study aims to provide an algorithm for more accurate prediction and more effective decision-making in the treatment of patients with breast cancer. Methods: The study was applied and descriptive-analytical, based on the use of computerized methods. We obtained 699 independent records containing nine clinical variables from the UCI machine learning repository. The EM algorithm was used to analyze the data before normalizing them. Following that, a neural network model based on a multilayer perceptron structure, combined with the Whale Optimization Algorithm (WOA), was used to predict breast tumor malignancy. Results: After preprocessing the disease data set and reducing the data dimensions, the accuracy of the proposed algorithm on the training and testing data was 99.6% and 99%, respectively. The prediction accuracy of the proposed model was 99.4%, a satisfying result compared with the machine learning methods of other studies. Conclusion: Considering the importance of early diagnosis of breast cancer, the results of this study may have highly useful implications for health care providers and planners in achieving early diagnosis of the disease.
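
A minimal sketch of the pipeline described above, with normalization followed by a multilayer perceptron classifier; plain gradient-based training stands in for the paper's WOA-based optimization, and the data are synthetic placeholders for the 699 UCI records:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.uniform(1, 10, size=(699, 9))     # nine clinical variables (synthetic)
y = (X.mean(axis=1) > 5.5).astype(int)    # 0 = benign, 1 = malignant (synthetic rule)

X = MinMaxScaler().fit_transform(X)       # normalization step
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

mlp = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=42)
mlp.fit(X_tr, y_tr)                       # gradient-based stand-in for WOA tuning
print(f"test accuracy: {accuracy_score(y_te, mlp.predict(X_te)):.3f}")
```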

