An Approach for Variable Selection and Prediction Model for Estimating the Risk-Based Capital (RBC) Based on Machine Learning Algorithms

Risks ◽  
2022 ◽  
Vol 10 (1) ◽  
pp. 13
Author(s):  
Jaewon Park ◽  
Minsoo Shin

The risk-based capital (RBC) ratio, a measure of an insurance company’s financial soundness, evaluates the capital adequacy needed to withstand unexpected losses. Continuous institutional improvements have therefore been made to monitor the financial solvency of companies and protect consumers’ rights, and improvements to solvency systems have been researched. The primary purpose of this study is to find a set of important predictors for estimating the RBC ratio of life insurance companies from a large number of variables (1891), which include key finance and management indices collected quarterly from all Korean insurers under regulations for transparent management information. This study employs a combination of machine learning techniques: Random Forest algorithms and a Bayesian Regularized Neural Network (BRNN). The combination of Random Forest and BRNN predicts the next period’s RBC ratio better than the conventional statistical method, ordinary least-squares regression (OLS). From these machine learning results, a set of important predictors is found within three categories: liabilities and expenses, other financial predictors, and predictors from business performance. The dataset covers 23 companies with 1891 variables from March 2008 to December 2018, updated quarterly.
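As a rough illustration of this two-stage idea, the sketch below (on synthetic data, with illustrative thresholds that are not the authors') screens a wide predictor set with random-forest importances and then compares OLS against a small L2-regularized neural network, used here as a stand-in for the BRNN since scikit-learn does not provide a Bayesian regularized network:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for the quarterly panel: many candidate predictors, one target (next-period RBC ratio).
X, y = make_regression(n_samples=400, n_features=200, n_informative=15, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Stage 1: screen variables with random-forest importances (the cutoff of 30 is illustrative).
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)
keep = np.argsort(rf.feature_importances_)[::-1][:30]

# Stage 2: fit OLS and a regularized neural network on the reduced predictor set.
ols = LinearRegression().fit(X_train[:, keep], y_train)
nn = MLPRegressor(hidden_layer_sizes=(20,), alpha=1.0, max_iter=5000,
                  random_state=0).fit(X_train[:, keep], y_train)

for name, model in [("OLS", ols), ("NN (L2-regularized)", nn)]:
    rmse = mean_squared_error(y_test, model.predict(X_test[:, keep])) ** 0.5
    print(f"{name}: out-of-sample RMSE = {rmse:.2f}")
```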

Risks ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 32
Author(s):  
Jaewon Park ◽  
Minsoo Shin ◽  
Wookjae Heo

The purpose of this study is to find the most important variables for projecting the Bank for International Settlements (BIS) capital adequacy ratio, a comprehensive and important measure of capital adequacy and an index of a bank’s financial soundness. This study analyzed the past 12 years of data from all domestic banks in South Korea. The research data include all financial information, such as key operating indicators, major business activities, and general information from the Financial Supervisory Service of South Korea from 2008 to 2019. Machine learning techniques were utilized: Random Forest Boruta algorithms, Random Forest Recursive Feature Elimination, and Bayesian Regularization Neural Networks (BRNN). Among 1929 variables, this study found the 38 most important variables for representing the BIS capital adequacy ratio. An additional comparison was executed to confirm the statistical validity of future prediction performance between BRNN and ordinary least squares (OLS) models. BRNN predicted the BIS capital adequacy ratio more robustly and accurately than the OLS models. We believe these findings will be of interest to policymakers, managers, and practitioners in bank-related fields, as the study highlights key findings from a data-driven approach using machine learning techniques.
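A minimal sketch of the Random Forest Recursive Feature Elimination step, using scikit-learn's RFE on synthetic data (the Boruta step and the BRNN are omitted since they are not part of scikit-learn; the feature count of 38 simply mirrors the abstract):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE

# Synthetic stand-in for the bank panel: a wide feature set with the BIS ratio as target.
X, y = make_regression(n_samples=300, n_features=150, n_informative=20, noise=5.0, random_state=1)

# Random Forest Recursive Feature Elimination: repeatedly drop the weakest
# features (10 per round) until the requested number remain.
selector = RFE(estimator=RandomForestRegressor(n_estimators=200, random_state=1),
               n_features_to_select=38, step=10)
selector.fit(X, y)

selected = [i for i, kept in enumerate(selector.support_) if kept]
print(f"{len(selected)} features retained:", selected[:10], "...")
```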


Author(s):  
Helper Zhou ◽  
Victor Gumbo

The emergence of machine learning algorithms presents the opportunity for a variety of stakeholders to perform advanced predictive analytics and to make informed decisions. However, to date there have been few studies in developing countries that evaluate the performance of such algorithms—with the result that pertinent stakeholders lack an informed basis for selecting appropriate techniques for modelling tasks. This study aims to address this gap by evaluating the performance of three machine learning techniques: ordinary least squares (OLS), least absolute shrinkage and selection operator (LASSO), and artificial neural networks (ANNs). These techniques are evaluated in respect of their ability to perform predictive modelling of the sales performance of small, medium and micro enterprises (SMMEs) engaged in manufacturing. The evaluation finds that the ANNs algorithm’s performance is far superior to that of the other two techniques, OLS and LASSO, in predicting the SMMEs’ sales performance.
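A hedged sketch of the three-way comparison on synthetic sales data, with illustrative hyperparameters; scikit-learn's MLPRegressor stands in for the ANNs evaluated in the study:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for SMME data: firm-level predictors and a sales target.
X, y = make_regression(n_samples=500, n_features=20, n_informative=10, noise=15.0, random_state=2)

models = {
    "OLS": LinearRegression(),
    "LASSO": make_pipeline(StandardScaler(), Lasso(alpha=0.5)),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000, random_state=2)),
}

# Compare the three techniques by cross-validated R^2.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {scores.mean():.3f}")
```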


2021 ◽  
Author(s):  
Rakesh Kumar Saroj ◽  
Pawan Kumar Yadav ◽  
Rajneesh Singh ◽  
Obvious Nchimunya Chilyabanyama

Background: The death rate of under-five children in India has declined over the last few decades, but a few larger states still perform poorly. This is a matter of serious concern for children’s health as well as for social development. Nowadays, machine learning techniques play a crucial role in smart health care systems by capturing hidden factors and patterns in outcomes. In this paper, we used machine learning techniques to predict the important factors of under-five mortality. This study aims to explore the usefulness of machine learning techniques for predicting under-five mortality and to find the important factors that cause it. The data were taken from the National Family Health Survey-IV of Uttar Pradesh. We used four machine learning techniques, namely decision tree, support vector machine, random forest, and logistic regression, to predict under-five mortality factors and the accuracy of each model. We also used information gain ranking to identify the variables most important for accurate predictions on the under-five mortality data. Result: Random Forest (RF) predicted the child mortality factors with the highest accuracy of 97.5%, and the number of living children, births in the last five years, educational level, birth order, total children ever born, current breastfeeding, and size of the child at birth were identified as essential factors for under-five mortality. Conclusion: The study focuses on machine learning techniques to predict and identify important factors for under-five mortality. The random forest model provides an excellent predictive result for estimating the risk factors of under-five mortality. Based on these outcomes, policymakers can make policies and plans to reduce under-five mortality.
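A minimal sketch of this kind of evaluation, on synthetic data rather than the NFHS-IV survey: the four classifiers are compared by accuracy, and mutual information serves as an information-gain style ranking of predictors:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the survey sample: imbalanced binary outcome (death under five).
X, y = make_classification(n_samples=2000, n_features=25, n_informative=8,
                           weights=[0.9, 0.1], random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=3)

models = {
    "Decision tree": DecisionTreeClassifier(random_state=3),
    "SVM": SVC(),
    "Random forest": RandomForestClassifier(n_estimators=300, random_state=3),
    "Logistic regression": LogisticRegression(max_iter=2000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: accuracy = {accuracy_score(y_test, model.predict(X_test)):.3f}")

# Information-gain style ranking of predictors via mutual information.
mi = mutual_info_classif(X_train, y_train, random_state=3)
print("Top-ranked feature indices:", np.argsort(mi)[::-1][:7])
```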


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Kumash Kapadia ◽  
Hussein Abdel-Jaber ◽  
Fadi Thabtah ◽  
Wael Hadi

The Indian Premier League (IPL) is one of the most popular cricket tournaments in the world; its financial value is increasing each season, its viewership has increased markedly, and the betting market for the IPL is growing significantly every year. With cricket being a very dynamic game that changes ball by ball, bettors and bookies are incentivised to bet on match results. This paper investigates machine learning technology to deal with the problem of predicting cricket match results based on historical IPL match data. Influential features of the dataset were identified using filter-based methods, including Correlation-based Feature Selection, Information Gain (IG), ReliefF, and Wrapper. More importantly, machine learning techniques including Naïve Bayes, Random Forest, K-Nearest Neighbour (KNN), and Model Trees (classification via regression) were adopted to generate predictive models from the distinctive feature sets derived by the filter-based methods. Two feature subsets were formulated, one based on home-team advantage and the other based on the toss decision. The selected machine learning techniques were applied to both feature sets to determine a predictive model. Experimental tests show that tree-based models, particularly Random Forest, performed better in terms of accuracy, precision, and recall when compared to probabilistic and statistical models. However, on the toss-based feature subset, none of the considered machine learning algorithms performed well in producing accurate predictive models.
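The pipeline of filter-based selection followed by classification can be sketched as below on synthetic match-level data; mutual information stands in for the Information Gain filter, and the Model Trees variant is omitted because it has no direct scikit-learn equivalent:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for IPL match records: each row is a match, the target is the winner.
X, y = make_classification(n_samples=800, n_features=30, n_informative=10, random_state=4)

# Filter-based selection (information gain analogue) feeding each classifier.
classifiers = {
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=4),
    "KNN": KNeighborsClassifier(n_neighbors=7),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(SelectKBest(mutual_info_classif, k=10), clf)
    acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```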


Energies ◽  
2020 ◽  
Vol 13 (10) ◽  
pp. 2570
Author(s):  
Christil Pasion ◽  
Torrey Wagner ◽  
Clay Koschnick ◽  
Steven Schuldt ◽  
Jada Williams ◽  
...  

Solar energy is a key renewable energy source; however, its intermittent nature and potential for use in distributed systems make power prediction an important aspect of grid integration. This research analyzed a variety of machine learning techniques to predict power output for horizontal solar panels using 14 months of data collected from 12 northern-hemisphere locations. We performed our data collection and analysis in the absence of irradiation data, an approach not commonly found in prior literature. Using latitude, month, hour, ambient temperature, pressure, humidity, wind speed, and cloud ceiling as independent variables, a distributed random forest regression algorithm modeled the combined dataset with an R2 value of 0.94. As a comparative measure, other machine learning algorithms resulted in R2 values of 0.50–0.94. Additionally, the data from each location were modeled separately, with R2 values ranging from 0.91 to 0.97, indicating consistent performance across all sites. Using an input variable permutation approach with the random forest algorithm, we found that the three most important variables for power prediction were ambient temperature, humidity, and cloud ceiling. The analysis showed that machine learning potentially allows for accurate power prediction while avoiding the challenges associated with modeled irradiation data.
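A single-machine sketch of this workflow with scikit-learn, standing in for the distributed random forest used in the study; the weather inputs and power signal below are simulated placeholders, not the 12-site dataset:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 3000
# Simulated stand-ins for the study's inputs (no irradiation data used).
X = pd.DataFrame({
    "latitude": rng.uniform(25, 60, n),
    "month": rng.integers(1, 13, n),
    "hour": rng.integers(0, 24, n),
    "ambient_temp": rng.normal(15, 10, n),
    "pressure": rng.normal(1013, 10, n),
    "humidity": rng.uniform(0, 100, n),
    "wind_speed": rng.exponential(3, n),
    "cloud_ceiling": rng.uniform(0, 10, n),
})
# Toy power signal so the example runs end to end; real measurements would replace this.
y = (np.sin(np.pi * X["hour"] / 24) * (30 - abs(X["ambient_temp"] - 25))
     - 0.1 * X["humidity"] - 0.5 * X["cloud_ceiling"] + rng.normal(0, 2, n))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=5)
rf = RandomForestRegressor(n_estimators=300, random_state=5).fit(X_train, y_train)
print(f"R^2 on held-out data: {rf.score(X_test, y_test):.2f}")

# Permutation importance: shuffle each input and measure the drop in R^2.
imp = permutation_importance(rf, X_test, y_test, n_repeats=10, random_state=5)
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1])[:3]:
    print(f"{name}: importance = {score:.3f}")
```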


A real-time crash predictor system determines both the frequency and the severity of crashes. Nowadays, machine learning based methods are used to predict the total number of crashes. In this project, the prediction accuracy of machine learning algorithms such as Decision Tree (DT), K-Nearest Neighbours (KNN), Random Forest (RF), and Logistic Regression (LR) is evaluated. The performance of these classification methods is evaluated in terms of accuracy. The dataset used in this project was obtained from 49 states of the US and 27 states of India and contains 2.25 million US accident crash records and 1.16 million Indian crash records, respectively. Results show that the classification accuracy obtained from Random Forest (RF) is 96%, higher than that of the other classification methods.
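A minimal sketch of such an accuracy comparison on synthetic crash records; the US and Indian datasets are not reproduced here, and the severity classes are placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for crash records with a multi-class severity label.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=12,
                           n_classes=4, n_clusters_per_class=1, random_state=6)

classifiers = {
    "Decision tree": DecisionTreeClassifier(random_state=6),
    "KNN": KNeighborsClassifier(),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=6),
    "Logistic regression": LogisticRegression(max_iter=2000),
}
# Cross-validated accuracy for each classifier, mirroring the comparison in the study.
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```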


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Ali Soleymani ◽  
Fatemeh Arabgol

In today’s security landscape, advanced threats are becoming increasingly difficult to detect as the pattern of attacks expands. Classical approaches that rely heavily on static matching, such as blacklisting or regular expression patterns, may lack the flexibility and certainty needed to detect malicious data in system data. This is where machine learning techniques can show their value and provide new insights and higher detection rates. This research investigated the behavior of botnets that use domain-flux techniques to hide their command and control channels. It also describes the machine learning and text mining algorithms used to analyze the network DNS protocol and identify botnets. For this purpose, extracted and labeled domain name datasets containing both healthy domains and domains infected by DGA botnets were used. Data preprocessing techniques based on a text-mining approach were applied to explore domain name strings with n-gram analysis, and performance was improved by extracting statistical features with principal component analysis (PCA). The performance of the proposed model was evaluated using different machine learning classifiers, such as decision tree, support vector machine, random forest, and logistic regression. Experimental results show that the random forest algorithm can be used effectively in botnet detection and achieves the best botnet detection accuracy.
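A toy sketch of the described text-mining pipeline: character n-gram counts from domain names, PCA to compress them into statistical features, and a random forest classifier. The domain lists below are tiny illustrative placeholders, not the study's dataset:

```python
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

# Tiny illustrative samples: benign-looking vs. DGA-looking domains (placeholders).
domains = ["google.com", "wikipedia.org", "github.com", "openstreetmap.org",
           "stackoverflow.com", "example.org",
           "xjw9qk2lfa.net", "qzpd81mvyt.com", "klm2x8vqwd.info",
           "a9f3k2zq1p.biz", "wq7zk1xv9d.net", "p0o9i8u7y6.com"]
labels = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]   # 0 = benign, 1 = DGA

# Character n-grams capture the string structure of each domain name.
vectorizer = CountVectorizer(analyzer="char", ngram_range=(2, 3))
ngrams = vectorizer.fit_transform(domains).toarray()   # densify for PCA

# PCA compresses the n-gram counts into a handful of statistical features.
pca = PCA(n_components=5)
features = pca.fit_transform(ngrams)

# Random forest on the compressed features; a real evaluation would use a held-out split.
clf = RandomForestClassifier(n_estimators=200, random_state=7).fit(features, labels)
print("Training accuracy:", clf.score(features, labels))
```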


Author(s):  
Zulqarnain Khokhar ◽  
Murtaza Ahmed Siddiqi

Wi-Fi based indoor positioning, with the help of access points and smart devices, has become an integral part of finding a device’s or a person’s location. Wi-Fi based indoor localization technology has been among the most attractive fields for researchers for a number of years. In this paper, we present Wi-Fi based indoor localization using three different machine learning techniques. The three machine learning algorithms implemented and compared are the Decision Tree, Random Forest, and Gradient Boosting classifiers. After creating a fingerprint of the floor based on Wi-Fi signals, these algorithms were used to identify the device location at thirty different positions on the floor. Random Forest and the Gradient Boosting classifier were able to identify the location of the device with an accuracy higher than 90%, while the Decision Tree identified the location with an accuracy slightly above 80%.
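A hedged sketch of the fingerprint-classification step: each sample is a vector of simulated RSSI readings from the visible access points, labelled with one of thirty reference positions (the floor layout and signal model are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(8)
n_positions, n_aps, samples_per_pos = 30, 8, 40

# Simulated fingerprint database: each position has a characteristic RSSI pattern plus noise.
base = rng.uniform(-90, -40, size=(n_positions, n_aps))
X = np.vstack([base[p] + rng.normal(0, 3, size=(samples_per_pos, n_aps))
               for p in range(n_positions)])
y = np.repeat(np.arange(n_positions), samples_per_pos)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=8)
for name, clf in [("Decision Tree", DecisionTreeClassifier(random_state=8)),
                  ("Random Forest", RandomForestClassifier(n_estimators=200, random_state=8)),
                  ("Gradient Boosting", GradientBoostingClassifier(random_state=8))]:
    clf.fit(X_train, y_train)
    print(f"{name}: position accuracy = {clf.score(X_test, y_test):.2f}")
```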


2020 ◽  
Vol 50 (3) ◽  
pp. 853-871
Author(s):  
Hyukjun Gweon ◽  
Shu Li ◽  
Rogemar Mamon

To evaluate a large portfolio of variable annuity (VA) contracts, many insurance companies rely on Monte Carlo simulation, which is computationally intensive. To address this computational challenge, machine learning techniques have been adopted in recent years to estimate the fair market values (FMVs) of a large number of contracts. It is shown that bootstrapped aggregation (bagging), one of the most popular machine learning algorithms, performs well in valuing VA contracts using related attributes. In this article, we highlight the presence of prediction bias of bagging and use the bias-corrected (BC) bagging approach to reduce the bias and thus improve the predictive performance. Experimental results demonstrate the effectiveness of BC bagging as compared with bagging, boosting, and model points in terms of prediction accuracy.
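A minimal sketch of the plain bagging baseline with scikit-learn's BaggingRegressor on synthetic contract attributes; the paper's bias correction and the Monte Carlo FMV targets are not reproduced here:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in: contract attributes (age, account value, guarantees, ...) vs. a simulated FMV.
X, y = make_regression(n_samples=2000, n_features=12, n_informative=8, noise=5.0, random_state=9)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=9)

# Bagging: average many trees fitted on bootstrap resamples of the representative contracts.
bag = BaggingRegressor(n_estimators=200, random_state=9).fit(X_train, y_train)
pred = bag.predict(X_test)
print(f"MAE on held-out contracts: {mean_absolute_error(y_test, pred):.2f}")
```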


2021 ◽  
Author(s):  
Randa Natras ◽  
Michael Schmidt

The accuracy and reliability of Global Navigation Satellite System (GNSS) applications are affected by the state of the Earth's ionosphere, especially when using single-frequency observations, which are employed mostly in mass-market GNSS receivers. In addition, space weather can be the cause of strong sudden disturbances in the ionosphere, representing a major risk for GNSS performance and reliability. Accurate corrections of ionospheric effects and early warning information in the presence of space weather are therefore crucial for GNSS applications. This correction information can be obtained by employing a model that describes the complex relation of space weather processes with the non-linear spatial and temporal variability of the Vertical Total Electron Content (VTEC) within the ionosphere and includes a forecast component considering space weather events to provide an early warning system. Developing such a model is a challenging but important task and of high interest to the GNSS community.

To model the impact of space weather, a complex chain of physical dynamical processes between the Sun, the interplanetary magnetic field, the Earth's magnetic field and the ionosphere needs to be taken into account. Machine learning techniques are suitable for finding patterns and relationships from historical data to solve problems that are too complex for a traditional approach requiring an extensive set of rules (equations) or for which there is no acceptable solution available yet.

The main objective of this study is to develop a model for forecasting the ionospheric VTEC, taking into account physical processes and utilizing state-of-the-art machine learning techniques to learn complex non-linear relationships from the data. In this work, supervised learning is applied to forecast VTEC. This means that the model is provided with a set of (input) variables that have some influence on the VTEC forecast (output). To be more specific, data on solar activity, solar wind, the interplanetary and geomagnetic field, and other information connected to VTEC variability are used as input to predict VTEC values in the future. Different machine learning algorithms are applied, such as decision tree regression, random forest regression and gradient boosting. Decision trees are the simplest and easiest-to-interpret machine learning algorithms, but the forecasted VTEC lacks smoothness. On the other hand, random forest and gradient boosting use a combination of multiple regression trees, which leads to improvements in prediction accuracy and smoothness. However, the results show that the overall performance of the algorithms, measured by the root mean square error, does not differ much between them and improves when the data are well prepared, i.e. cleaned and transformed to remove trends. Preliminary results of this study will be presented, including the methodology, goals, challenges and perspectives of developing the machine learning model.
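A hedged sketch of the supervised setup described above, with simulated placeholder drivers and a toy VTEC target: decision tree, random forest and gradient boosting regressors are compared by root mean square error:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(10)
n = 4000
# Simulated stand-ins for the drivers: solar activity, solar wind, geomagnetic index, local time.
X = np.column_stack([rng.normal(120, 40, n),    # solar-flux-like index
                     rng.normal(400, 80, n),    # solar wind speed
                     rng.normal(-10, 15, n),    # geomagnetic index
                     rng.uniform(0, 24, n)])    # local time
# Toy VTEC target so the example runs; real VTEC maps would replace this.
y = 0.1 * X[:, 0] - 0.02 * X[:, 2] + 5 * np.sin(np.pi * X[:, 3] / 24) + rng.normal(0, 1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=10)
for name, model in [("Decision tree", DecisionTreeRegressor(random_state=10)),
                    ("Random forest", RandomForestRegressor(n_estimators=200, random_state=10)),
                    ("Gradient boosting", GradientBoostingRegressor(random_state=10))]:
    model.fit(X_train, y_train)
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"{name}: RMSE = {rmse:.2f}")
```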

