Management and monitoring of the displaced commercial risk: a prescriptive approach

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Othmane Touri ◽  
Rida Ahroum ◽  
Boujemâa Achchab

Purpose The displaced commercial risk (DCR) is one of the risks specific to Islamic finance, and its management is the subject of serious debate among practitioners and researchers. The purpose of this paper is to assess a new approach to managing this risk using machine learning algorithms. Design/methodology/approach To this end, the authors apply several machine learning algorithms to a set of financial data related to banks from different regions, using the deposit variation intensity as an indicator. Findings Results show acceptable prediction accuracy. The model could be used to optimize both the prudential reserves of banks and the incomes distributed to depositors. Research limitations/implications However, the model uses several variables as proxies, since data are not available for some specific indicators, such as the profit equalization reserves and the investment risk reserves. Originality/value Previous studies have analyzed the origin and impact of DCR. To the best of the authors' knowledge, none of them has provided an ex ante management tool for this risk. Furthermore, the authors suggest the use of a new approach based on machine learning algorithms.
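The design described above, training classifiers on bank-level financial data with the deposit variation intensity as the target indicator, could be sketched roughly as follows. The features, labelling rule and data are synthetic stand-ins for the paper's (unlisted) proxy variables, using scikit-learn:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 500
# Hypothetical proxy features, e.g. return on assets, gap to benchmark rate,
# past deposit growth -- stand-ins, not the authors' actual variables.
X = rng.normal(size=(n, 3))
# Label: 1 if the (synthetic) deposit variation intensity exceeds a threshold
y = ((0.8 * X[:, 1] - 0.5 * X[:, 0]
      + rng.normal(scale=0.3, size=n)) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
```

A bank could read `acc` as the kind of "acceptable prediction accuracy" the paper reports, and use the fitted model's predictions to size prudential reserves ex ante.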

2022 ◽  
Vol 301 ◽  
pp. 113868
Author(s):  
Xuan Cuong Nguyen ◽  
Thi Thanh Huyen Nguyen ◽  
Quyet V. Le ◽  
Phuoc Cuong Le ◽  
Arun Lal Srivastav ◽  
...  

2017 ◽  
Vol 117 (5) ◽  
pp. 927-945 ◽  
Author(s):  
Taehoon Ko ◽  
Je Hyuk Lee ◽  
Hyunchang Cho ◽  
Sungzoon Cho ◽  
Wounjoo Lee ◽  
...  

Purpose Quality management of products is an important part of the manufacturing process. One way to manage and assure product quality is to use machine learning algorithms based on the relationships among various process steps. The purpose of this paper is to integrate manufacturing, inspection and after-sales service data to make full use of machine learning algorithms for estimating product quality in a supervised fashion. The proposed frameworks and methods are applied to actual data associated with heavy machinery engines. Design/methodology/approach Following Lenzerini's formula, manufacturing, inspection and after-sales service data from various sources are integrated. The after-sales service data are used to label each engine as normal or abnormal. In this study, one-class classification algorithms are used because of the class imbalance problem. To address the multi-dimensionality of the time series data, the symbolic aggregate approximation (SAX) algorithm is used for data segmentation. A binary genetic algorithm-based wrapper approach is then applied to the segmented data to find the optimal feature subset. Findings By employing machine learning-based anomaly detection models, an anomaly score is calculated for each engine. Experimental results show that the proposed method can detect defective engines with a high probability before they are shipped. Originality/value Through data integration, the actual customer-perceived quality from after-sales service is linked to data from the manufacturing and inspection process. In terms of business application, data integration and machine learning-based anomaly detection can help manufacturers establish quality management policies that reflect the actual customer-perceived quality by predicting defective engines.
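The SAX segmentation and one-class anomaly-scoring steps could be illustrated with a minimal sketch. The hand-rolled SAX, signal shapes and fault pattern below are assumptions for demonstration, not the authors' pipeline, and the genetic-algorithm feature-selection step is omitted:

```python
import numpy as np
from sklearn.svm import OneClassSVM

def sax(series, n_segments=8):
    """Symbolic Aggregate approXimation: z-normalise, PAA-reduce, discretise."""
    s = (series - series.mean()) / (series.std() + 1e-8)
    paa = s.reshape(n_segments, -1).mean(axis=1)   # piecewise aggregate means
    # Breakpoints for a 4-letter alphabet under a standard normal distribution
    return np.digitize(paa, np.array([-0.67, 0.0, 0.67]))

rng = np.random.default_rng(1)
t = np.linspace(0, 6, 64)
# "Normal" engine sensor traces: a sinusoid plus noise (synthetic stand-ins)
normal = np.array([sax(np.sin(t) + rng.normal(0, 0.1, 64)) for _ in range(50)])
# "Faulty" trace: same signal with a step fault in the final quarter
faulty = sax(np.sin(t) + np.r_[np.zeros(48), 3 * np.ones(16)])

# One-class model fitted on normal engines only; lower score = more anomalous
model = OneClassSVM(nu=0.05, gamma="scale").fit(normal)
score_normal = model.decision_function(normal).mean()
score_faulty = model.decision_function(faulty.reshape(1, -1))[0]
```

The faulty engine's score falls below the normal average, which is the sense in which an anomaly score flags defective engines before shipment.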


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mona Bokharaei Nia ◽  
Mohammadali Afshar Kazemi ◽  
Changiz Valmohammadi ◽  
Ghanbar Abbaspour

Purpose The increase in the number of healthcare wearable Internet of Things (IoT) options is making it difficult for individuals, healthcare experts and physicians to find the right smart device that best matches their requirements or treatments. The purpose of this research is to propose a framework for a recommender system that advises on the best device for a patient, using machine learning algorithms and social media sentiment analysis. This approach will provide great value for patients, doctors, medical centers and hospitals by enabling them to provide the best advice and guidance in allocating the device for a particular point in the treatment process. Design/methodology/approach This data-driven approach comprises multiple stages that lead to classifying the diseases that a patient is currently facing, or is at risk of facing, by using and comparing the results of various machine learning algorithms. The proposed recommender framework then aggregates the specifications of wearable IoT devices along with the image of the wearable product, that is, the user perception extracted from social media posts by sentiment analysis. Lastly, a proposed computation based on a genetic algorithm is used to combine all the collected data and recommend a wearable IoT device for the patient. Findings The proposed conceptual framework illustrates how health record data, diseases, wearable devices, social media sentiment analysis and machine learning algorithms are interrelated to recommend the relevant wearable IoT devices for each patient. With the consultation of 15 physicians, each a specialist in their area, the proof-of-concept implementation shows an accuracy rate of up to 95% using 17 settings of machine learning algorithms over multiple disease-detection stages. Social media sentiment analysis reached 76% accuracy.
To reach the final optimized result for each patient, the proposed formula using a genetic algorithm has been tested and its results presented. Research limitations/implications The research data were limited to recommendations of the best wearable devices for five types of patient diseases. The authors could not compare the results of this research with other studies because of the novelty of the proposed framework and, as such, the lack of available relevant research. Practical implications The emerging trend of wearable IoT devices is having a significant impact on people's lifestyles, with interest in healthcare and well-being a major driver of this growth. This framework can help accelerate the transformation of smart hospitals and can assist doctors in finding and suggesting the right wearable IoT device for their patients smartly and efficiently during treatment for various diseases. Furthermore, wearable device manufacturers can use the outcome of the proposed platform to develop personalized wearable devices for patients in the future. Originality/value In this study, by considering patient health, a disease-detection algorithm, wearable and IoT social media sentiment analysis and a healthcare wearable device dataset, the authors propose and test a framework for the intelligent recommendation of wearable and IoT devices, helping healthcare professionals and patients find wearable devices with a better understanding of their demands and experiences.
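The aggregation step, combining device specifications with social-media sentiment into a recommendation score, could be sketched as below. The device names, spec features, weights and sentiment values are all hypothetical, and the paper's genetic-algorithm computation is replaced here by a simple weighted sum for illustration:

```python
import numpy as np

# Hypothetical candidate wearables (rows) and spec features (columns),
# e.g. ECG quality, battery life, comfort -- invented for this sketch.
devices = ["HeartTrack A", "PulseBand B", "VitalWatch C"]
specs = np.array([[0.9, 0.4, 0.7],
                  [0.6, 0.9, 0.8],
                  [0.8, 0.7, 0.5]])
# Assumed social-media sentiment score per device (0..1)
sentiment = np.array([0.76, 0.55, 0.82])
# Patient requirement weights produced by the disease-detection stage (assumed)
patient_need = np.array([1.0, 0.3, 0.6])

# Spec match to the patient's needs, blended with public sentiment
spec_match = specs @ patient_need / np.linalg.norm(patient_need)
score = 0.7 * spec_match + 0.3 * sentiment
best = devices[int(np.argmax(score))]
```

In the paper the blend weights themselves would be optimized (by the genetic algorithm) rather than fixed at 0.7/0.3 as here.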


2020 ◽  
Vol 8 (6) ◽  
pp. 1964-1968

Drug reviews are commonly used in the pharmaceutical industry to improve the medications given to patients. A drug review generally contains the drug name, usage, ratings and comments by patients. However, these reviews are noisy and need cleaning before both pharmacists and patients can benefit from them. To do this, we propose a new approach comprising several steps. First, we add extra parameters to the review data by applying VADER sentiment analysis to clean the review data. Then, we apply different machine learning algorithms, namely linear SVC, logistic regression, SVM, random forest and naive Bayes, to the drug review dataset. However, we found that the accuracy of these algorithms on this dataset is limited. To improve it, we apply stratified K-fold cross-validation in combination with logistic regression. With this approach, the accuracy increases to 96%.
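The stratified K-fold plus logistic regression step could be sketched as follows. The dataset is a synthetic stand-in; in the paper, the features would include the VADER sentiment scores and rating-derived fields added in the earlier steps:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Toy stand-in for the review feature matrix (VADER compound/pos/neg/neu
# scores, ratings, etc. in the actual pipeline) and its class labels.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           weights=[0.7, 0.3], random_state=0)

# Stratified folds preserve the class ratio in every train/test split,
# which matters when the review classes are imbalanced.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
mean_acc = scores.mean()
```

The per-fold scores average into a more stable accuracy estimate than a single split, which is the mechanism behind the reported improvement.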


2020 ◽  
Author(s):  
David Goretzko ◽  
Markus Bühner

Determining the number of factors is one of the most crucial decisions a researcher faces when conducting an exploratory factor analysis. As no common factor retention criterion can be regarded as generally superior, a new approach is proposed: combining extensive data simulation with state-of-the-art machine learning algorithms. First, data were simulated under a broad range of realistic conditions and three algorithms were trained using specially designed features based on the correlation matrices of the simulated data sets. Subsequently, the new approach was compared to four common factor retention criteria with regard to its accuracy in determining the correct number of factors in a large-scale simulation experiment. Sample size, variables per factor, correlations between factors, primary and cross-loadings, as well as the correct number of factors, were varied to gain comprehensive knowledge of the efficiency of our new method. A gradient boosting model outperformed all other criteria, so in a second step we improved this model by tuning several hyperparameters of the algorithm and using common retention criteria as additional features. This model reached an out-of-sample accuracy of 99.3% (the pre-trained model can be obtained from https://osf.io/mvrau/). A great advantage of this approach is the possibility to continuously extend the data basis (e.g. using ordinal data) as well as the set of features to improve predictive performance and increase generalizability.
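The core idea, training a gradient boosting model on features derived from correlation matrices of simulated factor-model data, could be sketched as follows. The simulation design and the use of raw eigenvalues as features are simplifying assumptions; the paper's "specially designed features" and simulation conditions are far richer:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

def simulate(k, p=12, n=300):
    """Simulate data from a k-factor model; return the sorted eigenvalues
    of its correlation matrix as the feature vector."""
    loadings = np.zeros((p, k))
    for j in range(k):
        idx = np.arange(j, p, k)                 # spread variables over factors
        loadings[idx, j] = rng.uniform(0.5, 0.8, size=len(idx))
    f = rng.normal(size=(n, k))
    x = f @ loadings.T + rng.normal(scale=0.6, size=(n, p))
    return np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]

# Labels = true number of factors (1-3 here); features = eigenvalue profiles
ks = rng.integers(1, 4, size=200)
X = np.array([simulate(int(k)) for k in ks])

clf = GradientBoostingClassifier(random_state=0).fit(X[:150], ks[:150])
acc = (clf.predict(X[150:]) == ks[150:]).mean()
```

Because the training data are simulated, the data basis can be extended at will (more conditions, ordinal data) and the model retrained, which is the extensibility advantage the abstract highlights.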


2020 ◽  
Vol 16 (1) ◽  
pp. 43-52
Author(s):  
Ryoko Suzuki ◽  
Jun Katada ◽  
Sreeram Ramagopalan ◽  
Laura McDonald

Aim: Nonvalvular atrial fibrillation (NVAF) is associated with an increased risk of stroke; however, many patients are diagnosed only after onset. This study assessed the potential of machine-learning algorithms to detect NVAF. Materials & methods: A retrospective study using a Japanese claims database. Patients with and without NVAF were selected, and 41 variables were included in different classification algorithms. Results: Machine-learning algorithms identified NVAF with an area under the curve (AUC) of >0.86; the corresponding sensitivity/specificity was also high. The stacking model, which combined multiple algorithms, outperformed single-model approaches (AUC ≥0.90, sensitivity/specificity ≥0.80/0.82), although the differences were small. Conclusion: Machine-learning-based algorithms can detect atrial fibrillation with high accuracy. Although additional validation is needed, this methodology could encourage a new approach to detecting NVAF.
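The stacking approach could be sketched with scikit-learn's StackingClassifier. The data are a synthetic stand-in for the 41 claims-derived variables, and the particular base learners are arbitrary choices, not the study's models:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Toy stand-in for the 41 claims variables; minority class mimics NVAF cases
X, y = make_classification(n_samples=1000, n_features=41, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Level-0 learners feed their predictions to a level-1 meta-learner
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1])
```

Reporting the AUC on a held-out split mirrors the study's headline metric; as in the study, the gain of stacking over its best single base model is often small.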


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Kushalkumar Thakkar ◽  
Suhas Suresh Ambekar ◽  
Manoj Hudnurkar

Purpose Longitudinal facial cracks (LFC) are one of the major defects occurring in the continuous-casting stage of a thin slab caster using funnel molds. Longitudinal cracks occur mainly owing to non-uniform cooling, varying thermal conductivity along the mold length, the use of high superheat during casting and improper casting powder characteristics. These defects are difficult to capture and become visible only in the final stages of the process, or even at the customer end. Besides, there is a seasonality associated with this defect: its intensity increases during the winter season. To address the issue, a model based on data analytics is developed. Design/methodology/approach Around six months of data from the steel manufacturing process are taken and around 60 data collection points are analyzed. The model investigates the data using different classification machine learning algorithms, such as logistic regression, decision trees, ensemble methods for decision trees, support vector machines and naïve Bayes, at different cut-off levels. Findings The proposed research framework shows that most models give good results at cut-off levels between 0.6 and 0.8, and that random forest, gradient boosting for decision trees and the support vector machine model perform better than the other models. Practical implications Based on the model's predictions, steel manufacturing companies can identify the optimal operating range in which this defect can be reduced. Originality/value An analytical approach to identifying LFC defects provides objective models for their reduction. By reducing LFC defects, the quality of steel can be improved.
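Evaluating a classifier at several probability cut-off levels, as the framework does, might look like the sketch below. The data are a synthetic stand-in for the ~60 process variables, and gradient boosting stands in for the ensemble of models compared in the paper:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Toy stand-in for the process data; positives mimic heats with LFC defects
X, y = make_classification(n_samples=800, n_features=20, n_informative=8,
                           weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]

# Sweep the probability cut-off and score each operating point, as the
# paper does to find the 0.6-0.8 range where models perform well
results = {c: f1_score(y_te, (proba >= c).astype(int))
           for c in (0.4, 0.5, 0.6, 0.7, 0.8)}
```

Picking the cut-off that maximizes a chosen metric is how such a sweep translates into an operating range for the plant.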


2020 ◽  
Vol 38 (3) ◽  
pp. 213-225 ◽  
Author(s):  
Agostino Valier

Purpose In the literature there are numerous tests that compare the accuracy of automated valuation models (AVMs). These models first train themselves on price data and property characteristics, then are tested by measuring their ability to predict prices. Most of these tests compare the effectiveness of traditional econometric models against machine learning algorithms. Although the latter seem to offer better performance, no complete survey of the literature yet confirms this hypothesis. Design/methodology/approach All tests comparing regression analysis and machine learning AVMs on the same data set were identified. The scores obtained in terms of accuracy were then compared with each other. Findings Machine learning models are more accurate than traditional regression analysis in their ability to predict value. Nevertheless, many authors point to their black-box nature and poor inferential abilities as limitations. Practical implications Machine learning AVMs offer a huge advantage to all real estate operators who know how to use them; their use in public policy or litigation, however, can be problematic. Originality/value According to the author, this is the first systematic review that collects all the articles produced on the subject and compares their results.

