Modeling the Connection between Bank Systemic Risk and Balance-Sheet Liquidity Proxies through Random Forest Regressions

2020 ◽  
Vol 10 (3) ◽  
pp. 52
Author(s):  
Cristina Zeldea

Balance-sheet indicators may reflect, to a great extent, bank fragility. This inherent relationship is the object of theoretical models testing for balance-sheet vulnerabilities. In this sense, we aim to analyze whether systemic risk for a sample of US banks can be explained by a series of balance-sheet variables, considered as proxies for bank liquidity, over the 2004:1–2019:1 period. We first compute Marginal Expected Shortfall values for the entities in our sample and then embed them into a Random Forest regression setup. Although we discover that feature importance is rather bank-specific, we notice that cash and available-for-sale securities are the most relevant factors in explaining the dynamics of systemic risk. Our findings emphasize the need for heightened prudential regulation of bank liquidity, particularly regarding the weights of cash and immediate-liquidity instruments. Moreover, systemic risk could be consistently tamed by consolidating bank emergency liquidity provision schemes.
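As a reading aid, the pipeline the abstract describes can be sketched as below. The 5% tail threshold, the particular liquidity proxies, and all variable names and synthetic data are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the MES-then-random-forest pipeline described above.
# The 5% tail, the feature set and the synthetic data are assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

def marginal_expected_shortfall(bank_returns, market_returns, alpha=0.05):
    """MES: mean bank return on the market's worst alpha-fraction of days."""
    cutoff = np.quantile(market_returns, alpha)
    return bank_returns[market_returns <= cutoff].mean()

rng = np.random.default_rng(0)
mkt = rng.normal(size=2500)                      # daily market returns
bank = 0.8 * mkt + 0.2 * rng.normal(size=2500)   # daily returns of one bank
print("MES:", marginal_expected_shortfall(bank, mkt))

# Hypothetical quarterly balance-sheet liquidity proxies for one bank.
X = pd.DataFrame({
    "cash_ratio": rng.random(60),
    "afs_securities_ratio": rng.random(60),
    "loans_to_deposits": rng.random(60),
})
y = rng.random(60)  # stand-in for the bank's quarterly MES series

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
for name, imp in zip(X.columns, rf.feature_importances_):
    print(f"{name}: {imp:.3f}")  # which liquidity proxy matters most
```

The feature importances printed at the end correspond to what the abstract reports when it singles out cash and available-for-sale securities as the dominant factors.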

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Juhi Gupta ◽  
Smita Kashiramka

Purpose
Systemic risk has been a cause of concern for bank regulatory authorities worldwide since the global financial crisis. This study aims to identify systemically important banks (SIBs) in India by using SRISK to measure the expected capital shortfall of banks in a systemic event. The sample comprises a balanced data set of 31 listed Indian commercial banks from 2006 to 2019.

Design/methodology/approach
The authors use SRISK to identify the banks that contribute most to the systemic risk of the Indian banking sector. Leverage, size and long-run marginal expected shortfall (LRMES) are used to compute SRISK. Forward-looking LRMES is computed using the GJR-GARCH dynamic conditional correlation methodology for early prediction of a bank's contribution to systemic risk.

Findings
This study finds that public sector banks are more vulnerable to macroeconomic shocks owing to their capital inadequacy vis-à-vis the private sector banks. It also emphasizes that size should not be used as a standalone factor to assess the systemic importance of a bank.

Originality/value
Systemic risk has attracted a lot of research interest; however, that research is largely limited to developed nations. This paper fills an important research gap in the banking literature concerning the identification of SIBs in an emerging economy, India. As SRISK uses both balance-sheet and market-based information, it can complement the methodology currently used by the Reserve Bank of India to identify SIBs.
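For context, SRISK is usually computed with the Brownlees and Engle formulation sketched below. The prudential capital ratio k (commonly set around 8%) is an assumption here; the paper's exact parameterization may differ.

```latex
% Brownlees--Engle SRISK: expected capital shortfall of bank i in a systemic event.
% k = prudential capital ratio (commonly 8%), D = book value of debt,
% W = market value of equity, LRMES = long-run marginal expected shortfall.
\mathrm{SRISK}_{i,t} = \max\Big[\,0;\; k\,D_{i,t} \;-\; (1-k)\,W_{i,t}\,\big(1-\mathrm{LRMES}_{i,t}\big)\Big]
```

Leverage and size enter through D and W, while LRMES supplies the forward-looking market component, estimated here with the GJR-GARCH dynamic conditional correlation methodology.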


2011 ◽  
Vol 49 (2) ◽  
pp. 287-325 ◽  
Author(s):  
Jean Tirole

The recent crisis was characterized by massive illiquidity. This paper reviews what we know and don't know about illiquidity and all its friends: market freezes, fire sales, contagion, and ultimately insolvencies and bailouts. It first explains why liquidity cannot easily be apprehended through a single statistic, and asks whether liquidity should be regulated given that a capital adequacy requirement is already in place. The paper then analyzes market breakdowns due to either adverse selection or shortages of financial muscle, and explains why such breakdowns are endogenous to balance sheet choices and to information acquisition. It then looks at what economics can contribute to the debate on systemic risk and its containment. Finally, the paper takes a macroeconomic perspective, discusses shortages of aggregate liquidity, and analyzes how market value accounting and capital adequacy should react to asset prices. It concludes with a topical form of liquidity provision, monetary bailouts and recapitalizations, and analyzes optimal combinations thereof; it stresses the need for macro-prudential policies. (JEL E44, G01, G21, G28, G32, L51)


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Chinmay P. Swami ◽  
Nicholas Lenhard ◽  
Jiyeon Kang

Prosthetic arms can significantly increase the upper-limb function of individuals with upper-limb loss; however, despite the development of various multi-DoF prosthetic arms, the rate of prosthesis abandonment remains high. One of the major challenges is to design a multi-DoF controller that has high precision, robustness and intuitiveness for daily use. The present study demonstrates a novel framework for developing a controller that leverages machine learning algorithms and movement synergies to implement natural control of a 2-DoF prosthetic wrist for activities of daily living (ADL). The data were collected during ADL tasks performed by ten individuals wearing a wrist brace that emulated the absence of wrist function. Using these data, a neural network classifies the movement, and random forest regression then computes the desired velocity of the prosthetic wrist. The models were trained and tested on the ADL tasks, and their robustness was assessed using cross-validation and holdout data sets. The proposed framework demonstrated high accuracy (an F1 score of 99% for the classifier and a Pearson's correlation of 0.98 for the regression). Additionally, the interpretable nature of random forest regression was used to verify the targeted movement synergies. The present work provides a novel and effective framework for developing intuitive control of multi-DoF prosthetic devices.
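A minimal sketch of the classify-then-regress controller described above is shown below. The feature dimensions, task classes, hyperparameters and synthetic data are illustrative assumptions, not the authors' values.

```python
# Minimal sketch of a two-stage controller: a neural network classifies the
# ADL movement, then a per-task random forest regressor maps the same input
# features to a 2-DoF wrist velocity command. All shapes and data are assumed.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))          # stand-in kinematic/EMG features
task_labels = rng.integers(0, 4, 1000)   # hypothetical ADL task classes
wrist_vel = rng.normal(size=(1000, 2))   # flexion/extension, pronation/supination

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X, task_labels)

# One regressor per task keeps each model focused on one movement synergy.
regressors = {
    t: RandomForestRegressor(n_estimators=200, random_state=0).fit(
        X[task_labels == t], wrist_vel[task_labels == t])
    for t in np.unique(task_labels)
}

def wrist_command(features):
    """Classify the task, then predict the desired 2-DoF wrist velocity."""
    task = clf.predict(features.reshape(1, -1))[0]
    return regressors[task].predict(features.reshape(1, -1))[0]

print(wrist_command(X[0]))
```

Splitting the regression by predicted task is one way to exploit movement synergies, since each forest then models only one movement's velocity profile.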


Measurement ◽  
2020 ◽  
pp. 108899
Author(s):  
Madi Keramat-Jahromi ◽  
Seyed Saeid Mohtasebi ◽  
Hossein Mousazadeh ◽  
Mahdi Ghasemi-Varnamkhasi ◽  
Maryam Rahimi-Movassagh

2019 ◽  
Vol 12 (3) ◽  
pp. 1209-1225 ◽  
Author(s):  
Christoph A. Keller ◽  
Mat J. Evans

Atmospheric chemistry models are a central tool to study the impact of chemical constituents on the environment, vegetation and human health. These models are numerically intense, and previous attempts to reduce the numerical cost of chemistry solvers have not delivered transformative change. We show here the potential of a machine learning (in this case, random forest regression) replacement for the gas-phase chemistry in atmospheric chemistry transport models. Our training data consist of 1 month (July 2013) of output of chemical conditions together with the model physical state, produced from the GEOS-Chem chemistry model v10. From this data set we train random forest regression models to predict the concentration of each transported species after the integrator, based on the physical and chemical conditions before the integrator. The choice of prediction type has a strong impact on the skill of the regression model. We find the best results from predicting the change in concentration for long-lived species and the absolute concentration for short-lived species. We also find improvements from a simple implementation of chemical families (NOx = NO + NO2). We then implement the trained random forest predictors back into GEOS-Chem to replace the numerical integrator. The machine-learning-driven GEOS-Chem model compares well to the standard simulation. For ozone (O3), errors from using the random forests (compared to the reference simulation) grow slowly: after 5 days the normalized mean bias (NMB), root mean square error (RMSE) and R2 are 4.2 %, 35 % and 0.9, respectively; after 30 days the errors increase to 13 %, 67 % and 0.75, respectively. The biases become largest in remote areas such as the tropical Pacific, where errors in the chemistry can accumulate with little balancing influence from emissions or deposition. Over polluted regions the model error is less than 10 %, and the model shows significant fidelity in following the time series of the full model. Modelled NOx shows similar features, with the most significant errors occurring in remote locations far from recent emissions. For other species, such as inorganic bromine species and short-lived nitrogen species, errors become large, with NMB, RMSE and R2 reaching >2100 %, >400 % and <0.1, respectively. This proof-of-concept implementation takes 1.8 times more time than the direct integration of the differential equations, but optimization and software engineering should allow substantial increases in speed. We discuss potential improvements in the implementation, some of its advantages from both a software and hardware perspective, its limitations, and its applicability to operational air quality activities.
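The core surrogate-solver idea can be sketched as follows: one random forest per species, trained to predict the concentration change for long-lived species and the absolute post-integrator concentration for short-lived ones, with NOx treated as a single family. The species lists, feature set, hyperparameters and synthetic data below are assumptions, not the paper's configuration.

```python
# Minimal sketch of per-species random forest surrogates for a chemistry
# integrator. Long-lived species are trained on the tendency (change in
# concentration); short-lived species on the absolute concentration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
features = rng.normal(size=(n, 8))   # stand-in physical/chemical state before the integrator
conc_before = {"O3": rng.random(n), "NOx": rng.random(n), "OH": rng.random(n)}
conc_after = {k: v + 0.01 * rng.normal(size=n) for k, v in conc_before.items()}

LONG_LIVED = {"O3", "NOx"}   # NOx = NO + NO2 handled as one family
models = {}
for sp in conc_before:
    # Tendency target for long-lived species; absolute target for short-lived.
    target = (conc_after[sp] - conc_before[sp]) if sp in LONG_LIVED else conc_after[sp]
    models[sp] = RandomForestRegressor(n_estimators=100, random_state=0).fit(features, target)

def step(sp, x, c_before):
    """Emulate one chemistry time step for species sp."""
    pred = models[sp].predict(x)
    return c_before + pred if sp in LONG_LIVED else pred

print(step("O3", features[:3], conc_before["O3"][:3]))
```

Predicting the tendency rather than the absolute value keeps small per-step errors from dominating slowly varying species, which matches the paper's finding that the choice of prediction type strongly affects skill.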

