Bayesian stochastic modeling of a spherical rock bouncing on a coarse soil

2009 ◽  
Vol 9 (3) ◽  
pp. 831-846 ◽  
Author(s):  
F. Bourrier ◽  
N. Eckert ◽  
F. Nicot ◽  
F. Darve

Abstract. Trajectory analysis models are increasingly used for rockfall hazard mapping. However, classical approaches only partially account for the variability of the trajectories. In this paper, a general formulation using a Taylor series expansion is proposed to quantify the relative importance of the different processes that explain the variability of the reflected velocity vector after bouncing. A stochastic bouncing model is obtained using a statistical analysis of a large numerical data set. Estimation is performed using hierarchical Bayesian modeling schemes. The model introduces information on the coupling of the reflected and incident velocity vectors, which satisfactorily expresses the mechanisms associated with boulder bouncing. The proposed approach is detailed for the impact of a spherical boulder on a coarse soil, with special focus on the influence of the geometrical configuration of soil particles near the impact point and the kinematic parameters of the rock before bouncing. The results show that a first-order expansion is sufficient for the case studied and emphasize the predominant role of local soil properties in the variability of the reflected velocity vector. The proposed model is compared with classical approaches, and the value of reliable stochastic bouncing models in trajectory simulations for rockfall hazard assessment is illustrated with a simple case study.
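The variance-attribution idea in this abstract — expanding the reflected-velocity function in a Taylor series so that each input's contribution to output variability can be quantified — can be sketched generically with the first-order delta method. The function and input values below are illustrative placeholders, not the paper's actual bounce model.

```python
import math

# First-order (delta-method) variance propagation:
# Var[f(X1, X2)] ≈ (df/dx1)^2 Var[X1] + (df/dx2)^2 Var[X2]  (inputs independent)
def first_order_variance(f, mu, sigma2, h=1e-6):
    """Approximate Var[f(X)] at mean vector mu given per-input variances sigma2."""
    grads = []
    for i in range(len(mu)):
        up = list(mu); up[i] += h
        dn = list(mu); dn[i] -= h
        grads.append((f(up) - f(dn)) / (2 * h))  # central difference
    # Each term's share quantifies that input's contribution to output variability.
    terms = [g * g * s2 for g, s2 in zip(grads, sigma2)]
    return sum(terms), terms

# Illustrative 'reflected speed' as a function of incident speed and a soil angle.
f = lambda v: v[0] * math.cos(v[1])
total, contributions = first_order_variance(f, mu=[10.0, 0.3], sigma2=[0.5, 0.02])
shares = [t / total for t in contributions]  # relative importance of each input
```

Comparing the shares tells you which input (here, incident speed versus local geometry) dominates the output variance — the same question the paper answers for local soil properties.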

Author(s):  
Bao-Linh Tran ◽  
Chi-Chung Chen ◽  
Wei-Chun Tseng ◽  
Shu-Yi Liao

This study examines how experience with severe acute respiratory syndrome (SARS) influences the impact of coronavirus disease (COVID-19) on international tourism demand for four Asia-Pacific Economic Cooperation (APEC) economies, Taiwan, Hong Kong, Thailand, and New Zealand, over the 1 January–30 April 2020 period. To proceed, panel regression models are first applied with a time-lag effect to estimate the general effects of COVID-19 on daily tourist arrivals. The data set is then decomposed into two groups, and fixed effects models are employed to compare the pandemic–tourism relationship between economies with and without experience of the SARS epidemic. Specifically, Taiwan and Hong Kong are grouped as economies with SARS experience, while Thailand and New Zealand are grouped as countries without it. The estimation results indicate that the number of confirmed COVID-19 cases has a significant negative impact on tourism demand: a 1% increase in COVID-19 cases causes a 0.075% decline in tourist arrivals, or a decline of approximately 110 arrivals for every additional person infected by the coronavirus. The negative impact of COVID-19 on tourist arrivals is found to be much stronger for Thailand and New Zealand than for Taiwan and Hong Kong. In particular, the number of tourist arrivals to Taiwan and Hong Kong decreased by 0.034% in response to a 1% increase in confirmed COVID-19 cases, while in Thailand and New Zealand a 1% increase in national confirmed cases caused a 0.103% reduction in tourism demand. Moreover, for the economies with SARS experience, the effect of domestic case counts on international tourism is found to be smaller than the effect of global COVID-19 mortality; in contrast, tourist arrivals in Thailand and New Zealand are mainly affected by the number of confirmed COVID-19 cases.
Finally, travel restrictions are found to be the most influential factor for tourist arrivals in all cases. Besides contributing to the existing literature on the nexus between tourism and COVID-19, the paper’s findings also highlight the importance of risk perception and the need for transmission prevention and control of the epidemic in the tourism sector.
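The elasticity reported in this abstract can be unpacked with simple arithmetic. In a log-log panel regression, the coefficient is a constant elasticity, so the marginal effect in arrivals per case depends on the levels of both variables; the baseline scale below is a hypothetical value chosen so the marginal effect matches the abstract's "about 110 fewer arrivals per additional case," not a figure from the paper.

```python
# Log-log elasticity: a 1% rise in confirmed cases -> 0.075% fall in arrivals.
ELASTICITY = -0.075

def marginal_arrivals_per_case(arrivals: float, cases: float) -> float:
    """dA/dC implied by a constant elasticity: elasticity * A / C."""
    return ELASTICITY * arrivals / cases

# Hypothetical scale: ~1.47 million arrivals against 1,000 confirmed cases
# makes the per-case marginal effect come out near -110 arrivals.
effect = marginal_arrivals_per_case(arrivals=1_466_667, cases=1_000)
```

This is why the same elasticity (-0.075) translates into different absolute arrival losses across economies with different tourism volumes and case counts.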


Crisis ◽  
2018 ◽  
Vol 39 (1) ◽  
pp. 27-36 ◽  
Author(s):  
Kuan-Ying Lee ◽  
Chung-Yi Li ◽  
Kun-Chia Chang ◽  
Tsung-Hsueh Lu ◽  
Ying-Yeh Chen

Abstract. Background: We investigated the age at exposure to parental suicide and the risk of subsequent suicide completion in young people. The impact of parental and offspring sex was also examined. Method: Using a cohort study design, we linked Taiwan's Birth Registry (1978–1997) with Taiwan's Death Registry (1985–2009) and identified 40,249 children who had experienced maternal suicide (n = 14,431), paternal suicide (n = 26,887), or the suicide of both parents (n = 281). Each exposed child was matched to 10 children of the same sex and birth year whose parents were still alive. This yielded a total of 398,081 children for our non-exposed cohort. A Cox proportional hazards model was used to compare the suicide risk of the exposed and non-exposed groups. Results: Compared with the non-exposed group, offspring who were exposed to parental suicide were 3.91 times (95% confidence interval [CI] = 3.10–4.92) more likely to die by suicide after adjusting for baseline characteristics. The risk of suicide seemed to be lower in older male offspring (HR = 3.94, 95% CI = 2.57–6.06), but higher in older female offspring (HR = 5.30, 95% CI = 3.05–9.22). Stratified analyses based on parental sex revealed patterns similar to the combined analysis. Limitations: As only register-based data were used, we were not able to explore the impact of variables not contained in the data set, such as the role of mental illness. Conclusion: Our findings suggest a prominent elevation in the risk of suicide among offspring who lost their parents to suicide. The risk elevation differed according to the sex of the afflicted offspring as well as their age at exposure.
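For readers unfamiliar with hazard ratios, a reported HR and its Wald-type 95% confidence interval imply a standard error on the log scale, and the HR point estimate sits at the geometric midpoint of the interval. The short check below applies this generic property to the abstract's overall estimate; it is an illustration of how such intervals are constructed, not a re-analysis of the study's data.

```python
import math

def log_se_from_ci(lower: float, upper: float, z: float = 1.959964) -> float:
    """Standard error of log(HR) implied by a 95% Wald confidence interval."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

# Abstract's overall estimate: HR = 3.91, 95% CI 3.10-4.92.
se = log_se_from_ci(3.10, 4.92)

# The geometric midpoint of the CI should approximately recover the point estimate.
hr_check = math.exp((math.log(3.10) + math.log(4.92)) / 2)  # ≈ 3.91
```

The recovered midpoint matching the reported HR is a quick sanity check that an interval was computed symmetrically on the log scale, as is standard for Cox model output.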


2013 ◽  
Vol 99 (4) ◽  
pp. 40-45 ◽  
Author(s):  
Aaron Young ◽  
Philip Davignon ◽  
Margaret B. Hansen ◽  
Mark A. Eggen

ABSTRACT Recent media coverage has focused on the supply of physicians in the United States, especially in light of a growing physician shortage and the Affordable Care Act. State medical boards and other entities maintain data on physician licensure and discipline, as well as some biographical data describing their physician populations. However, there are gaps in the workforce information these sources provide. The Federation of State Medical Boards' (FSMB) Census of Licensed Physicians and the AMA Masterfile, for example, offer valuable information, but they provide a limited picture of the physician workforce. Furthermore, they are unable to shed light on some of the nuances in physician availability, such as how much time physicians spend providing direct patient care. In response to these gaps, policymakers and regulators have in recent years discussed the creation of a physician minimum data set (MDS), which would be gathered periodically and would provide key physician workforce information. While proponents of an MDS believe it would provide benefits to a variety of stakeholders, no effort has been made to determine whether state medical boards think it is important to collect physician workforce data and whether they currently collect workforce information from licensed physicians. To learn more, the FSMB sent surveys to the executive directors of state medical boards to determine their perceptions of collecting workforce data and their current practices regarding the collection of such data. The purpose of this article is to convey results from this effort. Survey findings indicate that the vast majority of boards view physician workforce information as valuable in the determination of health care needs within their state, and that various boards are already collecting some data elements.
Analysis of the data confirms the potential benefits of a physician minimum data set (MDS) and why state medical boards are in a unique position to collect MDS information from physicians.


2019 ◽  
Vol 11 (1) ◽  
pp. 156-173
Author(s):  
Spenser Robinson ◽  
A.J. Singh

This paper shows that Leadership in Energy and Environmental Design (LEED) certified hospitality properties exhibit higher expenses and earn lower net operating income (NOI) than non-certified buildings, while ENERGY STAR certified properties demonstrate lower overall expenses than non-certified buildings with statistically neutral NOI effects. Using a custom sample of all green buildings and their competitive data set as of 2013, provided by Smith Travel Research (STR), the paper documents potential reasons for this result, including increased operational expenses and potential confusion between certified and registered LEED projects in the data, supplemented by qualitative input from a small-sample survey of five industry professionals. The paper provides one of the only analyses of operating efficiencies in LEED and ENERGY STAR hospitality properties.


2019 ◽  
Vol 33 (3) ◽  
pp. 187-202
Author(s):  
Ahmed Rachid El-Khattabi ◽  
T. William Lester

The use of tax increment financing (TIF) remains a popular, yet highly controversial, tool among policy makers in their efforts to promote economic development. This study conducts a comprehensive assessment of the effectiveness of Missouri’s TIF program, specifically in Kansas City and St. Louis, in creating economic opportunities. We build a time-series data set spanning 1990 through 2012 of detailed employment levels, establishment counts, and sales at the census block-group level to run a set of difference-in-differences with matching estimates of the impact of TIF at the local level. Although we analyze the impact of TIF on a wide set of indicators and across various industry sectors, we find no conclusive evidence that the TIF program in either city has a causal impact on key economic development indicators.
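The difference-in-differences logic used in this evaluation can be sketched on toy numbers: the before/after change in TIF-designated block groups is compared against the change in matched non-TIF comparison block groups, so that common trends cancel. The values below are illustrative, not the study's data.

```python
# Difference-in-differences: the treatment effect is the treated group's
# before/after change minus the comparison group's before/after change.
def did_estimate(treat_pre: float, treat_post: float,
                 control_pre: float, control_post: float) -> float:
    """Treatment effect under the parallel-trends assumption."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical average employment in matched census block groups.
effect = did_estimate(treat_pre=120.0, treat_post=131.0,
                      control_pre=118.0, control_post=128.0)
# effect = (131 - 120) - (128 - 118) = 1.0
```

An estimate near zero across many indicators and sectors is what underlies the study's "no conclusive evidence of a causal impact" finding.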


2017 ◽  
Vol 727 ◽  
pp. 447-449 ◽  
Author(s):  
Jun Dai ◽  
Hua Yan ◽  
Jian Jian Yang ◽  
Jun Jun Guo

To evaluate the aging behavior of high density polyethylene (HDPE) under an artificial accelerated environment, principal component analysis (PCA) was used to establish a non-dimensional expression Z from a data set of multiple degradation parameters of HDPE. In this study, HDPE samples were exposed to an accelerated thermal oxidative environment for time intervals of up to 64 days. The results showed that the combined evaluating parameter Z was characterized by three distinct stages: it increased quickly in the first 16 days of exposure, then leveled off, and began to increase again after 40 days. Among the 10 degradation parameters, branching degree, carbonyl index and hydroxyl index are strongly associated. The tensile modulus is highly correlated with the impact strength. The tensile strength, tensile modulus and impact strength are negatively correlated with the crystallinity.
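The idea of collapsing several standardized degradation parameters into a single PCA index Z can be sketched in the two-variable case, where the first principal component has a closed form: for two standardized variables with correlation r, it is (z1 + z2)/√2 and explains a fraction (1 + r)/2 of the total variance. The data below are synthetic stand-ins for two of the paper's strongly associated parameters, not measurements from the study.

```python
import math, statistics

def standardize(xs):
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

# Synthetic, positively correlated 'degradation parameters'
# (e.g. carbonyl index and hydroxyl index rising with exposure time).
carbonyl = [0.10, 0.25, 0.42, 0.55, 0.71]
hydroxyl = [0.08, 0.22, 0.45, 0.52, 0.74]
z1, z2 = standardize(carbonyl), standardize(hydroxyl)

# First principal component of two standardized variables: (z1 + z2) / sqrt(2).
Z = [(a + b) / math.sqrt(2) for a, b in zip(z1, z2)]
r = pearson(carbonyl, hydroxyl)
explained = (1 + r) / 2  # fraction of total variance carried by Z
```

With 10 parameters, as in the study, the loadings come from an eigendecomposition of the full correlation matrix rather than this closed form, but the interpretation of Z as the variance-maximizing combined index is the same.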


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Yahya Albalawi ◽  
Jim Buckley ◽  
Nikola S. Nikolov

Abstract. This paper presents a comprehensive evaluation of data pre-processing and word embedding techniques in the context of Arabic document classification in the domain of health-related communication on social media. We evaluate 26 text pre-processing techniques applied to Arabic tweets within the process of training a classifier to identify health-related tweets. For this task we use the (traditional) machine learning classifiers KNN, SVM, Multinomial NB and Logistic Regression. Furthermore, we report experimental results with the deep learning architectures BLSTM and CNN for the same text classification problem. Since word embeddings are more typically used as the input layer in deep networks, in the deep learning experiments we evaluate several state-of-the-art pre-trained word embeddings with the same text pre-processing applied. To achieve these goals, we use two data sets: one for both training and testing, and another for testing the generality of our models only. Our results point to the conclusion that only four of the 26 pre-processing techniques significantly improve classification accuracy. For the first data set of Arabic tweets, we found that Mazajak CBOW pre-trained word embeddings as the input to a BLSTM deep network led to the most accurate classifier, with an F1 score of 89.7%. For the second data set, Mazajak Skip-Gram pre-trained word embeddings as the input to BLSTM led to the most accurate model, with an F1 score of 75.2% and accuracy of 90.7%, compared to an F1 score of 90.8% achieved by Mazajak CBOW for the same architecture but with a lower accuracy of 70.89%. Our results also show that the performance of the best of the traditional classifiers we trained is comparable to that of the deep learning methods on the first data set, but significantly worse on the second.
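As one concrete illustration of the kind of pre-processing evaluated in such pipelines, a widely used Arabic normalization step strips diacritics (tashkeel) and the tatweel character and unifies common letter variants. This is a generic example chosen by us; the abstract does not enumerate which of the paper's 26 techniques were applied.

```python
import re

# Tashkeel (U+064B-U+0652) plus tatweel (U+0640).
DIACRITICS = re.compile(r'[\u064B-\u0652\u0640]')

def normalize_arabic(text: str) -> str:
    """Common Arabic normalization used before tokenization/embedding lookup."""
    text = DIACRITICS.sub('', text)       # drop diacritics and tatweel
    text = re.sub('[إأآ]', 'ا', text)     # unify hamzated alef variants
    text = text.replace('ة', 'ه')         # teh marbuta -> heh
    text = text.replace('ى', 'ي')         # alef maqsura -> yeh
    return text

# Example: 'الصِّحَّة' ("health", written with diacritics) -> 'الصحه'
cleaned = normalize_arabic('الصِّحَّة')
```

Normalization like this collapses surface variants onto a single token form, which reduces vocabulary sparsity for both the traditional classifiers and the embedding lookup feeding the deep models.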


Author(s):  
David McCallen ◽  
Houjun Tang ◽  
Suiwen Wu ◽  
Eric Eckert ◽  
Junfei Huang ◽  
...  

Accurate understanding and quantification of the risk to critical infrastructure posed by future large earthquakes continues to be a very challenging problem. Earthquake phenomena are quite complex, and traditional approaches to predicting ground motions for future earthquake events have historically been empirically based: measured ground motion data from historical earthquakes are homogenized into a common data set, and the ground motions for future postulated earthquakes are probabilistically derived based on the historical observations. This procedure has well-recognized limitations, principally because earthquake ground motions tend to be dictated by the particular earthquake fault rupture and geologic conditions at a given site and are thus very site-specific; historical earthquakes recorded at different locations are often only marginally representative. There has been strong and increasing interest in utilizing large-scale, physics-based regional simulations to advance the ability to accurately predict ground motions and associated infrastructure response. However, the computational requirements of simulations at frequencies of engineering interest have proven a major barrier to employing regional-scale simulations. In a U.S. Department of Energy Exascale Computing Initiative project, development of the EQSIM application is underway to create a framework for fault-to-structure simulations. This framework is being prepared to exploit emerging exascale platforms in order to overcome computational limitations. This article presents the essential methodology and computational workflow employed in EQSIM to couple regional-scale geophysics models with local soil-structure models to achieve a fully integrated, complete fault-to-structure simulation framework. The computational workflow, accuracy and performance of the coupling methodology are illustrated through example fault-to-structure simulations.


2020 ◽  
Vol 71 (1) ◽  
pp. 15-41
Author(s):  
Dominik Maltritz ◽  
Sebastian Wüste

Abstract. We search for drivers of fiscal deficits in Europe using a data panel containing annual data for 27 EU countries over the years 1991–2012. Our special focus is on the influence of fiscal rules as well as of fiscal councils, i.e. institutions that may help to reduce deficits and enforce fiscal rules by advising governments. We distinguish between internal fiscal rules and external rules that result from EMU membership. In addition, we consider the impact of “creative accounting”, i.e. measures that help to circumvent fiscal rules, which we approximate by so-called stock-flow adjustments. We especially analyze the interactive influence of the mentioned variables on the budget balance.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Ian Glaspole ◽  
Francesco Bonella ◽  
Elena Bargagli ◽  
Marilyn K. Glassberg ◽  
Fabian Caro ◽  
...  

Abstract Background Idiopathic pulmonary fibrosis (IPF) predominantly affects individuals aged > 60 years who have several comorbidities. Nintedanib is an approved treatment for IPF, which reduces the rate of decline in forced vital capacity (FVC). We assessed the efficacy and safety of nintedanib in patients with IPF who were elderly and who had multiple comorbidities. Methods Data were pooled from five clinical trials in which patients were randomised to receive nintedanib 150 mg twice daily or placebo. We assessed outcomes in subgroups by age < 75 versus ≥ 75 years, by < 5 and ≥ 5 comorbidities, and by Charlson Comorbidity Index (CCI) ≤ 3 and > 3 at baseline. Results The data set comprised 1690 patients. Nintedanib reduced the rate of decline in FVC (mL/year) over 52 weeks versus placebo in patients aged ≥ 75 years (difference: 105.3 [95% CI 39.3, 171.2]) (n = 326) and < 75 years (difference 125.2 [90.1, 160.4]) (n = 1364) (p = 0.60 for treatment-by-time-by-subgroup interaction), in patients with < 5 comorbidities (difference: 107.9 [95% CI 65.0, 150.9]) (n = 843) and ≥ 5 comorbidities (difference 139.3 [93.8, 184.8]) (n = 847) (p = 0.41 for treatment-by-time-by-subgroup interaction) and in patients with CCI score ≤ 3 (difference: 106.4 [95% CI 70.4, 142.4]) (n = 1330) and CCI score > 3 (difference: 129.5 [57.6, 201.4]) (n = 360) (p = 0.57 for treatment-by-time-by-subgroup interaction). The adverse event profile of nintedanib was generally similar across subgroups. The proportion of patients with adverse events leading to treatment discontinuation was greater in patients aged ≥ 75 years than < 75 years in both the nintedanib (26.4% versus 16.0%) and placebo (12.2% versus 10.8%) groups. Similarly, the proportion of patients with adverse events leading to treatment discontinuation was greater in patients with ≥ 5 than < 5 comorbidities (nintedanib: 20.5% versus 15.7%; placebo: 12.1% versus 10.0%).
Conclusions Our findings suggest that the effect of nintedanib on reducing the rate of FVC decline is consistent across subgroups based on age and comorbidity burden. Proactive management of adverse events is important to reduce the impact of adverse events and help patients remain on therapy. Trial registration: ClinicalTrials.gov NCT00514683, NCT01335464, NCT01335477, NCT02788474, NCT01979952.

