Assessment of embankment dams breaching using large scale physical modeling and statistical methods

Water Science ◽  
2018 ◽  
Vol 32 (2) ◽  
pp. 362-379 ◽  
Author(s):  
Muhammad Ashraf ◽  
Ahmed Hussein Soliman ◽  
Entesar El-Ghorab ◽  
Alaa El Zawahry
2017 ◽  
Vol 27 (9) ◽  
pp. 2872-2882 ◽  
Author(s):  
Zhuozhao Zhan ◽  
Geertruida H de Bock ◽  
Edwin R van den Heuvel

Clinical trials may use a sequential introduction of a new treatment to determine its efficacy or effectiveness relative to a control treatment. The reasons for choosing a particular switch design have different origins: it may be implemented for ethical or logistic reasons, or to study disease-modifying effects. Large-scale pragmatic trials with complex interventions often use stepped wedge designs (SWDs), in which all participants start in the control group and, during the trial, the control treatment is switched to the new intervention at different moments. These trials typically use cross-sectional data and cluster randomization. On the other hand, trials of new drugs for inhibition of cognitive decline in Alzheimer's or Parkinson's disease typically use delayed start designs (DSDs). Here, participants start in a parallel group design and, at a certain moment in the trial, (part of) the control group switches to the new treatment. These studies are longitudinal in nature, and individuals are randomized. Statistical methods for these unidirectional switch designs (USDs) are quite complex and hard to compare, since they have been developed by various authors under different terminologies, model specifications, and assumptions. This creates unnecessary barriers for researchers who wish to compare results or choose the most appropriate method for their own needs. This paper provides an overview of past and current statistical developments for the USDs (SWD and DSD). All designs are formulated in a unified framework of treatment patterns to make comparisons between switch designs easier. The focus is primarily on statistical models, methods of estimation, sample size calculation, and optimal designs for estimation of the treatment effect. Other relevant open issues are discussed as well, providing suggestions for future research in USDs.
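The treatment-pattern framework described above can be made concrete with a small sketch. The following minimal Python example (our own illustration, not the paper's notation) builds 0/1 treatment-pattern matrices for a stepped wedge design and a delayed start design, with rows as clusters (SWD) or arms (DSD) and columns as time periods:

```python
# Illustrative treatment-pattern matrices for the two unidirectional
# switch designs: 0 = control, 1 = new treatment. All names are our own.

def stepped_wedge_pattern(n_clusters, n_periods):
    """Every cluster starts in control; clusters switch to treatment
    one period apart, in a staggered (stepped) fashion."""
    return [
        [1 if t > c else 0 for t in range(n_periods)]
        for c in range(n_clusters)
    ]

def delayed_start_pattern(n_periods, switch_period):
    """Two arms: one treated throughout, one (the delayed-start control
    arm) switching to treatment at switch_period."""
    early = [1] * n_periods
    delayed = [1 if t >= switch_period else 0 for t in range(n_periods)]
    return [early, delayed]

swd = stepped_wedge_pattern(3, 4)   # e.g. cluster 0: [0, 1, 1, 1]
dsd = delayed_start_pattern(4, 2)   # delayed arm:    [0, 0, 1, 1]
```

Both patterns are unidirectional: once a row switches from 0 to 1 it never switches back, which is the defining property shared by SWDs and DSDs.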


2017 ◽  
Vol 76 (3) ◽  
pp. 213-219 ◽  
Author(s):  
Johanna Conrad ◽  
Ute Nöthlings

Valid estimation of usual dietary intake in epidemiological studies is a topic of current interest. The aim of the present paper is to review recent literature on innovative approaches, focusing on: (1) the requirements for assessing usual intake and (2) its application in large-scale settings. Recently, a number of technology-based self-administered tools have been developed, including short-term instruments such as web-based 24-h recalls, mobile food records, and simple closed-ended questionnaires that assess the food intake of the previous 24 h. Owing to their feasibility and cost-effectiveness, these tools may be superior to conventional assessment methods in large-scale settings. New statistical methods have been developed to combine dietary information from repeated 24-h dietary recalls and FFQ. Conceptually, these methods presume that the usual intake of a food by a subject equals the probability of consuming that food on a given day, multiplied by the average amount consumed on a typical consumption day. Repeated 24-h recalls from the same individual provide information on both the consumption probability and the amount. In addition, the FFQ can add information on the intake frequency of rarely consumed foods. It has been suggested that this combined approach may provide high-quality dietary information. A promising direction for estimating usual intake in large-scale settings is the integration of both statistical methods and new technologies. Studies are warranted to assess the validity of estimated usual intake in comparison with biomarkers.
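The conceptual model above (consumption probability times typical-day amount) can be sketched in a few lines. This is an illustrative toy, not any specific published estimator:

```python
# Toy sketch: estimate usual intake from repeated 24-h recalls as
#   usual intake = P(consumption day) * mean amount on consumption days.
# `recalls` holds one intake amount (e.g. g/day) per recall day.

def usual_intake(recalls):
    consumption_days = [x for x in recalls if x > 0]
    if not consumption_days:
        return 0.0
    prob = len(consumption_days) / len(recalls)              # consumption probability
    amount = sum(consumption_days) / len(consumption_days)   # typical-day amount
    return prob * amount

# Four recalls: fish eaten on 1 of 4 days, 200 g on that day
print(usual_intake([0, 200, 0, 0]))  # 0.25 * 200 = 50.0
```

In practice the probability and amount components are modeled with covariates and random effects, and the FFQ frequency information enters for rarely consumed foods; the arithmetic above shows only the core identity.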


Author(s):  
Arash Gobal ◽  
Bahram Ravani

The process of selective laser sintering (SLS) involves selective heating and fusion of powdered material using a moving laser beam. Because of this complicated manufacturing process, physical modeling of the transformation from powder to final product in SLS is currently a challenge. Existing simulations of transient temperatures during this process are performed using either finite-element (FE) or discrete-element (DE) methods, which are either inaccurate in representing the heat-affected zone (HAZ) or too computationally expensive to be practical in large-scale industrial applications. In this work, a new computational model for the transient temperature of the powder bed during the SLS process is developed that combines the FE and DE methods and accounts for the dynamic changes of particle contact areas in the HAZ. The results show significant improvements in computational efficiency over traditional DE simulations while maintaining the same level of accuracy.
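As a hedged toy illustration of the discrete-element side of such a model (all parameters are invented, not taken from the paper), the sketch below advances particle temperatures in a 1-D chain where the heat flux between neighbours scales with their contact area, the quantity that evolves dynamically in the HAZ:

```python
# Toy DE-style explicit heat-conduction step for a 1-D chain of powder
# particles. Inter-particle heat flow is proportional to the contact
# area, which grows during sintering. Parameters are illustrative only.

def step(T, contact_area, k=1.0, dt=0.01):
    """One explicit time step; contact_area[i] couples particles i, i+1."""
    T_new = T[:]
    for i in range(len(T) - 1):
        q = k * contact_area[i] * (T[i] - T[i + 1]) * dt
        T_new[i] -= q        # heat leaves the hotter neighbour...
        T_new[i + 1] += q    # ...and enters the colder one (energy conserved)
    return T_new

T = [100.0, 20.0, 20.0]   # laser-heated particle at one end of the chain
areas = [0.5, 0.5]        # contact areas between neighbouring particles
for _ in range(3):
    T = step(T, areas)
```

A hybrid FE/DE model as described above would use such a particle-level update only in the HAZ and a cheaper continuum discretization elsewhere.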


2021 ◽  
pp. 112827
Author(s):  
Zongwei Ma ◽  
Sagnik Dey ◽  
Sundar Christopher ◽  
Riyang Liu ◽  
Jun Bi ◽  
...  

2022 ◽  
pp. 1-21
Author(s):  
Clemens Krautwald ◽  
Hajo Von Häfen ◽  
Peter Niebuhr ◽  
Katrin Vögele ◽  
David Schürenkamp ◽  
...  

Author(s):  
Cheng Meng ◽  
Ye Wang ◽  
Xinlian Zhang ◽  
Abhyuday Mandal ◽  
Wenxuan Zhong ◽  
...  

With advances in technologies over the past decade, the amount of data generated and recorded has grown enormously in virtually all fields of industry and science. This extraordinary amount of data provides unprecedented opportunities for data-driven decision-making and knowledge discovery. However, analyzing such large-scale datasets poses significant challenges and calls for innovative statistical methods specifically designed for faster speed and higher efficiency. In this chapter, we review currently available methods for big data, with a focus on subsampling methods based on statistical leveraging and on divide-and-conquer methods.
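As a hedged sketch of the statistical-leveraging idea reviewed here (variable names and sizes are our own), rows of a regression design matrix can be subsampled with probability proportional to their leverage scores and then reweighted before fitting:

```python
# Minimal leverage-based subsampling for linear regression: sample rows
# with probability proportional to their leverage scores, reweight, and
# fit OLS on the small subsample. All names and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, p = 10_000, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])
y = X @ beta + 0.1 * rng.normal(size=n)

# Leverage scores: diagonal of the hat matrix X (X'X)^{-1} X',
# computed as squared row norms of the thin-QR factor Q
Q, _ = np.linalg.qr(X)
lev = np.sum(Q**2, axis=1)
prob = lev / lev.sum()

# Draw a small subsample and solve the reweighted least-squares problem
idx = rng.choice(n, size=500, replace=True, p=prob)
w = 1.0 / np.sqrt(prob[idx])          # importance-sampling reweighting
beta_hat, *_ = np.linalg.lstsq(X[idx] * w[:, None], y[idx] * w, rcond=None)
```

The subsampled fit uses 500 rows instead of 10,000 yet recovers the coefficients closely, which is the speed/accuracy trade-off these methods exploit.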


2002 ◽  
Vol 13 (2) ◽  
pp. 105-119 ◽  
Author(s):  
Kenneth H. Pollock ◽  
James D. Nichols ◽  
Theodore R. Simons ◽  
George L. Farnsworth ◽  
Larissa L. Bailey ◽  
...  

2012 ◽  
Vol 12 (13) ◽  
pp. 5755-5771 ◽  
Author(s):  
A. Sanchez-Lorenzo ◽  
P. Laux ◽  
H.-J. Hendricks Franssen ◽  
J. Calbó ◽  
S. Vogl ◽  
...  

Abstract. Several studies have claimed to find significant weekly cycles of meteorological variables over large domains, which can hardly be attributed to urban effects alone. Nevertheless, there is still an ongoing scientific debate about whether these large-scale weekly cycles exist, and other studies fail to reproduce them with statistical significance. In addition to the lack of positive proof of the existence of these cycles, their possible physical explanations have been controversially discussed in recent years. In this work we review the main results on this topic published during the past two decades, including a summary of the existence or non-existence of significant weekly weather cycles across different regions of the world, mainly over the US, Europe and Asia. In addition, some shortcomings of common statistical methods for analyzing weekly cycles are listed. Finally, we briefly summarize the proposed causes of the weekly cycles, focusing on aerosol-cloud-radiation interactions driven by the weekly cycle of anthropogenic activities, and outline possible directions for future research.
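The bookkeeping behind the day-of-week analyses discussed above can be sketched briefly: average a daily series within each weekday and examine the spread of the seven means. Real studies additionally need significance tests that respect autocorrelation; this toy (with synthetic data) illustrates only the grouping step:

```python
# Group a daily time series by weekday and compute the seven weekday
# means; their spread is the raw "weekly cycle" signal that the tests
# discussed above then assess for significance. Data are synthetic.

def weekday_means(series):
    """Mean of a daily series for each of the 7 weekdays."""
    sums, counts = [0.0] * 7, [0] * 7
    for t, x in enumerate(series):
        sums[t % 7] += x
        counts[t % 7] += 1
    return [s / c for s, c in zip(sums, counts)]

# Noise-free check: a pure weekly signal is recovered exactly
signal = [float(t % 7) for t in range(7 * 4)]
means = weekday_means(signal)          # [0.0, 1.0, ..., 6.0]
amplitude = max(means) - min(means)    # 6.0: peak-to-peak weekly cycle
```

With noisy real-world data, the amplitude of the weekday means must be compared against what serial correlation and multiple testing alone would produce, which is exactly where the methodological shortcomings listed in the review arise.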

