model failure — Recently Published Documents

TOTAL DOCUMENTS: 64 (five years: 12)
H-INDEX: 12 (five years: 1)

2021 ◽  
Author(s):  
Jeremy Howard ◽  
Kalinda Griffiths ◽  
Rachel Thomas

The Doherty Model is being used in Australia to justify partial reopening once 70% of adults are vaccinated. However, we have identified six critical failures of the model: failure to model uncertainties; failure to use appropriate premises; failure to model subgroup vaccine uptake; failure to correctly model child transmission; failure to include relevant outcomes; and failure to consider longer time frames. These failures result in missing over 200,000 cases of long COVID in children, underestimating death counts by a factor of up to ten, underestimating the severity of the Delta variant by a factor of two, and greatly underestimating the potential downside risk.
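The first of these failures, ignoring parameter uncertainty, is easy to illustrate in general terms. The toy Monte Carlo sketch below (entirely hypothetical numbers, not drawn from the Doherty model) shows how a projection run only at a point estimate of the effective reproduction number can understate the downside tail:

```python
import random

random.seed(0)

def projected_infections(r_eff, generations=10, seed_cases=100):
    """Toy branching projection: cases grow by a factor r_eff per generation."""
    total, cases = 0.0, float(seed_cases)
    for _ in range(generations):
        cases *= r_eff
        total += cases
    return total

# Point estimate only: one run at a central value of R_eff.
point = projected_infections(1.1)

# Uncertainty-aware: sample R_eff from a plausible range instead.
samples = sorted(projected_infections(random.uniform(0.9, 1.3))
                 for _ in range(10_000))
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]

print(f"point estimate: {point:,.0f}")
print(f"median: {median:,.0f}  95th percentile: {p95:,.0f}")
```

Because the projection is convex in R_eff, the upper tail grows much faster than the lower tail shrinks, which is exactly the downside risk a point-estimate-only projection hides.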


2021 ◽  
Author(s):  
Amin Manouchehrian ◽  
Pinnaduwa H.S.W. Kulatilake ◽  
Rui Wu

Abstract: Discontinuities are natural structures that exist in rocks and can affect the stability of rock structures. In this article, the influence of notch presence on the failure evolution around a hole in compressed rock specimens is investigated numerically. First, a uniaxial compressive test on a rock specimen with a circular hole is modeled and the failure evolution in the specimen is simulated. In a separate model, notches are created at the surface of the hole. Results show that when notches are created in the model, the failure zone around the hole is shifted away from the hole surface. In addition, a parametric study is carried out to investigate the influence of notch length and confining pressure on the fracturing behavior of the specimen. The numerical results presented in this article indicate that the presence of notches at the surface of the hole, and their dimensions, can affect the fracturing mechanism of the specimen. In some cases, failure at the boundary of the hole is prevented when notches of certain dimensions are added. The insights gained from this numerical study may help control failure around underground excavations.


2021 ◽  
Vol 15 (3) ◽  
pp. 301-312
Author(s):  
Nobuo Kochi ◽  
Sachiko Isobe ◽  
Atsushi Hayashi ◽  
Kunihiro Kodama ◽  
Takanari Tanabata ◽  
...  

Digital image phenotyping has become popular in plant research. Plants are complex in shape, and occlusion can often occur. Three-dimensional (3D) data are expected to allow measurement of the morphological traits of plants with higher accuracy. Plants have organs with flat and/or narrow shapes, and similar component structures are repeated. Therefore, it is difficult to construct an accurate 3D model by applying methods developed for industrial materials and architecture. Here, we review noncontact, all-around 3D modeling methods and camera-system configurations for measuring the morphological traits of plants in terms of system composition, accuracy, cost, and usability. Typical noncontact 3D measurement methods can be roughly classified into active and passive methods; we describe their advantages and disadvantages. Structure-from-motion/multi-view stereo (SfM/MVS), a passive method, is the most frequently used measurement method for plants. It is described in terms of "forward intersection" and "backward resection." We recently developed a novel SfM/MVS approach that mixes the forward and backward methods, and we provide a brief overview of it in this paper. While various fields are adopting 3D model construction, nonexpert users struggle with these methods and often select inadequate ones, which leads to model failure. We hope that this review will help users who are considering starting to construct and measure 3D models.
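"Forward intersection", triangulating a 3D point from its projections in cameras with known poses, can be sketched with standard linear (DLT) triangulation. This generic illustration uses toy camera matrices and is not the authors' implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) forward intersection: recover a 3D point from its two
    image projections x1, x2 given 3x4 camera matrices P1, P2."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity pose, and a translation of 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(X_est)  # recovers [0.5, 0.2, 4.0] up to numerical precision
```

Backward resection is the dual problem: given known 3D points and their projections, recover the camera pose.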


Metrika ◽  
2021 ◽  
Author(s):  
Berthold-Georg Englert ◽  
Michael Evans ◽  
Gun Ho Jang ◽  
Hui Khoon Ng ◽  
David Nott ◽  
...  

Abstract: Multinomial models can be difficult to use when constraints are placed on the probabilities. An exact model checking procedure for such models is developed based on a uniform prior on the full multinomial model. For inference, a nonuniform prior can be used and a consistency theorem is proved concerning a check for prior-data conflict with the chosen prior. Applications are presented and a new elicitation methodology is developed for multinomial models with ordered probabilities.
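A Monte Carlo version of such a check can be sketched as follows: simulate data from the uniform (Dirichlet(1, ..., 1)) prior on the full multinomial and ask how extreme the observed discrepancy statistic is under that prior predictive. The statistic and counts here are illustrative, not the authors' exact procedure:

```python
import math
import random

random.seed(1)

def dirichlet_uniform(k):
    """Draw p ~ Dirichlet(1, ..., 1), the uniform prior on the simplex."""
    g = [random.expovariate(1.0) for _ in range(k)]
    s = sum(g)
    return [x / s for x in g]

def multinomial_draw(n, p):
    counts = [0] * len(p)
    for _ in range(n):
        u, acc = random.random(), 0.0
        for i, pi in enumerate(p):
            acc += pi
            if u <= acc:
                counts[i] += 1
                break
        else:
            counts[-1] += 1
    return counts

def loglik_max(counts):
    """Multinomial log-likelihood evaluated at the MLE p_i = n_i / n."""
    n = sum(counts)
    return sum(c * math.log(c / n) for c in counts if c > 0)

observed = [5, 15, 80]  # hypothetical counts
n, k = sum(observed), len(observed)

# Distribution of the statistic under the uniform prior predictive.
sims = []
for _ in range(2000):
    p = dirichlet_uniform(k)
    sims.append(loglik_max(multinomial_draw(n, p)))

t_obs = loglik_max(observed)
p_value = sum(t >= t_obs for t in sims) / len(sims)
print(f"prior-predictive tail probability: {p_value:.3f}")
```

A tail probability near zero would signal that the observed data are surprising under the prior, i.e. a prior-data conflict.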


2021 ◽  
Vol 36 (1) ◽  
pp. 21-37
Author(s):  
Christopher A. Kerr ◽  
Louis J. Wicker ◽  
Patrick S. Skinner

Abstract: The Warn-on-Forecast system (WoFS) provides short-term, probabilistic forecasts of severe convective hazards including tornadoes, hail, and damaging winds. WoFS initial conditions are created through frequent assimilation of radar (reflectivity and radial velocity), satellite, and in situ observations. From 2016 to 2018, 5-km radial velocity Cressman superob analyses were created to reduce the observation counts and subsequent assimilation computational costs. The superobbing procedure smooths the radial velocity and subsequently fails to accurately depict important storm-scale features such as mesocyclones. This study retrospectively assimilates denser, 3-km radial velocity analyses in lieu of the 5-km analyses for eight case studies during the spring of 2018. Although there are forecast improvements during and shortly after convection initiation, 3-km analyses negatively impact forecasts initialized when convection is ongoing, as evidenced by model failure and initiation of spurious convection. Therefore, two additional experiments are performed using adaptive assimilation of 3-km radial velocity observations. Initially, an updraft variance mask is applied that limits radial velocity assimilation to areas where the observations are more likely to be beneficial. This experiment reduces spurious convection as well as the number of observations assimilated, in some cases even below that of the 5-km analysis experiments. The masking, however, eliminates an advantage of 3-km radial velocity assimilation for convection initiation timing. This problem is mitigated by additionally assimilating 3-km radial velocity observations in locations where large differences exist between the observed and ensemble-mean reflectivity fields, which retains the benefits of the denser radial velocity analyses while reducing the number of observations assimilated.
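The Cressman superobbing idea, averaging dense radar observations onto coarser analysis points with a distance-dependent weight, can be sketched as follows (toy coordinates and influence radius, not the WoFS code):

```python
import math

def cressman_weight(r, R):
    """Classical Cressman weight: (R^2 - r^2) / (R^2 + r^2) inside the
    influence radius R, zero outside."""
    return (R**2 - r**2) / (R**2 + r**2) if r < R else 0.0

def superob(obs, point, R):
    """Distance-weighted average of (x, y, value) observations onto one
    analysis point -- the 'superobbing' that thins dense radar data."""
    num = den = 0.0
    for x, y, v in obs:
        w = cressman_weight(math.hypot(x - point[0], y - point[1]), R)
        num += w * v
        den += w
    return num / den if den > 0.0 else None

# Toy radial velocities (km coordinates, m/s values) near one analysis point.
obs = [(0.5, 0.0, 12.0), (1.0, 1.0, 15.0), (4.0, 3.0, 30.0)]
v_super = superob(obs, (0.0, 0.0), R=2.5)
print(f"superob value: {v_super:.2f} m/s")  # the distant 30 m/s ob is excluded
```

This weighted averaging is what smooths the field: a mesocyclone-scale velocity couplet inside the influence radius is blended away, which is the deficiency that motivates the denser 3-km analyses.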


2021 ◽  
Author(s):  
◽  
Ivan Banović

The problem under consideration is the earthquake impact on structures. The subject of the performed research is the efficiency of seismic base isolation using layers of predominantly natural materials below the foundation, as well as the development of a numerical model for the seismic analysis of structures with such isolation. The aseismic layers below the foundation are made of limestone sand (ASL-1), stone pebbles (ASL-2), and stone pebbles combined with layers of geogrid and geomembrane (ASL-3). The experimental research methodology is based on the use of a shake-table and other modern equipment for dynamic and static testing of structures. Experiments were conducted on the basis of a detailed research plan and program.
The efficiency of the limestone sand layer (ASL-1) was tested on cantilever concrete columns under seismic excitations up to failure, varying the sand thickness and the intensity of seismic excitation. The influence of several layer parameters on the efficiency of the stone pebble layer (ASL-2) was investigated. For each considered layer parameter, a rigid model M0 was exposed to four different accelerograms, with three levels of peak ground acceleration (0.2 g, 0.4 g, and 0.6 g), while all other layer parameters were kept constant. On the basis of the test results, the optimal pebble layer was adopted. Afterwards, the efficiency of the optimal ASL-2 was tested for various model parameters: stiffness (deformable models M1-M4), foundation size (small and large), excitation type (four earthquake accelerograms), and stress level in the model (elastic and up to failure). In the ASL-3 composite aseismic layer, the optimal ASL-2 is combined with a thin additional layer of sliding material (geogrid, geomembrane above the limestone sand layer) in order to achieve greater efficiency than that of the ASL-2 alone. A total of eleven different aseismic layers were considered. To determine the optimal ASL-3, the M0 model was used, as for the ASL-2. On the basis of the test results, the optimal ASL-3 layer was adopted (one higher-strength geogrid at the top of the pebble layer). The optimal ASL-3 was tested for various model parameters, analogously to the optimal ASL-2.
A numerical model for the reliable seismic analysis of concrete, steel, and masonry structures with seismic base isolation using ASL-2 was developed, with an innovative constitutive model for the seismic isolation. The model can simulate the main nonlinear effects of the mentioned materials and was verified against the performed experimental tests. In relation to the rigid base (RB) without seismic isolation, the model based on ASL-1 showed an average reduction in seismic force and strain/stress of approximately 10% at lower PGA levels and approximately 14% at model failure. Due to the effect of sand calcification over time, the long-term seismic efficiency of such a layer is questionable. It was concluded that the aseismic layers ASL-2 and ASL-3 are not suitable for the medium-stiff structure model M3 and the soft structure model M4. In relation to the RB without seismic isolation, M1 (very stiff structure) and M2 (stiff structure) based on ASL-2 showed an average reduction in seismic force and strain/stress of approximately 13% at lower PGA levels and approximately 25% at model failure; based on ASL-3, the corresponding reductions were approximately 25% and 34%. The ASL-2 and ASL-3 did not result in major M1 and M2 model displacements, which is also favourable. It is concluded that ASL-2, and especially ASL-3, have great potential for the seismic base isolation of very stiff and stiff structures, as well as small bridges founded on solid ground, but further research is needed. The developed numerical model also has great potential for practical application. Finally, further verification of the numerical model against the results of other experimental tests is needed, as well as improvement of the developed constitutive models.


Author(s):  
Marco Bozzano ◽  
Alessandro Cimatti ◽  
Anthony Fernandes Pires ◽  
Alberto Griggio ◽  
Martin Jonáš ◽  
...  

Abstract: The process of developing civil aircraft and their related systems includes multiple phases of Preliminary Safety Assessment (PSA). An objective of PSA is to link the classification of failure conditions and effects (produced in the functional hazard analysis phases) to appropriate safety requirements for elements in the aircraft architecture. A complete and correct preliminary safety assessment phase avoids potentially costly revisions to the design late in the design process. Hence, automated ways to support PSA are an important challenge in modern aircraft design. A modern approach to conducting PSAs is via the use of abstract propagation models, which are essentially hypergraphs whose arcs model the dependencies among components, e.g. how the degradation of one component may lead to the degraded or failed operation of another. Such models are used for computing failure propagations: the fault of a component may have multiple ramifications within the system, causing the malfunction of several interconnected components. A central aspect of this problem is identifying the minimal fault combinations, also referred to as minimal cut sets, that cause overall failures. In this paper we propose an expressive framework to model failure propagation, catering for multiple levels of degradation as well as cyclic and nondeterministic dependencies. We define a formal sequential semantics and present an efficient SMT-based method for the analysis of failure propagation, able to enumerate cut sets that are minimal with respect to the order between levels of degradation. In contrast with the state of the art, the proposed approach is provably more expressive and dramatically outperforms other systems when a comparison is possible.
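On a toy propagation model, minimal cut sets can be enumerated by brute force, which conveys the problem that the paper's SMT-based method solves at scale (the component names and dependency rules below are invented for illustration):

```python
from itertools import combinations

# Toy propagation model: a component fails if its fault is injected directly
# or if its dependency rule fires ('or' / 'and' over upstream components).
deps = {
    "power_a": None,
    "power_b": None,
    "bus":     ("and", ["power_a", "power_b"]),  # redundant supplies
    "sensor":  ("or",  ["bus"]),
    "display": ("or",  ["bus", "sensor"]),
}

def propagate(faults):
    """Fixpoint of failure propagation from an injected fault set."""
    failed, changed = set(faults), True
    while changed:
        changed = False
        for comp, rule in deps.items():
            if comp in failed or rule is None:
                continue
            kind, ups = rule
            fires = (all if kind == "and" else any)(u in failed for u in ups)
            if fires:
                failed.add(comp)
                changed = True
    return failed

def minimal_cut_sets(top="display"):
    """Enumerate fault combinations by size, keeping only minimal ones."""
    basics = [c for c, r in deps.items() if r is None]
    cuts = []
    for k in range(1, len(basics) + 1):
        for combo in combinations(basics, k):
            if top in propagate(combo) and not any(set(c) <= set(combo) for c in cuts):
                cuts.append(combo)
    return cuts

cuts = minimal_cut_sets()
print(cuts)  # [('power_a', 'power_b')] -- both supplies must fail
```

Brute force is exponential in the number of basic faults; the framework in the paper additionally handles degradation levels, cycles, and nondeterminism, which this sketch does not.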


2020 ◽  
Vol 35 (8) ◽  
pp. 733-742 ◽  
Author(s):  
Vincent Chin ◽  
Noelle I. Samia ◽  
Roman Marchant ◽  
Ori Rosen ◽  
John P. A. Ioannidis ◽  
...  

2020 ◽  
Vol 7 (1) ◽  
pp. 337-360
Author(s):  
Jiming Jiang ◽  
J. Sunil Rao

A small area typically refers to a subpopulation or domain of interest for which a reliable direct estimate, based only on the domain-specific sample, cannot be produced due to small sample size in the domain. While traditional small area methods and models are widely used nowadays, there has also been much work and interest in robust statistical inference for small area estimation (SAE). We survey this work and provide a comprehensive review here. We begin with a brief review of the traditional SAE methods. We then discuss SAE methods that are developed under weaker assumptions and SAE methods that are robust in certain ways, such as in terms of outliers or model failure. Our discussion also includes topics such as nonparametric SAE methods, Bayesian approaches, model selection and diagnostics, and missing data. A brief review of software packages available for implementing robust SAE methods is also given.
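A classic area-level idea underlying much of SAE is the composite (Fay-Herriot-style) estimator, which shrinks a noisy direct estimate toward a model-based synthetic one; the sketch below uses invented numbers purely for illustration:

```python
def composite_estimate(direct, synthetic, var_direct, var_model):
    """Area-level composite (Fay-Herriot-style) estimate: shrink the noisy
    direct estimate toward the model-based synthetic one. The weight gamma
    grows as the direct estimate becomes more reliable."""
    gamma = var_model / (var_model + var_direct)
    return gamma * direct + (1 - gamma) * synthetic

var_model = 9.0  # assumed between-area model variance

# Arguments: direct estimate, synthetic estimate, sampling variance of the direct.
est_large = composite_estimate(52.0, 48.0, 1.0, var_model)    # big sample
est_small = composite_estimate(70.0, 48.0, 100.0, var_model)  # tiny sample

print(f"large-sample area: {est_large:.1f}")  # stays near the direct 52.0
print(f"small-sample area: {est_small:.1f}")  # shrunk toward the synthetic 48.0
```

Robust SAE methods of the kind surveyed here change how the shrinkage weight, the model variance, or the synthetic component is estimated so that outliers or model failure do not corrupt the combined estimate.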

