Experimental Data for the Validation of Numerical Methods: DrivAer Model

Fluids, 2020, Vol 5 (4), pp. 236
Author(s): Max Varney, Martin Passmore, Felix Wittmeier, Timo Kuthada

As the automotive industry strives to increase the amount of digital engineering in the product development process, cut costs and improve time to market, the need for high-quality validation data has become a pressing requirement. While there is a substantial body of experimental work published in the literature, it is rarely accompanied by access to the data and a sufficient description of the test conditions for a high-quality validation study. This paper addresses this by reporting on a comprehensive series of measurements for a 25% scale model of the DrivAer automotive test case. The paper reports the forces and moments, pressures and off-body PIV measurements for three rear-end body configurations, and summarises and compares the results. A detailed description of the test conditions and wind tunnel setup is included, along with access to the full data set.

2021
Author(s): William David Day

Abstract. As pressure ratios and firing temperatures continue to rise, creep becomes of greater concern everywhere within a gas turbine engine. As a rule of thumb, just a 14°C increase in metal temperature can halve the expected rupture life of a part. In the past, companies might have been satisfied with conservative creep estimates based on Larson-Miller-Parameter curves and 1D calculations. Now companies need functional implicit-creep models with finite element analysis for an ever-increasing number of materials. Obtaining adequate test data to create a good creep prediction model is an expensive and time-consuming proposition. Test costs depend on temperature, material, and location, but a single 10,000 hr rupture test may reasonably be expected to cost more than $20,000. Unlike large OEMs, small companies and individuals lack the resources to create creep models from their own data. This paper leads the reader through the creation of a modified theta projection creep model of Haynes 282, a high-temperature combustion alloy, using only literature data. First, literature data is collected and reviewed. The data consist of very few complete curves, estimated stresses for rupture and 1% strain, and discrete times to individual strains for individual tests. When adequate data exist, individual tests are fit to theta projection model curves. These “local” theta fits of different test conditions are used as input for the global model. Global fits of the theta parameters, as a function of stress and temperature, are made from the full data set. As the global creep model is improved, correction factors are introduced to account for true stress and strain effects. A statistical analysis is made of actual rupture time versus predicted onset-of-failure time (theta5 = 1). A time-based scatter factor is determined to evaluate the temperature margin required to ensure reliability. After the creep model was completed, Haynes International, the material inventor, provided specific test conditions (stress and temperature) of five tests that had already been run. Creep predictions were generated for these test conditions before viewing the actual results. The creep model's predicted strain curves matched the actual tests very well, both in shape and time to rupture. Continued refinement is possible as more data are acquired.
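
Since the abstract walks through fitting individual tests to theta projection curves before building a global stress-temperature model, a minimal sketch of such a "local" fit may be helpful. It uses the classical four-parameter theta projection rather than the author's modified form, and the creep curve below is hypothetical.

```python
# A minimal sketch (not the author's modified model): fit the classical
# four-parameter theta projection to a single digitized creep curve with SciPy.
import numpy as np
from scipy.optimize import curve_fit

def theta_projection(t, th1, th2, th3, th4):
    """Classical theta projection: primary (decaying) + tertiary (accelerating) terms."""
    return th1 * (1.0 - np.exp(-th2 * t)) + th3 * (np.exp(th4 * t) - 1.0)

# Hypothetical digitized creep curve: time in hours, strain as a fraction.
t_hr = np.array([0, 50, 100, 200, 400, 800, 1200, 1600, 2000.0])
strain = np.array([0.0, 0.004, 0.006, 0.008, 0.010, 0.014, 0.019, 0.027, 0.042])

# Initial guesses and bounds keep the exponential term from overflowing during the fit.
p0 = [0.005, 0.02, 0.001, 0.002]
popt, _ = curve_fit(theta_projection, t_hr, strain, p0=p0,
                    bounds=(0, [0.05, 1.0, 0.05, 0.01]), maxfev=20000)
print("local theta fit:", dict(zip(["theta1", "theta2", "theta3", "theta4"], popt)))

# A "global" model would then regress the fitted theta parameters (typically their
# logarithms) against stress and temperature across many such local fits.
```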


Ocean Science, 2019, Vol 15 (2), pp. 291-305
Author(s): Luc Vandenbulcke, Alexander Barth

Abstract. Traditionally, in order for lower-resolution, global- or basin-scale (regional) models to benefit from some of the improvements available in higher-resolution subregional or coastal models, two-way nesting has to be used. This implies that the parent and child models have to be run together and there is an online exchange of information between both models. This approach is often impossible in operational systems where different model codes are run by different institutions, often in different countries. Therefore, in practice, these systems use one-way nesting with data transfer only from the parent model to the child models. In this article, it is examined whether it is possible to replace the missing feedback (coming from the child model) by data assimilation, avoiding the need to run the models simultaneously. Selected variables from the high-resolution simulation will be used as pseudo-observations and assimilated into the low-resolution models. This method will be called “upscaling”. A realistic test case is set up with a model covering the Mediterranean Sea, and a nested model covering its north-western basin. Under the hypothesis that the nested model has better prediction skills than the parent model, the upscaling method is implemented. Two simulations of the parent model are then compared: the case of one-way nesting (or a stand-alone model) and a simulation using the upscaling technique on the temperature and salinity variables. It is shown that the representation of some processes, such as the Rhône River plume, is strongly improved in the upscaled model compared to the stand-alone model.
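
The upscaling idea can be illustrated with a toy analysis step: coarse-grain a field from the child model onto the parent grid and use it as pseudo-observations in a simple scalar-gain update. The sketch below is only an illustration under that simplification, not the assimilation scheme used in the paper, and the fields are random placeholders.

```python
# Toy "upscaling" step: block-average a high-resolution temperature field onto
# the coarse grid and nudge the coarse background toward these pseudo-observations.
import numpy as np

def block_average(fine, factor):
    """Average a 2-D high-resolution field over factor x factor blocks."""
    ny, nx = fine.shape
    return fine.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
coarse_background = 15.0 + rng.normal(0.0, 0.5, size=(20, 20))   # parent model SST (degC)
fine_field = 15.5 + rng.normal(0.0, 0.3, size=(80, 80))          # child model SST (degC)

pseudo_obs = block_average(fine_field, factor=4)   # pseudo-observations on the coarse grid
gain = 0.5                                         # scalar gain weighting obs vs background
analysis = coarse_background + gain * (pseudo_obs - coarse_background)

print("mean increment applied to the parent model:",
      float(np.mean(analysis - coarse_background)))
```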


2018
Author(s): Luc Vandenbulcke, Alexander Barth

Abstract. Traditionally, in order for lower-resolution, global- or basin-scale models to benefit from some of the improvements available in higher-resolution regional or coastal models, two-way nesting has to be used. This implies that the parent and child models have to be run together and there is an online exchange of information between both models. This approach is often impossible in operational systems, where different model codes are run by different institutions, often in different countries. Therefore, in practice, these systems use one-way nesting with data transfer only from the large-scale model to the regional models. In this article, it is examined whether it is possible to replace the missing model feedback by data assimilation, avoiding the need to run the models simultaneously. Selected variables from the high-resolution forecasts will be used as pseudo-observations and assimilated into the lower-resolution models. The method will be called upscaling. A realistic test case is set up with a model covering the Mediterranean Sea, and a nested model covering its north-western basin. A simulation using only the basin-scale model is compared with a simulation where both models are run using one-way nesting and the upscaling technique is applied to the temperature and salinity variables. It is shown that the representation of some processes, such as the Rhône River plume, is strongly improved in the upscaled model compared to the stand-alone model.


2014, Vol 70 (a1), pp. C695-C695
Author(s): Trixie Wagner, Markus Kroemer, Berthold Grunwald

In the pharmaceutical industry, crystal structures of low molecular weight compounds are analyzed for a variety of reasons: absolute structure determination, proof of constitution, characterization of different polymorphic forms, obtaining three-dimensional models as starting points for the study of structure-activity relationships, etc. Highly redundant, high-resolution data sets are not needed for every purpose; this thought, together with the purchase of a new hybrid pixel detector that can be operated in a very fast shutterless mode, prompted us to test how many usable crystal structures we could produce within 24 hours. Our goal was to invest as little effort as possible and to set up an automated process with minimal human intervention but a maximum chance of success, which we defined as reaching a correct final result that provides useful information: Is it the correct compound? Is the sample chiral or racemic? Which crystal would be the best one for a full data collection? Is it a new polymorph? We selected a data collection protocol that yields an interpretable data set up to 1 Å resolution in less than 10 minutes; the diffraction images are indexed and processed using an in-house script spanning the necessary individual XDS steps, followed by space group determination (XPREP) and structure solution/refinement (SHELX). First results and findings of our experiment series will be presented and the adjustable parameters will be discussed. Ideas to adapt and improve the process will be offered.
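
For illustration only, a minimal orchestration sketch in the spirit of the in-house script described above. The assumptions are stated up front: each dataset folder is presumed to already contain a prepared XDS.INP and, after conversion, a SHELX-format "structure" .hkl/.ins pair; the executables xds_par and shelxt are presumed to be on the PATH; XPREP is interactive and the refinement step is omitted. This is not the authors' script.

```python
# Hypothetical batch driver: index/integrate with XDS, then attempt structure
# solution with SHELXT, for every dataset folder under a "datasets" directory.
import subprocess
from pathlib import Path

def process_dataset(folder: Path) -> bool:
    """Return True if both processing steps finish without an error code."""
    try:
        subprocess.run(["xds_par"], cwd=folder, check=True)              # reads XDS.INP in the folder
        subprocess.run(["shelxt", "structure"], cwd=folder, check=True)  # reads structure.hkl/.ins
        return True
    except (subprocess.CalledProcessError, FileNotFoundError):
        return False

if __name__ == "__main__":
    datasets = [d for d in sorted(Path("datasets").iterdir()) if d.is_dir()]  # hypothetical layout
    print({d.name: process_dataset(d) for d in datasets})
```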


2010, Vol 298 (2), pp. E229-E236
Author(s): Pooja Singal, Ranganath Muniyappa, Robin Chisholm, Gail Hall, Hui Chen, ...

After a constant insulin infusion is initiated, determination of steady-state conditions for glucose infusion rates (GIR) typically requires ≥3 h. The glucose infusion follows a simple time-dependent rise, reaching a plateau at steady state. We hypothesized that nonlinear fitting of abbreviated data sets consisting of only the early portion of the clamp study can provide accurate estimates of steady-state GIR. Data sets from two independent laboratories were used to develop and validate this approach. Accuracy of the predicted steady-state GDR was assessed using regression analysis and Altman-Bland plots, and precision was compared by applying a calibration model. In the development data set (n = 88 glucose clamp studies), fitting the full data set with a simple monoexponential model predicted reference GDR values with good accuracy (difference between the 2 methods −0.37 mg·kg⁻¹·min⁻¹) and precision [root mean square error (RMSE) = 1.11], validating the modeling procedure. Fitting data from the first 180 or 120 min predicted final GDRs with comparable accuracy but with progressively reduced precision [fitGDR-180 RMSE = 1.27 (P = NS vs. fitGDR-full); fitGDR-120 RMSE = 1.56 (P < 0.001)]. Similar results were obtained with the validation data set (n = 183 glucose clamp studies), confirming the generalizability of this approach. The modeling approach also derives kinetic parameters that are not available from standard approaches to clamp data analysis. We conclude that fitting a monoexponential curve to abbreviated clamp data produces steady-state GDR values that accurately predict the GDR values obtained from the full data sets, albeit with reduced precision. This approach may help reduce the resources required for undertaking clamp studies.
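
A minimal sketch of the fitting idea, using synthetic data rather than clamp measurements: fit a monoexponential rise GIR(t) = GIRss·(1 − exp(−k·t)) to only the first 120 min and read the fitted plateau off as the steady-state estimate.

```python
# Hypothetical abbreviated-clamp fit: estimate the steady-state GIR from the
# first 120 min of (synthetic) glucose infusion rate data.
import numpy as np
from scipy.optimize import curve_fit

def monoexp(t, gir_ss, k):
    """Monoexponential approach to a steady-state glucose infusion rate."""
    return gir_ss * (1.0 - np.exp(-k * t))

t_min = np.arange(0, 125, 5, dtype=float)                                   # first 120 min, 5-min bins
rng = np.random.default_rng(1)
gir = monoexp(t_min, gir_ss=8.0, k=0.02) + rng.normal(0, 0.3, t_min.size)   # synthetic GIR values

(gir_ss_hat, k_hat), _ = curve_fit(monoexp, t_min, gir, p0=[gir.max(), 0.01])
print(f"predicted steady-state GIR: {gir_ss_hat:.2f} mg/kg/min (k = {k_hat:.3f} per min)")
```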


2020
Author(s): Wei Yang, Jacob Schreiber, Jeffrey Bilmes, William Stafford Noble

Abstract. Analyzing and sharing massive single-cell RNA-seq data sets can be facilitated by creating a “sketch” of the data—a selected subset of cells that accurately represents the full data set. Using an existing benchmark, we demonstrate the utility of submodular optimization in efficiently creating high-quality sketches of scRNA-seq data.
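
To make the submodular-sketching idea concrete, here is a toy stand-in (not the authors' tool): greedy facility-location maximization over a similarity matrix, with random vectors in place of an scRNA-seq expression matrix.

```python
# Greedy facility-location sketch: pick k "cells" that maximize
# sum_i max_{j in sketch} sim(i, j) over a non-negative similarity matrix.
import numpy as np

def facility_location_sketch(X, k):
    """Greedily select k row indices of X under the facility-location objective."""
    sim = np.maximum(X @ X.T, 0.0)                 # cosine-like similarity, clipped to be non-negative
    n = sim.shape[0]
    best = np.zeros(n)                             # best similarity of each cell to the current sketch
    chosen = []
    for _ in range(k):
        # Marginal gain of adding each candidate column j to the sketch.
        gains = np.maximum(sim, best[:, None]).sum(axis=0) - best.sum()
        j = int(np.argmax(gains))
        chosen.append(j)
        best = np.maximum(best, sim[:, j])
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))                     # 500 "cells" x 30 latent dimensions (synthetic)
X /= np.linalg.norm(X, axis=1, keepdims=True)
print("sketch indices:", facility_location_sketch(X, k=10))
```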


2015, Vol 6 (2), pp. 253-274
Author(s): Vered Noam

The rabbinic halakhic system, with its many facets and the literary works that comprise it, reflects a new Jewish culture, almost completely distinct in its halakhic content and scope from the biblical and postbiblical culture that preceded it. By examining Jewish legislation in the area of corpse impurity as a test case, the article studies the implications of Qumranic halakhah, as a way-station between the Bible and the Mishnah, for understanding how Tannaitic halakhah developed. The impression obtained from the material reviewed in the article is that the direction of the “Tannaitic revolution” was charted, its methods set up, and its principles established, at a surprisingly early stage, before the destruction of the Second Temple, and thus at the same time that the Qumran literature was created.


BMJ Open, 2021, Vol 11 (1), pp. e040778
Author(s): Vineet Kumar Kamal, Ravindra Mohan Pandey, Deepak Agrawal

Objective: To develop and validate a simple risk score chart to estimate the probability of poor outcomes in patients with severe head injury (HI).
Design: Retrospective.
Setting: Level-1, government-funded trauma centre, India.
Participants: Patients with severe HI admitted to the neurosurgery intensive care unit during 19 May 2010–31 December 2011 (n=946) for model development and, further, data from the same centre with the same inclusion criteria from 1 January 2012 to 31 July 2012 (n=284) for external validation of the model.
Outcome(s): In-hospital mortality and unfavourable outcome at 6 months.
Results: A total of 39.5% and 70.7% of patients had in-hospital mortality and unfavourable outcome, respectively, in the development data set. Multivariable logistic regression analysis of routinely collected admission characteristics revealed that, for in-hospital mortality, age (51–60, >60 years), motor score (1, 2, 4), pupillary reactivity (none), presence of hypotension, effaced basal cisterns and traumatic subarachnoid haemorrhage/intraventricular haematoma, and, for unfavourable outcome, age (41–50, 51–60, >60 years), motor score (1–4), pupillary reactivity (none, one), unequal limb movement and presence of hypotension were independent predictors, as the 95% confidence intervals (CI) of their odds ratios (OR) did not contain one. The discriminative ability (area under the receiver operating characteristic curve (95% CI)) of the score chart for in-hospital mortality and the 6-month outcome was excellent in the development data set (0.890 (0.867 to 0.912) and 0.894 (0.869 to 0.918), respectively), the internal validation data set using the bootstrap resampling method (0.889 (0.867 to 0.909) and 0.893 (0.867 to 0.915), respectively) and the external validation data set (0.871 (0.825 to 0.916) and 0.887 (0.842 to 0.932), respectively). Calibration showed good agreement between observed outcome rates and predicted risks in the development and external validation data sets (p>0.05).
Conclusion: These score charts can be used for clinical decision making to predict outcomes in new patients with severe HI in India and similar settings.
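
The general workflow behind such a score chart can be sketched as follows, with synthetic data and hypothetical predictor names (this is not the paper's fitted model): fit a multivariable logistic regression on admission characteristics, check discrimination with the AUC, and convert the coefficients into integer points.

```python
# Toy risk-score construction from a logistic regression on binary predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 946
X = np.column_stack([
    rng.integers(0, 2, n),    # age > 60 (yes/no)          -- hypothetical predictor
    rng.integers(0, 2, n),    # motor score 1-2 (yes/no)   -- hypothetical predictor
    rng.integers(0, 2, n),    # non-reactive pupils (yes/no)
    rng.integers(0, 2, n),    # hypotension (yes/no)
])
logit = -1.5 + X @ np.array([0.9, 1.2, 1.0, 0.7])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))          # synthetic in-hospital mortality

model = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

# Score-chart points: scale each coefficient by the smallest coefficient and round.
points = np.round(model.coef_[0] / np.abs(model.coef_[0]).min()).astype(int)
print("apparent AUC:", round(auc, 3), "| points per predictor:", points.tolist())
```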


Processes, 2021, Vol 9 (7), pp. 1178
Author(s): Zhenhua Wang, Beike Zhang, Dong Gao

In the field of chemical safety, a named entity recognition (NER) model based on deep learning can mine valuable information from hazard and operability analysis (HAZOP) text, which can guide experts in carrying out a new round of HAZOP analysis, help practitioners address the hidden dangers in the system, and is of great significance for improving the safety of the whole chemical system. However, due to the standardized and highly specialized nature of chemical safety analysis text, it is difficult to improve the performance of traditional models. To solve this problem, an improved method based on active learning is proposed in this study, and three novel sampling algorithms are designed: Variation of Token Entropy (VTE), HAZOP Confusion Entropy (HCE) and Amplification of Least Confidence (ALC), which improve the ability of the model to understand HAZOP text. In this method, a part of the data is used to establish the initial model. The sampling algorithm is then used to select high-quality samples from the data set. Finally, these high-quality samples are used to retrain the whole model to obtain the final model. The experimental results show that the VTE, HCE and ALC algorithms perform better than random sampling. In addition, compared with other methods, the method proposed in this paper effectively improves the performance of the traditional model, demonstrating that the method is reliable and effective.
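
A minimal sketch of the generic active-learning selection step described above, using plain token-entropy uncertainty sampling as a stand-in; the paper's VTE, HCE and ALC algorithms are not reproduced here. Each unlabeled sentence is scored by the mean entropy of its per-token label distributions, and the most uncertain sentences are queued for labeling and retraining.

```python
# Generic uncertainty sampling for sequence labeling (illustrative only).
import numpy as np

def sentence_uncertainty(token_probs: np.ndarray) -> float:
    """Mean per-token entropy of an (n_tokens, n_labels) probability matrix."""
    eps = 1e-12
    return float(-(token_probs * np.log(token_probs + eps)).sum(axis=1).mean())

def select_for_labeling(pool_probs, k):
    """Pick the k most uncertain sentences from the unlabeled pool."""
    scores = [sentence_uncertainty(p) for p in pool_probs]
    return np.argsort(scores)[::-1][:k].tolist()

# Hypothetical model outputs for a pool of 3 unlabeled sentences (5 BIO-style labels).
rng = np.random.default_rng(0)
pool_probs = [rng.dirichlet(np.ones(5), size=int(rng.integers(5, 15))) for _ in range(3)]
print("sentences to label next:", select_for_labeling(pool_probs, k=2))
# The selected sentences would then be annotated, added to the training set,
# and the NER model retrained to obtain the final model.
```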

