empirical procedure
Recently Published Documents


TOTAL DOCUMENTS: 96 (five years: 25)
H-INDEX: 16 (five years: 3)

2021, Vol. 5 (4), pp. 105
Author(s): Jan Kalich, Uwe Füssel

Multi-material design and the adaptability of a modern process chain require joints with specifically adjustable mechanical, thermal, chemical, or electrical properties. Previous work has focused primarily on the mechanical properties. The multitude of possible combinations of requirements, materials, and component and joining geometry makes a purely empirical determination of these joint properties for the clinching process impossible. Based on the established empirical procedure, there is currently no model that addresses all questions of joinability—the materials (suitability for joining), the design (security of joining), and the production (joining possibility)—and allows the achievable properties to be calculated. It is therefore necessary to describe the physical properties of the joint as a function of the three binding mechanisms—form closure, force closure, and material closure—in relation to the application. This approach illustrates the relationships along the causal chain "joint requirement – binding mechanism – joining parameters" and improves the adaptability of mechanical joining technology. Geometrical properties of aluminum–steel clinch connections are compared in metallographic cross-sections. The mechanical stress state of the rotationally symmetric clinch points is characterized with a torsion test and by measuring the electrical resistance in the base material, in the clinch joint, and during the production cycle (after clinching, before precipitation hardening, and after precipitation hardening).


Author(s): Vo Hoang Nguyen, Nguyen Huu Bao, Huynh Dinh Chuong, Nguyen Duy Thong, Tran Thien Thanh, ...

2021
Author(s): Ryan S Paquin, Vanessa Boudewyns, Kevin R Betts, Mihaela Johnson, Amie C O’Donoghue, ...

Abstract Although misleading health information is not a new phenomenon, no standards exist to assess consumers’ ability to detect and subsequently reject misinformation. Part of this deficit reflects theoretical and measurement challenges. After drawing novel connections among legal, regulatory, and philosophical perspectives on false, misleading or deceptive advertising and cognitive-process models of persuasive communication, we define deception and misinformation rejection. Recognizing that individuals can hold beliefs that align with a persuasive message without those beliefs having been influenced by it, we derive empirical criteria to test for evidence of these constructs that center on yielding or not yielding to misinformation in mediated contexts. We present data from an experimental study to illustrate the proposed test procedure and provide evidence for two theoretically derived patterns indicative of misinformation rejection. The resulting definitions and empirical procedure set the stage for additional theorizing and empirical studies on misinformation in the marketplace.


2021
Author(s): Ali Shahabi, Andrew Lemmon, Brian DeBoi, Troy Beechner, Robert Mayo

2021
Author(s): Mark J. Vanarelli

Abstract The construction of underground excavations and tunnels can only be done safely and economically when the subsurface conditions are adequately understood. Excessive groundwater inflows into rock tunnels under construction can injure personnel and halt construction of the tunnel project. It is therefore important to estimate groundwater inflows into rock tunnels accurately. A semi-empirical procedure for estimating steady-state groundwater inflows into shallow rock tunnels is presented and discussed in this paper. In addition, the paper presents two case study analyses, the Elizabethtown tunnel in New Jersey and the Toledo tunnel in Ohio. Packer test (i.e., pressure test) data were analyzed for both case studies using this semi-empirical procedure. The paper reviews the theory behind the procedure and validates it through the case study analyses. It also describes previously proposed modifications and clarifies whether such modifications are needed. In general, good groundwater inflow estimates were obtained for shallow rock tunnels using this semi-empirical procedure.
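The abstract does not reproduce the semi-empirical procedure itself, so the following is only a rough sketch of the kind of calculation involved: a classic Goodman-type steady-state approximation for shallow tunnels below the water table, with hydraulic conductivity inferred from packer (Lugeon) test data. The function names, the Lugeon conversion factor, and the numbers are illustrative assumptions, not the paper's calibrated method.

```python
import math

def lugeon_to_conductivity(lugeon: float) -> float:
    # Rough rule of thumb: 1 Lugeon is about 1.3e-7 m/s hydraulic conductivity.
    # Packer-test interpretation is site-specific, so treat this as an assumption.
    return 1.3e-7 * lugeon

def steady_inflow_per_metre(k: float, head: float, radius: float) -> float:
    # Goodman-style approximation for a shallow tunnel below the water table:
    #   q = 2*pi*k*H / ln(2*H/r)   [m^3/s per metre of tunnel]
    # k: hydraulic conductivity (m/s), head: H above the tunnel axis (m), radius: r (m).
    return 2.0 * math.pi * k * head / math.log(2.0 * head / radius)

# Illustrative numbers only: 10 Lugeon rock, 30 m of head, 3 m radius tunnel.
k = lugeon_to_conductivity(10.0)
print(f"{steady_inflow_per_metre(k, head=30.0, radius=3.0):.2e} m^3/s per metre")
```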


2021
Author(s): Vincenzo L. Pascali

Abstract Single nucleotide polymorphisms (SNPs) are useful forensic markers. When a SNP-based forensic protocol targets a body fluid stain, it returns elementary evidence regardless of the number of individuals that might have contributed to the stain deposition. Drawing inference from a mixed stain with SNPs therefore differs from drawing it with multinomial polymorphisms. We revisit this subject here, with a view to contributing a fresher insight into it. First, we model conditional semi-continuous likelihoods in terms of matrices of genotype permutations versus number of contributors (NTZsc). Second, we redefine some algebraic formulas to approach the semi-continuous calculation. To address allelic dropouts, we introduce a peak height ratio index ('h', the minor read divided by the major read at any NGS-based typing result) into the semi-continuous formulas, so that they act as an acceptable proxy of the 'split drop' model of calculation (Haned et al., 2012). Third, we introduce a new, empirical method to deduce the expected quantitative ratio at which the contributors of a mixture originally mixed and the observed ratio generated by each genotype combination at each locus. Compliance between observed and expected quantity ratios is measured in terms of (1 − χ²) values at each state of a locus deconvolution. These probability values are multiplied, along with the h index, by the relevant population probabilities to weigh the overall plausibility of each combination from the quantitative perspective. We compare the calculation performance of our empirical procedure (NITZq) with that of the EUROFORMIX software ver. 3.0.3. NITZq generates LR values a few orders of magnitude lower than EUROFORMIX when true contributors are used as POIs, but much lower LR values when false contributors are used as POIs. NITZ calculation routines may be useful, especially in combination with mass genomics typing protocols.
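As a loose illustration of the weighting described above (not the published NITZq routine), the sketch below multiplies a population probability, the peak height ratio index h, and a (1 − χ²) fit between observed and expected mixing ratios for each candidate genotype combination at a locus. All variable names and numbers are invented for the example.

```python
def chi_square(observed: float, expected: float) -> float:
    # One-cell chi-square-style discrepancy between observed and expected
    # minor-contributor fractions (both in [0, 1]).
    return (observed - expected) ** 2 / expected if expected > 0 else 1.0

def locus_weight(combinations, h: float, observed_minor_fraction: float) -> float:
    # Weigh every candidate genotype combination at one locus by
    # population probability x peak-height-ratio index h x quantitative fit (1 - chi^2),
    # then sum the weights. `combinations` holds (pop_prob, expected_minor_fraction) pairs.
    total = 0.0
    for pop_prob, expected_fraction in combinations:
        fit = max(0.0, 1.0 - chi_square(observed_minor_fraction, expected_fraction))
        total += pop_prob * h * fit
    return total

# Illustrative two-person mixture at one locus (made-up numbers).
combos = [(0.10, 0.30), (0.05, 0.50), (0.02, 0.70)]
print(locus_weight(combos, h=0.35, observed_minor_fraction=0.33))
```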


2020, Vol. 642, pp. A197
Author(s): B. Dias, M. C. Parisi

Context. The line strengths of the near-infrared Ca II triplet (CaT) lines are a proxy for measuring metallicity from integrated and individual stellar spectra of bright red giant stars. In the latter case it is a mandatory step to remove the magnitude dependence (magnitude being a proxy for gravity, temperature, and luminosity) from the equivalent width (EW) of the lines before converting them into metallicities. For decades the working empirical procedure has been to use the relative magnitude with respect to the horizontal branch level or red clump, with the advantage that it is independent of distance and extinction. Aims. The V filter is broadly adopted as the reference magnitude, although a few works have used different filters (I and Ks, for example). In this work we investigate the dependence of the CaT calibration on the griz filters from the Dark Energy Camera (DECam) and the Gemini Multi-Object Spectrograph (GMOS), the G filter from Gaia, the BVI filters from the Magellanic Clouds photometric survey (MCPS), and the YJKs filters from the Visible and Infrared Survey Telescope for Astronomy (VISTA) InfraRed CAMera (VIRCAM). We use as a reference the FOcal Reducer and low dispersion Spectrograph 2 (FORS2) V filter used in the original analysis of the sample. Methods. Red giant stars from clusters with known metallicity and available CaT EWs were used as references. Public photometric catalogues were taken from the Survey of the MAgellanic Stellar History (SMASH) second data release, the VISTA survey of the Magellanic Clouds system (VMC), Gaia, and MCPS, plus VIsible Soar photometry of star Clusters in tApi’i and Coxi HuguA (VISCACHA) GMOS data, for a selection of Small Magellanic Cloud clusters. The slopes were fitted using two and three lines so as to be applicable to most of the metallicity scales. Results. The magnitude dependence of the CaT EWs is well described by a linear relation using any filter analysed in this work. The slope increases with the wavelength of the filter. The zero point (i.e. the reduced EW), which is the metallicity indicator, remains the same. Conclusions. If the same line profile function is used with the same bandpasses and continuum regions, and the total EW comes from the same number of lines (two or three), then the reduced EW is the same regardless of the filter used. Therefore, any filter can be used to convert the CaT equivalent widths into metallicity for a given CaT calibration.
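For readers unfamiliar with the "reduced EW", the sketch below shows the standard form of the correction the abstract refers to: the summed CaT equivalent widths plus a slope times the magnitude offset from the red clump or horizontal branch, followed by a linear mapping to [Fe/H]. The slope and calibration coefficients used here are placeholders, not the values fitted in this work.

```python
def reduced_ew(sum_ew: float, mag: float, mag_rc: float, slope: float) -> float:
    # Remove the linear magnitude dependence from the summed CaT equivalent widths:
    #   W' = sum(EW) + slope * (m - m_RC)
    # (m - m_RC) is the magnitude relative to the red clump / horizontal branch,
    # so W' is independent of distance and extinction.
    return sum_ew + slope * (mag - mag_rc)

def feh_from_reduced_ew(w_prime: float, a: float = -3.0, b: float = 0.38) -> float:
    # Linear metallicity calibration [Fe/H] = a + b * W'; the coefficients here
    # are placeholders, not the paper's fitted values.
    return a + b * w_prime

# Example: summed EW of 6.2 Angstrom for a star 0.8 mag brighter than the red
# clump, with an assumed slope of +0.64 Angstrom/mag for the chosen filter.
w_prime = reduced_ew(6.2, mag=17.2, mag_rc=18.0, slope=0.64)
print(f"W' = {w_prime:.2f} A, [Fe/H] ~ {feh_from_reduced_ew(w_prime):.2f}")
```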


2020, Vol. 35 (4), pp. 1545-1560
Author(s): Eugene W. McCaul, Georgios Priftis, Jonathan L. Case, Themis Chronis, Patrick N. Gatlin, ...

Abstract The Lightning Forecasting Algorithm (LFA), a simple empirical procedure that transforms kinematic and microphysical fields from explicit-convection numerical models into mapped fields of estimated total lightning flash origin density, has been incorporated into operational forecast models in recent years. While several changes designed to improve LFA accuracy and reliability have been implemented, the basic linear relationship between model proxy amplitudes and diagnosed total lightning flash rate densities remains unchanged. The LFA has also been added to many models configured with microphysics and boundary layer parameterizations different from those used in the original study, suggesting the need for checks of the LFA calibration factors. To assist users, quantitative comparisons of LFA output for some commonly used model physics choices are performed. Results are reported here from a 12-member ensemble that combines four microphysics schemes with three boundary layer schemes, to provide insight into the extent of LFA output variability. Data from spring 2018 in Nepal–Bangladesh–India show that, across the ensemble of forecasts over the entire three-month period, the LFA peak flash rate densities all fell within a factor of 1.21 of well-calibrated LFA-equipped codes, with most schemes failing to show differences that are statistically significant. Sensitivities of threat areal coverage are, however, larger, suggesting substantial variation in the amounts of ice species produced in storm anvils by the various microphysics schemes. Current explicit-convection operational models in the United States employ schemes that are among those exhibiting the larger biases. For users seeking optimum performance, we present recommended methods for recalibrating the LFA.
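As a minimal sketch of the linear proxy-to-flash-rate relation mentioned above (with made-up field names and constants rather than the operational LFA coefficients), a scheme-dependent calibration factor can be rescaled so that a given configuration's peak flash rate densities match a well-calibrated reference:

```python
import numpy as np

def flash_rate_density(proxy: np.ndarray, calibration: float) -> np.ndarray:
    # LFA-style linear relation F = calibration * proxy, where `proxy` is a
    # gridded model field (e.g. graupel flux in the mixed-phase region) and
    # `calibration` is a scheme-dependent factor.
    return calibration * proxy

def recalibrate(calibration: float, scheme_peak: float, reference_peak: float) -> float:
    # Rescale a scheme's calibration factor so its peak flash rate density
    # matches a well-calibrated reference configuration.
    return calibration * (reference_peak / scheme_peak)

# Example: a scheme whose peaks run 1.21x too high gets its factor reduced accordingly.
cal = recalibrate(calibration=0.042, scheme_peak=1.21, reference_peak=1.00)
proxy_grid = np.array([[0.0, 2.5], [10.0, 4.0]])
print(flash_rate_density(proxy_grid, cal))
```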


2020, Vol. 53 (4), pp. 1154-1162
Author(s): Matthew G. Reeves, Peter A. Wood, Simon Parsons

The interpretation of crystal structures in terms of intermolecular interaction energies enables phase stability and polymorphism to be rationalized in terms of quantitative thermodynamic models, while also providing insight into the origin of physical and chemical properties including solubility, compressibility and host–guest formation. The Pixel method is a semi-empirical procedure for the calculation of intermolecular interactions and lattice energies based only on crystal structure information. Molecules are represented as blocks of undistorted ab initio molecular electron and nuclear densities subdivided into small volume elements called pixels. Electrostatic, polarization, dispersion and Pauli repulsion terms are calculated between pairs of pixels and nuclei in different molecules, with the accumulated sum equating to the intermolecular interaction energy, which is broken down into physically meaningful component terms. The MrPIXEL procedure enables Pixel calculations to be carried out with minimal user intervention from the graphical interface of Mercury, which is part of the software distributed with the Cambridge Structural Database (CSD). Following initial setup of a crystallographic model, one module assigns atom types and writes necessary input files. A second module then submits the required electron-density calculation either locally or to a remote server, downloads the results, and submits the Pixel calculation itself. Full lattice energy calculations can be performed for structures with up to two molecules in the crystallographic asymmetric unit. For more complex cases, only molecule–molecule energies are calculated. The program makes use of the CSD Python API, which is also distributed with the CSD.
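A schematic sketch of the pixel-pair accumulation the Pixel method is built on, written out only for the electrostatic term; the polarization, dispersion, and Pauli repulsion terms, and all MrPIXEL implementation details, are omitted, and the array names and toy charges are assumptions.

```python
import numpy as np

COULOMB_KJ_MOL_ANG = 1389.35  # e^2/(4*pi*eps0) in kJ mol^-1 Angstrom (approximate)

def electrostatic_energy(coords_a: np.ndarray, charges_a: np.ndarray,
                         coords_b: np.ndarray, charges_b: np.ndarray) -> float:
    # Accumulate the Coulomb term between every pixel (or nucleus) of molecule A
    # and every pixel (or nucleus) of molecule B.
    # coords_*: (N, 3) positions in Angstrom; charges_*: (N,) charges in e.
    # Pixel also sums polarization, dispersion and Pauli repulsion contributions
    # pixel by pixel; those are not shown here.
    separations = np.linalg.norm(coords_a[:, None, :] - coords_b[None, :, :], axis=-1)
    return COULOMB_KJ_MOL_ANG * float(np.sum(np.outer(charges_a, charges_b) / separations))

# Toy example: two "molecules" reduced to a handful of charged pixels each.
a_xyz = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
a_q = np.array([-0.2, 0.2])
b_xyz = np.array([[0.0, 0.0, 3.5], [1.0, 0.0, 3.5]])
b_q = np.array([0.1, -0.1])
print(f"{electrostatic_energy(a_xyz, a_q, b_xyz, b_q):.2f} kJ/mol")
```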

