empirical calibration
Recently Published Documents

TOTAL DOCUMENTS: 134 (five years: 23)
H-INDEX: 30 (five years: 5)

Author(s):  
Zarko Y. Kalamov ◽  
Marco Runkel

Abstract: If an individual’s health costs are U-shaped in weight, with a minimum at some healthy level, and if the individual has both self-control problems and rational motives for over- or underweight, the optimal paternalistic tax on calorie intake mitigates the individual’s weight problem (intensive margin) but does not induce the individual to choose healthy weight (extensive margin). Implementing healthy weight by a calorie tax is not only inferior to paternalistic taxation but may even be worse than not taxing the individual at all. With heterogeneous individuals, the optimal uniform paternalistic tax may have the negative side effect of reducing the calorie intake of the under- and normal-weight. We confirm these theoretical insights by an empirical calibration to US adults.
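The intensive/extensive-margin distinction can be illustrated numerically. The sketch below uses invented quadratic functional forms and parameter values, not the authors' model: taste utility a·c − c²/2 over calories c, health cost k·(c − c_h)² that the individual discounts by a self-control factor β < 1, and a per-calorie tax t.

```python
# Toy illustration (all functional forms and numbers invented):
# the optimal paternalistic tax implements the planner's optimum c*,
# but c* still differs from the healthy level c_h.

def chosen_calories(a, k, c_h, beta, t):
    # individual's first-order condition: a - c - 2*beta*k*(c - c_h) - t = 0
    return (a - t + 2 * beta * k * c_h) / (1 + 2 * beta * k)

a, k, c_h, beta = 2500.0, 0.5, 2000.0, 0.4

# planner's optimum: health costs weighted fully (beta = 1)
c_star = (a + 2 * k * c_h) / (1 + 2 * k)

# optimal paternalistic tax = marginal internality at c_star
t_star = 2 * (1 - beta) * k * (c_star - c_h)

c_free = chosen_calories(a, k, c_h, beta, 0.0)
c_taxed = chosen_calories(a, k, c_h, beta, t_star)

print(c_free, c_taxed, c_star, c_h)
# the tax moves intake from c_free to c_star (intensive margin),
# but a rational taste for calories keeps c_star above c_h
```

Because the rational motive (a > c_h) remains, the planner's own optimum is not the healthy weight; forcing c = c_h with a larger tax would overshoot the planner's trade-off, consistent with the abstract's claim.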


2021 ◽  
Author(s):  
Hon Hwang ◽  
Juan C Quiroz ◽  
Blanca Gallego

Abstract
Background: Estimates of causal effects from observational data are subject to various sources of bias. These biases can be adjusted for by using negative control outcomes not affected by the treatment. The empirical calibration procedure uses negative controls to calibrate p-values, and both negative and positive controls to calibrate the coverage of the 95% confidence interval of the outcome of interest. Although empirical calibration has been used in several large observational studies, there has been no systematic examination of its effect under different bias scenarios.
Methods: The effect of empirical calibration of confidence intervals was analyzed using simulated datasets with known treatment effects. The simulations covered binary treatment and binary outcome, with simulated biases resulting from an unmeasured confounder, model misspecification, measurement error, and lack of positivity. The performance of empirical calibration was evaluated by the change in confidence interval coverage and in the bias of the outcome of interest.
Results: Empirical calibration increased the 95% confidence interval coverage of the outcome of interest under most settings but was inconsistent in adjusting its bias. Empirical calibration was most effective when adjusting for unmeasured confounding bias. Suitable negative controls had a large impact on the adjustment made by empirical calibration, but small improvements in the coverage of the outcome of interest were also observable when using unsuitable negative controls.
Conclusions: This work adds evidence for the efficacy of empirical calibration of the confidence intervals of treatment effects in observational studies. We recommend empirical calibration of confidence intervals, especially when there is a risk of unmeasured confounding.
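The p-value calibration step that this literature builds on can be sketched as follows. This is a deliberately simplified version (it fits the systematic-error distribution by moments and ignores the sampling uncertainty of the negative-control estimates, which the full method accounts for); all numbers are invented.

```python
import math

def calibrated_p(theta, se, nc_estimates):
    """Calibrate the p-value of a log effect estimate `theta` (with
    standard error `se`) against the empirical null distribution
    implied by negative-control log estimates `nc_estimates`."""
    mu = sum(nc_estimates) / len(nc_estimates)
    sigma2 = sum((t - mu) ** 2 for t in nc_estimates) / len(nc_estimates)
    z = (theta - mu) / math.sqrt(sigma2 + se ** 2)
    return math.erfc(abs(z) / math.sqrt(2))   # two-sided p under N(mu, sigma2 + se^2)

def naive_p(theta, se):
    # conventional two-sided p-value assuming no systematic error
    return math.erfc(abs(theta / se) / math.sqrt(2))

# negative controls clustering around log(HR) ~ 0.19 reveal systematic error
negatives = [0.18, 0.22, 0.15, 0.25, 0.20, 0.10, 0.28, 0.17]
theta, se = math.log(1.5), 0.1
print(naive_p(theta, se), calibrated_p(theta, se, negatives))
```

The naive p-value is far below 0.05, but once the null is recentred and widened to match the negative-control distribution, the same estimate is no longer significant, which is exactly the false-positive protection the abstract evaluates.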


2021 ◽  
Author(s):  
Martijn J Schuemie ◽  
Faaizah Arshad ◽  
Nicole Pratt ◽  
Fredrik Nyberg ◽  
Thamir M Alshammari ◽  
...  

Background: Routinely collected healthcare data, such as administrative claims and electronic health records (EHR), can complement clinical trials and spontaneous reports in ensuring the safety of vaccines, but uncertainty remains about which epidemiological design to use.
Methods: Using 3 claims databases and 1 EHR database, we evaluate several variants of the case-control, comparative cohort, historical comparator, and self-controlled designs against historical vaccinations, with real negative control outcomes (outcomes with no evidence to suggest that they could be caused by the vaccines) and simulated positive controls.
Results: Most methods show large type 1 error, often identifying false positive signals. The cohort method appears either positively or negatively biased, depending on the choice of comparator index date. Empirical calibration using effect-size estimates for negative control outcomes can restore type 1 error to close to nominal, often at the cost of increased type 2 error. After calibration, the self-controlled case series (SCCS) design shows the shortest time to detection for small true effect sizes, while the historical comparator performs well for strong effects.
Conclusions: When applying any method for vaccine safety surveillance, we recommend considering the potential for systematic error, especially due to confounding, which for many designs appears to be substantial. Adjusting for age and sex alone is likely not sufficient to address the differences between the vaccinated and unvaccinated, and for the cohort method the choice of index date plays an important role in the comparability of the groups. Inclusion of negative control outcomes allows both quantification of the systematic error and, if desired, subsequent empirical calibration to restore type 1 error to its nominal value. To detect weaker signals, one may have to accept a higher type 1 error.
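The headline finding, large type 1 error driven by confounding, is easy to reproduce in a toy simulation. The sketch below uses invented parameters (not the study's design variants): a true null vaccine effect, a single unmeasured confounder driving both vaccination and outcome, and a crude odds ratio with a Wald confidence interval.

```python
import numpy as np

rng = np.random.default_rng(0)

def false_positive(n=2000):
    """One replicate: does the crude 95% CI exclude the (true) null
    when an unmeasured confounder u drives both exposure and outcome?"""
    u = rng.random(n) < 0.5
    vaccinated = rng.random(n) < np.where(u, 0.7, 0.3)
    outcome = rng.random(n) < np.where(u, 0.20, 0.05)  # independent of vaccination
    a = np.sum(vaccinated & outcome);  b = np.sum(vaccinated & ~outcome)
    c = np.sum(~vaccinated & outcome); d = np.sum(~vaccinated & ~outcome)
    log_or = np.log((a * d) / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return abs(log_or) > 1.96 * se

type1_error = sum(false_positive() for _ in range(300)) / 300
print(type1_error)   # far above the nominal 0.05
```

Under these (deliberately strong) confounding settings, the uncorrected design rejects the true null in the large majority of replicates, which is the kind of systematic error the negative-control calibration is meant to absorb.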


Hypertension ◽  
2021 ◽  
Vol 77 (5) ◽  
pp. 1528-1538
Author(s):  
Seng Chan You ◽  
Harlan M. Krumholz ◽  
Marc A. Suchard ◽  
Martijn J. Schuemie ◽  
George Hripcsak ◽  
...  

Evidence for the effectiveness and safety of third-generation β-blockers, as opposed to atenolol, in hypertension remains scarce. We assessed the effectiveness and safety of β-blockers as first-line treatment for hypertension using 3 databases in the United States, 2 administrative claims databases and 1 electronic health record–based database, from 2001 to 2018. In each database, the comparative effectiveness of β-blockers for the risks of acute myocardial infarction, stroke, and hospitalization for heart failure was assessed using large-scale propensity adjustment and empirical calibration. Estimates were combined across databases using random-effects meta-analyses. Overall, 118 133 and 267 891 patients initiated third-generation β-blockers (carvedilol and nebivolol) or atenolol, respectively. The pooled hazard ratios (HRs) of acute myocardial infarction, stroke, hospitalization for heart failure, and most metabolic complications did not differ between third-generation β-blockers and atenolol after propensity score matching and empirical calibration (HR, 1.07 [95% CI, 0.74–1.55] for acute myocardial infarction; HR, 1.06 [95% CI, 0.87–1.31] for stroke; HR, 1.46 [95% CI, 0.99–2.24] for hospitalized heart failure). Third-generation β-blockers were associated with significantly higher risk of stroke than ACE (angiotensin-converting enzyme) inhibitors (HR, 1.29 [95% CI, 1.03–1.72]) and thiazide diuretics (HR, 1.56 [95% CI, 1.17–2.20]). In conclusion, this study found that many patients receive first-line β-blocker monotherapy for hypertension, and that there were no statistically significant differences in effectiveness and safety between atenolol and third-generation β-blockers. Patients on third-generation β-blockers had a higher risk of stroke than those on ACE inhibitors and thiazide diuretics.
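The cross-database pooling step ("random-effects meta-analyses") can be sketched with a DerSimonian–Laird estimator, one standard way to combine per-database log hazard ratios; the input numbers below are invented, not the study's estimates.

```python
import math

def dersimonian_laird(log_hrs, ses):
    """Pool per-database log hazard ratios with a DerSimonian-Laird
    random-effects model (illustrative, not the paper's exact code)."""
    w = [1 / s ** 2 for s in ses]
    fixed = sum(wi * t for wi, t in zip(w, log_hrs)) / sum(w)
    q = sum(wi * (t - fixed) ** 2 for wi, t in zip(w, log_hrs))
    df = len(log_hrs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-database variance
    w_re = [1 / (s ** 2 + tau2) for s in ses]
    pooled = sum(wi * t for wi, t in zip(w_re, log_hrs)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return pooled, se_pooled

# e.g. three databases, log HRs for one outcome (made-up values)
est, se = dersimonian_laird([0.10, 0.05, 0.25], [0.12, 0.15, 0.20])
lo, hi = est - 1.96 * se, est + 1.96 * se
print(f"pooled HR {math.exp(est):.2f} (95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
```

When between-database heterogeneity (tau²) is estimated as zero, the random-effects pooled estimate collapses to the fixed-effect one; otherwise the databases are down-weighted toward equality.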


2021 ◽  
Vol 27 ◽  
pp. 199
Author(s):  
E. Ntalla ◽  
A. Markopoulos ◽  
K. Karfopoulos ◽  
C. Potiriadis ◽  
A. Clouvas ◽  
...  

In this study, a semi-empirical calibration method for the measurement of NORM samples using a LaBr3(Ce) scintillator was developed, based on a combination of experimental gamma-spectrometry measurements and MCNP-X simulations. The aim of this work is to provide full-energy-peak efficiency calibration curves over a wide photon energy range, which is of particular importance when selected photon energies of 234Th, 214Pb, 214Bi, 228Ac, 208Tl and 226Ra are to be measured accurately.
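A full-energy-peak efficiency curve of the kind the abstract describes is commonly parametrised as a low-order polynomial of ln(efficiency) in ln(energy). The sketch below fits such a curve to invented efficiency points at lines of the nuclides named above; it stands in for, and is not, the authors' MCNP-X-based procedure.

```python
import numpy as np

# invented full-energy-peak efficiencies at characteristic lines (keV)
# of 234Th, 226Ra, 214Pb, 214Bi, 228Ac and 208Tl
energy = np.array([63.3, 186.2, 351.9, 609.3, 911.2, 2614.5])
eff    = np.array([0.060, 0.068, 0.045, 0.030, 0.022, 0.008])

# ln(efficiency) as a cubic polynomial in ln(energy)
coeffs = np.polyfit(np.log(energy), np.log(eff), deg=3)

def efficiency(e_kev):
    """Interpolated full-energy-peak efficiency at e_kev (keV)."""
    return float(np.exp(np.polyval(coeffs, np.log(e_kev))))

print(efficiency(609.3))   # near the tabulated point at the 214Bi line
```

With the curve in hand, an activity is obtained from a peak's net count rate divided by the interpolated efficiency and the emission probability, which is why a reliable fit across the whole 63-2615 keV range matters.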


2021 ◽  
Author(s):  
Edmund Chattoe-Brown ◽  
Simone Gabbriellini

This article argues that the potential of Agent-Based Modelling (the capability for empirical justification of computer programmes representing social processes as dynamically unfolding individual cognition, action and interaction to reveal emerging aggregate outcomes) is not yet fully realised in the scientific study of social networks. By critically analysing several existing studies, it shows why the technique’s distinctive methodology (involving empirical calibration and validation) is just as important to its scientific contribution as its novel technical capabilities. The article shows the advantages of Agent-Based Models following this methodology and distinguishes these clearly from the implications of apparently similar techniques (like actor-based approaches). The article also discusses the limitations of existing Agent-Based Modelling applied to social networks, enabling the approach to make a more effective contribution to Network Science in future.


SAGE Open ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 215824402110271
Author(s):  
Ibrar Hussain ◽  
Jawad Hussain ◽  
Arshad Ali ◽  
Shabir Ahmad

This study claims to be the first to assess the short-run and long-run impacts of both the size and the composition of fiscal adjustment on growth in Pakistan. The empirical calibration builds on Mankiw et al.’s model, while the Autoregressive Distributed Lag (ARDL) technique of Pesaran et al. is employed for estimation. To address the problem of degenerate cases, the ARDL technique is augmented with the model of Sam et al. The analysis supports the hypothesis of “expansionary fiscal contraction” in the long run: spending-based adjustment enhances economic growth, whereas tax-based adjustment reduces growth in the long run in the case of Pakistan. The Granger causality test indicates that the fiscal adjustments are weakly exogenous, allowing a feedback effect from economic growth toward fiscal adjustment. Thus, the objective of sustained economic growth can be achieved through spending-based consolidation measures.
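The long-run effect an ARDL model delivers comes from its level relationship. A minimal ARDL(1,1) fitted by OLS illustrates the mechanics; this is a sketch of the idea only, not the Pesaran et al. bounds-testing framework with lag selection that the study uses, and the simulated series are invented.

```python
import numpy as np

def ardl_long_run(y, x):
    """Fit a minimal ARDL(1,1) by OLS,
        y_t = a + b*y_{t-1} + c0*x_t + c1*x_{t-1} + e_t,
    and return the implied long-run multiplier (c0 + c1) / (1 - b)."""
    Y = y[1:]
    X = np.column_stack([np.ones(len(Y)), y[:-1], x[1:], x[:-1]])
    a, b, c0, c1 = np.linalg.lstsq(X, Y, rcond=None)[0]
    return (c0 + c1) / (1 - b)

# simulate data whose true long-run multiplier is (0.6 + 0.4)/(1 - 0.5) = 2
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=500))          # a trending (I(1)) regressor
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 0.6 * x[t] + 0.4 * x[t - 1] + rng.normal(scale=0.5)

print(ardl_long_run(y, x))   # close to the true long-run multiplier of 2.0
```

In the study's setting, y would be growth and x a fiscal-adjustment measure; the sign of the long-run multiplier is what distinguishes the spending-based from the tax-based adjustment results.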


2021 ◽  
Author(s):  
Sebastian B. Simonsen ◽  
Valentina R. Barletta ◽  
William Colgan ◽  
Louise Sandberg Sørensen

Satellite altimeters have monitored the surface elevation change of the Greenland ice sheet since 1978, with ice-sheet-wide coverage since 1991. The satellite altimeters of interest for Greenland mass balance studies operate at different wavelengths: Ku-band radar, Ka-band radar, infrared laser, and visible laser. Some of these wavelengths can penetrate the surface in snow-covered regions and map the elevation change of subsurface layers. In particular, the longer radar wavelengths can penetrate the upper meters of the snow cover, whereas the infrared laser measurements from ICESat observe the snow-air interface of the ice sheet. The pure surface elevation change derived from ICESat has been widely used in mass balance studies and may provide a benchmark for altimetric mass balance estimates once corrected for changes in firn-air content. The Ku-band radar observations provide the longest time series of ice sheet volume change, but the record is more difficult to convert into mass balance due to climate-induced variations in surface penetration.

Here, we apply machine learning to build an empirical calibration method for converting the observed radar-derived volume change into mass balance. We train the machine learning model on the limited period of coinciding laser and radar satellite altimetry data (2003-2009). The radar and laser datasets are not sufficient to guide the empirical calibration alone; hence, additional datasets, such as ice velocity, are used to help build a stable predictor for the radar calibration.

We focus on the lessons learned from this machine learning approach, but also highlight results from the resulting 28-year-long time series of Greenland ice sheet mass balance. For example, the Greenland Ice Sheet contribution to global sea-level rise has been 12.1±2.3 mm since 1992, with more than 80% of this originating after 2003.
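The calibration idea, learning to map radar-derived elevation change onto the laser benchmark during the overlap period with auxiliary predictors such as ice velocity, can be sketched with synthetic data. Everything below is invented (variable names, the penetration-bias model, the linear learner standing in for the authors' machine-learning method).

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic stand-ins for the 2003-2009 overlap: per-cell radar dh/dt,
# ice velocity, and laser dh/dt (the calibration target); all invented
n = 1000
radar_dhdt = rng.normal(-0.2, 0.5, n)        # m/yr, radar-derived
velocity = rng.lognormal(3.0, 1.0, n)        # m/yr, ice speed
# pretend radar penetration biases dh/dt where velocity is low (dry snow)
laser_dhdt = radar_dhdt + 0.1 * np.exp(-velocity / 50.0) + rng.normal(0, 0.02, n)

# learned correction: predict laser dh/dt from radar dh/dt and a
# velocity feature (a linear model as a stand-in for the paper's method)
features = np.column_stack([np.ones(n), radar_dhdt, np.exp(-velocity / 50.0)])
coef, *_ = np.linalg.lstsq(features, laser_dhdt, rcond=None)

calibrated = features @ coef
raw_rmse = np.sqrt(np.mean((radar_dhdt - laser_dhdt) ** 2))
cal_rmse = np.sqrt(np.mean((calibrated - laser_dhdt) ** 2))
print(raw_rmse, cal_rmse)
```

Once trained on the overlap years, such a correction can be applied to the full radar record, which is what turns the long Ku-band volume-change time series into a mass-balance estimate.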


2021 ◽  
Vol 46 (3) ◽  
pp. 251
Author(s):  
Urszula Woźnicka

The semi-empirical calibration method for a neutron well-logging probe, based on the concept of the general neutron parameter (GNP), was developed by Jan Andrzej Czubek and successfully tested at the neutron calibration station in Zielona Góra, Poland. The neutron probe responses over a wide range of neutron parameters (and thus of lithology, porosity, and saturation) were also computed using the Monte Carlo method. The simulation results made it possible to determine the calibration curves using the Czubek concept over a wider range than the original method allows. The very good agreement between the two methods confirms the applicability of the GNP as well as of the Monte Carlo numerical experiments, which allow for a significant extension of the semi-empirical calibration to complex well geometries, taking into account e.g. casing or invaded zones.

