incorrect model
Recently Published Documents

TOTAL DOCUMENTS: 47 (FIVE YEARS: 16)
H-INDEX: 8 (FIVE YEARS: 2)

Author(s):  
Shashi Lalvani ◽  
Lei Kerr ◽  
Shamal Lalvani ◽  
Dominic Olaguera-Delogu

Abstract A careful evaluation of the earlier model (1-2) for electrochemical frequency modulation (EFM), which applies two sinusoidal potentials to determine corrosion parameters, reveals an algebraic error. Although the missing term in the original derivation appears insignificant, the resulting errors in the corrosion current and, especially, in the Tafel slopes can be very significant, which matters given the rising popularity of the technique. The magnitude of the error is found to depend on the inherent corrosion characteristics of the corroding material (the anodic and cathodic Tafel slopes) as well as on the applied peak potential of the modulation. A corrected model is presented with detailed steps showing the appropriate mathematics. In addition, using experimental data available in the literature, the errors involved in estimating the corrosion parameters with the earlier EFM model of Bosch et al. (1-2) are evaluated. As shown in this paper, the corrected corrosion current and Tafel slopes can be recovered from the incorrect model without the benefit of the harmonic currents. The analysis is also presented for the case of a single applied sinusoidal frequency modulation, which offers several advantages over multiple-frequency modulation.
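A minimal numerical sketch of the ingredients such an analysis works with (not the authors' corrected derivation; the Wagner–Traud current expression, the single-frequency setup, and all parameter values are assumptions for illustration): a sinusoidal potential is applied to an activation-controlled corroding interface and the harmonic current amplitudes are read off with an FFT.

```python
import numpy as np

# Sketch: current response of a corroding interface to a small sinusoidal
# potential, and the harmonic amplitudes EFM-type analyses work with.
i_corr = 1e-6                  # corrosion current density, A/cm^2 (assumed)
beta_a, beta_c = 0.06, 0.12    # anodic/cathodic Tafel slopes, V/decade (assumed)
U0 = 0.010                     # peak potential of the modulation, V
f0 = 0.1                       # modulation frequency, Hz

dt = 0.01
t = np.arange(0, 100, dt)                   # 10 full periods of f0
eta = U0 * np.sin(2 * np.pi * f0 * t)       # applied overpotential

# Wagner-Traud (activation-controlled) corrosion current
ln10 = np.log(10.0)
i = i_corr * (np.exp(ln10 * eta / beta_a) - np.exp(-ln10 * eta / beta_c))

# Harmonic amplitudes via FFT: the first few harmonics of f0 carry the
# information used to back out i_corr and the Tafel slopes.
spec = np.fft.rfft(i) * 2 / len(t)
freqs = np.fft.rfftfreq(len(t), d=dt)
for k in (1, 2, 3):
    idx = np.argmin(np.abs(freqs - k * f0))
    print(f"|i| at {k}*f0: {abs(spec[idx]):.3e} A/cm^2")
```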


2021 ◽  
pp. 321-336
Author(s):  
Geoffrey Brooker

“Electrons and holes in semiconductors” applies band theory to electrons in the conduction band, and to holes in the valence band. Holes are described by an upside-down Fermi–Dirac distribution. An incorrect model (cinema-queue model) for a hole is discredited and replaced by a correct discussion. Both (conduction-band) electrons and (valence-band) holes are examples of quasi-particles. Both can be assigned their own group velocities and effective masses.
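For reference, the standard textbook relations the summary alludes to (general forms, not necessarily in Brooker's notation): hole occupation as an upside-down Fermi–Dirac distribution, and the quasi-particle group velocity and effective mass obtained from the band dispersion.

```latex
% Hole occupation, group velocity, and effective mass (standard forms)
\begin{align}
  f_h(E) &= 1 - f_e(E) = \frac{1}{e^{(\mu - E)/k_B T} + 1},\\
  v_g    &= \frac{1}{\hbar}\,\frac{\partial E}{\partial k},\qquad
  m^{*}   = \hbar^{2}\left(\frac{\partial^{2} E}{\partial k^{2}}\right)^{-1}.
\end{align}
```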


2021 ◽  
Author(s):  
Sneha Singh ◽  
Yann Capdeville ◽  
Heiner Igel ◽  
Navid Hedjazian ◽  
Thomas Bodin

Wavefield gradient instruments, such as rotational sensors and DAS systems, are becoming more and more accessible in seismology, and their use for Full Waveform Inversion (FWI) is in sight. Nevertheless, local small-scale heterogeneities, such as geological inhomogeneities, surface topography, and cavities, are known to affect wavefield gradients, and this effect is measurable with current instruments. For example, the agreement between data and synthetics computed in a tomographic model is often not as good for rotation as it is for displacement.

The theory of homogenization helps explain why small-scale heterogeneities strongly affect wavefield gradients but not the wavefield itself. It tells us that at any receiver measuring a wavefield gradient, small-scale heterogeneities cause the wavefield gradient to couple with strain through a coupling tensor J. Furthermore, J is (1) independent of the source, (2) independent of time, and (3) dependent only on the receiver location. Consequently, we can invert for J based on an effective model for which synthetics fit the displacement data reasonably well. Once inverted, J can be used to correct all other wavefield gradients recorded at that receiver.

Here, we aim to understand the benefits and drawbacks of wavefield gradient sensors in an FWI context. We show that FWI performed with rotations and strains is equivalent to FWI performed with displacements provided that (1) the number of data is sufficient and (2) the receivers are placed far away from heterogeneities. When receivers are placed near heterogeneities, an incorrect model is recovered from the inversion. In that case, the coupling tensor J therefore needs to be taken into account at each receiver to remove the effect.
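A hedged numerical sketch of the correction idea described above (array shapes, names, and the least-squares step are assumptions for illustration, not the authors' implementation): the receiver-specific coupling matrix J is estimated from one event by regressing the gradient residual on the effective-model strain, then reused to correct other records at the same receiver.

```python
import numpy as np

# Schematic relation used here: g_obs(t) ~ g_eff(t) + J @ eps_eff(t), with
# g a wavefield-gradient observable, eps the strain in the smooth effective
# model, and J depending only on the receiver (not on source or time).
rng = np.random.default_rng(0)
nt, ng, ne = 2000, 3, 6          # samples, gradient components, strain components

eps_eff = rng.standard_normal((nt, ne))   # stand-in effective-model strain
g_eff = rng.standard_normal((nt, ng))     # stand-in effective-model gradient
J_true = 0.5 * rng.standard_normal((ng, ne))

# "Observed" gradients contaminated by local small-scale structure
g_obs = g_eff + eps_eff @ J_true.T + 0.01 * rng.standard_normal((nt, ng))

# Invert for J by least squares on the residual g_obs - g_eff
residual = g_obs - g_eff
J_est, *_ = np.linalg.lstsq(eps_eff, residual, rcond=None)
J_est = J_est.T

# Correct another record at the same receiver with the estimated J
g_obs_new = g_eff + eps_eff @ J_true.T    # reused data, for brevity
g_corrected = g_obs_new - eps_eff @ J_est.T
print("max J error:", np.abs(J_est - J_true).max())
print("max residual after correction:", np.abs(g_corrected - g_eff).max())
```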


2021 ◽  
Vol 16 (1) ◽  
pp. 73-99
Author(s):  
Paul Heidhues ◽  
Botond Koszegi ◽  
Philipp Strack

We establish convergence of beliefs and actions in a class of one‐dimensional learning settings in which the agent's model is misspecified, she chooses actions endogenously, and the actions affect how she misinterprets information. Our stochastic‐approximation‐based methods rely on two crucial features: that the state and action spaces are continuous, and that the agent's posterior admits a one‐dimensional summary statistic. Through a basic model with a normal–normal updating structure and a generalization in which the agent's misinterpretation of information can depend on her current beliefs in a flexible way, we show that these features are compatible with a number of specifications of how exactly the agent updates. Applications of our framework include learning by a person who has an incorrect model of a technology she uses or is overconfident about herself, learning by a representative agent who may misunderstand macroeconomic outcomes, and learning by a firm that has an incorrect parametric model of demand.
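A toy sketch of the normal–normal updating ingredient (illustrative only; this is not the paper's model and omits both the stochastic-approximation analysis and the feedback from actions to misinterpretation): an agent with an overconfident model of her own ability learns about an external fundamental, and her beliefs converge, but to a biased limit.

```python
import numpy as np

rng = np.random.default_rng(1)

a_true, a_believed = 1.0, 1.5    # true vs. believed ability (overconfidence)
b_true = 0.0                     # external fundamental the agent learns about
sigma2 = 1.0                     # outcome noise variance

mu, tau2 = 0.0, 10.0             # prior mean / variance over b
for t in range(5000):
    action = mu                  # endogenous action based on current belief
    # True outcome depends on true ability; the agent reads it through her
    # believed ability, so every observation is systematically misinterpreted.
    q = a_true + b_true + action + rng.normal(0.0, np.sqrt(sigma2))
    signal = q - a_believed - action      # agent's inferred draw of b
    # Standard normal-normal Bayesian update
    mu = (tau2 * signal + sigma2 * mu) / (tau2 + sigma2)
    tau2 = tau2 * sigma2 / (tau2 + sigma2)

print(f"limiting belief about b: {mu:.3f} (true b = {b_true}, "
      f"bias close to a_true - a_believed = {a_true - a_believed})")
```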


2020 ◽  
Vol 76 (10) ◽  
pp. 912-925
Author(s):  
Thomas C. Terwilliger ◽  
Oleg V. Sobolev ◽  
Pavel V. Afonine ◽  
Paul D. Adams ◽  
Randy J. Read

Density modification uses expectations about features of a map such as a flat solvent and expected distributions of density in the region of the macromolecule to improve individual Fourier terms representing the map. This process transfers information from one part of a map to another and can improve the accuracy of a map. Here, the assumptions behind density modification for maps from electron cryomicroscopy are examined and a procedure is presented that allows the incorporation of model-based information. Density modification works best in cases where unfiltered, unmasked maps with clear boundaries between the macromolecule and solvent are visible, and where there is substantial noise in the map, both in the region of the macromolecule and the solvent. It also is most effective if the characteristics of the map are relatively constant within regions of the macromolecule and the solvent. Model-based information can be used to improve density modification, but model bias can in principle occur. Here, model bias is reduced by using ensemble models that allow an estimation of model uncertainty. A test of model bias is presented that suggests that even if the expected density in a region of a map is specified incorrectly by using an incorrect model, the incorrect expectations do not strongly affect the final map.
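A toy one-dimensional sketch of the core idea (not the procedure presented in the paper; the flat-solvent assumption, blending weight, and map are placeholders): an expectation about the map, flat solvent here, is imposed in real space and fed back into the Fourier terms over several cycles.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 512
x = np.arange(n)

# "True" 1-D map: features inside the molecular region, flat solvent outside
mol = (x > 150) & (x < 350)
true_map = np.where(mol, np.sin(x / 7.0) + 0.5 * np.sin(x / 3.0), 0.0)
noisy_map = true_map + 0.5 * rng.standard_normal(n)   # experimental-style noise

def one_cycle(rho, mol_mask, weight=0.5):
    """One cycle: flatten the solvent, then blend the modified Fourier
    terms with the original ones (crude stand-in for reliability weighting)."""
    flattened = np.where(mol_mask, rho, 0.0)
    F_orig = np.fft.fft(rho)
    F_flat = np.fft.fft(flattened)
    F_new = weight * F_flat + (1.0 - weight) * F_orig
    return np.fft.ifft(F_new).real

rho = noisy_map
for _ in range(10):
    rho = one_cycle(rho, mol)

print("rms error before:", np.sqrt(np.mean((noisy_map - true_map) ** 2)))
print("rms error after: ", np.sqrt(np.mean((rho - true_map) ** 2)))
```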


2020 ◽  
pp. 001316442094456
Author(s):  
Tenko Raykov ◽  
Christine DiStefano

The frequent practice of overall fit evaluation for latent variable models in educational and behavioral research is reconsidered. It is argued that since overall plausibility does not imply local plausibility and is only necessary for the latter, local misfit should be considered a sufficient condition for model rejection, even in the case of omnibus model tenability. The argument is exemplified with a comparison of the widely used one-parameter and two-parameter logistic models. A theoretically and practically relevant setting illustrates how discounting local fit and concentrating instead on overall model fit may lead to incorrect model selection, even if a popular information criterion is also employed. The article concludes with the recommendation for routine examination of particular parameter constraints within latent variable models as part of their fit evaluation.
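A hedged sketch of the kind of local check the article recommends (not the article's analysis; abilities are treated as known here purely to keep the example short): with known abilities the 2PL reduces to per-item logistic regressions, so the 1PL-style restriction of equal discriminations becomes a testable parameter constraint that can be examined alongside overall indices such as AIC.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(3)
n_persons, n_items = 2000, 5
theta = rng.standard_normal(n_persons)              # abilities, treated as known
a_true = np.array([0.6, 0.9, 1.0, 1.1, 1.6])        # unequal discriminations
b_true = np.linspace(-1, 1, n_items)
p = 1 / (1 + np.exp(-a_true * (theta[:, None] - b_true)))
y = (rng.random((n_persons, n_items)) < p).astype(float)

def negloglik(params, equal_slopes):
    if equal_slopes:                      # 1PL-style: one common slope
        a = np.repeat(params[0], n_items)
        b = params[1:]
    else:                                 # 2PL-style: item-specific slopes
        a, b = params[:n_items], params[n_items:]
    eta = a * (theta[:, None] - b)
    return -(y * eta - np.log1p(np.exp(eta))).sum()

fit1 = minimize(negloglik, x0=np.r_[1.0, np.zeros(n_items)], args=(True,))
fit2 = minimize(negloglik, x0=np.r_[np.ones(n_items), np.zeros(n_items)], args=(False,))

lr = 2 * (fit1.fun - fit2.fun)            # local test of the equal-slope constraint
df = n_items - 1
print(f"LR test of equal discriminations: {lr:.1f} on {df} df, p = {chi2.sf(lr, df):.3g}")
print(f"AIC, equal slopes:  {2 * (n_items + 1) + 2 * fit1.fun:.1f}")
print(f"AIC, free slopes:   {2 * (2 * n_items) + 2 * fit2.fun:.1f}")
```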


Author(s):  
Thomas C. Terwilliger ◽  
Oleg V. Sobolev ◽  
Pavel V. Afonine ◽  
Paul D. Adams ◽  
Randy J. Read

Abstract Density modification uses expectations about features of a map such as a flat solvent and expected distributions of density in the region of the macromolecule to improve individual Fourier terms representing the map. This process transfers information from one part of a map to another and can improve the accuracy of a map. Here the assumptions behind density modification for maps from electron cryomicroscopy are examined and a procedure is presented that allows incorporation of model-based information. Density modification works best in cases where unfiltered, unmasked maps with clear boundaries between macromolecule and solvent are visible and where there is substantial noise in the map, both in the region of the macromolecule and the solvent. It also is most effective if the characteristics of the map are relatively constant within regions of the macromolecule and the solvent. Model-based information can be used to improve density modification, but model bias can in principle occur. Here model bias is reduced by using ensemble models that allow estimation of model uncertainty. A test of model bias is presented suggesting that even if the expected density in a region of a map is specified incorrectly by using an incorrect model, the incorrect expectations do not strongly affect the final map. Synopsis The prerequisites for density modification of maps from electron cryomicroscopy are examined and a procedure for incorporating model-based information is presented.


Author(s):  
Mikhail Pomazanov

This paper presents non-classical models for estimating and forecasting COVID-19 pandemic indices. These models have been successfully tested on data from countries where the pandemic is nearing completion. In particular, an effective algorithm for evaluating the mortality index is also presented. This index is usually replaced by simpler estimates such as "the number of deaths divided by the number of infected"; however, while the virus is still spreading rapidly, such superficial approaches are incorrect. Model indicators of the infection itself allow us to predict not only the apogee of the epidemic and the end of the quarantine period, but also the maximum number of infected people in a given country (or continent) at the height of the epidemic. The second part of the paper is devoted to an attempt to build regression models that explain the behavior of the epidemic spread indices using 100+ country socio-economic indicators taken from World Bank data. It is shown that the maximum number of infected people in a country is well predicted (R-squared is close to 90%), and that migration indicators and the number of international air take-offs are effective regressors. Other indicators, such as the mortality index, are harder to model; nevertheless, they show a significant relationship with socio-economic factors. The paper might be valuable for making effective decisions to forestall future pandemics or even a "second wave" of COVID-19.
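A hedged sketch of one simple way to estimate such indices (a plain logistic growth fit on synthetic placeholder data, not the paper's non-classical model): the fitted plateau approximates the maximum number of infected and the inflection day approximates the apogee of the epidemic.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """K: plateau (max infected), r: growth rate, t0: inflection (apogee) day."""
    return K / (1.0 + np.exp(-r * (t - t0)))

rng = np.random.default_rng(4)
days = np.arange(120)
true_curve = logistic(days, K=250_000, r=0.12, t0=55)
observed = true_curve * (1 + 0.05 * rng.standard_normal(days.size))  # noisy counts

popt, _ = curve_fit(logistic, days, observed, p0=(observed.max(), 0.1, days.mean()))
K_hat, r_hat, t0_hat = popt
print(f"estimated plateau: {K_hat:,.0f} infected")
print(f"estimated apogee:  day {t0_hat:.0f}")
```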


2019 ◽  
Vol 37 (2) ◽  
pp. 549-562 ◽  
Author(s):  
Edward Susko ◽  
Andrew J Roger

Abstract The information criteria Akaike information criterion (AIC), AICc, and Bayesian information criterion (BIC) are widely used for model selection in phylogenetics; however, their theoretical justification and performance have not been carefully examined in this setting. Here, we investigate these methods under simple and complex phylogenetic models. We show that AIC can give a biased estimate of its intended target, the expected predictive log likelihood (EPLnL) or, equivalently, the expected Kullback–Leibler divergence between the estimated model and the true distribution of the data. Reasons for bias include commonly occurring issues such as small edge lengths or, in mixture models, small weights. The use of partitioned models is another issue that can cause problems for information criteria. We show that for partitioned models, a different BIC correction is required for it to be a valid approximation to a Bayes factor. The commonly used AICc correction is not clearly defined in partitioned models and can create a substantial bias when the number of parameters gets large, as is the case with larger trees and partitioned models. Bias-corrected cross-validation corrections are shown to provide better approximations to EPLnL than AIC. We also illustrate how EPLnL, the estimation target of AIC, can sometimes favor an incorrect model, and we give reasons why selection of incorrectly under-partitioned models might be desirable in partitioned-model settings.
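For reference, a minimal sketch of the criteria under discussion (placeholder log-likelihood and sample size; illustrative only): it shows how the AICc penalty grows sharply as the parameter count k approaches the number of sites n, the regime reached by large trees and heavily partitioned models.

```python
import math

def aic(loglik, k):
    return 2 * k - 2 * loglik

def aicc(loglik, k, n):
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(loglik, k, n):
    return k * math.log(n) - 2 * loglik

loglik, n = -12345.6, 1000          # placeholder log-likelihood and site count
for k in (10, 100, 500, 900):       # growing parameter counts (e.g. more partitions)
    print(f"k={k:4d}  AIC={aic(loglik, k):10.1f}  "
          f"AICc={aicc(loglik, k, n):10.1f}  BIC={bic(loglik, k, n):10.1f}")
```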

