model misfit
Recently Published Documents

TOTAL DOCUMENTS: 35 (five years: 15)
H-INDEX: 7 (five years: 1)

2021 ◽  
Vol 12 ◽  
Author(s):  
Luise Fischer ◽  
Theresa Rohm ◽  
Claus H. Carstensen ◽  
Timo Gnambs

In the context of item response theory (IRT), linking the scales of two measurement points is a prerequisite for examining change in competence over time. In educational large-scale assessments, non-identical test forms sharing a number of anchor items are frequently scaled and linked using two- or three-parameter item response models. However, if item pools are limited and/or sample sizes are small to medium, the sparser Rasch model is a suitable alternative with regard to the precision of parameter estimation. Because the Rasch model implies stricter assumptions about the response process, a violation of these assumptions may manifest as model misfit in the form of item discrimination parameters that empirically deviate from their fixed value of one. The present simulation study investigated the performance of four IRT linking methods—fixed parameter calibration, mean/mean linking, weighted mean/mean linking, and concurrent calibration—applied to Rasch-scaled data with a small item pool. Moreover, the number of anchor items required in the absence/presence of moderate model misfit was investigated for small to medium sample sizes. Effects on the link outcome were operationalized as bias, relative bias, and root mean square error of the estimated sample mean and variance of the latent variable. In this limited context, concurrent calibration had substantial convergence issues, while the other methods yielded overall satisfactory and similar parameter recovery, even in the presence of moderate model misfit. Our findings suggest that in the case of model misfit, the share of anchor items should exceed the 20% currently proposed in the literature. Future studies should further investigate the effects of anchor item composition with regard to unbalanced model misfit.
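As a concrete illustration of the simplest of the linking methods named above, the following minimal Python sketch shows mean/mean linking for Rasch-scaled anchor items together with the recovery statistics (bias, relative bias, RMSE) used as outcome measures. All names and numbers are hypothetical; this is not the authors' code.

```python
import numpy as np

def mean_mean_link(b_anchor_ref, b_anchor_new):
    """Mean/mean linking constant for the Rasch case (all slopes fixed at one):
    the shift that places the new item difficulties on the reference scale."""
    return np.mean(b_anchor_ref) - np.mean(b_anchor_new)

def recovery_stats(estimates, true_value):
    """Bias, relative bias, and root mean square error across replications."""
    estimates = np.asarray(estimates, dtype=float)
    bias = np.mean(estimates - true_value)
    rel_bias = bias / true_value if true_value != 0 else np.nan
    rmse = np.sqrt(np.mean((estimates - true_value) ** 2))
    return bias, rel_bias, rmse

# Hypothetical example: difficulties of five anchor items at both measurement points.
b_ref = np.array([-1.2, -0.4, 0.1, 0.7, 1.3])   # reference (time 1) scale
b_new = np.array([-1.0, -0.3, 0.3, 0.9, 1.5])   # new (time 2) scale
shift = mean_mean_link(b_ref, b_new)             # added to time-2 item and person parameters
```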


Geophysics ◽  
2021 ◽  
Vol 86 (3) ◽  
pp. E209-E224
Author(s):  
Daniele Colombo ◽  
Ersan Turkoglu ◽  
Weichang Li ◽  
Ernesto Sandoval-Curiel ◽  
Diego Rovetta

Machine learning, and specifically deep-learning (DL) techniques applied to geophysical inverse problems, is an attractive subject that has promising potential and, at the same time, presents some challenges in practical implementation. Some obstacles relate to scarce knowledge of the target geologic structures, a problem that can limit the interpretability and generalizability of trained DL networks when they are applied to independent scenarios in real applications. Commonly used (physics-driven) least-squares optimization methods are very efficient local optimization techniques but require good starting models close to the correct solution to avoid local minima. We have developed a hybrid workflow that combines both approaches in a coupled physics-driven/DL inversion scheme. We exploit the benefits and characteristics of both inversion techniques to converge to solutions that typically outperform individual inversion results and bring the solution closer to the global minimum of a nonconvex inverse problem. The completely data-driven and self-feeding procedure relies on a coupling mechanism between the two inversion schemes, taking the form of penalty functions applied to the model term. Predictions from the DL network are used to constrain the least-squares inversion, whereas the feedback loop from inversion to the DL scheme consists of retraining the network with partial results obtained from inversion. The self-feeding process tends to converge to a common agreeable solution, which is the result of two independent schemes with different mathematical formalisms and different objective functions on the data and model misfit. We find that the hybrid procedure converges to robust, high-resolution resistivity models when applied to the inversion of synthetic and field transient electromagnetic data. Finally, we speculate that the procedure may be adopted to recast the way we solve inverse problems in several different disciplines.
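The coupling mechanism described above can be pictured as a penalty on the model term. The sketch below is a hypothetical, linearized illustration: the forward operator G, the DL prediction m_dl, and the weight alpha are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def coupled_inversion_step(G, d, m_dl, alpha):
    """One coupled step: damped least squares constrained toward the DL-predicted
    model m_dl, i.e. argmin_m ||G m - d||^2 + alpha * ||m - m_dl||^2
    (a linear forward operator is assumed for illustration)."""
    n = G.shape[1]
    A = G.T @ G + alpha * np.eye(n)
    b = G.T @ d + alpha * m_dl
    return np.linalg.solve(A, b)

# Feedback loop (schematic): m_est = coupled_inversion_step(G, d, m_dl, alpha);
# the DL network is then retrained on m_est (a partial inversion result), producing
# an updated m_dl, and the two schemes iterate toward a common solution.
```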


2021 ◽  
Author(s):  
Adam Ciesielski ◽  
Thomas Forbriger

Harmonic tidal analysis rests on the presumption that, because short records and close frequencies result in an ill-conditioned matrix equation, a record of length T is required to distinguish harmonics with a frequency separation of 1/T (Rayleigh criterion). To achieve stability of the solution, tidal harmonics are grouped. Nevertheless, if the data contain additional information from different harmonics within the assumed groups, it cannot be resolved. While most of the information in each group is carried by the harmonic with the largest amplitude, the signals from the other harmonics are nevertheless absorbed into the estimated amplitudes and phases. However, if the signal from the next-largest harmonic in a group differs significantly from expectation, the grouping parametrization can lead to inaccurate estimates of the tidal parameters. This can be an issue when harmonics in a group do not share the same admittance factor, or when the assumed relationship between degree-2 and degree-3 harmonics does not hold.

The bias caused by grouping tidal harmonics can be investigated with methods used to stabilize inverse-problem solutions. In our study, we abandon the concept of groups. The resulting ill-posedness of the problem is reduced by constraining the model parameters (1) to reference values and (2) to the condition that the admittance shall be a smooth function of frequency. These regularization terms enter the least-squares objective function, and the trade-off parameter between model misfit and data residuals is chosen by the L-curve criterion. We demonstrate how this method may be used to reveal system properties hidden by wave grouping in tidal analysis. We also suggest that the amplitude of the forcing time series may be a more relevant grouping criterion than frequency proximity alone.
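A minimal sketch of the regularized, ungrouped estimation described above follows; the matrix names, the second-difference smoothness operator, and the single trade-off parameter are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def ungrouped_tidal_solve(G, d, m_ref, lam):
    """Solve min_m ||G m - d||^2 + lam * ( ||m - m_ref||^2 + ||D m||^2 ),
    where the columns of G correspond to individual (ungrouped) harmonics,
    m_ref holds reference admittance values, and D is a second-difference
    operator enforcing smoothness of the admittance over frequency-ordered
    harmonics (real-valued parameters assumed for simplicity)."""
    n = G.shape[1]
    D = np.diff(np.eye(n), n=2, axis=0)            # second differences along frequency
    A = G.T @ G + lam * (np.eye(n) + D.T @ D)
    b = G.T @ d + lam * m_ref
    return np.linalg.solve(A, b)

# L-curve criterion (schematic): repeat the solve over a range of lam, plot the data
# residual norm against the regularization norm, and pick the corner of the curve.
```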


2021 ◽  
Author(s):  
Wouter Deleersnyder ◽  
Benjamin Maveau ◽  
David Dudal ◽  
Thomas Hermans

In frequency-domain electromagnetic induction (EMI) surveys, an image of the electrical conductivity of the subsurface is obtained non-invasively. The electrical conductivity can be related to important subsurface properties such as porosity, saturation or water conductivity via Archie's law. The advantage of geophysical EMI surveys is their cost-effectiveness: because the method is non-contacting, one can easily walk with the device or mount it on a vehicle or a helicopter (AEM).

The process of finding the conductivity profile from the collected field data is an ill-posed inverse problem. Regularization improves the stability of the inversion and, following Occam's razor, a smoothing constraint is typically used with a very large number of thin layers. However, the conductivity profiles are not always expected to be smooth. An alternative is to use a predefined number of layers and to invert for their conductivity and thickness, which can yield sharp contrasts in conductivity. In practice, however, the real subsurface might be blocky, smooth, or somewhere in between. These standard constraints are thus not always appropriate.

We develop a new minimum-structure inversion scheme in which we transform the model into the wavelet space and impose a sparsity constraint. This sparsity-constrained inversion scheme minimizes an objective function with a least-squares data misfit and a sparsity measure of the model in the wavelet domain. Building on wavelet theory, we developed a novel and intuitive model misfit term that allows for both smooth and blocky models, depending on the chosen wavelet basis. A model in the wavelet domain has resolution in both scale (i.e., low and high frequencies) and space, and penalizing small-scale coefficients effectively reduces the complexity of the model.

Comparing the novel scale-dependent wavelet-based regularization scheme with wavelet-based regularization without scale dependence revealed significantly better results (Figure A and B) with respect to the true model. Comparison with standard Tikhonov regularization (Figure C and D) shows that our scheme can recover high-amplitude anomalies in combination with globally smooth profiles. Furthermore, the adaptive nature of the inversion method (due to the choice of wavelet) allows for high flexibility, because the shape of the wavelet can be exploited to generate multiple representations (smooth, blocky or intermediate) of the inverse model.

[Figure: panels A-D comparing scale-dependent wavelet-based, scale-independent wavelet-based, and standard Tikhonov-regularized inversion results against the true model.]

We have introduced an alternative inversion scheme for EMI surveys that can be extended to any other 1D geophysical method. It involves a new model misfit or regularization term based on the wavelet transform and scale-dependent weighting, which can easily be combined with the existing framework of deterministic inversion (gradient-based optimization methods, L-curve criterion for the optimal regularization parameter). A challenge remains to select the optimal wavelet; however, the ensemble of inversion results with different wavelets can also be used to qualitatively assess uncertainty.
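The scale-dependent wavelet penalty can be sketched as follows; the wavelet choice, the weighting rule, and the smooth approximation of the L1 norm are assumptions made for illustration and are not necessarily the authors' settings (PyWavelets is used for the decomposition).

```python
import numpy as np
import pywt  # PyWavelets

def scale_dependent_sparsity(model, wavelet="db3", base_weight=2.0, eps=1e-8):
    """Scale-dependent sparsity measure of a 1D conductivity model: decompose the
    model into wavelet coefficients and penalize small-scale (fine) detail
    coefficients more heavily, via a smooth approximation of the L1 norm."""
    coeffs = pywt.wavedec(model, wavelet)          # [approx, coarse details, ..., finest details]
    penalty = 0.0
    for scale, c in enumerate(coeffs[1:], start=1):
        weight = base_weight ** scale              # later entries are finer scales
        penalty += weight * np.sum(np.sqrt(c ** 2 + eps))
    return penalty

# The inversion would minimize data_misfit(model) + beta * scale_dependent_sparsity(model);
# choosing a blocky wavelet (e.g. Haar) or a smoother one steers the recovered profile.
```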


2021 ◽  
Author(s):  
Jihong Zhang ◽  
Jonathan Templin ◽  
Catherine E. Mintz

Posterior Predictive Model Checking (PPMC) is frequently used for model fit evaluation in Bayesian Confirmatory Factor Analysis (BCFA). In standard PPMC procedures, model misfit is quantified by the location of an ML-based estimate relative to the posterior predictive distribution of a statistic under the model: when the ML-based point estimate lies far from the center of the posterior predictive density, model fit is poor. One main critique of such standard PPMC procedures is their strong reliance on ML-based point estimates of the observed data; this approach ignores how variable those point estimates are and whether they are an appropriate reference point for Bayesian analyses in general. We propose a new PPMC method based on the posterior predictive distribution of a Bayesian saturated model for BCFA models. The method uses the predictive distribution generated from the posterior of the saturated model as the reference for detecting local misfit of hypothesized models. The results of the simulation study suggest that the saturated-model PPMC approach accurately detects local model misfit and can be used for model comparison. A real-data example is also provided.
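A minimal sketch of the posterior predictive machinery referred to above is given below; the discrepancy statistics and the way the saturated model enters are simplified assumptions, not the authors' exact procedure.

```python
import numpy as np

def posterior_predictive_p(stat_realized, stat_replicated):
    """Standard PPMC summary: the share of posterior draws for which the statistic
    computed from replicated data exceeds the realized statistic; values near 0 or 1
    flag misfit, values near 0.5 indicate adequate fit."""
    return float(np.mean(np.asarray(stat_replicated) >= np.asarray(stat_realized)))

def saturated_reference_p(stat_hypothesized, stat_saturated):
    """Saturated-model variant (simplified): compare, draw by draw, the statistic
    generated under the hypothesized model with the one generated under the
    saturated model, instead of anchoring on a single ML-based point estimate."""
    return float(np.mean(np.asarray(stat_hypothesized) >= np.asarray(stat_saturated)))
```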


2020 ◽  
pp. 003329412097177
Author(s):  
Gudmundur T. Heimisson ◽  
Robert F. Dedrick

We used multigroup confirmatory factor analysis to evaluate the five-factor measurement model underlying the 50-item Irrational Beliefs Inventory (IBI) in samples of university students in the United States (n = 827) and Iceland (n = 720). Global model fit was marginally acceptable in each sample. Further analyses identified several sources of model misfit, including weak factor loadings, several item pairs with correlated errors, and items with loadings on more than one factor. Cronbach's alpha reliability estimates for the five factors were similar for the U.S. and Icelandic samples, and comparable to those reported by the developers of the IBI. Measurement invariance testing supported configural (same form) and metric invariance (equal loadings), but identified only 20 items that had invariant item intercepts across the U.S. and Icelandic groups. Given the finding of partial measurement invariance, we offer caution when using the IBI to make group comparisons for U.S. and Icelandic samples. Recommendations are proposed for ongoing psychometric evaluations of the IBI that would identify strengths of the IBI and items that, if revised or deleted, may improve the quality of the measure for research and clinical purposes.
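Since the comparison above leans on Cronbach's alpha, here is a minimal sketch of the statistic itself; the variable names are illustrative, and this is not the authors' analysis code.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)
```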


2020 ◽  
Vol 91 (5) ◽  
pp. 2817-2827 ◽  
Author(s):  
Noha Farghal ◽  
Andrew Barbour ◽  
John Langbein

Abstract We investigate the potential of using borehole strainmeter data from the Network of the Americas (NOTA) and the U.S. Geological Survey networks to estimate earthquake moment magnitudes for earthquake early warning (EEW) applications. We derive an empirical equation relating peak dynamic strain, earthquake moment magnitude, and hypocentral distance, and investigate the effects of different types of instrument calibration on model misfit. We find that raw (uncalibrated) strains fit the model as accurately as calibrated strains. We test the model by estimating moment magnitudes of the largest two earthquakes in the July 2019 Ridgecrest earthquake sequence—the M 6.4 foreshock and the M 7.1 mainshock—using two strainmeters located within ∼50 km of the rupture. In both cases, the magnitude based on the dynamic strain component is within ∼0.1–0.4 magnitude units of the catalog moment magnitude. We then compare the temporal evolution of our strain-derived magnitudes for the largest two Ridgecrest events to the real-time performance of the ShakeAlert EEW System (SAS). The final magnitudes from NOTA borehole strainmeters are close to the SAS real-time estimates for the M 6.4 foreshock, and significantly more accurate for the M 7.1 mainshock.
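The abstract does not give the empirical equation or its coefficients, so the sketch below only assumes a common functional form, log10(peak strain) = a + b*M + c*log10(R), to show how such a relation could be fit and then inverted for magnitude; the form, names, and parameters are all assumptions for illustration.

```python
import numpy as np

def fit_strain_magnitude_relation(peak_strain, magnitude, hypo_dist_km):
    """Fit a, b, c in log10(strain) = a + b*M + c*log10(R) by ordinary least squares.
    The functional form and resulting coefficients are illustrative only."""
    m = np.asarray(magnitude, dtype=float)
    y = np.log10(np.asarray(peak_strain, dtype=float))
    X = np.column_stack([np.ones_like(m), m, np.log10(np.asarray(hypo_dist_km, dtype=float))])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs  # a, b, c

def magnitude_from_strain(peak_strain, hypo_dist_km, a, b, c):
    """Invert the same relation: estimate moment magnitude from an observed
    peak dynamic strain at a known hypocentral distance R (km)."""
    return (np.log10(peak_strain) - a - c * np.log10(hypo_dist_km)) / b
```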


Author(s):  
Rosana Gulzar ◽  
Mansor H. Ibrahim ◽  
Mohamed Ariff

For the first time, this study investigates whether Islamic banks, in mimicking conventional banks, have become less stable than their theoretical equivalent, the cooperative banks of Europe. Theoretically, the interest prohibition should have pushed Islamic banks towards mutuality and profit-sharing, which have been argued to be stabilising. In practice, however, the banks are pushed for growth under a debt-driven commercial banking model which is not only antithetical to the Shariah but also destabilising. This may explain why the empirical findings in Islamic banking stability studies are still divergent. Our study employs system GMM to compare the stability of 37 Islamic banks against 1,536 cooperative banks in Europe during the 2008 crisis and the subsequent non-crisis years. Interestingly, we find consistent and significant evidence that the Islamic banks are less stable than the cooperative banks under both macroeconomic conditions. This has significant policy implications, chief among which is to steer reform efforts away from refurbishing Islamic commercial banks and towards building an entirely new Islamic cooperative bank, based on the model in Europe.

