A Multistep Iterative Proportional Fitting Procedure to Estimate Cohortwise Interregional Migration Tables Where Only Inconsistent Marginals are Known

1992 ◽  
Vol 24 (11) ◽  
pp. 1531-1547 ◽  
Author(s):  
S Saito

The interregional cohort survival model developed by Rogers is an excellent one in that it covers all three population processes: birth-death, aging, and interregional migration. Rogers's model, however, has rarely been implemented because it requires detailed cohortwise (that is, by age and sex) interregional migration tables, which are not usually available as published data. In the most common case, only marginal tables drawn from different sources can be obtained. Those marginal tables, however, are necessarily inconsistent in the sense that they do not share identical common submarginals. This inconsistency prevents the standard iterative proportional fitting (IPF) procedure from converging to an estimate of the complete migration table that conforms to the given observed marginals. To implement Rogers's model, therefore, some method is needed to estimate the complete migration table when only inconsistent marginals are available. In this paper a multistep IPF procedure is proposed for that purpose, and an actual application of the proposed method is shown. The multistep IPF procedure applies to a wide class of general problems concerned with the estimation of a joint table under inconsistent marginals.
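
The failure mode described here can be seen with ordinary two-dimensional IPF. The sketch below is an illustration in Python with made-up numbers, not the multistep procedure proposed in the paper: it alternately rescales rows and columns toward the target marginals, converging when the two marginals share a grand total and cycling indefinitely when they do not.

```python
import numpy as np

def ipf_2d(seed, row_targets, col_targets, max_iter=1000, tol=1e-10):
    """Classical two-dimensional IPF: alternately rescale rows and columns
    of `seed` toward the target marginals (all entries assumed positive)."""
    table = seed.astype(float).copy()
    for _ in range(max_iter):
        table *= (row_targets / table.sum(axis=1))[:, None]    # fit row sums
        table *= (col_targets / table.sum(axis=0))[None, :]    # fit column sums
        if (np.abs(table.sum(axis=1) - row_targets).max() < tol and
                np.abs(table.sum(axis=0) - col_targets).max() < tol):
            break
    return table

seed = np.ones((3, 3))

# Consistent marginals (both sum to 100): IPF converges to a table that
# reproduces both sets of sums.
fitted = ipf_2d(seed, np.array([20., 30., 50.]), np.array([10., 40., 50.]))
print(np.round(fitted, 3))

# Inconsistent marginals (totals 100 vs. 90): the row and column sums can
# never both be matched, so the iteration keeps cycling between them;
# this is the situation the multistep procedure is designed to handle.
fitted = ipf_2d(seed, np.array([20., 30., 50.]), np.array([10., 40., 40.]))
print(fitted.sum(axis=1), fitted.sum(axis=0))
```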

Author(s):  
YUN PENG ◽  
ZHONGLI DING ◽  
SHENYONG ZHANG ◽  
RONG PAN

This paper deals with an important probabilistic knowledge integration problem: revising a Bayesian network (BN) to satisfy a set of probability constraints representing new or more specific knowledge. We propose to solve this problem by adapting the iterative proportional fitting procedure (IPFP) to BNs. The resulting algorithm, E-IPFP, integrates the constraints by changing only the conditional probability tables (CPTs) of the given BN while preserving the network structure; the probability distribution of the revised BN is as close as possible to that of the original BN. Two variations of E-IPFP are also proposed: 1) E-IPFP-SMOOTH, which deals with the situation where the probabilistic constraints are inconsistent with each other or with the network structure of the given BN; and 2) D-IPFP, which reduces the computational cost by decomposing a global E-IPFP problem into a set of smaller local E-IPFP problems.
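
To make the underlying operation concrete, here is a minimal sketch of plain IPFP on an explicitly stored joint distribution subject to two marginal constraints. It is not the paper's E-IPFP, which works on the CPTs of a BN rather than on the full joint; the variables, constraints, and numbers are invented.

```python
import numpy as np

# Joint distribution over three binary variables (A, B, C), stored as a
# 2x2x2 array indexed as P[a, b, c].  Values are arbitrary but positive.
P = np.array([[[0.06, 0.04], [0.10, 0.10]],
              [[0.15, 0.15], [0.20, 0.20]]])
P /= P.sum()

# Constraints: (axes to keep, target marginal over those axes).
constraints = [
    ((0,), np.array([0.4, 0.6])),                       # target P(A)
    ((1, 2), np.array([[0.1, 0.2], [0.3, 0.4]])),       # target P(B, C)
]

def ipfp(P, constraints, sweeps=100):
    """Plain IPFP: cyclically rescale P so each marginal matches its target."""
    Q = P.copy()
    all_axes = tuple(range(Q.ndim))
    for _ in range(sweeps):
        for keep, target in constraints:
            drop = tuple(ax for ax in all_axes if ax not in keep)
            marginal = Q.sum(axis=drop)           # current marginal over `keep`
            ratio = target / marginal             # entries assumed positive
            # Broadcast the ratio back over the dropped axes.
            shape = [Q.shape[ax] if ax in keep else 1 for ax in all_axes]
            Q = Q * ratio.reshape(shape)
    return Q

Q = ipfp(P, constraints)
print(Q.sum(axis=(1, 2)))   # ~ [0.4, 0.6]
print(Q.sum(axis=0))        # ~ [[0.1, 0.2], [0.3, 0.4]]
```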


Author(s):  
N. A. Bekin

The rate of multiphonon relaxation of the 1s(T2) level of Se+ donors in silicon was estimated. The calculation is a first approach to the problem and uses a highly simplified form of the wave functions. For the transition probability we used the well-known expression of R. Pässler [R. Pässler. Czech. J. Phys. B, 24, 322 (1974)], obtained within the so-called “static coupling scheme”. The deformation potentials of the optical and acoustic phonons were determined by a fitting procedure based on published data on the luminescence spectrum of Se+ donors at the 1s(T2)–1s(A1) transition and on the Franck-Condon principle. The resulting estimate of the relaxation rate, 10³ s⁻¹, is five orders of magnitude lower than the rate corresponding to the experimentally measured lifetime. The discrepancy with experiment stems from an oversimplified model that neglects several factors, chief among them the presence of quasi-local vibrational modes. Analysis of the luminescence spectrum at this transition indicates that the energies of such modes lie in the range from 26 to 61 meV. Satisfactory agreement with experiment would require a more elaborate model that takes the interaction with these modes into account.


2003 ◽  
Vol 125 (4) ◽  
pp. 736-739 ◽  
Author(s):  
Chakguy Prakasvudhisarn ◽  
Theodore B. Trafalis ◽  
Shivakumar Raman

Probe-type coordinate measuring machines (CMMs) rely on the measurement of several discrete points to capture the geometry of part features. The sampled points are then fit to verify a specified geometry. The most widely used fitting method, the least squares fit (LSQ), occasionally overestimates the tolerance zone. This carries the economic disadvantage of rejecting some good parts and the statistical drawback of relying on a normal (Gaussian) distribution assumption. Support vector machines (SVMs) are a relatively new approach for determining the approximating function in regression problems, with the advantage that no normality assumption is required. In this research, support vector regression (SVR) is introduced as an accurate data fitting procedure for finding minimum zone straightness and flatness tolerances. Numerical tests conducted on previously published data yield results comparable to the published ones, illustrating the method's potential for precision data analysis such as minimum zone estimation.
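
As a rough illustration of the idea, and not the paper's formulation, the sketch below fits a linear-kernel SVR reference line to simulated CMM points with scikit-learn and takes the spread of the residuals as a straightness estimate; the data, kernel, and the C and epsilon settings are all assumptions that would need tuning for real measurements.

```python
import numpy as np
from sklearn.svm import SVR

# Simulated CMM trace: points along a nominally straight edge with a
# slight slope and measurement noise (illustrative data only).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 100.0, 50)                  # mm along the edge
y = 0.002 * x + rng.normal(0.0, 0.003, x.size)   # mm deviation from nominal

# Linear-kernel SVR used as the reference-line fit.
model = SVR(kernel="linear", C=100.0, epsilon=0.001)
model.fit(x.reshape(-1, 1), y)

# Residuals from the fitted reference line.
residuals = y - model.predict(x.reshape(-1, 1))

# Straightness estimate: width of the band enclosing all residuals.
straightness = residuals.max() - residuals.min()
print(f"estimated straightness zone: {straightness:.4f} mm")
```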


Author(s):  
Satish Sundar ◽  
Zvi Shiller

Abstract A design method is presented for selecting the system parameters of multi-degree-of-freedom mechanisms for near minimum time motions along specified paths. The time optimization problem is approximated by a simple curve fitting procedure that fits what we call the acceleration lines to the given path. The approximate cost function is explicit in the design parameters, facilitating the formulation of the design problem as a constrained optimization. Examples of optimizing the dimensions of a five-bar planar mechanism show close agreement between the approximate and the exact solutions and better computational efficiency than previous unconstrained optimization methods.
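
The acceleration-line construction itself is specific to the paper, so the sketch below only illustrates the general workflow the abstract describes: evaluate a few candidate designs, fit an explicit approximate cost in the design parameters, and minimize that cost under design constraints. Every function, bound, and constraint here is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the exact minimum-time evaluation of a
# candidate design p (e.g. two link lengths); in practice this would be
# an expensive trajectory optimization along the specified path.
def exact_traversal_time(p):
    a, b = p
    return 1.0 / np.sqrt(a) + 0.5 / np.sqrt(b) + 0.05 * (a + b)

# Step 1: sample a few designs and fit a cheap quadratic surrogate,
# playing the role of the explicit approximate cost function.
samples = np.array([[a, b] for a in np.linspace(0.5, 3.0, 6)
                           for b in np.linspace(0.5, 3.0, 6)])
times = np.array([exact_traversal_time(p) for p in samples])
A = np.column_stack([np.ones(len(samples)), samples[:, 0], samples[:, 1],
                     samples[:, 0]**2, samples[:, 0] * samples[:, 1],
                     samples[:, 1]**2])
coef, *_ = np.linalg.lstsq(A, times, rcond=None)

def approx_cost(p):
    a, b = p
    return np.array([1.0, a, b, a * a, a * b, b * b]) @ coef

# Step 2: minimize the approximate cost under a design constraint
# (here a simple bound on total link length, purely illustrative).
cons = ({"type": "ineq", "fun": lambda p: 4.0 - (p[0] + p[1])},)
res = minimize(approx_cost, x0=[1.0, 1.0], bounds=[(0.5, 3.0)] * 2,
               constraints=cons)
print("approximate optimum:", res.x, "predicted time:", res.fun)
```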


Web Services ◽  
2019 ◽  
pp. 413-430
Author(s):  
Usman Akhtar ◽  
Mehdi Hassan

The availability of huge amounts of heterogeneous data from different sources on the Internet has been termed the problem of Big Data. Clustering is widely used as a knowledge discovery tool that separates the data into manageable parts, and clustering algorithms that scale to big databases are needed. In this chapter we explore various schemes that have been used to tackle big databases. Statistical features are extracted from the given dataset, redundant and irrelevant features are eliminated, and the most important features are selected by a genetic algorithm (GA). Clustering with the reduced feature set requires less computational time and fewer resources. Experiments performed on standard datasets indicate that the proposed scheme offers high clustering accuracy. Various quality measures have been computed to check clustering quality, and the results show that the proposed methodology improves them significantly and yields high quality clusters.
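
As a rough sketch of the pipeline the chapter describes (feature extraction, GA-based feature selection, then clustering), the Python example below runs a small genetic algorithm over binary feature masks with a silhouette-score fitness and k-means; the synthetic data, GA settings, and fitness choice are all assumptions, not the chapter's exact setup.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)

# Synthetic data: 5 informative features plus 15 pure-noise features.
X_info, _ = make_blobs(n_samples=500, centers=4, n_features=5, random_state=0)
X = np.hstack([X_info, rng.normal(size=(500, 15))])

def fitness(mask):
    """Clustering quality (silhouette) using only the selected features."""
    if mask.sum() == 0:
        return -1.0
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X[:, mask])
    return silhouette_score(X[:, mask], labels)

# A very small genetic algorithm over binary feature masks.
pop_size, n_gen, n_feat = 20, 15, X.shape[1]
pop = rng.integers(0, 2, size=(pop_size, n_feat)).astype(bool)
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]       # selection
    children = []
    while len(children) < pop_size - len(parents):
        p1, p2 = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_feat)                         # crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        flip = rng.random(n_feat) < 0.05                      # mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
print("silhouette with selected features:", fitness(best))
```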


2015 ◽  
Vol 41 (8) ◽  
pp. 754-772 ◽  
Author(s):  
Dionisis Philippas ◽  
Yiannis Koutelidakis ◽  
Alexandros Leontitsis

Purpose – The purpose of this paper is to analyse the importance for financial stability of interbank connections and of shocks to banks’ capital ratios, by looking at a network comprising a large number of European and UK banks. Design/methodology/approach – The authors model interbank contagion using insights from the Susceptible-Infected-Recovered model. The authors construct scale-free networks with preferential attachment and growth, applying simulated interbank data to capture the size and scale of connections in the network. The authors proceed to shock these networks per country and perform Monte Carlo simulations to calculate mean total losses and duration of infection. Finally, the authors examine the effects of contagion in terms of Core Tier 1 Capital Ratios for the affected banking systems. Findings – The authors find that shocks in smaller banking systems may cause smaller overall losses but tend to persist longer, leading to important policy implications for crisis containment. Originality/value – The authors infer the domestic and cross-border interbank exposures of banks employing an iterative proportional fitting procedure known as the RAS algorithm. The authors use an extended sample of 169 European banks, which also captures effects on the UK as well as the Eurozone interbank markets. Finally, the authors provide evidence of the contagion effect on each bank by allowing for heterogeneity. The authors compare each bank’s relative financial strength with the contagion effect, which is modelled by the number and the volume of bilateral connections.
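
The RAS algorithm named in the abstract is in essence iterative proportional fitting applied to a square exposure matrix with a zero diagonal. The sketch below is a minimal illustration with invented totals and a maximum-entropy starting matrix; it is not the authors' calibration of the 169-bank sample.

```python
import numpy as np

def ras(total_assets, total_liabilities, max_iter=500, tol=1e-9):
    """Estimate a bilateral exposure matrix X (lender i -> borrower j) whose
    row sums match each bank's interbank assets and whose column sums match
    its interbank liabilities, starting from a maximum-entropy prior with a
    zero diagonal (no self-lending)."""
    X = np.outer(total_assets, total_liabilities).astype(float)
    np.fill_diagonal(X, 0.0)
    for _ in range(max_iter):
        X *= (total_assets / X.sum(axis=1))[:, None]        # fit row sums
        X *= (total_liabilities / X.sum(axis=0))[None, :]   # fit column sums
        if (np.abs(X.sum(axis=1) - total_assets).max() < tol and
                np.abs(X.sum(axis=0) - total_liabilities).max() < tol):
            break
    return X

# Illustrative totals for four banks (both sides sum to 100, so consistent).
assets = np.array([40.0, 30.0, 20.0, 10.0])
liabilities = np.array([25.0, 25.0, 30.0, 20.0])
print(np.round(ras(assets, liabilities), 2))
```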

