DETECTING CONVERGENCE CLUBS

2018 ◽  
Vol 24 (3) ◽  
pp. 629-669 ◽  
Author(s):  
Fuat C. Beylunioğlu ◽  
M. Ege Yazgan ◽  
Thanasis Stengos

The convergence hypothesis, developed in the context of growth economics, asserts that income differences across countries are transitory and that developing countries will eventually attain the income level of developed ones. The convergence clubs hypothesis, on the other hand, claims that convergence can only be realized across groups of countries that share some common characteristics. In this study, we propose a new method for finding convergence clubs that combines a pairwise method of testing convergence with maximum clique and maximal clique algorithms. Unlike many of those already developed in the literature, this new method aims to find convergence clubs endogenously, without depending on a priori classifications. In a Monte Carlo simulation study, the success of the method in finding convergence clubs is compared with that of a similar algorithm. The simulation results indicate that the proposed method performs better than the compared algorithm in most cases. In addition to the Monte Carlo study, new empirical evidence on the existence of convergence clubs is presented in the context of real data applications.
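The clique-based idea can be illustrated with a minimal sketch: build a graph whose vertices are countries and whose edges connect pairs that pass a pairwise convergence test, then read maximal cliques off that graph as candidate clubs. The pairwise test below is a crude placeholder (the paper uses a formal pairwise convergence test), and the use of networkx is an assumption for illustration only.

```python
# Illustrative sketch (not the authors' implementation): candidate convergence
# clubs as cliques of a "pairwise convergence graph".
import itertools
import networkx as nx
import numpy as np

def pairwise_converges(inc_i, inc_j, tol=0.05):
    """Placeholder test: treat two income series as converging if the
    absolute log-income gap is shrinking and small at the end of the sample."""
    gap = np.abs(np.log(inc_i) - np.log(inc_j))
    return gap[-1] < tol and gap[-1] < gap[0]

def convergence_clubs(incomes):
    """incomes: dict mapping country name -> array of per-capita income."""
    G = nx.Graph()
    G.add_nodes_from(incomes)
    for a, b in itertools.combinations(incomes, 2):
        if pairwise_converges(incomes[a], incomes[b]):
            G.add_edge(a, b)
    # Maximal cliques are candidate clubs; the largest ones are maximum cliques.
    cliques = list(nx.find_cliques(G))
    return sorted(cliques, key=len, reverse=True)
```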

2020 ◽  
Vol 9 (1) ◽  
pp. 47-60
Author(s):  
Samir K. Ashour ◽  
Ahmed A. El-Sheikh ◽  
Ahmed Elshahhat

In this paper, Bayesian and non-Bayesian estimation of a two-parameter Weibull lifetime model in the presence of progressive first-failure censored data with binomial random removals is considered. Based on the s-normal approximation to the asymptotic distribution of the maximum likelihood estimators, two-sided approximate confidence intervals for the unknown parameters are constructed. Using gamma conjugate priors, several Bayes estimates and associated credible intervals are obtained relative to the squared error loss function. The proposed estimators cannot be expressed in closed form and are evaluated numerically by a suitable iterative procedure. A Bayesian approach is developed using Markov chain Monte Carlo techniques to generate samples from the posterior distributions and, in turn, to compute the Bayes estimates and associated credible intervals. To analyze the performance of the proposed estimators, a Monte Carlo simulation study is conducted. Finally, a real data set is discussed for illustration purposes.
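A minimal sketch of the MCMC machinery involved is given below. It assumes complete (uncensored) Weibull data for clarity, so it omits the progressive first-failure censoring and binomial removals that the paper's likelihood accounts for; the gamma priors and the posterior-mean Bayes estimate under squared error loss follow the abstract.

```python
# Simplified Metropolis-Hastings sketch for a two-parameter Weibull model with
# independent gamma priors. Complete-data likelihood only; not the paper's code.
import numpy as np
from scipy import stats

def log_posterior(shape, scale, data, a1=1.0, b1=1.0, a2=1.0, b2=1.0):
    if shape <= 0 or scale <= 0:
        return -np.inf
    loglik = stats.weibull_min.logpdf(data, c=shape, scale=scale).sum()
    logprior = (stats.gamma.logpdf(shape, a1, scale=1.0 / b1)
                + stats.gamma.logpdf(scale, a2, scale=1.0 / b2))
    return loglik + logprior

def mh_sampler(data, n_iter=20000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.array([1.0, data.mean()])          # rough starting values
    samples = np.empty((n_iter, 2))
    logp = log_posterior(*theta, data)
    for t in range(n_iter):
        prop = theta * np.exp(step * rng.standard_normal(2))   # log-random-walk proposal
        logp_prop = log_posterior(*prop, data)
        # Jacobian term corrects for the asymmetric log-scale proposal
        if np.log(rng.uniform()) < logp_prop - logp + np.log(prop.prod() / theta.prod()):
            theta, logp = prop, logp_prop
        samples[t] = theta
    return samples   # column means give the Bayes estimates under squared error loss
```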


2019 ◽  
Vol 492 (1) ◽  
pp. 589-602 ◽  
Author(s):  
A Fienga ◽  
C Avdellidou ◽  
J Hanuš

In this paper, we present masses of 103 asteroids deduced from their perturbations on the orbits of the inner planets, in particular Mars and the Earth. These determinations and the INPOP19a planetary ephemerides are improved by the recent Mars orbiter navigation data and the updated orbit of Jupiter based on the Juno mission data. More realistic mass estimates are computed by a new method based on random Monte Carlo sampling that uses up-to-date knowledge of asteroid bulk densities. We provide masses with uncertainties better than 33 per cent for 103 asteroids. Deduced bulk densities are consistent with those observed within the main spectroscopic complexes.
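As a loose illustration of how bulk-density knowledge can be propagated by Monte Carlo sampling, the sketch below draws density and diameter samples and turns them into a mass distribution. The density values, the normal priors, and the spherical-volume assumption are all placeholders for illustration; this is not the INPOP19a pipeline, which determines masses dynamically from planetary perturbations.

```python
# Illustrative Monte Carlo sketch: propagate an assumed bulk-density prior
# (keyed by spectroscopic class) and a diameter estimate into a mass distribution.
import numpy as np

rng = np.random.default_rng(42)

# Assumed (illustrative) bulk-density priors in kg/m^3: (mean, standard deviation)
DENSITY_PRIORS = {"C": (1400.0, 300.0), "S": (2700.0, 500.0), "M": (4500.0, 1000.0)}

def mass_samples(diameter_km, diameter_sigma_km, taxon, n=100_000):
    """Draw Monte Carlo mass samples for a roughly spherical asteroid."""
    mu_rho, sd_rho = DENSITY_PRIORS[taxon]
    d = rng.normal(diameter_km, diameter_sigma_km, n) * 1e3      # metres
    rho = rng.normal(mu_rho, sd_rho, n)                          # kg/m^3
    keep = (d > 0) & (rho > 0)                                   # discard unphysical draws
    volume = np.pi / 6.0 * d[keep] ** 3
    return rho[keep] * volume                                    # kg

m = mass_samples(diameter_km=120.0, diameter_sigma_km=5.0, taxon="C")
print(f"mass = {m.mean():.3e} kg, relative spread = {m.std() / m.mean():.0%}")
```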


2021 ◽  
Vol 6 (10) ◽  
pp. 10789-10801
Author(s):  
Tahani A. Abushal ◽  

In this paper, the problem of estimating the parameter of the Akash distribution is considered when the product lifetimes are observed under Type-II censoring. The maximum likelihood estimators (MLE) are studied for estimating the unknown parameter and reliability characteristics. An approximate confidence interval for the parameter is derived under the s-normal approach to the asymptotic distribution of the MLE. Bayesian inference procedures have been developed under the usual error loss function through Lindley's technique and the Metropolis-Hastings algorithm. The highest posterior density interval is developed by using the Metropolis-Hastings algorithm. Finally, the performances of the different methods are compared through a Monte Carlo simulation study. An application to a set of real data is also analyzed using the proposed methods.
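A hedged numerical sketch of the likelihood side is given below. It uses the Akash density f(x; θ) = θ³/(θ² + 2)(1 + x²)e^(−θx) and the standard Type-II censored likelihood (the r smallest failure times out of n units, with the remaining n − r units known only to survive the largest observed time); the sample values are invented for illustration, and this is not the paper's code.

```python
# Sketch: numerical maximum-likelihood estimation of the Akash parameter theta
# from a Type-II censored sample.
import numpy as np
from scipy.optimize import minimize_scalar

def akash_logpdf(x, theta):
    return 3 * np.log(theta) - np.log(theta**2 + 2) + np.log1p(x**2) - theta * x

def akash_logsf(x, theta):
    # log survival: S(x) = (1 + theta*x*(theta*x + 2)/(theta^2 + 2)) * exp(-theta*x)
    return np.log1p(theta * x * (theta * x + 2) / (theta**2 + 2)) - theta * x

def neg_loglik_type2(theta, observed, n):
    """observed: the r smallest failure times out of n units on test."""
    if theta <= 0:
        return np.inf
    r = len(observed)
    ll = akash_logpdf(np.asarray(observed), theta).sum()
    ll += (n - r) * akash_logsf(max(observed), theta)   # censored units survive x_(r)
    return -ll

# Example usage with an assumed (illustrative) censored sample
observed = [0.3, 0.7, 1.1, 1.6, 2.0]        # r = 5 smallest failure times
n = 12                                       # total units on test
fit = minimize_scalar(neg_loglik_type2, bounds=(1e-6, 50.0),
                      args=(observed, n), method="bounded")
print("MLE of theta:", fit.x)
```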


2020 ◽  
Author(s):  
Agnes Fienga ◽  
Chrysa Avdellidou ◽  
Josef Hanus

We present here masses of 103 asteroids deduced from their perturbations on the orbits of the inner planets, in particular Mars and the Earth. These determinations and the INPOP19a planetary ephemerides are improved by the recent Mars orbiter navigation data and the updated orbit of Jupiter based on the Juno mission data. More realistic mass estimates are computed by a new method based on random Monte-Carlo sampling that uses up-to-date knowledge of asteroid bulk densities. We provide masses with uncertainties better than 33% for 103 asteroids. Deduced bulk densities are consistent with those observed within the main spectroscopic complexes.


10.29007/3sdd ◽  
2020 ◽  
Author(s):  
Yuping Lu ◽  
Charles Phillips ◽  
Elissa Chesler ◽  
Michael Langston

The paraclique algorithm provides an effective means for biological data clustering. It satisfies the mathematical quest for density, while fulfilling the pragmatic need for noise abatement on real data. Given a finite, simple, edge-weighted and thresholded graph, the paraclique method first finds a maximum clique, then incorporates additional vertices in a controlled manner, and finally extracts the subgraph thereby defined. When more than one maximum clique is present, however, deciding which to employ is usually left unspecified. In practice, this frequently and quite naturally reduces to using the first maximum clique found. In this paper, maximum clique selection is studied in the context of well-annotated transcriptomic data, with ontological classification used as a proxy for cluster quality. Enrichment p-values are compared using maximum cliques chosen in a variety of ways. The most appealing and intuitive option is almost surely to start with the maximum clique having the highest average edge weight. Although there is of course no guarantee that such a strategy is any better than random choice, results derived from a large collection of experiments indicate that, in general, this approach produces a small but statistically significant improvement in overall cluster quality. Such an improvement, though modest, may be well worth pursuing in light of the time, expense and expertise often required to generate timely, high quality, high throughput biological data.
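The clique-selection rule studied here has a direct expression in code: enumerate the maximum cliques of the thresholded, edge-weighted graph and keep the one with the highest average edge weight. The sketch below uses networkx for illustration and is not the paraclique implementation itself.

```python
# Sketch of the selection rule discussed above: among all maximum cliques,
# keep the one whose average edge weight is highest.
import itertools
import networkx as nx

def best_maximum_clique(G, weight="weight"):
    cliques = list(nx.find_cliques(G))                   # all maximal cliques
    max_size = max(len(c) for c in cliques)
    maximum_cliques = [c for c in cliques if len(c) == max_size]
    if max_size < 2:
        return maximum_cliques[0]                        # no edges to average over

    def avg_edge_weight(clique):
        edges = list(itertools.combinations(clique, 2))
        return sum(G[u][v].get(weight, 1.0) for u, v in edges) / len(edges)

    return max(maximum_cliques, key=avg_edge_weight)
```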


2017 ◽  
Vol 0 (0) ◽  
Author(s):  
Fuat C. Beylunioglu ◽  
Thanasis Stengos ◽  
M. Ege Yazgan

In this study, we propose a new method to find convergence clubs that combines a pairwise method of testing convergence with a maximal clique algorithm. Unlike many of those already developed in the literature, this new method aims to find convergence clubs endogenously, without depending on a priori classifications. We use our method to study convergence among different capital markets as captured by their respective indices. Stock market convergence would indicate the absence of arbitrage opportunities in moving between the different markets, as they would all present investors with similar risks. Furthermore, stock market convergence would be a precursor to GDP convergence, as these economies would be bound by similar (possibly unobservable) common factors that affect long-run macroeconomic performance.


2002 ◽  
Vol 11 (5) ◽  
pp. 474-492 ◽  
Author(s):  
Lin Chai ◽  
William A. Hoff ◽  
Tyrone Vincent

A new method for registration in augmented reality (AR) was developed that simultaneously tracks the position, orientation, and motion of the user's head, as well as estimating the three-dimensional (3D) structure of the scene. The method fuses data from head-mounted cameras and head-mounted inertial sensors. Two extended Kalman filters (EKFs) are used: one estimates the motion of the user's head and the other estimates the 3D locations of points in the scene. A recursive loop is used between the two EKFs. The algorithm was tested using a combination of synthetic and real data, and in general was found to perform well. A further test showed that a system using two cameras performed much better than a system using a single camera, although improving the accuracy of the inertial sensors can partially compensate for the loss of one camera. The method is suitable for use in completely unstructured and unprepared environments. Unlike previous work in this area, this method requires no a priori knowledge about the scene, and can work in environments in which the objects of interest are close to the user.
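The filtering machinery behind the pose and structure estimators can be summarized by a generic extended Kalman filter predict/update cycle, sketched below in numpy. The motion model f, measurement model h, and their Jacobians F and H are application-specific stand-ins, not the paper's actual camera/inertial models.

```python
# Generic EKF predict/update sketch; models and Jacobians are assumptions.
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate state x and covariance P through the nonlinear motion model f."""
    x_pred = f(x)
    P_pred = F(x) @ P @ F(x).T + Q
    return x_pred, P_pred

def ekf_update(x_pred, P_pred, z, h, H, R):
    """Correct the prediction with measurement z (e.g. tracked image features)."""
    Hx = H(x_pred)
    y = z - h(x_pred)                              # innovation
    S = Hx @ P_pred @ Hx.T + R                     # innovation covariance
    K = P_pred @ Hx.T @ np.linalg.inv(S)           # Kalman gain
    x = x_pred + K @ y
    P = (np.eye(len(x)) - K @ Hx) @ P_pred
    return x, P
```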


2005 ◽  
Vol 17 (6) ◽  
pp. 1385-1410 ◽  
Author(s):  
Faming Liang

Bayesian neural networks play an increasingly important role in modeling and predicting nonlinear phenomena in scientific computing. In this article, we propose to use the contour Monte Carlo algorithm to evaluate evidence for Bayesian neural networks. In the new method, the evidence is dynamically learned for each of the models. Our numerical results show that the new method works well for both the regression and classification multilayer perceptrons. It often leads to an improved estimate, in terms of overall accuracy, for the evidence of multiple MLPs in comparison with the reversible-jump Markov chain Monte Carlo method and the gaussian approximation method. For the simulated data, it can identify the true models, and for the real data, it can produce results consistent with those published in the literature.


2013 ◽  
Vol 2013 ◽  
pp. 1-10 ◽  
Author(s):  
Alessandro Barbiero

The type III discrete Weibull distribution can be used in reliability analysis for modeling failure data such as the number of shocks, cycles, or runs a component or a structure can overcome before failing. This paper describes three methods for estimating its parameters: two customary techniques and a technique particularly suitable for discrete distributions, which, in contrast to the other two, provides analytical estimates, whose derivation is detailed here. The techniques' peculiarities and practical limits are outlined. A Monte Carlo simulation study has been performed to assess the statistical performance of these methods for different parameter combinations and sample sizes, and to give some indication for their mindful use. Two applications to real data are provided with the aim of showing how the type III discrete Weibull distribution can fit real data, even better than other popular discrete models, and how the inferential procedures work. A software implementation of the model is also provided.


2019 ◽  
Vol 23 (Suppl. 6) ◽  
pp. 1839-1847
Author(s):  
Caner Tanis ◽  
Bugra Saracoglu

In this paper, the problem of estimating the unknown parameters of the log-Kumaraswamy distribution via Monte-Carlo simulations is considered. Firstly, six different estimation methods are described: maximum likelihood, approximate Bayesian, least-squares, weighted least-squares, percentile, and Cramer-von-Mises. Then, a Monte-Carlo simulation study is performed to evaluate the performances of these methods according to the biases and mean-squared errors of the estimators. Furthermore, two real data applications based on carbon fibers and gauge lengths are presented to compare the fits of the log-Kumaraswamy and other fitted statistical distributions.
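The structure of such a bias/MSE comparison can be sketched generically. The snippet below uses an exponential model and two simple estimators (maximum likelihood and a median/percentile-based estimator) purely as stand-ins for the log-Kumaraswamy distribution and the six methods compared in the paper; only the Monte-Carlo comparison pattern itself is being illustrated.

```python
# Generic Monte-Carlo comparison harness: simulate many samples, apply
# competing estimators, and report bias and mean-squared error.
import numpy as np

rng = np.random.default_rng(1)

def mle_rate(x):                 # ML estimator of an exponential rate
    return 1.0 / x.mean()

def percentile_rate(x):          # median-based (percentile) estimator
    return np.log(2.0) / np.median(x)

def monte_carlo(true_rate=2.0, n=50, reps=10_000, estimators=(mle_rate, percentile_rate)):
    estimates = {est.__name__: np.empty(reps) for est in estimators}
    for r in range(reps):
        sample = rng.exponential(scale=1.0 / true_rate, size=n)
        for est in estimators:
            estimates[est.__name__][r] = est(sample)
    for name, vals in estimates.items():
        bias = vals.mean() - true_rate
        mse = np.mean((vals - true_rate) ** 2)
        print(f"{name:>16}: bias = {bias:+.4f}, MSE = {mse:.4f}")

monte_carlo()
```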

