Intensity Estimation for Poisson Process With Compositional Noise

Author(s):  
Glenna Schluck ◽  
Wei Wu ◽  
Anuj Srivastava

Intensity estimation for Poisson processes is a classical problem and has been extensively studied over the past few decades. Practical observations, however, often contain compositional noise, i.e., a non-linear shift along the time axis, which makes standard methods not directly applicable. The key challenge is that these observations are not “aligned,” and registration procedures are required for successful estimation. In this paper, we propose an alignment-based framework for positive intensity estimation. We first show that the intensity function is area-preserved with respect to compositional noise. This property implies that the time warping is encoded only in the normalized intensity, or density, function. We then decompose the intensity estimate into the product of the estimated total intensity and the estimated density. The estimation of the density relies on a metric that measures the phase difference between two density functions. An asymptotic study shows that the proposed estimation algorithm provides a consistent estimator of the normalized intensity. We then extend the framework to estimating non-negative intensity functions. The success of the proposed estimation algorithms is illustrated using two simulations. Finally, we apply the new framework to a real data set of neural spike trains, and find that the newly estimated intensities provide better classification accuracy than previous methods.

2017 ◽  
Vol 14 (1) ◽  
pp. 172988141668567 ◽  
Author(s):  
Jing Li ◽  
Tao Yang ◽  
Jingyi Yu

Robust extraction of consensus sets from noisy data is a fundamental problem in robot vision. Existing multimodel estimation algorithms have shown success in estimating large consensus sets. One remaining challenge is to extract small consensus sets from cluttered multimodel data sets. In this article, we present an effective multimodel extraction method to solve this challenge. Our technique is based on smallest-consensus-set random sampling, which we prove is guaranteed to extract all consensus sets larger than the smallest set from the input data. We then develop an efficient model competition scheme that iteratively removes redundant and incorrect model samplings. Extensive experiments on both synthetic and real data with high percentages of outliers and multimodel intersections demonstrate the superiority of our method.
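The flavor of minimal-sample consensus extraction can be sketched as follows. This is a simplified sequential, RANSAC-style stand-in (names and parameters are illustrative), not the authors' exact sampling bound or competition scheme: repeatedly fit line models from minimal two-point samples, keep the largest consensus set, and remove it before searching again.

```python
import random

def extract_lines(points, tol=0.05, min_support=10, iters=500):
    """Sequential minimal-sample consensus extraction (illustrative stand-in
    for smallest-consensus-set sampling plus model competition)."""
    models, remaining = [], list(points)
    while len(remaining) >= min_support:
        best_inliers = []
        for _ in range(iters):
            (x1, y1), (x2, y2) = random.sample(remaining, 2)  # minimal sample
            a, b, c = y2 - y1, x1 - x2, x2 * y1 - x1 * y2     # line ax+by+c=0
            norm = (a * a + b * b) ** 0.5 or 1.0
            inl = [p for p in remaining
                   if abs(a * p[0] + b * p[1] + c) / norm < tol]
            if len(inl) > len(best_inliers):
                best_inliers = inl
        if len(best_inliers) < min_support:
            break
        models.append(best_inliers)                # accept model, remove inliers
        remaining = [p for p in remaining if p not in best_inliers]
    return models
```

On data with two well-separated lines plus scattered outliers, both consensus sets are recovered and the outliers never reach the support threshold.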


Geophysics ◽  
2005 ◽  
Vol 70 (3) ◽  
pp. P13-P18 ◽  
Author(s):  
Wenkai Lu ◽  
Yandong Li ◽  
Shanwen Zhang ◽  
Huanqin Xiao ◽  
Yanda Li

This article proposes a new higher-order-statistics-based coherence-estimation algorithm, which we denote as HOSC. Unlike the traditional crosscorrelation-based C1 coherence algorithm, which sequentially estimates correlation in the inline and crossline directions and uses their geometric mean as a coherence estimate at the analysis point, our method exploits three seismic traces simultaneously to calculate a 2D slice of their normalized fourth-order moment with one zero-lag correlation and then searches for the maximum correlation point on the 2D slice as the coherence estimate. To include more seismic traces in the coherence estimation, we introduce a supertrace technique that constructs a new data cube by rearranging several adjacent seismic traces into a single supertrace. Combining our supertrace technique with the C1 and HOSC algorithms, we obtain two efficient coherence-estimation algorithms, which we call ST-C1 and ST-HOSC. Application results on the real data set show that our algorithms are able to reveal more details about the structural and stratigraphic features than the traditional C1 algorithm, yet still preserve its computational efficiency.
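For reference, the baseline C1 crosscorrelation coherence that HOSC is compared against can be sketched as below. This is an illustrative simplification (function and parameter names are assumptions): the analysis trace is correlated with its inline and crossline neighbors over a small lag search, and the geometric mean of the two peak correlations is the coherence estimate.

```python
import math

def c1_coherence(center, inline, crossline, max_lag=3):
    """Crosscorrelation-based C1 coherence: geometric mean of the peak
    normalized crosscorrelation of the analysis trace with its inline
    and crossline neighbors, searched over a small lag window."""
    def peak_xcorr(a, b):
        best = 0.0
        for lag in range(-max_lag, max_lag + 1):
            num = den_a = den_b = 0.0
            for i in range(len(a)):
                j = i + lag
                if 0 <= j < len(b):        # correlate over the overlap only
                    num += a[i] * b[j]
                    den_a += a[i] * a[i]
                    den_b += b[j] * b[j]
            if den_a > 0 and den_b > 0:
                best = max(best, num / math.sqrt(den_a * den_b))
        return best
    return (peak_xcorr(center, inline) * peak_xcorr(center, crossline)) ** 0.5
```

Identical neighboring traces yield coherence 1; dissimilar traces yield values well below 1, which is what coherence slices map laterally across a survey.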


2016 ◽  
Vol 47 (2) ◽  
pp. 207-239 ◽  
Author(s):  
Aurea Grané ◽  
Rosario Romera

Survey data are usually of mixed type (quantitative, multistate categorical, and/or binary variables). Multidimensional scaling (MDS) is one of the most widely used methodologies for visualizing the profile structure of such data. MDS methods have appeared in the literature since the 1960s, initially in publications in the psychometrics area. Nevertheless, the sensitivity and robustness of MDS configurations are topics scarcely addressed in the specialized literature. In this work, we are interested in constructing robust profiles for mixed-type data using a proper MDS configuration. To this end, we propose comparing different MDS configurations (coming from different metrics) through a combination of sensitivity and robustness analyses. In particular, as an alternative to the classical Gower metric, we propose a robust joint metric that combines different distance matrices while avoiding redundant information, via related metric scaling. The search for robustness and the identification of outliers are carried out through a distance-based procedure related to geometric variability notions. In this sense, we propose a statistic for detecting multivariate outliers in the context of mixed-type data and evaluate its performance through a simulation study. Finally, we apply these techniques to a real data set provided by the largest humanitarian organization involved in social programs in Spain, where we are able to identify, in a robust way, the most relevant factors defining the profiles of people at risk of social exclusion at the beginning of the 2008 economic crisis.
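The classical Gower metric that serves as the paper's baseline is simple to state in code. The sketch below is a minimal version (argument names are illustrative): quantitative variables contribute a range-normalized absolute difference, while categorical and binary variables contribute simple matching.

```python
def gower_dissimilarity(x, y, types, ranges):
    """Gower's general dissimilarity coefficient for mixed-type records.
    types[k] is 'num' for a quantitative variable (scored |x-y| / range)
    or 'cat' for a categorical/binary variable (simple matching);
    ranges[k] is the observed range of numeric variable k (None otherwise)."""
    total = 0.0
    for k, t in enumerate(types):
        if t == 'num':
            total += abs(x[k] - y[k]) / ranges[k]
        else:
            total += 0.0 if x[k] == y[k] else 1.0
    return total / len(types)
```

A full distance matrix of these coefficients is what an MDS configuration is computed from; the robust joint metric proposed in the paper replaces this single matrix with a combination of several.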


Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. V61-V66 ◽  
Author(s):  
Yandong Li ◽  
Wenkai Lu ◽  
Huanqin Xiao ◽  
Shanwen Zhang ◽  
Yanda Li

The eigenstructure-based coherence algorithms are robust to noise and able to produce enhanced coherence images. However, the original eigenstructure coherence algorithm does not implement dip scanning; therefore, it produces less satisfactory results in areas with strong structural dips. The supertrace technique also improves the coherence algorithms’ robustness by concatenating multiple seismic traces to form a supertrace. In addition, the supertrace data cube preserves the structural-dip information that is contained in the original seismic data cube; thus, dip scanning can be performed effectively using a number of adjacent supertraces. We combine the eigenstructure analysis and the dip-scanning supertrace technique to obtain a new coherence-estimation algorithm. Application to the real data set shows that the new algorithm provides good coherence estimates in areas with strong structural dips. Furthermore, the algorithm is computationally efficient because of the small covariance matrix [Formula: see text] used for the eigenstructure analysis.
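The core of eigenstructure coherence can be sketched in a few lines. This is a minimal illustration (the dip-scanning supertrace construction is omitted, and the covariance matrix is formed without mean removal, as is common in this literature): coherence is the fraction of total energy captured by the dominant eigenvector of the small trace-covariance matrix.

```python
import numpy as np

def eigenstructure_coherence(traces):
    """Eigenstructure coherence for a small group of traces: the ratio of the
    dominant eigenvalue of the trace covariance matrix to its trace. Values
    near 1 indicate coherent (rank-one) energy across the traces."""
    D = np.asarray(traces, dtype=float)   # rows = traces, columns = time samples
    C = D @ D.T                           # small J x J covariance matrix
    eigvals = np.linalg.eigvalsh(C)       # ascending order
    return eigvals[-1] / eigvals.sum()
```

Identical traces give a rank-one covariance matrix and coherence 1; fully incoherent traces spread the energy evenly across eigenvalues. The small size of C (one row per trace in the analysis window) is what keeps the eigendecomposition cheap.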


2019 ◽  
Vol XVI (2) ◽  
pp. 1-11
Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Soha Othman Ahmed ◽  
Muhammad Akbar Ali Shah ◽  
Emrah Altun

A new three-parameter continuous model called the exponentiated half-logistic Lomax distribution is introduced in this paper. Basic mathematical properties of the proposed model are investigated, including raw and incomplete moments, skewness, kurtosis, generating functions, Rényi entropy, Lorenz, Bonferroni, and Zenga curves, probability-weighted moments, the stress-strength model, order statistics, and record statistics. The model parameters are estimated by the maximum likelihood criterion, and the behaviour of these estimates is examined through a simulation study. The applicability of the new model is illustrated by applying it to a real data set.


Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution has been introduced as a lifetime model with good statistical properties. In this paper, estimation of its probability density function and cumulative distribution function is considered using five different methods: uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS), and percentile (PC) estimators. The performance of these estimation procedures is compared through numerical simulations based on the mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others, and that when the sample size is large enough, the ML and UMVU estimators are almost equivalent and more efficient than the LS, WLS, and PC estimators. Finally, the results are illustrated by analyzing a real data set.
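The ML plug-in approach discussed here can be sketched concretely. The density below is the standard form of the generalized inverted exponential distribution; the grid-search fit is a deliberately crude illustration of maximum likelihood (function names and the grid are assumptions), not the UMVU estimator studied in the paper.

```python
import math

def gie_pdf(x, alpha, lam):
    """Density of the generalized inverted exponential distribution:
    f(x) = (alpha*lam / x^2) * exp(-lam/x) * (1 - exp(-lam/x))**(alpha - 1), x > 0."""
    e = math.exp(-lam / x)
    return alpha * lam / x ** 2 * e * (1.0 - e) ** (alpha - 1.0)

def gie_mle(data, grid):
    """Crude maximum-likelihood fit by grid search over (alpha, lam) pairs;
    the ML plug-in estimate of the pdf is then gie_pdf(., *gie_mle(...))."""
    def loglik(a, l):
        return sum(math.log(gie_pdf(x, a, l)) for x in data)
    return max(grid, key=lambda p: loglik(*p))
```

In practice a numerical optimizer would replace the grid, but the plug-in principle is the same: maximize the log-likelihood in the parameters, then evaluate the density at the fitted values.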


2019 ◽  
Vol 14 (2) ◽  
pp. 148-156
Author(s):  
Nighat Noureen ◽  
Sahar Fazal ◽  
Muhammad Abdul Qadir ◽  
Muhammad Tanvir Afzal

Background: Specific combinations of Histone Modifications (HMs), contributing to the histone code hypothesis, lead to various biological functions. Various studies have used HM combinations to divide the genome into different regions, classified as chromatin states. Mostly Hidden Markov Model (HMM)-based techniques have been utilized for this purpose, using data from Next-Generation Sequencing (NGS) platforms. Chromatin states based on histone-modification combinatorics are annotated by mapping them to functional regions of the genome. However, the numbers of states predicted by the HMM tools have so far been justified only biologically. Objective: The present study aims to provide a computational scheme to identify the underlying hidden states in the data under consideration. Methods: We propose a computational scheme, HCVS, based on hierarchical clustering and a visualization strategy. Results: We tested the proposed scheme on a real data set of nine cell types comprising nine chromatin marks. The approach successfully identified the state numbers for various possibilities, and the results show quite good correlation with one of the existing models. Conclusion: The HCVS model not only helps in deciding the optimal number of states for a particular data set but also justifies the results biologically, thereby correlating the computational and biological aspects.
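The hierarchical-clustering step underlying a scheme like HCVS can be sketched as plain agglomerative clustering of per-region mark vectors. This is a minimal average-linkage stand-in (names are hypothetical, and the visualization strategy for choosing the state number is not reproduced).

```python
def agglomerate(points, n_clusters):
    """Average-linkage agglomerative clustering: repeatedly merge the pair of
    clusters with the smallest mean pairwise Euclidean distance until only
    n_clusters remain. Each point is a vector of chromatin-mark signals."""
    clusters = [[p] for p in points]
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    def linkage(c1, c2):
        return sum(dist(a, b) for a in c1 for b in c2) / (len(c1) * len(c2))
    while len(clusters) > n_clusters:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: linkage(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)    # merge the closest pair
    return clusters
```

Scanning `n_clusters` over a range and inspecting the resulting dendrogram heights is one simple way to judge candidate state numbers, in the spirit of the scheme described above.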


2021 ◽  
Vol 13 (9) ◽  
pp. 1703
Author(s):  
He Yan ◽  
Chao Chen ◽  
Guodong Jin ◽  
Jindong Zhang ◽  
Xudong Wang ◽  
...  

The traditional method of constant false-alarm rate (CFAR) detection is based on an assumed statistical model of the echo. Against the background of sea clutter and other interference, its target recognition accuracy is low and its false-alarm rate is high. Therefore, computer vision techniques have been widely discussed as a way to improve detection performance. However, the majority of studies have focused on synthetic aperture radar because of its high resolution; for coastal defense radar, with its low resolution, detection performance remains unsatisfactory. To this end, we herein propose a novel target detection method for coastal defense radar based on the faster region-based convolutional neural network (Faster R-CNN). The main processing steps are as follows: (1) Faster R-CNN is selected as the sea-surface target detector because of its high target detection accuracy; (2) the network is modified to account for the sparsity and small target size in the data set; and (3) soft non-maximum suppression is exploited to eliminate possibly overlapping detection boxes. Furthermore, detailed comparative experiments based on a real coastal defense radar data set are performed. The mean average precision of the proposed method is improved by 10.86% compared with that of the original Faster R-CNN.
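Step (3) above, soft non-maximum suppression, is the most self-contained piece and can be sketched directly. This is the standard Gaussian variant (box format and parameter values are illustrative): instead of deleting boxes that overlap a higher-scoring detection, their scores are decayed smoothly.

```python
import math

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft non-maximum suppression: decay each remaining box's score
    by exp(-iou^2 / sigma) relative to the current top detection, instead of
    hard-suppressing it. Boxes are (x1, y1, x2, y2)."""
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter)
    dets = sorted(zip(boxes, scores), key=lambda d: -d[1])
    kept = []
    while dets:
        box, score = dets.pop(0)                  # take the current best box
        kept.append((box, score))
        dets = [(b, s * math.exp(-iou(box, b) ** 2 / sigma)) for b, s in dets]
        dets = [(b, s) for b, s in dets if s > score_thresh]
        dets.sort(key=lambda d: -d[1])
    return kept
```

For sparse scenes with small targets, this keeps genuinely distinct detections at full score while strongly down-weighting duplicates of the same target.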


2021 ◽  
Vol 1978 (1) ◽  
pp. 012047
Author(s):  
Xiaona Sheng ◽  
Yuqiu Ma ◽  
Jiabin Zhou ◽  
Jingjing Zhou

2021 ◽  
pp. 1-11
Author(s):  
Velichka Traneva ◽  
Stoyan Tranev

Analysis of variance (ANOVA), developed by Fisher, is an important method in data analysis. There are situations, however, in which the data are imprecise. To analyze such data, this paper introduces, for the first time, an intuitionistic fuzzy two-factor ANOVA (2-D IFANOVA) without replication, as an extension of classical ANOVA and of one-way IFANOVA, for the case where the data are intuitionistic fuzzy rather than real numbers. The proposed approach employs the apparatus of intuitionistic fuzzy sets (IFSs) and index matrices (IMs). The paper also analyzes a unique data set of daily ticket sales over one year in a multiplex of Cinema City Bulgaria, part of the Cineworld PLC Group, applying both the two-factor ANOVA and the proposed 2-D IFANOVA to study the influence of the “season” and “ticket price” factors. A comparative analysis of the results obtained from ANOVA and 2-D IFANOVA on the real data set is also presented.
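The crisp baseline that 2-D IFANOVA extends, classical two-factor ANOVA without replication, can be written out directly. This is a minimal sketch (names are illustrative); the intuitionistic fuzzy extension replaces these real-valued sums of squares with index-matrix operations over IFS pairs.

```python
def two_way_anova(table):
    """Two-factor ANOVA without replication: table[i][j] is the single
    observation at row level i, column level j. Returns the F statistics
    for the row and column effects."""
    r, c = len(table), len(table[0])
    grand = sum(sum(row) for row in table) / (r * c)
    row_means = [sum(row) / c for row in table]
    col_means = [sum(table[i][j] for i in range(r)) / r for j in range(c)]
    ss_row = c * sum((m - grand) ** 2 for m in row_means)
    ss_col = r * sum((m - grand) ** 2 for m in col_means)
    ss_err = sum((table[i][j] - row_means[i] - col_means[j] + grand) ** 2
                 for i in range(r) for j in range(c))    # interaction residual
    ms_row, ms_col = ss_row / (r - 1), ss_col / (c - 1)
    ms_err = ss_err / ((r - 1) * (c - 1))
    return ms_row / ms_err, ms_col / ms_err
```

With rows as seasons and columns as ticket-price levels, each F statistic would be compared against the appropriate F distribution quantile, exactly the comparison the paper then re-derives for intuitionistic fuzzy data.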

