Probabilistic clustering of interval data

2015 ◽  
Vol 19 (2) ◽  
pp. 293-313 ◽  
Author(s):  
Paula Brito ◽  
A. Pedro Duarte Silva ◽  
José G. Dias
2012 ◽  
Vol 38 (7) ◽  
pp. 1190 ◽  
Author(s):  
Yu PENG ◽  
Qing-Hua LUO ◽  
Dan WANG ◽  
Xi-Yuan PENG

Axioms ◽  
2021 ◽  
Vol 10 (3) ◽  
pp. 154
Author(s):  
Anderson Fonseca ◽  
Paulo Henrique Ferreira ◽  
Diego Carvalho do Nascimento ◽  
Rosemeire Fiaccone ◽  
Christopher Ulloa-Correa ◽  
...  

Statistical monitoring tools are well established in the literature and have shaped organizational cultures such as Six Sigma and Total Quality Management. Nevertheless, most of this literature relies on the normality assumption, e.g., through the law of large numbers, and leaves truncated processes as an open question in the field. This work was motivated by records of relative-humidity monitoring, an important source of moisture for the Copiapó watershed in the Atacama region of Chile (the Atacama Desert), which exhibit the high asymmetry typical of rates-and-proportions data. This paper proposes a new control chart for interval data on rates and proportions (symbolic interval data) that are not the result of a Bernoulli process. The unit-Lindley distribution has many attractive properties, such as depending on a single parameter, and from it we develop the unit-Lindley chart for both classical and symbolic data. The performance of the proposed control chart is assessed using the average run length (ARL), median run length (MRL), and standard deviation of the run length (SDRL), computed through an extensive Monte Carlo simulation study. Results from real-data applications reveal the tool’s potential for estimating control limits within a Statistical Process Control (SPC) framework.
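The run-length metrics named in the abstract (ARL, MRL, SDRL) can be illustrated with a minimal Monte Carlo sketch. The code below is an assumption-laden illustration, not the authors' implementation: it uses a Shewhart-style individuals chart with probability limits estimated from simulated in-control unit-Lindley data (the parameter value theta0 = 4.0 is a placeholder), and samples the unit-Lindley distribution through the mixture representation of the underlying Lindley distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_unit_lindley(theta, size, rng):
    # Lindley(theta) is a mixture: Exp(rate=theta) with prob theta/(1+theta),
    # else Gamma(shape=2, rate=theta); mapping Y = X/(1+X) gives unit-Lindley on (0, 1).
    comp = rng.random(size) < theta / (1 + theta)
    x = np.where(comp,
                 rng.exponential(1 / theta, size),
                 rng.gamma(2, 1 / theta, size))
    return x / (1 + x)

# Phase I: estimate probability control limits from in-control data
theta0 = 4.0  # placeholder in-control parameter, not a value from the paper
phase1 = sample_unit_lindley(theta0, 100_000, rng)
lcl, ucl = np.quantile(phase1, [0.00135, 0.99865])  # 3-sigma-equivalent limits

def run_length(theta, lcl, ucl, rng, max_n=10_000):
    # Number of points observed until the first one falls outside the limits
    y = sample_unit_lindley(theta, max_n, rng)
    out = np.flatnonzero((y < lcl) | (y > ucl))
    return out[0] + 1 if out.size else max_n

rls = np.array([run_length(theta0, lcl, ucl, rng) for _ in range(2000)])
print(f"ARL={rls.mean():.1f}  MRL={np.median(rls):.0f}  SDRL={rls.std(ddof=1):.1f}")
```

With 3-sigma-equivalent probability limits, the in-control ARL should land near 1/0.0027 ≈ 370, the textbook benchmark for Shewhart charts.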


Author(s):  
Jianglin Feng ◽  
Nathan C Sheffield

Abstract
Summary: Databases of large-scale genome projects now contain thousands of genomic interval datasets. These data are a critical resource for understanding the function of DNA. However, our ability to examine and integrate interval data of this scale is limited. Here, we introduce the integrated genome database (IGD), a method and tool for searching genome interval datasets more than three orders of magnitude faster than existing approaches, while using only one hundredth of the memory. IGD uses a novel linear binning method that allows us to scale analysis to billions of genomic regions.
Availability: https://github.com/databio/IGD
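The linear-binning idea can be sketched in a few lines. The class and bin width below are hypothetical illustrations, not IGD's actual data layout or constants: each interval is registered in every fixed-width bin it overlaps, so a query scans only the bins it touches instead of the whole dataset.

```python
from collections import defaultdict

BIN_SIZE = 16_384  # bin width in bp; an arbitrary choice for illustration

class LinearBinIndex:
    """Minimal linear-binning sketch: intervals are indexed into every
    fixed-width bin they overlap, so queries only scan the bins they touch."""

    def __init__(self):
        self.bins = defaultdict(list)

    def add(self, start, end):
        # Register the half-open interval [start, end) in each bin it overlaps
        for b in range(start // BIN_SIZE, (end - 1) // BIN_SIZE + 1):
            self.bins[b].append((start, end))

    def query(self, qstart, qend):
        # Collect intervals overlapping [qstart, qend); a set deduplicates
        # intervals that span multiple bins
        hits = set()
        for b in range(qstart // BIN_SIZE, (qend - 1) // BIN_SIZE + 1):
            for s, e in self.bins[b]:
                if s < qend and qstart < e:  # half-open overlap test
                    hits.add((s, e))
        return sorted(hits)

idx = LinearBinIndex()
idx.add(100, 200); idx.add(150, 300); idx.add(50_000, 60_000)
print(idx.query(180, 250))  # → [(100, 200), (150, 300)]
```

The speedup comes from the query touching only a handful of bins regardless of how many intervals the index holds; the trade-off is that long intervals are duplicated across every bin they span.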


2000 ◽  
Author(s):  
Alan D. Kalvin ◽  
Bernice E. Rogowitz ◽  
Adar Pelah ◽  
Aron Cohen
2009 ◽  
Author(s):  
Yow-Jen Jou ◽  
Chien-Chia Huang ◽  
Jennifer Yuh-Jen Wu ◽  
George Maroulis ◽  
Theodore E. Simos

2016 ◽  
Vol 2016 ◽  
pp. 1-11
Author(s):  
Berlin Wu ◽  
Chin Feng Hung

Correlation coefficients are commonly computed for crisp data. In this paper, we use Pearson’s correlation coefficient and propose a method for evaluating correlation coefficients for fuzzy interval data. Our empirical studies involve the relationship between mathematics achievement and other subjects.
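As a rough illustration of the underlying idea (not the authors' fuzzy-interval method), Pearson's r can be computed on interval midpoints, which reduces each interval to a single crisp value. The data below are hypothetical.

```python
import numpy as np

def interval_pearson(x_intervals, y_intervals):
    """Pearson correlation of interval-valued data via interval midpoints.
    A common simplification for illustration; the paper's fuzzy-interval
    treatment is more elaborate."""
    xm = np.array([(a + b) / 2 for a, b in x_intervals])
    ym = np.array([(a + b) / 2 for a, b in y_intervals])
    return np.corrcoef(xm, ym)[0, 1]

# Hypothetical interval-valued scores: math achievement vs. another subject
math_scores = [(60, 70), (75, 85), (80, 95), (50, 60), (88, 96)]
other_scores = [(55, 65), (70, 80), (85, 92), (45, 58), (90, 98)]
print(round(interval_pearson(math_scores, other_scores), 3))
```

Midpoint reduction discards the interval widths entirely; interval-aware methods instead bound or weight the correlation by how wide and how overlapping the intervals are.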

