Probabilistic skylines on uncertain data: model and bounding-pruning-refining methods

2010 ◽  
Vol 38 (1) ◽  
pp. 1-39 ◽  
Author(s):  
Bin Jiang ◽  
Jian Pei ◽  
Xuemin Lin ◽  
Yidong Yuan
2014 ◽  
Vol 2014 ◽  
pp. 1-22 ◽  
Author(s):  
Dong Xie ◽  
Jie Xiao ◽  
Guangjun Guo ◽  
Tong Jiang

Radio Frequency Identification (RFID) is widely used to track and trace objects in traceability supply chains. However, the massive volumes of uncertain data produced by RFID readers cannot be used effectively or efficiently by RFID application systems. After analyzing the key features of RFID objects, this paper proposes a new framework for effectively and efficiently processing uncertain RFID data and supporting a variety of queries for tracking and tracing RFID objects. We adjust smoothing windows according to the rate of uncertain readings, employ different strategies to process uncertain readings, and distinguish ghost, missing, and incomplete data by the positions at which they appear. We propose a comprehensive data model that suits different application scenarios. In addition, we propose a path coding scheme that significantly compresses massive data by aggregating the path sequence, the position, and the time intervals; the scheme also handles cyclic and long paths. Moreover, we propose a processing algorithm for both grouped and independent objects. Experimental evaluations show that our approach is effective and efficient in terms of compression and traceability queries.
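The interval-aggregation idea behind such a path coding scheme can be sketched briefly: consecutive readings of the same tag at the same position collapse into a single (position, first-seen, last-seen) record. The sketch below is a simplified illustration under assumed names (Reading, compress_path), not the paper's actual scheme, which additionally aggregates whole path sequences and is designed to cope with cyclic and long paths.

```python
# A minimal sketch of interval-based path compression; names are
# illustrative assumptions, not the paper's API.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Reading:
    tag_id: str      # the RFID tag being tracked
    reader_id: str   # position of the reader in the supply chain
    timestamp: int   # epoch seconds

def compress_path(readings: List[Reading]) -> List[Tuple[str, int, int]]:
    """Collapse consecutive readings at the same position into
    (position, first_seen, last_seen) intervals."""
    path: List[Tuple[str, int, int]] = []
    for r in sorted(readings, key=lambda r: r.timestamp):
        if path and path[-1][0] == r.reader_id:
            pos, t_in, _ = path[-1]
            path[-1] = (pos, t_in, r.timestamp)  # extend current interval
        else:
            path.append((r.reader_id, r.timestamp, r.timestamp))
    return path

raw = [Reading("t1", "dock", 100), Reading("t1", "dock", 160),
       Reading("t1", "warehouse", 900)]
print(compress_path(raw))  # [('dock', 100, 160), ('warehouse', 900, 900)]
```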


Data Mining ◽  
2013 ◽  
pp. 669-691 ◽  
Author(s):  
Evgeny Kharlamov ◽  
Pierre Senellart

This chapter deals with data mining in uncertain XML data models, whose uncertainty typically comes from imprecise automatic processes. We first review the literature on modeling uncertain data, starting with well-studied relational models and then moving to their semistructured counterparts. We focus on a specific probabilistic XML model, which allows representing arbitrary finite distributions of XML documents and has been extended to also allow continuous distributions of data values. We summarize previous work on querying this uncertain data model and show how to apply the corresponding techniques to several data mining tasks, exemplified through use cases on two running examples.
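To make the possible-world reading of such a model concrete, here is a toy enumerator, a hedged sketch rather than the chapter's formalism: it assumes a tree whose distributional "mux" nodes choose exactly one child with a stated probability, and it lists every deterministic document the tree can yield together with its probability.

```python
# A toy possible-world enumerator for trees with "mux" distributional
# nodes; an assumption-laden illustration, not the chapter's code.

def worlds(node):
    """Yield (probability, plain_tree) pairs for a tree whose nodes
    are either ("mux", [(p, child), ...]) or (label, [children])."""
    if node[0] == "mux":
        for p, child in node[1]:
            for q, w in worlds(child):
                yield p * q, w            # mux is replaced by its choice
    else:
        label, children = node
        combos = [(1.0, [])]              # combine children independently
        for c in children:
            combos = [(p * q, kids + [w])
                      for p, kids in combos
                      for q, w in worlds(c)]
        for p, kids in combos:
            yield p, (label, kids)

# A person whose city is Oslo with probability 0.7, Paris with 0.3.
pdoc = ("person", [("mux", [(0.7, ("Oslo", [])), (0.3, ("Paris", []))])])
for prob, tree in worlds(pdoc):
    print(prob, tree)  # 0.7 ('person', [('Oslo', [])]) and 0.3 ... Paris
```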


Author(s):  
Orsolya Takács ◽  
Annamária R. Várkonyi-Kóczy

The model used to represent information during processing can affect the achievable accuracy and determine which calculation methods are usable. The data model must also be able to represent the uncertainty and inaccuracy of both the input data and the results. The two most popular data models for representing uncertain data are the classical, probability-based model and the more recently introduced fuzzy model. Each has its own calculation and data-processing methods, but with the increasing complexity of calculation problems, a method for using these data models together is needed. This paper deals with possible solutions for information processing based on mixed data models and examines conversion methods between fuzzy and probability-based data models.
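As a concrete baseline for what such a conversion looks like, the sketch below (an illustrative assumption, not the paper's method) maps a discrete fuzzy membership vector to a probability distribution by normalization, and maps a distribution back by rescaling so that the mode regains full membership. More refined possibility/probability transformations preserve additional structure.

```python
# The simplest conversion pair, assuming a discrete universe;
# treat this only as an illustrative baseline.

def fuzzy_to_prob(membership):
    """Normalize memberships so they sum to 1 (relative likelihoods)."""
    total = sum(membership)
    return [m / total for m in membership]

def prob_to_fuzzy(prob):
    """Rescale so the most probable element gets full membership (1.0)."""
    peak = max(prob)
    return [p / peak for p in prob]

mu = [0.2, 1.0, 0.6]        # fuzzy set over {low, medium, high}
p = fuzzy_to_prob(mu)       # [0.111..., 0.555..., 0.333...]
print(prob_to_fuzzy(p))     # round-trips back to [0.2, 1.0, 0.6]
```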


2008 ◽  
Author(s):  
Pedro J. M. Passos ◽  
Duarte Araujo ◽  
Keith Davids ◽  
Ana Diniz ◽  
Luis Gouveia ◽  
...  

2019 ◽  
Vol 13 (1-2) ◽  
pp. 95-115
Author(s):  
Brandon Plewe

Historical place databases can be an invaluable tool for capturing the rich meaning of past places. However, this richness presents its own obstacles: the daunting need to simultaneously represent temporal change, uncertainty, relationships, and thorough sourcing has hindered historical GIS in the past. The Qualified Assertion Model developed in this paper can represent a variety of historical complexities using a single, simple, flexible data model based on (a) documenting assertions about the past world rather than claiming to know the exact truth, and (b) qualifying the scope, provenance, quality, and syntactics of those assertions. The model was successfully implemented in a production-strength historical gazetteer of religious congregations, demonstrating its effectiveness and revealing some remaining challenges.
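A minimal data-structure sketch conveys the idea of qualifying an assertion by time span, provenance, and quality rather than storing a single "true" value; the field names here are illustrative assumptions, not Plewe's published schema.

```python
# An illustrative sketch of a qualified assertion record.
from dataclasses import dataclass
from typing import Optional

@dataclass
class QualifiedAssertion:
    subject: str               # e.g. a congregation or place identifier
    attribute: str             # e.g. "name", "location", "parent"
    value: str                 # what the source asserts
    valid_from: Optional[int]  # year the assertion begins to apply
    valid_to: Optional[int]    # year it ceases to apply, if known
    source: str                # provenance: where the claim was made
    confidence: float          # quality: editorial certainty in [0, 1]

a = QualifiedAssertion("cong:123", "name", "First Parish",
                       valid_from=1834, valid_to=1901,
                       source="Town records, 1902", confidence=0.9)
print(a)
```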

