An adaptive slicing approach for processing STL massive data model in batches based on layer merging

2021 ◽  
Vol 1884 (1) ◽  
pp. 012025
Author(s):  
Minghao Shao ◽  
Chao Wei ◽  
Bin Cui ◽  
Yongkang Li ◽  
Tengfei Zheng
2014 ◽  
Vol 2014 ◽  
pp. 1-22 ◽  
Author(s):  
Dong Xie ◽  
Jie Xiao ◽  
Guangjun Guo ◽  
Tong Jiang

Radio Frequency Identification (RFID) is widely used to track and trace objects in traceability supply chains. However, the massive volumes of uncertain data produced by RFID readers cannot be used effectively or efficiently in RFID application systems. Following an analysis of the key features of RFID objects, this paper proposes a new framework for effectively and efficiently processing uncertain RFID data and for supporting a variety of queries that track and trace RFID objects. We adjust smoothing windows according to the rate of uncertain data, employ different strategies to process uncertain readings, and distinguish ghost, missing, and incomplete data according to where they appear. We propose a comprehensive data model that suits different application scenarios. In addition, a path coding scheme is proposed to significantly compress massive data by aggregating the path sequence, the positions, and the time intervals; the scheme also handles cyclic and long paths. Moreover, we propose a processing algorithm for both grouped and independent objects. Experimental evaluations show that our approach is effective and efficient in terms of compression and traceability queries.
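The abstract does not give implementation details for the path coding scheme; the sketch below is only a rough illustration (hypothetical names, Python) of the aggregation idea: consecutive readings at the same position are collapsed into one (location, first_seen, last_seen) segment, and the segments form a compact path string instead of storing every raw reading.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Reading:
    tag_id: str
    location: str      # reader / position where the tag was observed
    timestamp: float   # seconds since epoch

def encode_path(readings: List[Reading]) -> Tuple[str, List[Tuple[str, float, float]]]:
    """Aggregate consecutive readings at the same location into one segment.

    Returns a compact path string (e.g. "DOCK>WAREHOUSE>TRUCK") plus a list of
    (location, first_seen, last_seen) segments, so each position keeps only a
    time interval rather than every raw reading.
    """
    segments: List[Tuple[str, float, float]] = []
    for r in sorted(readings, key=lambda r: r.timestamp):
        if segments and segments[-1][0] == r.location:
            loc, first, _ = segments[-1]
            segments[-1] = (loc, first, r.timestamp)   # extend current segment
        else:
            segments.append((r.location, r.timestamp, r.timestamp))
    path = ">".join(loc for loc, _, _ in segments)
    return path, segments
```

Because repeated locations are kept as separate segments when they are not consecutive, a cyclic path such as DOCK>TRUCK>DOCK remains representable in this simplified form.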


2013 ◽  
Vol 321-324 ◽  
pp. 2514-2518
Author(s):  
Tian Xiang Zhu ◽  
Dan Zhang ◽  
Xin Liu ◽  
Guang Kun Ma

Massive data are stored in cloud databases, and much potential and valuable knowledge lies within them. In this paper, the data model of the cloud database is analyzed. Through analysis and classification, the common features of the data are extracted to form a feature data set, from which new knowledge can be discovered. The paper defines a basic model based on the classification characteristic rules of the cloud database and presents a discovery algorithm for these rules.
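The abstract does not specify the rule format; as a rough sketch (hypothetical structure, Python), a classification characteristic rule can be read as the set of attribute values shared by every record of a class, extracted in a single pass over the feature data set.

```python
from typing import Dict, Hashable, Iterable, Mapping

def characteristic_rules(
    records: Iterable[Mapping[str, Hashable]], class_attr: str
) -> Dict[Hashable, Dict[str, Hashable]]:
    """For each class, keep only the attribute values shared by every record.

    The result maps class label -> {attribute: common value}, readable as a
    characteristic rule: class C  =>  attr1 = v1 AND attr2 = v2 AND ...
    """
    rules: Dict[Hashable, Dict[str, Hashable]] = {}
    for rec in records:
        label = rec[class_attr]
        features = {k: v for k, v in rec.items() if k != class_attr}
        if label not in rules:
            rules[label] = dict(features)
        else:
            # Drop attributes whose value is not shared by this record.
            rules[label] = {
                k: v for k, v in rules[label].items() if features.get(k) == v
            }
    return rules
```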


2008 ◽  
Author(s):  
Pedro J. M. Passos ◽  
Duarte Araujo ◽  
Keith Davids ◽  
Ana Diniz ◽  
Luis Gouveia ◽  
...  

2019 ◽  
Vol 13 (1-2) ◽  
pp. 95-115
Author(s):  
Brandon Plewe

Historical place databases can be an invaluable tool for capturing the rich meaning of past places. However, this richness presents challenges: the daunting need to simultaneously represent complex information such as temporal change, uncertainty, relationships, and thorough sourcing has been an obstacle to historical GIS in the past. The Qualified Assertion Model developed in this paper can represent a variety of historical complexities using a single, simple, flexible data model based on a) documenting assertions about the past world rather than claiming to know the exact truth, and b) qualifying the scope, provenance, quality, and syntactics of those assertions. The model was successfully implemented in a production-strength historical gazetteer of religious congregations, demonstrating its effectiveness as well as some remaining challenges.
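The paper's exact schema is not reproduced in the abstract; as an illustrative sketch (hypothetical field names, Python), a qualified assertion might pair one documented claim about a place with qualifiers for its temporal scope, provenance, and quality, so that conflicting claims can coexist rather than being forced into a single "truth".

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QualifiedAssertion:
    """One documented claim about a past place, with its qualifiers."""
    subject: str                       # place being described, e.g. a congregation
    attribute: str                     # what is asserted, e.g. "founded" or "location"
    value: str                         # the asserted value
    valid_from: Optional[int] = None   # temporal scope (years), if known
    valid_to: Optional[int] = None
    source: str = ""                   # provenance: where the claim comes from
    quality: str = "unknown"           # e.g. "certain", "probable", "disputed"

# Example: two conflicting assertions about the same place can coexist,
# each carrying its own source and quality assessment.
a1 = QualifiedAssertion("First Parish", "founded", "1636",
                        source="town record, 1702", quality="probable")
a2 = QualifiedAssertion("First Parish", "founded", "1638",
                        source="county history, 1888", quality="disputed")
```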


MIS Quarterly ◽  
2013 ◽  
Vol 37 (1) ◽  
pp. 125-147 ◽  
Author(s):  
Rui Chen ◽  
Raj Sharman ◽  
H. Raghav Rao ◽  
Shambhu J. Upadhyaya ◽  
...  
