Distributed Processing of Location-Based Aggregate Queries Using MapReduce

2019 ◽  
Vol 8 (9) ◽  
pp. 370
Author(s):  
Yuan-Ko Huang

Location-based aggregate queries, consisting of the shortest average distance query (SAvgDQ), the shortest minimal distance query (SMinDQ), the shortest maximal distance query (SMaxDQ), and the shortest sum distance query (SSumDQ), are new types of location-based queries. Such queries provide the user with useful object information by considering both the spatial closeness of objects to the query object and the neighboring relationship between objects. Because a large number of location-based aggregate queries need to be evaluated concurrently, a centralized processing system would suffer a heavy query load and eventually poor performance. In this paper, we therefore focus on developing a distributed processing technique for answering multiple location-based aggregate queries on the MapReduce platform. We first design a grid structure that manages object information while taking storage balance into account, and then develop a distributed processing algorithm, the MapReduce-based aggregate query algorithm (MRAggQ), to efficiently process location-based aggregate queries in a distributed manner. Extensive experiments on synthetic and real datasets demonstrate the scalability and efficiency of the proposed algorithm.
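The abstract does not describe the grid structure or the internals of MRAggQ, so the following Python sketch only illustrates the general map/reduce flow for computing the four aggregates over objects partitioned into grid cells; all function names and the Euclidean-distance model are assumptions, not the authors' algorithm.

```python
from collections import defaultdict
from math import hypot

def map_phase(grid_cell_objects, queries):
    """Each mapper holds the objects of one grid cell and emits, for every
    query, the distance from the query point to each local object."""
    for qid, (qx, qy) in queries.items():
        for (ox, oy) in grid_cell_objects:
            yield qid, hypot(ox - qx, oy - qy)

def reduce_phase(pairs):
    """The reducer groups distances per query and computes the aggregate
    values used by SMinDQ, SMaxDQ, SSumDQ, and SAvgDQ."""
    grouped = defaultdict(list)
    for qid, dist in pairs:
        grouped[qid].append(dist)
    return {
        qid: {
            "min": min(ds),
            "max": max(ds),
            "sum": sum(ds),
            "avg": sum(ds) / len(ds),
        }
        for qid, ds in grouped.items()
    }

if __name__ == "__main__":
    queries = {"q1": (0.0, 0.0)}
    cell = [(1.0, 1.0), (2.0, 0.0), (0.0, 3.0)]
    print(reduce_phase(map_phase(cell, queries)))
```

In a real MapReduce job, each mapper would process one grid cell's partition and the framework would shuffle the (query id, distance) pairs to the reducers; the toy above collapses that into two function calls.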

Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4679
Author(s):  
Yoon-Su Jeong

As IoT (Internet of Things) devices spread across diverse fields of use (manufacturing, health, medical, energy, home, automobile, transportation, etc.), it is becoming important to analyze and process the data they send and receive over the Internet. Data collected from IoT devices typically relies on secure storage in databases located in cloud environments. However, storing the data directly in a cloud database not only makes it difficult to control IoT data directly, but also fails to guarantee its integrity in the face of hazards (errors and error handling, security attacks, etc.) that can arise from natural disasters and management neglect. In this paper, we propose an optimized hash processing technique that enables hierarchical distributed processing with an n-bit blockchain to minimize the loss of data generated by IoT devices deployed in distributed cloud environments. The proposed technique minimizes IoT data integrity errors and strengthens the role of the intermediate media acting as gateways by interactively authenticating n-bit blocks against the n + 1 and n − 1 layers, so that the IoT data being sent and received can be validated. In particular, the proposed technique ensures the reliability of IoT information by validating the hash values of IoT data when the index information of IoT data distributed across different locations is stored in the blockchain, in order to maintain data integrity. Furthermore, the proposed technique preserves the linkage of IoT data by tolerating minimal errors in the collected data while grouping its linkage information, thereby optimizing the load balance after hash processing. In the performance evaluation, the proposed technique reduced IoT data processing time by an average factor of 2.54. Blockchain generation time when linking IoT data improved by 17.3% on average. The asymmetric storage efficiency of IoT data according to hash code length improved by 6.9% on average over existing techniques, and the asymmetric storage speed according to the hash code length of the IoT data blocks was 10.3% faster on average. The integrity accuracy of IoT data improved by 18.3% on average over existing techniques.
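The layered n + 1 / n − 1 authentication scheme is not specified in enough detail to reproduce here; the Python sketch below only illustrates the underlying idea of hash-linking stored IoT records so that tampering with any record breaks the chain and is detected on validation. The SHA-256 choice and all names are assumptions.

```python
import hashlib
import json

def block_hash(index, prev_hash, payload):
    """Hash the block index, the previous block's hash, and the IoT payload."""
    body = json.dumps({"i": index, "prev": prev_hash, "data": payload},
                      sort_keys=True).encode()
    return hashlib.sha256(body).hexdigest()

def build_chain(records):
    """Turn a list of IoT records into a chain of hash-linked blocks."""
    chain, prev = [], "0" * 64
    for i, rec in enumerate(records):
        h = block_hash(i, prev, rec)
        chain.append({"index": i, "prev": prev, "data": rec, "hash": h})
        prev = h
    return chain

def verify_chain(chain):
    """Recompute every hash; any tampered record invalidates the chain."""
    prev = "0" * 64
    for blk in chain:
        if blk["prev"] != prev or blk["hash"] != block_hash(blk["index"], prev, blk["data"]):
            return False
        prev = blk["hash"]
    return True

if __name__ == "__main__":
    chain = build_chain([{"sensor": "temp", "value": 21.4},
                         {"sensor": "temp", "value": 21.6}])
    print(verify_chain(chain))          # True
    chain[0]["data"]["value"] = 99.0    # simulated tampering
    print(verify_chain(chain))          # False
```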


2021 ◽  
Vol 2021 (06) ◽  
pp. 0626
Author(s):  
Conrad Dale Johnson

This essay extends the argument begun in "Why Quantum Mechanics Makes Sense," exploring the conditions under which a physical world can define and communicate information. I argue that, like the structure of quantum physics, the principles of Special and General Relativity can be understood as reflecting the requirements of a universe in which things are observable and measurable. I interpret the peculiar hyperbolic structure of spacetime not as the static, four-dimensional geometry of an unobservable "block universe," but as the background metric of an evolving web of communicated information that we, along with all our measuring instruments and recording devices, actually experience in our local "here and now." Our relativistic universe is conceived as a parallel distributed processing system, in which a common objective reality is constantly being woven out of many kinds of facts determined separately in countless local measurement contexts.


Author(s):  
Chen Xu ◽  
Xueyan Xiong ◽  
Qianyi Du ◽  
Shudong Liu ◽  
Yipeng Li ◽  
...  

The rail-guided vehicle (RGV) is widely used in logistics warehousing and intelligent workshops, and the effectiveness of its scheduling directly affects the production and operation efficiency of enterprises. In practical operation, a central information system often lacks flexibility and timeliness. By contrast, mobile computing can balance the central information system and the distributed processing system, so that useful, accurate, and timely information can be provided to the RGV. In order to optimize the RGV scheduling problem in an uncertain environment, this paper proposes a genetic algorithm scheduling rule (GAM) that uses a greedy algorithm as the genetic screening criterion. In the experiment, RGV scheduling for two-step processing in an intelligent workshop is selected as the research object. The experimental results show that the GAM model can carry out real-time dynamic programming, and the optimization efficiency is remarkable up to a certain threshold.
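The workshop model behind GAM is not described in the abstract; the Python sketch below only illustrates a genetic search whose selection step applies a greedy screening criterion (keep the lowest-cost schedules each generation). The travel-plus-service cost model, the parameters, and all names are assumptions.

```python
import random

def schedule_cost(order, service_times, positions):
    """Travel time between consecutive stations plus processing time."""
    cost, pos = 0.0, 0.0
    for job in order:
        cost += abs(positions[job] - pos) + service_times[job]
        pos = positions[job]
    return cost

def crossover(a, b):
    """Order crossover: keep a slice of parent a, fill the rest from b."""
    i, j = sorted(random.sample(range(len(a)), 2))
    child = a[i:j]
    child += [g for g in b if g not in child]
    return child

def mutate(order, rate=0.1):
    """Occasionally swap two jobs in the schedule."""
    order = order[:]
    if random.random() < rate:
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
    return order

def gam_like_search(service_times, positions, pop_size=30, generations=200):
    jobs = list(range(len(service_times)))
    pop = [random.sample(jobs, len(jobs)) for _ in range(pop_size)]
    for _ in range(generations):
        # Greedy screening: rank by cost and keep only the best half.
        pop.sort(key=lambda o: schedule_cost(o, service_times, positions))
        survivors = pop[:pop_size // 2]
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=lambda o: schedule_cost(o, service_times, positions))

if __name__ == "__main__":
    random.seed(0)
    service = [3.0, 2.0, 4.0, 1.5, 2.5]   # processing time per job
    station = [1.0, 5.0, 2.0, 8.0, 3.0]   # station position per job
    best = gam_like_search(service, station)
    print(best, schedule_cost(best, service, station))
```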


2008 ◽  
pp. 1250-1268
Author(s):  
Cyrus Shahabi ◽  
Mehrdad Jahangiri ◽  
Dimitris Sacharidis

Data analysis systems require range-aggregate query answering over large multidimensional datasets. We provide the necessary framework to build a retrieval system capable of giving fast answers with progressively increasing accuracy in support of range-aggregate queries. In addition, with error forecasting, we provide estimates of the accuracy of the generated approximate results. Our framework utilizes the wavelet transformation of query and data hypercubes. While prior work focused on ordering either the query or the data coefficients, we propose a class of hybrid ordering techniques that exploits both query and data wavelets in answering queries progressively. This work effectively subsumes and extends most of the current work in which wavelets are used as a tool for approximate or progressive query evaluation. The results of our experimental studies show that, independent of the characteristics of the dataset, data coefficient ordering is, contrary to common belief, the inferior approach. Hybrid ordering, on the other hand, performs best for scientific datasets that are inter-correlated. For an entirely random dataset with no inter-correlation, query ordering is the superior approach.
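As a rough illustration of the progressive-evaluation idea, the Python sketch below answers a 1-D range-sum query in the Haar wavelet domain: the exact answer equals the inner product of the data vector with the range's indicator vector, which an orthonormal transform preserves, so summing over only the k most significant coefficients yields a progressively refined approximation. Only the query-coefficient ordering is shown; the chapter's hybrid ordering and error forecasting are not reproduced, and all names are assumptions.

```python
import numpy as np

def haar(x):
    """Orthonormal (Mallat-style) Haar transform of a length-2^k vector."""
    out = np.asarray(x, dtype=float).copy()
    n = len(out)
    while n > 1:
        half = n // 2
        evens, odds = out[0:n:2].copy(), out[1:n:2].copy()
        out[:half] = (evens + odds) / np.sqrt(2.0)   # averages
        out[half:n] = (evens - odds) / np.sqrt(2.0)  # details
        n = half
    return out

def progressive_range_sum(data, lo, hi, k):
    """Approximate sum(data[lo:hi]) using the k largest query coefficients."""
    query = np.zeros(len(data))
    query[lo:hi] = 1.0                      # indicator vector of the range
    dw, qw = haar(data), haar(query)
    order = np.argsort(-np.abs(qw))         # query-coefficient ordering
    return float(np.dot(dw[order[:k]], qw[order[:k]]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.random(64)
    exact = data[5:40].sum()
    for k in (4, 16, 64):                   # more coefficients -> better answer
        print(k, progressive_range_sum(data, 5, 40, k), exact)
```

With k equal to the full vector length the approximation matches the exact sum (up to floating-point error); smaller k trades accuracy for fewer coefficient accesses, which is the essence of progressive answering.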


1996 ◽  
Vol 2 (3) ◽  
pp. 240-248 ◽  
Author(s):  
Michael R. Polster ◽  
Steven Z. Rapcsak

We report the performance of a prosopagnosic patient on face learning tasks under different encoding instructions (i.e., levels of processing manipulations). R.J. performs at chance when given no encoding instructions or when given “shallow” encoding instructions to focus on facial features. By contrast, he performs relatively well with “deep” encoding instructions to rate faces in terms of personality traits or when provided with semantic and name information during the study phase. We propose that the improvement associated with deep encoding instructions may be related to the establishment of distinct visually derived and identity-specific semantic codes. The benefit associated with deep encoding in R.J., however, was found to be restricted to the specific view of the face presented at study and did not generalize to other views of the same face. These observations suggest that deep encoding instructions may enhance memory for concrete or pictorial representations of faces in patients with prosopagnosia, but that these patients cannot compensate for the inability to construct abstract structural codes that normally allow faces to be recognized from different orientations. We postulate further that R.J.'s poor performance on face learning tasks may be attributable to excessive reliance on a feature-based left hemisphere face processing system that operates primarily on view-specific representations. (JINS, 1996, 2, 240–248.)

