CH2: A Hybrid Operational/Analytical Processing Benchmark for NoSQL

Author(s):  
Michael Carey ◽  
Dmitry Lychagin ◽  
M. Muralikrishna ◽  
Vijay Sarathy ◽  
Till Westmann
2019 ◽  
Vol 148 (9) ◽  
pp. 1505-1516

Author(s):  
Ivan I. Ivanchei ◽  
Nadezhda Moroshkina ◽  
Roman Tikhonov ◽  
Irina Ovchinnikova

Author(s):  
Harkiran Kaur ◽  
Kawaljeet Singh ◽  
Tejinder Kaur

Background: Numerous E-Migrants databases help migrants locate their peers in various countries, thereby contributing substantially to communication among migrants staying overseas. At present, these traditional E-Migrants databases suffer from poor scalability, cumbersome search mechanisms, and burdensome information-update routines. Furthermore, analysis of migrants' profiles in these databases has remained unaddressed to date, so they generate no knowledge. Objective: To design and develop an efficient, multidimensional knowledge discovery framework for E-Migrants databases. Method: In the proposed technique, the results of the complex calculations behind the On-Line Analytical Processing (OLAP) operations most likely to be required by end users are stored in the form of Decision Trees during the pre-processing stage of data analysis. While browsing the cube, these pre-computed results are retrieved, offering a Dynamic Cubing feature to end users at runtime. This data-tuning step reduces query processing time and increases the efficiency of the required data warehouse operations. Results: Experiments conducted with a data warehouse of around 1000 migrants' profiles confirm the knowledge discovery power of this proposal. Using the proposed methodology, the authors have designed a framework efficient enough to incorporate the amendments made to the E-Migrants data warehouse at regular intervals, a capability entirely missing from traditional E-Migrants databases. Conclusion: The proposed methodology enables migrants to generate dynamic knowledge and visualize it in the form of dynamic cubes. By applying Business Intelligence mechanisms and blending them with tuned OLAP operations, the authors have transformed traditional datasets into an intelligent migrants data warehouse.
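The core idea of the abstract — precompute the aggregates behind the most probable OLAP queries, then answer browse-time requests by lookup rather than recomputation — can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' Decision Tree implementation; the record layout, dimension names, and helper functions (`precompute_cube`, `lookup`) are hypothetical:

```python
from itertools import combinations

# Hypothetical migrant records over three dimensions.
records = [
    ("Canada", 2018, "Engineer"),
    ("Canada", 2018, "Teacher"),
    ("Canada", 2019, "Engineer"),
    ("Germany", 2018, "Engineer"),
    ("Germany", 2019, "Teacher"),
]
DIMS = ("country", "year", "occupation")

def precompute_cube(rows):
    """Pre-aggregate counts for every subset of dimensions (the cube
    lattice), so browse-time queries become dictionary lookups."""
    cube = {}
    for r in range(len(DIMS) + 1):
        for dims in combinations(range(len(DIMS)), r):
            agg = {}
            for row in rows:
                key = tuple(row[i] for i in dims)
                agg[key] = agg.get(key, 0) + 1
            cube[dims] = agg
    return cube

def lookup(cube, **filters):
    """Answer a point query at runtime from the precomputed cube."""
    dims = tuple(i for i, name in enumerate(DIMS) if name in filters)
    key = tuple(filters[DIMS[i]] for i in dims)
    return cube[dims].get(key, 0)

cube = precompute_cube(records)          # pre-processing stage
print(lookup(cube, country="Canada"))             # 3
print(lookup(cube, country="Canada", year=2018))  # 2
print(lookup(cube))                               # 5 (grand total)
```

The trade-off shown here is the standard one in cube materialization: the pre-processing stage pays for 2^d aggregate tables (for d dimensions) so that each runtime query costs a single hash lookup.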


1990 ◽  
Vol 68 (9) ◽  
pp. 1942-1947 ◽  
Author(s):  
Philippe Brunet ◽  
Bruno Sarrobert ◽  
Nicole Paris-Pireyre ◽  
Ange-Marie Risterucci

Two species of tomato, Lycopersicon esculentum Mill. var. EGE12P1 and Lycopersicon hirsutum Humb. & Bonpl. ecotype LA 1777, were submitted to two temperature treatments, 20 or 10 °C. After a short study of plant growth, we analysed the chemical composition (cations, anions, and amino acids) of xylem sap by high performance liquid chromatography. A comparison of fresh weight increase at 20 and 10 °C of both plant species showed that L. hirsutum was the least affected by low temperature. The volumes of secreted sap and the quantities of ions transported showed great disturbances in the sensitive species (L. esculentum), especially in the case of potassium. In xylem sap of both species studied, but only at 10 °C, we noticed the appearance of ammonium. The possibility of contamination during analytical processing was eliminated. Moreover, determinations of amino acid levels showed that ammonium did not arise from degradation of amides present in xylem sap. In any event, the proportion of nitrate absorbed and reduced in roots increased at low temperature; it was much greater in L. hirsutum and could constitute a tolerance factor to low temperatures. Key words: ammonium, low temperature, Lycopersicon, xylem sap.


2021 ◽  
Vol 17 (4) ◽  
pp. 1-28
Author(s):  
Waqas Ahmed ◽  
Esteban Zimányi ◽  
Alejandro A. Vaisman ◽  
Robert Wrembel

Data warehouses (DWs) evolve in both their content and their schema due to changes in user requirements, business processes, or external sources, among others. Although multiple approaches using temporal and/or multiversion DWs have been proposed to handle these changes, an efficient solution to this problem is still lacking. The authors' approach is to separate concerns, using temporal DWs to deal with content changes and multiversion DWs to deal with schema changes. To address the former, they previously proposed a temporal multidimensional (MD) model. In this paper, they propose a multiversion MD model for schema evolution to tackle the latter problem. The two models complement each other and allow managing both content and schema evolution. The paper gives the semantics of the schema modification operators (SMOs) used to derive the various schema versions, and shows how online analytical processing (OLAP) operations like roll-up work on the model. Finally, the mapping from the multiversion MD model to a relational schema is given, along with the OLAP operations expressed in standard SQL.
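The roll-up operation the abstract mentions aggregates a fact table up a dimension hierarchy (e.g. from month-level to year-level totals); in the paper's relational mapping this corresponds to a SQL GROUP BY over the coarser level. A minimal Python sketch of the idea, with a hypothetical fact table and helper (`roll_up`) that are not from the paper:

```python
from collections import defaultdict

# Hypothetical fact table rows: (year, month, store, sales).
facts = [
    (2020, 1, "A", 100),
    (2020, 2, "A", 150),
    (2020, 1, "B", 200),
    (2021, 1, "A", 120),
]

def roll_up(rows, keep):
    """Roll up the fact table: sum the sales measure over all levels
    that are dropped, keeping only the dimension positions in `keep`."""
    out = defaultdict(int)
    for row in rows:
        key = tuple(row[i] for i in keep)
        out[key] += row[-1]      # last column is the measure
    return dict(out)

# Roll up from (year, month, store) granularity to (year, store):
print(roll_up(facts, keep=(0, 2)))
# {(2020, 'A'): 250, (2020, 'B'): 200, (2021, 'A'): 120}
```

In standard SQL the same roll-up would read `SELECT year, store, SUM(sales) FROM facts GROUP BY year, store`; under schema versioning, the contribution of the paper is defining how such queries remain answerable across the schema versions produced by the SMOs.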

