A Data Modelling Method for Big Data Warehouses

Author(s):  
Marta Nogueira ◽  
João Galvão ◽  
Maribel Y. Santos

Author(s):  
Yassine Ramdane ◽  
Nadia Kabachi ◽  
Omar Boussaid ◽  
Fadila Bentayeb

Author(s):  
Robert Vrbić

Cloud computing provides a powerful, scalable, and flexible infrastructure into which previously known data mining techniques and methods can be integrated. The result of such integration should be a robust, high-capacity platform able to cope with the ever-increasing production of data, one that creates the conditions for efficiently mining massive amounts of data from various data warehouses with the aim of producing useful information or new knowledge. This paper discusses such a technology: the technology of big data mining known as Cloud Data Mining (CDM).
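As a minimal sketch of the idea, the example below uses PySpark, a common engine for cloud-scale processing, to apply a classic data mining technique (k-means clustering) to data exported from a warehouse. The storage path, column names, and parameters are illustrative assumptions, not details from the paper.

```python
# A minimal sketch of cloud data mining, assuming a PySpark cluster and a
# hypothetical Parquet export of warehouse data at the path shown below.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("cloud-data-mining").getOrCreate()

# Load warehouse data from cloud storage (path is illustrative).
df = spark.read.parquet("s3://warehouse-exports/sales/")

# Assemble numeric columns into a feature vector for mining.
features = VectorAssembler(
    inputCols=["quantity", "unit_price", "discount"],  # assumed column names
    outputCol="features",
).transform(df)

# A classic mining technique (k-means) scaled out by the cluster.
model = KMeans(k=5, seed=42, featuresCol="features").fit(features)
model.transform(features).groupBy("prediction").count().show()
```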


Author(s):  
Khaled Dehdouh

In the context of big data warehouses, column-oriented NoSQL database systems are considered a storage model that is highly adapted to data warehouses and online analysis. Indeed, NoSQL models scale easily, and the columnar store is well suited to storing and managing massive data, especially for decisional queries. However, column-oriented NoSQL DBMSs do not offer online analytical processing (OLAP) operators. To build OLAP cubes for the relevant analysis contexts, the most common approach is to integrate other software, such as Hive or Kylin, that provides a CUBE operator. In that case, however, the cube is built according to the row-oriented approach, and the benefits of a column-oriented approach are not fully obtained. The main contribution of this chapter is a cube operator called MC-CUBE (MapReduce Columnar CUBE), which builds columnar NoSQL cubes according to the columnar approach, taking into account the non-relational and distributed aspects of how the data warehouses are stored.
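To make the CUBE computation concrete, the following single-process sketch shows the map and reduce phases that an operator like MC-CUBE would distribute with MapReduce: the map phase emits one key per cuboid (each subset of the dimensions, with aggregated-out dimensions wildcarded), and the reduce phase sums the measure per cell. The data, dimension names, and aggregation function are illustrative assumptions, not the chapter's actual implementation.

```python
# A minimal, single-process sketch of the CUBE computation that a
# MapReduce-based operator would distribute; layout and names are assumed.
from itertools import combinations
from collections import defaultdict

ALL = "*"  # wildcard marking an aggregated-out dimension

def map_phase(row, dims, measure):
    """Emit one (group-by key, measure) pair per subset of dimensions."""
    for r in range(len(dims) + 1):
        for kept in combinations(dims, r):
            key = tuple(row[d] if d in kept else ALL for d in dims)
            yield key, row[measure]

def cube(rows, dims, measure):
    """Reduce phase: sum the measure for every cuboid cell."""
    cells = defaultdict(float)
    for row in rows:
        for key, value in map_phase(row, dims, measure):
            cells[key] += value
    return cells

# A columnar store keeps each attribute in its own column family; here we
# simulate rows reconstructed from those columns.
rows = [
    {"city": "Lyon", "year": 2015, "amount": 10.0},
    {"city": "Lyon", "year": 2016, "amount": 20.0},
    {"city": "Oran", "year": 2015, "amount": 5.0},
]
for cell, total in sorted(cube(rows, ["city", "year"], "amount").items(),
                          key=str):
    print(cell, total)
```

For two dimensions this produces all four cuboids: (city, year), (city, *), (*, year), and the grand total (*, *).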


Author(s):  
Carlos Costa ◽  
Carina Andrade ◽  
Maribel Yasmina Santos
Keyword(s):  
Big Data ◽  

2020 ◽  
Vol 47 (11) ◽  
pp. 1086-1091
Author(s):  
Seongju Kang ◽  
Chaeeun Jeong ◽  
Kwangsue Chung

2013 ◽  
Author(s):  
Sreenivas R. Sukumar ◽  
Mohammed M. Olama ◽  
Allen W. McNair ◽  
James J. Nutaro

2016 ◽  
Vol 6 (6) ◽  
pp. 1241-1244 ◽  
Author(s):  
M. Faridi Masouleh ◽  
M. A. Afshar Kazemi ◽  
M. Alborzi ◽  
A. Toloie Eshlaghy

Extraction, Transformation and Loading (ETL) is one of the notable subjects in the optimization, management, improvement, and acceleration of processes and operations in databases and data warehouses. Creating ETL processes is potentially one of the greatest tasks of data warehousing, and their development is a time-consuming and complicated procedure. Without optimization of these processes, implementing data warehouse projects is costly, complicated, and time-consuming. The present paper combines parallelization methods with shared cache memory in distributed systems based on a data warehouse. According to the conducted assessment, the proposed method improved ETL execution time by 7.1% compared with the Kettle optimization tool and by 7.9% compared with the Talend tool. Parallelization can therefore notably improve the ETL process, ultimately allowing the management and integration of big data to be carried out simply and at an acceptable speed.
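A minimal sketch of the general approach, assuming Python's multiprocessing with a manager-backed shared cache for dimension-key lookups during the parallel transform step; the records, cache contents, and conversion rate are illustrative assumptions, not the authors' benchmark setup.

```python
# Sketch: parallelize the transform step of ETL while sharing a lookup
# cache between worker processes (illustrative, not the paper's code).
from multiprocessing import Manager, Pool

def transform(args):
    """Transform one extracted record, resolving keys via the shared cache."""
    record, cache = args
    # Surrogate-key lookup hits the shared cache instead of the warehouse.
    record["customer_sk"] = cache.get(record["customer_id"], -1)
    record["amount_eur"] = record["amount_usd"] * 0.92  # assumed fixed rate
    return record

if __name__ == "__main__":
    extracted = [
        {"customer_id": "C1", "amount_usd": 100.0},
        {"customer_id": "C2", "amount_usd": 250.0},
    ]
    with Manager() as manager:
        # Shared cache of customer_id -> surrogate key, visible to all workers.
        cache = manager.dict({"C1": 1001, "C2": 1002})
        with Pool(processes=4) as pool:
            loaded = pool.map(transform, [(r, cache) for r in extracted])
    print(loaded)  # records ready for the load step
```

The shared cache avoids repeating the same warehouse lookups in every worker, which is where this kind of parallelization gains its speed-up.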

