A Multidimensional Model of Data Warehouses

2014 ◽  
Vol 989-994 ◽  
pp. 1657-1659
Author(s):  
Zu Yi Chen ◽  
Tai Xiang Zhao

Reducing query time by selecting a proper set of materialized views with a lower cost is crucial for efficient data warehousing. The database, however, needs to be exploited further by providing a functional environment for probabilistic analysis. The objective of this paper is to improve the effectiveness of utilizing historical cost data in an analytical OLAP (On-Line Analytical Processing) environment. The results show that the OLAP environment can help in understanding the uncertainties in construction cost estimates and provides a way to project more reliable construction costs.
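The view-selection problem mentioned above is often tackled with a greedy benefit heuristic over the cube lattice. The following is a minimal sketch of that general idea, not the paper's actual method; the view names, sizes, and dependency map are invented for illustration.

```python
# Hypothetical sketch of greedy materialized-view selection over a cube
# lattice. All sizes and view names are illustrative only.

# Each view maps to its row count (the cost of answering a query from it).
view_size = {
    "(city, product, day)": 1_000_000,  # base fact table, always materialized
    "(city, product)": 120_000,
    "(city, day)": 300_000,
    "(product,)": 5_000,
    "(city,)": 800,
}

# Which views each candidate can answer (itself and its descendants).
answers = {
    "(city, product, day)": set(view_size),
    "(city, product)": {"(city, product)", "(product,)", "(city,)"},
    "(city, day)": {"(city, day)", "(city,)"},
    "(product,)": {"(product,)"},
    "(city,)": {"(city,)"},
}

def greedy_select(k):
    """Pick k extra views maximizing the total query-cost reduction."""
    base = "(city, product, day)"
    materialized = {base}
    # Current cheapest materialized source for each query/view.
    cost = {v: view_size[base] for v in view_size}
    for _ in range(k):
        best, best_gain = None, 0
        for cand in view_size:
            if cand in materialized:
                continue
            gain = sum(max(0, cost[v] - view_size[cand])
                       for v in answers[cand])
            if gain > best_gain:
                best, best_gain = cand, gain
        if best is None:
            break
        materialized.add(best)
        for v in answers[best]:
            cost[v] = min(cost[v], view_size[best])
    return materialized

print(greedy_select(2))
```

Each round materializes the view whose total cost reduction across the queries it can serve is largest, a standard trade-off between storage budget and query time.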

2018 ◽  
Vol 9 (2) ◽  
pp. 46-68
Author(s):  
Omar Khrouf ◽  
Kais Khrouf ◽  
Jamel Feki

There has been an explosion in the amount of textual documents generated and stored in recent years. Effective management of these documents is essential for their better exploitation in decisional analyses. In this context, the authors propose their CobWeb multidimensional model, based on standard facets and dedicated to the OLAP (on-line analytical processing) of XML documents; it aims to provide decision makers with facilities for expressing their analytical queries. They also suggest new visualization operators for OLAP query results, introducing tag clouds as a means to help decision-makers display OLAP results in an intuitive format and focus on the main concepts. The authors have developed a software prototype called MQF (Multidimensional Query based on Facets) to support their proposals and have tested it on documents from the PubMed collection.
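The tag-cloud visualization amounts to scaling each term's display size by its measure in the OLAP result. A minimal sketch of that mapping, with invented terms and counts (this is not the MQF prototype's code):

```python
# Hypothetical sketch: map OLAP result measures to tag-cloud font sizes.
# The terms and frequencies below are invented for illustration.
result = {"genetics": 120, "oncology": 45, "cardiology": 80, "imaging": 10}

def tag_weights(measures, min_pt=10, max_pt=32):
    """Linearly scale each term's measure into a font-size range."""
    lo, hi = min(measures.values()), max(measures.values())
    span = hi - lo or 1  # avoid division by zero when all measures are equal
    return {t: min_pt + (v - lo) * (max_pt - min_pt) / span
            for t, v in measures.items()}

for term, pt in sorted(tag_weights(result).items(), key=lambda x: -x[1]):
    print(f"{term}: {pt:.0f}pt")
```

The most frequent concept renders largest, which is what lets a decision-maker spot the main concepts of a result at a glance.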


Author(s):  
Salman Ahmed Shaikh ◽  
Kousuke Nakabasami ◽  
Toshiyuki Amagasa ◽  
Hiroyuki Kitagawa

Data warehousing and multidimensional analysis go side by side. Data warehouses provide clean and partially normalized data for fast, consistent, and interactive multidimensional analysis. With the advancement of data generation and collection technologies, businesses and organizations now generate big data (defined by the 3Vs: volume, variety, and velocity). Since big data differs from traditional data, it requires a different set of tools and techniques for processing and analysis. This chapter discusses multidimensional analysis (also known as on-line analytical processing, or OLAP) of big data, focusing particularly on data streams, which are characterized by huge volume and high velocity. OLAP requires maintaining a number of materialized views corresponding to user queries for interactive analysis. Specifically, this chapter discusses the issues in maintaining materialized views over data streams, the use of special windows for the maintenance of materialized views, and the issues of coupling a stream processing engine (SPE) with an OLAP engine.
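The window-based maintenance idea can be illustrated with a tiny incremental aggregate: as tuples enter and leave the window, the materialized view is patched rather than recomputed. This is a generic sketch of incremental view maintenance under a count-based window, not the chapter's specific technique; schema and data are invented.

```python
# Illustrative sketch: maintaining a materialized group-by aggregate
# over a count-based sliding window of a data stream.
from collections import defaultdict

class WindowedCubeView:
    """Keeps SUM(amount) per (region, product) for the current window."""
    def __init__(self, window_size):
        self.window_size = window_size
        self.buffer = []                 # tuples currently in the window
        self.view = defaultdict(float)   # the materialized aggregate

    def on_tuple(self, region, product, amount):
        self.buffer.append((region, product, amount))
        self.view[(region, product)] += amount      # apply the insertion
        if len(self.buffer) > self.window_size:
            r, p, a = self.buffer.pop(0)            # expire the oldest tuple
            self.view[(r, p)] -= a                  # undo its contribution

stream = [("EU", "A", 10), ("US", "A", 5), ("EU", "B", 7), ("EU", "A", 3)]
v = WindowedCubeView(window_size=3)
for t in stream:
    v.on_tuple(*t)
print(dict(v.view))
```

In a real SPE/OLAP coupling the window logic runs in the stream engine and only the deltas reach the OLAP-side views; the sketch collapses both roles into one class for brevity.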


Author(s):  
Dimitri Theodoratos ◽  
Wugang Xu ◽  
Alkis Simitsis

A Data Warehouse (DW) is a repository of information retrieved from multiple, possibly heterogeneous, autonomous, distributed databases and other information sources for the purpose of complex querying, analysis and decision support. Data in the DW are selectively collected from the sources, processed in order to resolve inconsistencies, and integrated in advance (at design time) before data loading. DW data are usually organized multidimensionally to support On-Line Analytical Processing (OLAP). A DW can be abstractly seen as a set of materialized views defined over the source relations. During the initial design of a DW, the DW designer faces the problem of deciding which views to materialize in the DW. This problem has been addressed in the literature for different classes of queries and views and with different design goals.
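The "set of materialized views over source relations" abstraction can be made concrete with a toy example, here simulated in SQLite since it has no native materialized views; the schema and data are hypothetical.

```python
# Minimal illustration: a warehouse view materialized at design time
# as a table derived from a source relation (hypothetical schema).
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- source relation, as extracted from an operational system
    CREATE TABLE sales(product TEXT, city TEXT, amount REAL);
    INSERT INTO sales VALUES ('A','Paris',10), ('A','Lyon',5), ('B','Paris',7);

    -- materialize an aggregate view in advance, before query time
    CREATE TABLE v_sales_by_product AS
        SELECT product, SUM(amount) AS total FROM sales GROUP BY product;
""")
print(con.execute("SELECT * FROM v_sales_by_product ORDER BY product").fetchall())
```

OLAP queries then hit `v_sales_by_product` directly instead of re-aggregating the source relation, which is exactly the trade-off the view-selection problem optimizes.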


2020 ◽  
Vol 16 (4) ◽  
pp. 1-25
Author(s):  
Maha Azabou ◽  
Ameen Banjar ◽  
Jamel Omar Feki

The data warehouse community has paid particular attention to the document warehouse (DocW) paradigm during the last two decades. However, some important issues related to semantics are still pending and therefore need deep research investigation. Indeed, the semantic exploitation of the DocW is not yet mature, despite representing a main concern for decision-makers. This paper aims to enhance the multidimensional model called the Diamond Document Warehouse Model with semantic aspects; in particular, it suggests semantic OLAP (on-line analytical processing) operators for querying the DocW.


Author(s):  
Jamel Feki

Within today’s competitive economic context, information acquisition, analysis and exploitation have become strategic and unavoidable requirements for every enterprise. Moreover, in order to guarantee their persistence and growth, enterprises are henceforth forced to capitalize on expertise in this domain. Data warehouses (DW) emerged as a potential solution to the needs of storing and analyzing large data volumes. In fact, a DW is a database system specialized in the storage of data used for decisional ends. This type of system was proposed to overcome the inability of OLTP (On-Line Transaction Processing) systems to offer analysis functionalities. It offers integrated, consolidated and temporal data for performing decisional analyses. However, the different objectives and functionalities of OLTP and DW systems created a need for a development method appropriate for DW. Indeed, data warehouses still attract considerable effort and interest from a large community of both software editors of decision support systems (DSS) and researchers (Kimball, 1996; Inmon, 2002). Current software tools for DW focus on meeting end-user needs. OLAP (On-Line Analytical Processing) tools are dedicated to multidimensional analyses and graphical visualization of results (e.g., Oracle Discoverer); some products permit the description of DW and Data Mart (DM) schemas (e.g., Oracle Warehouse Builder). One major limit of these tools is that the schemas must be built beforehand and, in most cases, manually. However, such a task can be tedious, error-prone and time-consuming, especially with heterogeneous data sources.
On the other hand, the majority of research efforts focus on particular aspects of DW development, cf. multidimensional modeling, physical design (materialized views (Moody & Kortnik, 2000), index selection (Golfarelli, Rizzi, & Saltarelli, 2002), schema partitioning (Bellatreche & Boukhalfa, 2005)) and, more recently, applying data mining for better data interpretation (Mikolaj, 2006; Zubcoff, Pardillo & Trujillo, 2007). While these practical issues determine the performance of a DW, other, just as important, conceptual issues (e.g., requirements specification and DW schema design) still require further investigation. In fact, few propositions have been put forward to assist in and/or automate the design process of a DW, cf. (Bonifati, Cattaneo, Ceri, Fuggetta & Paraboschi, 2001; Hahn, Sapia & Blaschka, 2000; Phipps & Davis, 2002; Peralta, Marotta & Ruggia, 2003).


Author(s):  
Harkiran Kaur ◽  
Kawaljeet Singh ◽  
Tejinder Kaur

Background: Numerous E-Migrants databases assist migrants in locating their peers in various countries, thereby contributing largely to the communication of migrants staying overseas. Presently, these traditional E-Migrants databases face the issues of non-scalability, difficult search mechanisms, and burdensome information-update routines. Furthermore, analysis of migrants’ profiles in these databases has remained unhandled to date and hence does not generate any knowledge.
Objective: To design and develop an efficient and multidimensional knowledge discovery framework for E-Migrants databases.
Method: In the proposed technique, the results of complex calculations related to the most probable On-Line Analytical Processing (OLAP) operations required by end users are stored in the form of decision trees at the pre-processing stage of data analysis. While browsing the cube, these pre-computed results are called, thus offering a Dynamic Cubing feature to end users at runtime. This data-tuning step reduces query processing time and increases the efficiency of the required data warehouse operations.
Results: Experiments conducted with a data warehouse of around 1,000 migrants’ profiles confirm the knowledge discovery power of this proposal. Using the proposed methodology, the authors have designed a framework efficient enough to incorporate the amendments made in the E-Migrants data warehouse systems at regular intervals, a capability totally missing in traditional E-Migrants databases.
Conclusion: The proposed methodology facilitates migrants in generating dynamic knowledge and visualizing it in the form of dynamic cubes. By applying Business Intelligence mechanisms and blending them with tuned OLAP operations, the authors have managed to transform traditional datasets into an intelligent migrants data warehouse.
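The precompute-then-lookup pattern behind such "dynamic cubing" can be sketched as follows. The paper stores its pre-computed results as decision trees; this sketch approximates that storage with a plain dictionary of pre-aggregates, and the dimension names and profile data are invented.

```python
# Hedged sketch of pre-computing cube aggregates so that cube browsing
# becomes a cache lookup at runtime. Data and dimensions are hypothetical.
from itertools import combinations
from collections import defaultdict

profiles = [
    {"country": "UK", "year": 2019, "status": "student"},
    {"country": "UK", "year": 2020, "status": "worker"},
    {"country": "CA", "year": 2019, "status": "student"},
]

def precompute_counts(rows, dims):
    """Materialize counts for every subset of dimensions (the full cube)."""
    cache = defaultdict(int)
    for r in rows:
        for k in range(len(dims) + 1):
            for combo in combinations(dims, k):
                key = tuple(sorted((d, r[d]) for d in combo))
                cache[key] += 1
    return cache

cache = precompute_counts(profiles, ["country", "year", "status"])

def rollup(**coords):
    """Answer an aggregate query from the cache at browse time."""
    return cache[tuple(sorted(coords.items()))]

print(rollup(country="UK"))
print(rollup(year=2019, status="student"))
```

All the expensive work happens once, at the pre-processing stage; each subsequent cube operation is a constant-time lookup, which is the source of the reported query-time reduction.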


2020 ◽  
Vol 70 (4) ◽  
pp. 482-492
Author(s):  
Hongmei Gu ◽  
Shaobo Liang ◽  
Richard Bergman

Abstract Mass timber building materials such as cross-laminated timber (CLT) have captured attention in mid- to high-rise building designs because of their potential environmental benefits. The recently updated multistory building code also enables greater utilization of these wood building materials. The cost-effectiveness of mass timber buildings is also undergoing substantial analysis. Given the relatively new presence of CLT in the United States, high front-end construction costs are expected. This study presents the life-cycle cost (LCC) for a 12-story, 8,360-m2 mass timber building to be built in Portland, Oregon. The goal was to assess its total life-cycle cost (TLCC) relative to a functionally equivalent reinforced-concrete building design using our in-house-developed LCC tool. Based on commercial construction cost data from the RSMeans database, the mass timber building design is estimated to have 26 percent higher front-end costs than its concrete alternative. Front-end construction costs dominated the TLCC for both buildings. However, a 2.4 percent decrease in TLCC relative to the concrete building was observed because of the estimated longer lifespan and higher end-of-life salvage value of the mass timber building. The end-of-life savings from demolition costs or salvage values of mass timber buildings could offset some initial construction costs. There are minimal historical construction cost data and a lack of operational cost data for mass timber buildings; therefore, more studies and data are needed to generalize these results. However, a solid methodology for mass timber building LCC was developed and applied to demonstrate several cost scenarios for mass timber building benefits and disadvantages.
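The TLCC structure described above (front-end cost plus discounted operating costs minus discounted end-of-life salvage) can be written as a small formula. This is a generic back-of-the-envelope sketch, not the study's in-house tool; every dollar figure and the discount rate below are hypothetical, chosen only to mirror the stated relationship of a higher front-end cost partly offset by salvage value.

```python
# Generic life-cycle cost sketch (hypothetical numbers throughout).

def tlcc(front_end, annual_op, years, salvage, rate=0.03):
    """Total life-cycle cost: construction + discounted operating
    costs over the lifespan - discounted end-of-life salvage value."""
    ops = sum(annual_op / (1 + rate) ** t for t in range(1, years + 1))
    return front_end + ops - salvage / (1 + rate) ** years

# Illustrative comparison: timber's front end is 26% higher (as reported),
# but its assumed longer lifespan and larger salvage value claw some back.
concrete = tlcc(front_end=30e6, annual_op=0.9e6, years=60, salvage=0.5e6)
timber   = tlcc(front_end=30e6 * 1.26, annual_op=0.9e6, years=75, salvage=6e6)

print(f"concrete TLCC: ${concrete / 1e6:.1f}M")
print(f"timber   TLCC: ${timber / 1e6:.1f}M")
```

Note that comparing buildings with different lifespans properly requires annualizing the cost (or a common analysis period), a refinement the one-line formula omits.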

