Object Migration Tool for Data Warehouses

Author(s):  
Nayem Rahman ◽  
Peter W. Burkhardt ◽  
Kevin W. Hibray

Data warehouses contain numerous software applications and thousands of objects that make those applications work. Many companies maintain multiple data warehouse environments to meet business requirements, for example development, testing, and production. Installing objects and keeping them synchronized across all environments is challenging because of the sheer number of objects and their complexity. Software objects stored in a source control system must be installed on the target warehouse environments; manual copy procedures are possible but very inefficient, and developers spend much time preparing installation and migration scripts that are prone to syntax errors. This paper proposes an Object Migration and Apply Tool (OMAT) that automates software installation across all warehouses, replacing manual procedures. An automated tool helps eliminate error-prone manual steps, increase the rate of flawless object installations, and reduce installation time. OMAT is easy to use through a web browser and includes many features that support the development life cycle, Sarbanes-Oxley (SOX) Act requirements, and numerous business requirements. OMAT is designed to make constructing and maintaining an enterprise-wide, strategic data warehouse faster and more reliable.
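The abstract does not reproduce the tool's implementation. The following is a minimal sketch of the general idea of automated object installation with an audit trail, assuming a hypothetical directory of versioned *.sql scripts checked out from source control and using SQLite purely as a stand-in for a real warehouse connection.

```python
# Minimal, hypothetical sketch of automated object migration (not the OMAT code).
# It applies versioned DDL scripts from a source-control checkout to a target
# environment in order, recording what was installed for auditability.
import sqlite3                      # stand-in for a real warehouse connection
from pathlib import Path

ENVIRONMENTS = {                    # hypothetical environment names and targets
    "development": "dev_warehouse.db",
    "testing": "test_warehouse.db",
    "production": "prod_warehouse.db",
}

def migrate(script_dir: str, environment: str) -> None:
    """Apply every *.sql script (sorted by name/version) not yet installed."""
    conn = sqlite3.connect(ENVIRONMENTS[environment])
    conn.execute(
        "CREATE TABLE IF NOT EXISTS install_log (script TEXT PRIMARY KEY, installed_at TEXT)"
    )
    installed = {row[0] for row in conn.execute("SELECT script FROM install_log")}
    for script in sorted(Path(script_dir).glob("*.sql")):
        if script.name in installed:
            continue                              # already applied in this environment
        conn.executescript(script.read_text())    # install the object definitions
        conn.execute(
            "INSERT INTO install_log VALUES (?, datetime('now'))", (script.name,)
        )
        conn.commit()
        print(f"[{environment}] installed {script.name}")

if __name__ == "__main__":
    migrate("checked_out_objects", "development")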

Author(s):  
François Pinet ◽  
Myoung-Ah Kang ◽  
Kamal Boulil ◽  
Sandro Bimonte ◽  
Gil De Sousa ◽  
...  

Recent research works propose using Object-Oriented (OO) approaches, such as UML, to model data warehouses. This paper overviews these recent OO techniques, describing the facts and the different analysis dimensions of the data. The authors provide a tutorial on the Object Constraint Language (OCL) and show how this language can be used to specify constraints in OO-based models of data warehouses. Previously, OCL has been applied only to describe constraints in software applications and transactional databases; here, the authors demonstrate how to use OCL to represent the different types of data warehouse constraints. The paper is aimed at researchers in business intelligence and decision support systems who wish to learn about the possibilities that OCL offers in the context of data warehouses. The authors also provide general information about possible implementations of multi-dimensional models and their constraints.
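The paper's OCL examples are not reproduced here. As a rough illustration only, the Python sketch below (with hypothetical Product and SaleFact classes) checks two typical kinds of data warehouse constraint that such OCL specifications express: a referential constraint between a fact and a dimension, and a constraint on a measure.

```python
# Illustrative, hypothetical check of the kinds of constraint the paper writes in OCL:
# every sale fact must reference an existing product dimension member, and its
# measured quantity must be positive.
from dataclasses import dataclass

@dataclass
class Product:            # dimension member
    product_id: int
    category: str

@dataclass
class SaleFact:           # fact with one measure and one dimension reference
    product_id: int
    quantity: float

def check_constraints(facts, products):
    known_ids = {p.product_id for p in products}
    violations = []
    for f in facts:
        if f.product_id not in known_ids:
            violations.append(f"unknown product {f.product_id}")      # referential constraint
        if f.quantity <= 0:
            violations.append(f"non-positive quantity {f.quantity}")  # measure constraint
    return violations

print(check_constraints([SaleFact(1, 5.0), SaleFact(9, -2.0)], [Product(1, "laptops")]))
```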


Author(s):  
Nahid Anwar ◽  
Susmita Kar

Software testing is the process of running an application with the intent of finding software bugs (errors or other defects). The demand for software applications has pushed the quality assurance of developed software to new heights, and testing is considered the most critical stage of the software development life cycle. Testing analyzes a software item to identify the disparity between actual and prescribed conditions and to assess the characteristics of the software. Software testing minimizes errors and cuts down software costs. To this end, we discuss various software testing techniques and strategies. This paper aims to study diverse as well as improved software testing techniques for better quality assurance.
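As a small, hypothetical example of one standard technique in this area, the sketch below applies boundary-value testing to a made-up discount rule using the standard-library unittest module; the function and threshold are illustrative, not taken from the paper.

```python
# Boundary-value testing of a simple, hypothetical business rule with unittest.
import unittest

def discount(order_total: float) -> float:
    """Apply a 10% discount to orders of 100.0 or more (hypothetical rule)."""
    return order_total * 0.9 if order_total >= 100.0 else order_total

class DiscountBoundaryTests(unittest.TestCase):
    def test_just_below_boundary(self):
        self.assertEqual(discount(99.99), 99.99)       # no discount below the threshold

    def test_at_boundary(self):
        self.assertAlmostEqual(discount(100.0), 90.0)  # discount starts exactly at 100.0

    def test_above_boundary(self):
        self.assertAlmostEqual(discount(150.0), 135.0)

if __name__ == "__main__":
    unittest.main()
```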


Software applications are widely used in almost every field nowadays. A fully functional application passes through the different phases of the Software Development Life Cycle (SDLC) before the end user starts using it. Testing the application is one of the major tasks of the SDLC. This activity is performed to ensure effective performance, to track down the causes of inefficiencies, and to verify whether a module or application fulfills its requirements. The purpose is to avoid defects and abnormal behavior, minimize the risk of failure, and ensure that the system is defect free. Testing can be done both manually and automatically. Manual testing is less trustworthy because humans make mistakes, whereas a correctly programmed machine does not. In this paper we perform a critical analysis of the automated testing tools available for .NET (a software development platform by Microsoft) and determine their effects on the effort, quality, productivity, and cost of the product [9].


2021 ◽  
Author(s):  
Rampueng Kawphoy ◽  
Phanat Thatmali

Abstract: The objective of this study was to improve the accuracy of condensate gas ratio (CGR) prediction in the Pailin and Moragot areas. The conventional method for predicting liquid component reserves used only the long-life condensate gas ratio (long-life CGR) from nearby production platform(s). Long-life CGR data are available only for mature production platforms, where it commonly takes 1-2 years to observe the decline trend, so no such data exist for newly drilled and non-production areas. This can cause inaccurate prediction of liquid reserves for future platforms, especially those located far from the mature production area. Multiple data sources, namely basin modeling, geochemical data, drill-stem tests (DST), and batch-level production, were analyzed and integrated to improve the accuracy of CGR prediction and to understand the geological reasons for high or low liquid production at a platform. These data improve the confidence level of CGR estimation in non-production areas and help identify potentially high liquid production platforms. The results show that high liquid production in the Pailin and Moragot fields is related to the differentiation of source rock and the migration process. Geochemical and basin modeling data reveal three separate trends in the Pailin field and two trends in the Moragot field. Local DST data were integrated to confirm the extent of potentially high liquid production at several future platforms located in non-production areas. In addition, updated production data were revisited to estimate new CGRs for projects located near existing production platforms.
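For readers unfamiliar with the quantity being predicted, the sketch below shows the basic CGR arithmetic (condensate volume divided by gas volume, commonly reported in bbl/MMscf) applied to batch-level production records. All figures are made up for illustration and do not come from the study.

```python
# Hypothetical illustration: blending a condensate gas ratio (CGR, bbl/MMscf)
# from batch production volumes, the kind of data the study integrates with
# DST and geochemical information.
def cgr_bbl_per_mmscf(condensate_bbl: float, gas_mmscf: float) -> float:
    """CGR = produced condensate volume / produced gas volume."""
    if gas_mmscf <= 0:
        raise ValueError("gas volume must be positive")
    return condensate_bbl / gas_mmscf

batches = [
    {"condensate_bbl": 1200.0, "gas_mmscf": 95.0},   # hypothetical batch records
    {"condensate_bbl": 900.0,  "gas_mmscf": 80.0},
]
total_condensate = sum(b["condensate_bbl"] for b in batches)
total_gas = sum(b["gas_mmscf"] for b in batches)
print(f"blended CGR: {cgr_bbl_per_mmscf(total_condensate, total_gas):.1f} bbl/MMscf")
```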


2016 ◽  
Vol 6 (2) ◽  
pp. 21-37 ◽  
Author(s):  
Nayem Rahman

Maintaining a stable data warehouse becomes quite a challenge if discipline is not applied to code development, code changes, code performance, system resource usage, and the configuration of integration specifications. As the size of a data warehouse increases, the value it brings to an organization tends to increase as well. However, these benefits come at the cost of maintaining the applications and running the data warehouse efficiently twenty-four hours a day, seven days a week. Governance is about bringing discipline and control in the form of guidelines for application developers and IT integration engineers to follow, with the goal of making the behavior of a data warehouse application predictable and manageable. In this article the authors define and explain a set of data warehouse governance best practices based on their real-world experience and on insights drawn from industry and academic papers. Data warehouse governance can also support the development life cycle, maintenance, data architecture, data quality assurance, Sarbanes-Oxley (SOX) Act requirements, and the enforcement of business requirements.
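One way such guidelines become enforceable rather than aspirational is to scan code before deployment. The sketch below is a hypothetical pre-deployment check over ETL SQL scripts; the two rules are illustrative examples of coding discipline, not the best practices defined in the article.

```python
# Hypothetical governance scan: flag ETL SQL scripts that violate simple coding rules.
import re
from pathlib import Path

RULES = {                                          # illustrative rules only
    "avoid SELECT *": re.compile(r"\bselect\s+\*", re.IGNORECASE),
    "no DELETE without WHERE": re.compile(r"\bdelete\s+from\s+\w+\s*;", re.IGNORECASE),
}

def scan(script_dir: str):
    findings = []
    for script in Path(script_dir).glob("*.sql"):
        text = script.read_text()
        for rule, pattern in RULES.items():
            if pattern.search(text):
                findings.append((script.name, rule))
    return findings

for name, rule in scan("etl_scripts"):             # hypothetical script directory
    print(f"{name}: violates guideline '{rule}'")
```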


2012 ◽  
Vol 263-266 ◽  
pp. 1482-1486
Author(s):  
Chuan Sheng Zhou

As the Internet of Things develops and expands into more areas of life, it brings big challenges to traditional software application design and development. As Internet of Things technologies and strategies are enhanced and improved, and as new equipment and technologies are added to or changed in the existing environment, new functionality must be added rapidly and easily to existing, working software applications. However, traditional application design and development still approaches solutions from the perspective of designers and developers rather than from the business point of view, so traditional software applications and their scalability cannot easily and rapidly satisfy business requirements. Here, through research on XML, software bus, software component, and task-oriented technologies, this paper illustrates a new way to design and develop software applications, using task-oriented technology to improve application flexibility and scalability so that applications can adapt to enterprise business changes.
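The paper's framework is not reproduced here. As a rough, hypothetical illustration of the task-oriented idea, the sketch below declares tasks in XML and resolves them at run time through a simple handler registry, so new functionality can be added by registering a handler and editing the XML rather than changing callers.

```python
# Rough illustration (not the author's framework): tasks declared in XML and
# dispatched through a registry acting as a minimal "software bus".
import xml.etree.ElementTree as ET

TASK_CONFIG = """
<tasks>
  <task name="read_sensor" handler="read_sensor"/>
  <task name="report" handler="report"/>
</tasks>
"""

REGISTRY = {}                          # maps handler names to callables

def register(name):
    def wrap(func):
        REGISTRY[name] = func
        return func
    return wrap

@register("read_sensor")
def read_sensor():
    return {"temperature_c": 21.5}     # stand-in for a real device read

@register("report")
def report():
    return "report generated"

def run_tasks(xml_text: str):
    for task in ET.fromstring(xml_text).findall("task"):
        handler = REGISTRY[task.get("handler")]
        print(task.get("name"), "->", handler())

run_tasks(TASK_CONFIG)
```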


2019 ◽  
Vol 72 (7) ◽  
pp. 1295-1299
Author(s):  
Оleg А. Bychkov ◽  
Vitalii E. Kondratiuk ◽  
Nina G. Bychkova ◽  
Zemfira V. Morozova ◽  
Svetlana A. Bychkova ◽  
...  

Introduction: Available data indicate a high prevalence of comorbid abnormalities in gouty arthritis patients, namely a high incidence of arterial hypertension, coronary artery disease, stroke, atherosclerosis of the carotid arteries, and vascular dementia. For instance, hypertension is found in 36-41% of gout patients, and combined with metabolic syndrome it may reach 80%. The aim: To study the features of the clinical course, lipid profile, and immune status of patients with combined hypertension and gout. Materials and methods: The study examined 137 male patients with stage II hypertension, average age 56.9±3.4 years. All patients underwent echocardiography with estimation of the left ventricular mass index to verify the hypertension stage, blood chemistry testing with estimation of the uric acid level, as well as lipid profile and immune status assessment. Results: We found significant disorders in the blood serum lipid profile of patients with combined hypertension and gout. A significantly higher percentage of activated T-cells was found in these patients, both with early (CD3+CD25+) and late (CD3+HLA-DR+) activation markers, as well as cells expressing the FAS receptor and ready to enter apoptosis. Conclusion: We identified abnormalities in the adhesion and cooperation of immunocompetent cells, resulting in more intense activation of these cells, their effector functions, and their migration to the area of inflammation in the vessel wall.


Author(s):  
Francesco Di Tria ◽  
Ezio Lefons ◽  
Filippo Tangorra

Big Data warehouses are a new class of databases that largely use unstructured and volatile data for analytical purposes. Examples of such data sources are those coming from the Web, such as social networks and blogs, or from sensor networks, where huge amounts of data may be available only for short intervals of time. To manage massive data sources, a strategy must be adopted for defining multidimensional schemas in the presence of fast-changing situations or even undefined business requirements. In the paper, we propose a design methodology that adopts agile and automatic approaches in order to reduce the time necessary to integrate new data sources and to include new business requirements on the fly. The data are immediately available for analysis, since the underlying architecture is based on a virtual data warehouse that does not require an importing phase. Examples of applying the methodology are presented throughout the paper to show its validity compared to a traditional approach.
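The paper's architecture is not detailed in the abstract. As a minimal sketch of the virtual-warehouse idea under hypothetical schemas, the example below defines the multidimensional layer as a view over the live source tables, so queries read the sources directly and no separate import/load phase precedes analysis; an in-memory SQLite database stands in for the real sources.

```python
# Minimal sketch (hypothetical schemas) of a virtual data warehouse: the
# multidimensional schema is a view over source data, with no copy/import step.
import sqlite3

conn = sqlite3.connect(":memory:")     # the "virtual" warehouse holds no copied data
conn.executescript("""
    -- stand-ins for two external sources (e.g., a blog feed and a sensor network)
    CREATE TABLE src_posts  (day TEXT, topic TEXT, mentions INTEGER);
    CREATE TABLE src_sensor (day TEXT, station TEXT, reading REAL);
    INSERT INTO src_posts  VALUES ('2024-01-01', 'heatwave', 42);
    INSERT INTO src_sensor VALUES ('2024-01-01', 'A1', 38.2);

    -- the analytical fact "table" is just a view over the sources
    CREATE VIEW fact_daily AS
    SELECT p.day, p.topic, p.mentions, s.reading
    FROM src_posts p JOIN src_sensor s ON p.day = s.day;
""")
for row in conn.execute("SELECT day, topic, mentions, reading FROM fact_daily"):
    print(row)
```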

