Data Integration Project in Robologee

2021 ◽  
Vol 3 (2) ◽  
pp. 217-224
Author(s):  
Ratu Upisika Maha Misi ◽  
Johny Prihanto ◽  
Florentina Kurniasari ◽  
Noemi da Silva

Robologee is a sub-unit of PT. Bangun Satya Wacana, part of Kompas Gramedia, focused on education for children aged 7 to 12 years. Robologee is a diversification of the existing sub-units in PT. Bangun Satya Wacana. Its branches are located at Gramedia World stores, so it is expected to have an impact on Gramedia traffic. Robologee is currently undergoing a transformation to integrate its data, which will be stored in the cloud on Amazon Web Services. The goal of this project is for data to be accessible to various users and stored on a single platform. For the analysis of the digital transformation project, 15 respondents were selected: parents, as external customers. Based on the indicators used in the DMM, Robologee's current condition was found to be at the Advancing level. According to the roadmap, the project will be implemented over one year in four stages. The budgeting analysis shows a payback period of 1.7 years and an IRR of 7.512%, greater than the 5% return expected by the company. The NPV is also positive, so the project is feasible to implement.
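
The feasibility figures quoted above (payback period, IRR, NPV) follow from standard discounted cash-flow arithmetic. Below is a minimal sketch of those three calculations in Python; the cash-flow series is a hypothetical placeholder chosen only to illustrate the mechanics, not Robologee's actual budgeting data.

```python
# Feasibility metrics used in a budgeting analysis: NPV, IRR, and
# payback period for a series of annual cash flows. The cash-flow
# numbers below are hypothetical placeholders.

def npv(rate, cash_flows):
    """Net present value, where cash_flows[0] is the initial outlay
    (negative) at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-8):
    """Internal rate of return via bisection: the rate at which NPV = 0.
    Assumes a conventional profile (one outlay, then inflows)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(cash_flows):
    """Years until cumulative cash flow turns non-negative,
    interpolating within the recovery year."""
    cumulative = 0.0
    for t, cf in enumerate(cash_flows):
        if cumulative + cf >= 0:
            return t - 1 + (-cumulative / cf)
        cumulative += cf
    return None  # investment never recovered

flows = [-100_000, 60_000, 55_000, 20_000]  # hypothetical outlay, then inflows
print(f"NPV @ 5%: {npv(0.05, flows):,.0f}")
print(f"IRR: {irr(flows):.3%}")
print(f"Payback: {payback_period(flows):.1f} years")
```

A project is accepted under these criteria when the NPV at the required rate is positive and the IRR exceeds the company's expected return, which is exactly the argument the abstract makes.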

1996 ◽  
Vol 86 (4) ◽  
pp. 936-945 ◽  
Author(s):  
Lind S. Gee ◽  
Douglas S. Neuhauser ◽  
Douglas S. Dreger ◽  
Michael E. Pasyanos ◽  
Robert A. Uhrhammer ◽  
...  

The Rapid Earthquake Data Integration project is a system for the fast determination of earthquake parameters in northern and central California, based on data from the Berkeley Digital Seismic Network and the USGS Northern California Seismic Network. Program development started in 1993, and a prototype system began providing automatic information on earthquake location and magnitude in November of 1993 via commercial pagers and the Internet. Recent enhancements include the exchange of phase data with neighboring networks and the inauguration of processing for the determination of strong-motion parameters and seismic moment tensors.


Author(s):  
Bindi Kindermann ◽  
Sarah Hinde ◽  
Michael Abbondante

ABSTRACT

Objectives
The Australian Government’s new public sector data management agenda is initiating a national system for integrating public data and opening up access for policy makers and researchers. The Multi-agency Data Integration Project (‘the project’) is central to achieving these goals by bringing together nationally significant population datasets with the aim of streamlining the safe sharing of integrated government data. The project provides policy makers and researchers with safe access to linked, longitudinal information about the delivery of the Australian tax and transfer system and health services, along with rich demographic information. The project has been an essential step towards better enabling the Australian Government and research community to develop evidence-based policy and target services effectively, within a tight fiscal environment. The project has prompted government agencies to find new and more streamlined ways to work collaboratively to share and make best use of public data.

Approach
The first step of the project was to link a 2011 snapshot of four national administrative datasets with the 2011 Census. A cross-agency team of data analysts from five government agencies collaborated to evaluate the datasets and test whether the linked data could be used to answer policy questions. The linkage project included experimentation with different linking methodologies, linking strategies, and information models for structuring the linkage. The evaluation tested whether the linked data was representative of key population groups of interest, and explored the validity of the content variables for measuring outcomes of interest.

Results
High linkage rates (between 80% and 95%) were achieved for the two-way linkages, and many population groups of interest were well represented. The work is confirming the value of the linkage for answering policy questions that had been difficult to address using existing approaches. The project developed ways of describing linkage quality to policy users and approaches to addressing linkage bias for different policy uses.

Conclusion
Public sector data held by government has the power to improve life course outcomes for Australian people, households and businesses. The project has generated confidence and support for continued development of a central and streamlined integrated data system. It has also generated valuable insights about governance and how to scale up the linkage and dissemination system to support additional datasets and longitudinal data. This will maximise the value and utility of public data to support policy and research, in order to achieve a better understanding of, and deliver better outcomes for, the Australian community.
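
The linkage step described under Approach can be illustrated in miniature. The sketch below shows deterministic record linkage in Python with pandas: two toy datasets are joined on a constructed linkage key and a linkage rate is computed. All records are invented, and the project's actual methodology (probabilistic linking, handling of linkage bias) is considerably more elaborate.

```python
# Deterministic record linkage in miniature: join two collections on a
# constructed key and report the linkage rate. All records are invented.
import pandas as pd

census = pd.DataFrame({
    "name": ["ana lee", "ben ong", "cai wu", "dev rao"],
    "dob": ["1980-01-02", "1975-05-06", "1990-09-10", "1988-03-04"],
    "region": ["NSW", "VIC", "QLD", "NSW"],
})
admin = pd.DataFrame({
    "name": ["ana lee", "ben ong", "dev rao"],
    "dob": ["1980-01-02", "1975-05-06", "1988-03-04"],
    "benefit": ["A", "B", "A"],
})

# Build a linkage key from identifying fields present in both datasets.
for df in (census, admin):
    df["link_key"] = df["name"].str.strip() + "|" + df["dob"]

linked = census.merge(admin[["link_key", "benefit"]],
                      on="link_key", how="left")
rate = linked["benefit"].notna().mean()
print(f"linkage rate: {rate:.0%}")  # 75% in this toy example
```

Evaluating whether the linked records remain representative of key population groups, as the project did, amounts to comparing that rate across subgroups rather than reporting a single overall figure.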


Author(s):  
Kai R. Larsen ◽  
Daniel S. Becker

Access to additional, relevant data will lead to better predictions from algorithms, until we reach the point where more observations (cases) are no longer helpful for detecting the signal: the feature(s) or conditions that inform the target. In addition to obtaining more observations, we can also look for additional features of interest that we do not currently have, at which point it will invariably be necessary to integrate data from different sources. This section introduces the process of data integration, starting with two methods: “joins” (to access more features) and “unions” (to access more observations), and continuing on to cover regular expressions, data summarization, crosstabs, data reduction and splitting, and data wrangling in all its flavors.
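
To make the two methods concrete, here is a minimal pandas sketch (the customer and order tables are invented for illustration): a union stacks observation sets that share the same columns, and a join attaches features from another table through a shared key.

```python
# "Unions" add observations (rows); "joins" add features (columns).
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "region": ["north", "south", "east"],
})
orders_2023 = pd.DataFrame({
    "customer_id": [1, 2],
    "amount": [120.0, 75.5],
})
orders_2024 = pd.DataFrame({
    "customer_id": [2, 3],
    "amount": [60.0, 99.9],
})

# Union: stack observation sets with identical columns to get more rows.
orders = pd.concat([orders_2023, orders_2024], ignore_index=True)

# Join: attach features from another table via a shared key column.
enriched = orders.merge(customers, on="customer_id", how="left")
print(enriched)
```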


2011 ◽  
Vol 403-408 ◽  
pp. 1853-1858
Author(s):  
Qing Li ◽  
Yi Fei Pan ◽  
De Xiang Yang ◽  
Yan Wu ◽  
Guang Chen

Traditional data integration approaches cannot effectively cope with the autonomous and dynamic characteristics of data sources distributed over the Internet, and they neglect users’ individualized integration requirements. Based on the real demands of data integration over the Internet for the State Grid of China, this paper proposes a Service-Oriented Data Integration (SODI) approach. The main contributions are a new data service model that attaches more importance to the key data concerns hidden behind service interfaces, and a more flexible integration approach that supports users in integrating data on demand. The effectiveness of this approach has been demonstrated in real projects.
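
A minimal sketch of the service-oriented idea, assuming hypothetical names throughout (the paper's actual SODI model is not reproduced here): each autonomous source is wrapped behind a uniform service interface that makes its data schema explicit, and an integrator composes services on the user's demand.

```python
# Each source sits behind a uniform service interface that exposes its
# schema; the integrator composes services on demand. All class and
# field names here are hypothetical stand-ins.
from abc import ABC, abstractmethod
from typing import Dict, List

class DataService(ABC):
    """A data source wrapped as a service, with its schema made explicit."""

    @abstractmethod
    def schema(self) -> List[str]:
        """Column names this service exposes."""

    @abstractmethod
    def query(self, filters: Dict[str, str]) -> List[Dict[str, str]]:
        """Return matching records as plain dictionaries."""

class GridAssetService(DataService):
    # Hypothetical stand-in for one autonomous source.
    _rows = [{"asset_id": "T-01", "region": "east", "status": "online"}]

    def schema(self):
        return ["asset_id", "region", "status"]

    def query(self, filters):
        return [r for r in self._rows
                if all(r.get(k) == v for k, v in filters.items())]

def integrate(services: List[DataService], filters: Dict[str, str]):
    """Compose several services into one result set, per user demand."""
    results = []
    for svc in services:
        results.extend(svc.query(filters))
    return results

print(integrate([GridAssetService()], {"region": "east"}))
```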


2021 ◽  
Vol 4 ◽  
Author(s):  
Graciela Muniz-Terrera ◽  
Ofer Mendelevitch ◽  
Rodrigo Barnes ◽  
Michael D. Lesh

When attempting to answer questions of interest, scientists often encounter hurdles that may stem from limited access to adequate existing datasets, a consequence of poor data-sharing practices and constraining administrative practices. Further, when attempting to integrate data, differences between existing datasets impose challenges that limit opportunities for data integration. As a result, the pace of scientific advancement is suboptimal. Synthetic data and virtual cohorts generated using innovative computational techniques represent an opportunity to overcome some of these limitations and, consequently, to advance scientific developments. In this paper, we demonstrate the use of virtual cohort techniques to generate a synthetic dataset that mirrors a deeply phenotyped sample of preclinical dementia research participants.
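
One simple way to build a synthetic cohort that mirrors a real one is to fit summary statistics to the real data and resample from them. The sketch below does this with a multivariate normal in NumPy; the variables and values are hypothetical, and the paper's actual generation technique is more sophisticated.

```python
# Generate a synthetic cohort that preserves the means and covariance
# of the real numeric variables without reproducing any individual's
# record. The "real" cohort here is itself simulated for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for an access-restricted cohort (age, cognition score).
real = rng.multivariate_normal(
    mean=[70.0, 27.0],
    cov=[[36.0, -4.0], [-4.0, 4.0]],
    size=500,
)

# Fit simple summary statistics to the real data...
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ...and draw virtual participants from the fitted distribution.
synthetic = rng.multivariate_normal(mean=mu, cov=cov, size=500)

print("real means:     ", np.round(mu, 2))
print("synthetic means:", np.round(synthetic.mean(axis=0), 2))
```

A Gaussian fit is the crudest possible choice; the point of richer virtual-cohort techniques is to capture non-normal, deeply phenotyped structure while still protecting participants.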


10.14311/666 ◽  
2005 ◽  
Vol 45 (1) ◽  
Author(s):  
A. Almarimi ◽  
J. Pokorný

Schema management is a basic problem in many database application domains, such as data integration systems, where users need to access and manipulate data from several databases. In this context, in order to integrate data from distributed heterogeneous database sources, data integration systems demand the resolution of several issues that arise in managing schemas. In this paper, we present a brief survey of schema matching, which is used to solve schema integration problems. Moreover, we propose a technique for integrating and querying distributed heterogeneous XML schemas.
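
Schema matching at its simplest scores candidate element pairs by name similarity. The sketch below is a minimal illustration in Python; the two schemas are hypothetical, and real matchers also exploit data types, structure, and instance data.

```python
# Name-based schema matching: score element pairs by string similarity
# and keep the best match above a threshold. The schemas are invented.
from difflib import SequenceMatcher

schema_a = ["customerName", "custAddress", "orderDate"]
schema_b = ["client_name", "address", "date_of_order", "phone"]

def similarity(a: str, b: str) -> float:
    # Normalize away case and separators before comparing.
    norm = lambda s: s.replace("_", "").lower()
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

matches = {}
for elem_a in schema_a:
    best = max(schema_b, key=lambda elem_b: similarity(elem_a, elem_b))
    score = similarity(elem_a, best)
    if score >= 0.5:  # threshold is an arbitrary choice for this sketch
        matches[elem_a] = (best, round(score, 2))

print(matches)
```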


Buildings ◽  
2019 ◽  
Vol 9 (5) ◽  
pp. 115 ◽  
Author(s):  
Ozan Koseoglu ◽  
Basak Keskin ◽  
Beliz Ozorhon

The Architecture, Engineering and Construction (AEC) sector has been working on an increasing number of mega projects with large-scale investments worldwide. The majority of these mega projects are infrastructure projects, which are comparatively more difficult to manage in terms of yielding the expected return on investment while increasing quality and productivity. Today’s construction technology landscape offers a wide variety of innovative digital solutions for optimizing the project constraints of scope, time, cost, quality, and resources. Despite being one of the least digitized sectors, the AEC sector is currently ripe for adopting innovative digital solutions. It is observed that Building Information Modeling (BIM) has been rapidly adopted to tackle the ever-evolving challenges of mega infrastructure projects. This study investigates the challenges and enablers of utilizing an end-to-end BIM strategy for the digital transformation of mega project delivery processes through a mega airport project case study, in order to contribute a solid strategic understanding of BIM implementation for mega infrastructure projects. The case study is complemented by two phases of semi-structured interviews. Based on the findings, the major challenges are sustaining continuous monitoring and control during project execution, engineering complexity, and aligning stakeholders’ BIM learning curves, whereas strategic control mechanisms, incentivizing the virtual collaborative environment, and continuous digital delivery are the major enablers.

