Modern Software Infrastructure for Industrial Selection Tools

Author(s):  
Carlo Cortese ◽  
Marco A. Calamari ◽  
Paolo Spagli

This paper discusses 30 years of evolution of technical design tools (software and architecture) at GE Oil & Gas. The most important changes are highlighted, as well as some promising evolutionary paths. Legacy codes are the heritage that industrial companies carry from the 1970s and 1980s: FORTRAN was used to automate the calculations engineers had to perform to design turbines or compressors. The results of legacy codes were files containing various information relevant to stage geometry and performance, which could be used to generate drawings or to evaluate machine operability. However, this large amount of data was spread across different computers, and each designer manually kept track of file modifications. To archive these data better, around 2000 most companies started to use databases and created modern user interfaces: in this way users can work through a friendly interface and retrieve the data in a more organized format. The discussion on how to link the legacy codes and the database is still ongoing. Some GUIs are installed on different computers and interact with a centralized database, but around 2010 a more robust architecture came into use, transforming the GUI and the calculation into a centralized system based on web applications. This created a solid and scalable environment, since the legacy code and database can be installed on servers reachable over the network by every user, simplifying installation and maintenance. With the advent of the Industrial Internet, more interaction between tools is required; Application Programming Interfaces (APIs) permit direct interaction among tools without a human interface, so applications can interact directly with other programs.
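As the abstract notes, legacy codes emit result files that downstream tools must interpret before the data can be archived in a database or served through an API. A minimal sketch of that bridging step, assuming a hypothetical `KEY = value` output format (real legacy formats vary widely, and the field names here are invented):

```python
import re

def parse_stage_output(text):
    """Parse a (hypothetical) legacy-code output file into a dict.

    The 'KEY = value' layout is an assumption for illustration;
    actual legacy formats differ from code to code.
    """
    stage = {}
    for line in text.splitlines():
        match = re.match(r"\s*([A-Z_]+)\s*=\s*([-\d.Ee+]+)", line)
        if match:
            key, value = match.groups()
            stage[key.lower()] = float(value)
    return stage

# Example of the kind of text a legacy calculation might emit:
sample = """
STAGE_DIAMETER = 0.450
BLADE_HEIGHT   = 0.062
EFFICIENCY     = 0.873
"""

print(parse_stage_output(sample))
```

Once the output is structured like this, storing it in a database table or returning it as JSON from a web endpoint becomes straightforward.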

2020 ◽  
Author(s):  
Diane C. Saunders ◽  
James Messmer ◽  
Irina Kusmartseva ◽  
Maria L. Beery ◽  
Mingder Yang ◽  
...  

Summary: Human tissue phenotyping generates complex spatial information from numerous imaging modalities, yet images typically become static figures for publication, and original data and metadata are rarely available. While comprehensive image maps exist for some organs, most resources have limited support for multiplexed imaging or have non-intuitive user interfaces. Therefore, we built the Pancreatlas™ resource, which integrates several technologies into a novel interface, allowing users to access richly annotated web pages, drill down to individual images, and deeply explore data online. The current version of Pancreatlas contains over 800 unique images acquired by whole-slide scanning, confocal microscopy, and imaging mass cytometry, and is available at https://www.pancreatlas.org. To create this human pancreas-specific biological imaging resource, we developed a React-based web application and a Python-based application programming interface, collectively called the Flexible Framework for Integrating and Navigating Data (FFIND), which can be adapted beyond Pancreatlas to meet countless imaging or other structured data management needs.
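The abstract describes a Python-based API sitting behind a React front end. As a hedged illustration of how such an API layer might structure a filtered image query (the fields, values, and endpoint path below are invented for this sketch, not FFIND's actual interface):

```python
import json

# Toy in-memory image catalog; the fields are assumptions modeled loosely
# on what an imaging atlas might expose (modality, annotation tags).
IMAGES = [
    {"id": 1, "modality": "confocal", "tags": ["insulin", "islet"]},
    {"id": 2, "modality": "imaging-mass-cytometry", "tags": ["glucagon"]},
    {"id": 3, "modality": "confocal", "tags": ["glucagon", "islet"]},
]

def query_images(modality=None, tag=None):
    """Return images matching the given filters, as an API endpoint might."""
    results = IMAGES
    if modality is not None:
        results = [img for img in results if img["modality"] == modality]
    if tag is not None:
        results = [img for img in results if tag in img["tags"]]
    return results

# A hypothetical endpoint such as GET /api/images?modality=confocal&tag=islet
# could then serialize the result:
print(json.dumps(query_images(modality="confocal", tag="islet")))
```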


Author(s):  
Daniel Thayer ◽  
Muhammad Elmessary ◽  
Daniel Mallory ◽  
Pete Arnold ◽  
Michal Cichowski ◽  
...  

Background/Rationale: Linked administrative datasets offer great potential for research, but also present major challenges—including the preparation of operational data into a form suitable for efficient research, complex and computationally demanding analysis, and the need to capture and share information about dataset contents and research methods.
Main Aim: The analytical services team in the Secure Anonymised Information Linkage (SAIL) Databank is creating interconnected tools and systems to automate the preparation and analysis of research data and to curate information about datasets and research methods. Our underlying goal is to make linked data research orders of magnitude faster and cheaper, as well as improve its consistency and quality.
Methods: Several key developments are ongoing: automation of data quality checking; management of dataset metadata; processing of raw source datasets into cleaned, research-ready data assets; the Concept Library, an application for creating, using, and sharing knowledge about research definitions and methods; and a suite of R packages for analysis. Web Application Programming Interfaces will allow these pieces to work together as an integrated system enabling efficient research.
Results: Initial versions of dataset quality checking, cleaned datasets, and R code to implement common tasks are already in day-to-day use by researchers within SAIL. An advisory group has been convened to help guide the work. For example, shared library code that flags conditions within health data has been used across multiple projects; a cleaned dataset measuring follow-up within primary care has been used by more than 100 projects.
Conclusion: Our proof-of-concept work demonstrates the ability of shared code and cleaned data to meet needs across multiple projects, saving effort and standardizing results. Ongoing work to develop and integrate these tools should further streamline the research process, increasing the output and public benefit of SAIL and other data sources.
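The flagging of conditions within health data mentioned in the results can be sketched as a small, reusable function driven by a condition code list. The codes and record fields below are illustrative only, not SAIL's actual definitions:

```python
# Sketch of shared library code that flags a condition within health data:
# given a code list defining the condition, mark each patient record whose
# diagnosis codes intersect it. Codes and field names are hypothetical.

DIABETES_CODES = {"E10", "E11"}  # hypothetical condition code list

def flag_condition(records, code_list, flag_name):
    """Add a boolean flag to each record if any diagnosis code matches."""
    flagged = []
    for rec in records:
        rec = dict(rec)  # copy, so the caller's data is not mutated
        rec[flag_name] = any(code in code_list for code in rec["diagnosis_codes"])
        flagged.append(rec)
    return flagged

patients = [
    {"id": "p1", "diagnosis_codes": ["E11", "I10"]},
    {"id": "p2", "diagnosis_codes": ["J45"]},
]
print(flag_condition(patients, DIABETES_CODES, "has_diabetes"))
```

Centralizing the code list and the flagging logic is what lets multiple projects share one definition and obtain standardized results.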


2013 ◽  
Vol 10 (4) ◽  
pp. 82-101 ◽  
Author(s):  
Buqing Cao ◽  
Jianxun Liu ◽  
Mingdong Tang ◽  
Zibin Zheng ◽  
Guangrong Wang

With the rapid development of Web 2.0 and its related technologies, Mashup services (i.e., Web applications created by combining two or more Web APIs) are becoming a hot research topic. The explosion of Mashup services, especially functionally similar or equivalent services, however, makes service discovery more difficult than ever. In this paper, we present an approach to recommend Mashup services to users based on usage history and a service network. The approach first extracts users' interests from their Mashup service usage history and builds a service network based on social relationship information among Mashup services, Web application programming interfaces (APIs), and their tags. It then leverages the target user's interests and the service social relationships to perform Mashup service recommendation. Large-scale experiments on a real-world Mashup service dataset show that the proposed approach can effectively recommend Mashup services to users with excellent performance. Moreover, a Mashup service recommendation prototype system has been developed.
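The recommendation idea can be illustrated with a much-simplified stand-in: score each Mashup by the overlap between the user's extracted interests and the APIs and tags the Mashup is connected to in the service network. All names and the similarity measure below are assumptions for illustration, not the paper's actual model:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(target_interest, mashups, top_n=2):
    """Rank mashups by overlap between the user's interest terms and each
    service's APIs and tags - a toy stand-in for network-based scoring."""
    scored = [
        (name, jaccard(target_interest, apis | tags))
        for name, (apis, tags) in mashups.items()
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_n]

# Toy service network: mashup -> (APIs it composes, its tags); names invented.
mashups = {
    "MapTweets": ({"google-maps", "twitter"}, {"social", "geo"}),
    "PhotoMap":  ({"google-maps", "flickr"}, {"photo", "geo"}),
    "NewsFeed":  ({"rss", "twitter"}, {"news", "social"}),
}

print(recommend({"google-maps", "geo"}, mashups))
```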


2021 ◽  
Vol 11 (2) ◽  
pp. 683
Author(s):  
Juuso Autiosalo ◽  
Riku Ala-Laurinaho ◽  
Joel Mattila ◽  
Miika Valtonen ◽  
Valtteri Peltoranta ◽  
...  

Industrial Internet of Things practitioners are adopting the concept of digital twins at an accelerating pace. The features of digital twins range from simulation and analysis to real-time sensor data and system integration. Implementation examples of modeling-oriented twins are becoming commonplace in academic literature, but information management-focused twins that combine multiple systems are scarce. This study presents, analyzes, and draws recommendations from building a multi-component digital twin as an industry-university collaboration project and related smaller works. The objective of the studied project was to create a prototype implementation of an industrial digital twin for an overhead crane called “Ilmatar”, serving machine designers and maintainers in their daily tasks. Additionally, related cases focus on enhancing operation. This paper describes two tools, three frameworks, and eight proof-of-concept prototypes related to digital twin development. The experiences show that good-quality Application Programming Interfaces (APIs) are significant enablers for the development of digital twins. Hence, we recommend that traditional industrial companies start building their API portfolios. The experiences in digital twin application development led to the discovery of a novel API-based business network framework that helps organize digital twin data supply chains.
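An information-management-focused twin of the kind described can be pictured as a thin aggregation layer that pulls data from several backend systems through their APIs. A minimal sketch; the connected systems and values are stubs, and only the crane's name comes from the paper:

```python
# Sketch of a multi-component digital twin: one object aggregating several
# backend systems behind a single interface. Each source stands in for a
# system API (sensor platform, maintenance system, design data); all
# values are invented stubs.

class DigitalTwin:
    def __init__(self, name, sources):
        self.name = name
        self.sources = sources  # name -> callable returning that system's data

    def snapshot(self):
        """Pull current data from every connected system via its API."""
        return {name: fetch() for name, fetch in self.sources.items()}

twin = DigitalTwin("Ilmatar", {
    "sensors": lambda: {"hoist_load_kg": 1250},          # real-time sensor API
    "maintenance": lambda: {"next_service": "2021-09"},  # maintenance-system API
    "model": lambda: {"cad_revision": "R4"},             # design-data API
})

print(twin.snapshot())
```

The design point this mirrors is the paper's recommendation: the twin is only as good as the APIs it can draw on, so each backend system needs a stable programmatic interface.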


Author(s):  
Abdelhamid Malki ◽  
Sidi Mohammed Benslimane

Mashups have allowed a significant advance in the automation of interactions between applications and Web resources. In particular, the combination of Web Application Programming Interfaces (APIs) is seen as a strength that can meet complex needs by combining the functionality and data of multiple services within a single Mashup application. Automating the process of building Mashups relies mainly on Semantic Web APIs, which facilitate their selection and matching for developers. In this paper, we introduce a reference architecture with six layers representing the main functional blocks for annotating, combining, and deploying Web APIs in a Cloud environment. We introduce Semantic Annotation for the Web Application Description Language (SAWADL), an extension of the Web Application Description Language (WADL) that allows the semantic annotation of RESTful Web services. The proposed architecture uses Cloud Computing as a promising solution for increasing the number of public APIs, thereby making the engineering process of Mashup applications more agile and flexible.
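The SAWADL idea can be illustrated with a WADL fragment that carries a semantic model reference on a method, which a client can read back for matching. The `sawadl` namespace URI, the ontology address, and the attribute name below are invented for this sketch; only the WADL namespace is the published one:

```python
import xml.etree.ElementTree as ET

# Hedged illustration of semantically annotating a WADL description: a
# method element carries a (hypothetical) model-reference attribute that
# links it to an ontology concept, in the spirit of SAWADL.
WADL = """<application xmlns="http://wadl.dev.java.net/2009/02"
               xmlns:sawadl="http://example.org/sawadl">
  <resources base="http://api.example.org/">
    <resource path="weather">
      <method name="GET"
              sawadl:modelReference="http://example.org/onto#WeatherForecast"/>
    </resource>
  </resources>
</application>"""

root = ET.fromstring(WADL)
ns = {"w": "http://wadl.dev.java.net/2009/02",
      "s": "http://example.org/sawadl"}
method = root.find(".//w:method", ns)
# ElementTree stores namespaced attributes in Clark notation: {uri}name
annotation = method.get("{http://example.org/sawadl}modelReference")
print(annotation)
```

A matching engine could compare such model references across API descriptions to decide which services can be composed into one Mashup.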


Author(s):  
Saeid Heshmatisafa ◽  
Marko Seppänen

Abstract: Many companies have followed the trend toward exposing their business assets through open (i.e., Web) application programming interfaces (APIs). However, these firms appear to have adopted API technology largely to meet their customers' needs and demands. The pressure on industries to develop, implement, and maintain API products and services can prevent companies from gaining a greater awareness of the benefits of API development. Firms may thus miss out on related monetary or non-monetary exploitation of their business assets. This study explored the status of the API economy and API development among Finnish industries. The dataset comprised publicly available information from 226 private and public organizations representing a variety of industries, such as the industrial, consumer goods, and services sectors. The current status of API readiness, types, protocols, and monetization models is presented to provide a more comprehensive overview.


Author(s):  
Matthias Obst ◽  
Jesper Bladt ◽  
Frank Hanssen ◽  
Holger Dettki ◽  
Anders Telenius ◽  
...  

The vision of the DeepDive program (https://neic.no/deepdive) is to establish a regional infrastructure network consisting of Nordic and Baltic data centers and information systems and to provide seamlessly operating regional data services, tools, and virtual laboratories. The program is funded by the Nordic e-Infrastructure Collaboration (https://neic.no) and was launched in 2017. Here we present some of the results and outcomes from the technical collaborations in the network. We will show examples of integration of biodiversity data services and portals through common Application Programming Interfaces (APIs) and Graphical User Interfaces (GUIs), describe our program to foster a biodiversity informatics community in the region, and explain advances in system interoperability that have been achieved over the past three years. We will also highlight the technical plans for further development and long-term sustainability of our Nordic and Baltic e-infrastructure network and make suggestions for further linkage to international information systems and ESFRI infrastructures.

