Deep Relations in Nordic and Baltic Biodiversity e-Infrastructures (DeepDive)

Author(s):  
Matthias Obst ◽  
Jesper Bladt ◽  
Frank Hanssen ◽  
Holger Dettki ◽  
Anders Telenius ◽  
...  

The vision of the DeepDive program (https://neic.no/deepdive) is to establish a regional infrastructure network consisting of Nordic and Baltic data centers and information systems and to provide seamlessly operating regional data services, tools, and virtual laboratories. The program is funded by the Nordic e-Infrastructure Collaboration (https://neic.no) and was launched in 2017. Here we present some of the results and outcomes from the technical collaborations in the network. We will show examples of integration of biodiversity data services and portals through common Application Programming Interfaces (APIs) and Graphical User Interfaces (GUIs), describe our program to foster a biodiversity informatics community in the region, and explain advances in system interoperability that have been achieved over the past three years. We will also highlight the technical plans for further development and long-term sustainability of our Nordic and Baltic e-infrastructure network and make suggestions for further linkage to international information systems and ESFRI infrastructures.
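
As a minimal sketch of the kind of cross-system integration via common APIs described above, the following Python snippet queries a public biodiversity occurrence API (here GBIF's occurrence search, with which Nordic data portals federate; the endpoint choice and query parameters are illustrative, not a documented DeepDive service):

```python
import requests

# Illustrative query against a public biodiversity occurrence API
# (GBIF occurrence search). The species and country filter are a
# plausible regional example, not part of the DeepDive program itself.
resp = requests.get(
    "https://api.gbif.org/v1/occurrence/search",
    params={"country": "SE", "q": "Lutra lutra", "limit": 5},
    timeout=30,
)
resp.raise_for_status()
for rec in resp.json().get("results", []):
    print(rec.get("scientificName"), rec.get("eventDate"), rec.get("country"))
```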

Author(s):  
Carlo Cortese ◽  
Marco A. Calamari ◽  
Paolo Spagli

This paper discusses 30 years of evolution of technical design tools (software and architecture) at GE Oil & Gas. The most important changes are highlighted, as well as some promising evolutionary paths. Legacy codes are the heritage industrial companies carry from the 1970s and 1980s: FORTRAN was used to automate the calculations engineers had to perform to design turbines or compressors. These legacy codes produced files containing information on stage geometry and performance, which could be used to generate drawings or to evaluate machine operability. However, this large amount of data was spread across different computers, and each designer tracked file modifications manually. To archive these data better, around 2000 most companies started to use databases and created modern user interfaces, so that users could work through a friendly interface and retrieve data in a more organized format. The discussion on how to link the legacy codes to the database is still ongoing. Some GUIs are installed on different computers and interact with a centralized database, but around 2010 a more robust architecture came into use, transforming the GUI and the calculation into a centralized system based on web applications. This created a solid and scalable environment, since the legacy code and the database can be installed on servers reachable over the network by each user, simplifying installation and maintenance. With the advent of the Industrial Internet, more interaction between tools is required; Application Programming Interfaces (APIs) permit direct interaction among tools without a human interface, so applications can interact directly with other programs.
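
A minimal sketch of the centralized web-application pattern described above: a thin HTTP layer that runs a legacy calculation code on the server and returns structured results. The executable name (stage_calc) and its command-line interface are hypothetical stand-ins for a real legacy code:

```python
import subprocess

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/stage", methods=["POST"])
def run_stage_calculation():
    params = request.get_json()
    # Legacy codes typically read an input deck and write a result file;
    # here parameters are passed on the command line for brevity.
    # "stage_calc" is a hypothetical legacy executable.
    result = subprocess.run(
        ["stage_calc", str(params["flow"]), str(params["speed"])],
        capture_output=True, text=True, check=True,
    )
    return jsonify({"raw_output": result.stdout})

if __name__ == "__main__":
    app.run()
```

The point of the design is that the legacy binary and the database live only on the server; every designer's machine just needs an HTTP client.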


2015 ◽  
Vol 18 (2) ◽  
pp. 152-167 ◽  
Author(s):  
Jonathan Yu ◽  
Benjamin Leighton ◽  
Nicholas Car ◽  
Shane Seaton ◽  
Jonathan Hodge

The environmental sciences are witnessing a data revolution as large amounts of data are being made available at an increasing rate. Many datasets are being published through operational monitoring programs, research activities and global earth observation virtual laboratories. An important aspect is the ability to query relevant metadata, which can provide useful information for discovering, accessing and interpreting environmental datasets: information about the data providers themselves, data services, data encodings, observation and measurement properties, and data service endpoints. However, support for producing and accessing metadata descriptions in a flexible, extensible, easily integrated and easily discovered manner is lacking, as current methods require interpreting multiple standards and formalisms. In this paper, we propose components to streamline discovery and access of hydrological and environmental data: a Data Provider Node ontology (DPN-O), which allows precise descriptions to be captured about datasets, data services and their interfaces; and a Data Brokering Layer, which provides an Application Programming Interface (API) for registering metadata for discovery and query of registered DPN datasets. We discuss this work in the context of the eReefs project, which is developing an integrated information platform for discovery and visualization of observational and modelled data of the Great Barrier Reef.
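
A minimal sketch of what registering a dataset description of the kind DPN-O captures might look like in RDF, using rdflib. The namespace URI, class name and property names below are hypothetical placeholders; the paper defines the actual ontology terms:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

# Placeholder namespace; the real DPN-O terms are defined in the paper.
DPN = Namespace("http://example.org/dpn-o#")

g = Graph()
ds = URIRef("http://example.org/datasets/reef-temperature")
g.add((ds, RDF.type, DPN.Dataset))
g.add((ds, DCTERMS.title, Literal("Great Barrier Reef sea temperature")))
g.add((ds, DPN.hasServiceEndpoint,
       URIRef("http://example.org/services/sos")))

print(g.serialize(format="turtle"))
```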


Author(s):  
Grigori Fursin

This article provides the motivation and overview of the Collective Knowledge Framework (CK or cKnowledge). The CK concept is to decompose research projects into reusable components that encapsulate research artifacts and provide unified application programming interfaces (APIs), command-line interfaces (CLIs), meta descriptions and common automation actions for related artifacts. The CK framework is used to organize and manage research projects as a database of such components. Inspired by the USB ‘plug and play’ approach for hardware, CK also helps to assemble portable workflows that can automatically plug in compatible components from different users and vendors (models, datasets, frameworks, compilers, tools). Such workflows can build and run algorithms on different platforms and environments in a unified way using the customizable CK program pipeline with software detection plugins and the automatic installation of missing packages. This article presents a number of industrial projects in which the modular CK approach was successfully validated in order to automate benchmarking, auto-tuning and co-design of efficient software and hardware for machine learning and artificial intelligence in terms of speed, accuracy, energy, size and various costs. The CK framework also helped to automate the artifact evaluation process at several computer science conferences as well as to make it easier to reproduce, compare and reuse research techniques from published papers, deploy them in production, and automatically adapt them to continuously changing datasets, models and systems. The long-term goal is to accelerate innovation by connecting researchers and practitioners to share and reuse all their knowledge, best practices, artifacts, workflows and experimental results in a common, portable and reproducible format at cKnowledge.io. This article is part of the theme issue ‘Reliability and reproducibility in computational science: implementing verification, validation and uncertainty quantification in silico’.
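
A minimal sketch of CK's unified dict-in/dict-out Python API, assuming the ck package is installed; the module and search string below are illustrative, not from the article:

```python
import ck.kernel as ck

# Every CK action takes and returns a dictionary; a non-zero "return"
# key signals an error, in the same way across all modules.
r = ck.access({"action": "search",
               "module_uoa": "program",
               "search_string": "benchmark"})
if r["return"] > 0:
    raise RuntimeError(r.get("error", "CK call failed"))
for entry in r.get("lst", []):
    print(entry["data_uoa"])
```

This single-entry-point convention is what lets heterogeneous components (models, datasets, tools) plug into the same workflows.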


2010 ◽  
Vol 27 (3) ◽  
pp. 207-216 ◽  
Author(s):  
Luis Iribarne ◽  
Nicolás Padilla ◽  
Javier Criado ◽  
José-Andrés Asensio ◽  
Rosa Ayala

2021 ◽  
Vol 13 (6) ◽  
pp. 3010
Author(s):  
Tobias Menzel ◽  
Timm Teubner

The Green Information Systems research stream was initiated by leading information systems researchers to address climate change through information and communications technology. This paper responds to a call for practical research into the design of information systems that support consumers in making decisions in favour of sustainable products. We apply an exploratory approach to improve understanding of regional trust cues in the energy sector and how these could drive the sector’s decentralisation. We explore the still-emerging phenomenon of regional text and imagery on digital user interfaces via a multi-method process including quantitative and qualitative content analysis. Our findings suggest that regional energy providers systematically employ regional textual and pictorial trust cues on their websites. We further lay the groundwork for future experimental work on this matter by defining terms and concepts and systematically capturing design elements. We outline practical implications for designing user interfaces in the energy sector and discuss how this could drive the sector’s platformisation and sustainabilisation. In addition, we discuss implications for consumers who could become the target of regional washing attempts, in other words, providers applying regional cues to create a regional company image in the absence of actual regionality.
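
A minimal sketch of the quantitative side of such a content analysis: counting occurrences of regional keywords on a provider's landing page. The URL and keyword list are illustrative, not taken from the study:

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical cue list; a real study would derive it from coding work.
REGIONAL_CUES = ["regional", "local", "aus der Region", "Heimat"]

html = requests.get("https://energy-provider.example", timeout=30).text
text = BeautifulSoup(html, "html.parser").get_text(separator=" ").lower()

for cue in REGIONAL_CUES:
    print(cue, text.count(cue.lower()))
```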


Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 317
Author(s):  
Chithambaramani Ramalingam ◽  
Prakash Mohan

The increasing demand for cloud computing has shifted business toward a huge demand for cloud services, which offer platform, software, and infrastructure for the day-to-day use of cloud consumers. Numerous new cloud service providers have entered the market with unique features that help service developers collaborate and migrate services among multiple cloud service providers to address the varying requirements of cloud consumers. Many interfaces and proprietary application programming interfaces (APIs) are available for migration and collaboration services among cloud providers, but standardization efforts are lacking. The aim of this research work was to summarize the issues involved in semantic cloud portability and interoperability in the multi-cloud environment and to define the standardization effort urgently needed for migrating and collaborating services in the multi-cloud environment.
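
One existing approach to the provider-agnostic APIs that such standardization efforts call for is Apache Libcloud, which exposes a common compute interface across providers. A minimal sketch (driver lookup only; no credentials are used, and the provider choice is illustrative):

```python
from libcloud.compute.providers import get_driver
from libcloud.compute.types import Provider

# The same abstract interface resolves to provider-specific drivers,
# so code written against it is portable across clouds.
for provider in (Provider.EC2, Provider.GCE):
    cls = get_driver(provider)
    print(provider, "->", cls.__name__)
```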


2007 ◽  
Vol 46 (04) ◽  
pp. 476-483 ◽  
Author(s):  
M. Marschollek ◽  
K.-H. Wolf ◽  
R. Haux ◽  
O. J. Bott

Summary Objectives: To analyze the utilization of sensor technology in telemonitoring and home care and to discuss concepts and challenges of sensor-enhanced regional health information systems (rHIS). Methods: The study is based upon experience in sensor-based telemedicine and rHIS projects, and on an analysis of HIS-related journal publications from 2003 to 2005 conducted in the context of publishing the IMIA Yearbook of Medical Informatics. Results: Health-related parameters that are subject to sensor-based measurement in home care and telemonitoring are identified. Publications related to telemonitoring, home care and smart houses are analyzed concerning scope and utilization of sensor technology. Current approaches for integrating sensor technology in rHIS based on a corresponding eHealth infrastructure are identified. Based on a coarse architecture of home care and telemonitoring systems, ten challenges for sensor-enhanced rHIS are identified and discussed: integration of home and health telematic platforms towards a sensor-enhanced telematic platform, transmission rate guarantees, ad hoc connectivity, cascading data analysis, remote configuration, message and alert logistics, sophisticated user interfaces, unobtrusiveness, data safety and security, and electronic health record integration. Conclusions: Utilization of sensor technology in health care is an active field of research. Currently few research projects and standardization initiatives focus on general architectural considerations towards suitable telematic platforms for establishing sensor-enhanced rHIS. Further research, finalized by corresponding standardization, is needed. Part 2 of this paper will present experiences with a research prototype for a sensor-enhanced rHIS telematic platform.
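
A minimal sketch of the "cascading data analysis" and alert-logistics challenges named above: a sensor reading is evaluated close to the source and escalated to the regional HIS only when a rule fires. Field names and thresholds are illustrative, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    patient_id: str
    parameter: str   # e.g. "heart_rate"
    value: float
    unit: str

def needs_escalation(reading: SensorReading) -> bool:
    # First analysis stage in the home environment; further stages
    # (trend analysis, clinician review) would run in the rHIS.
    limits = {"heart_rate": (40.0, 130.0)}  # hypothetical thresholds
    low, high = limits.get(reading.parameter, (float("-inf"), float("inf")))
    return not (low <= reading.value <= high)

print(needs_escalation(SensorReading("p001", "heart_rate", 150.0, "bpm")))
```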

