Customization, extension and reuse of outdated hydrogeological software

2020 · Vol 18 · pp. 1-11
Author(s): A. Serrano-Juan, R. Criollo, E. Vázquez-Suñé, M. Alcaraz, C. Ayora, ...

Each scientist is specialized in his or her field of research and in the tools used during research at a given site. He or she is therefore the person best suited to improve those tools, overcoming their limitations to achieve faster and higher-quality analyses. However, most scientists are not software developers, so they need an easy approach that enables non-developers to improve and customize their tools. This paper presents an approach for easily improving and customizing any hydrogeological software. It is the result of experience gained while updating several interdisciplinary case studies. The main insights of this approach have been demonstrated using four examples: MIX (FORTRAN-based), BrineMIX (C++-based), and EasyQuim and EasyBal (both spreadsheet-based). The improved software has proven to be a better tool for enhanced analysis, substantially reducing both the computation time and the tedious processing of input and output data files.
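The abstract stays at a high level, but the pattern it targets, scripting the preparation, execution, and parsing of a legacy code's input and output files instead of editing them by hand, can be sketched in Python. This is a minimal illustration under assumed conventions; the executable path, file names, and deck format are hypothetical and do not reflect the actual MIX interface.

```python
import subprocess
from pathlib import Path

def run_legacy_model(exe, workdir, params):
    """Drive a legacy executable (e.g. a FORTRAN code) end to end:
    write its input deck, run it, and parse its output file.

    The file names ("model.inp"/"model.out") and the "key = value"
    deck format are hypothetical, not the real MIX interface.
    """
    workdir = Path(workdir)
    workdir.mkdir(parents=True, exist_ok=True)

    # 1. Generate the input file from a plain dict instead of editing it by hand.
    deck = "\n".join(f"{key} = {value}" for key, value in params.items())
    (workdir / "model.inp").write_text(deck + "\n")

    # 2. Run the executable; legacy codes typically read and write
    #    fixed file names in their working directory.
    subprocess.run([exe], cwd=workdir, check=True)

    # 3. Parse the output into a dict of floats for further analysis.
    results = {}
    for line in (workdir / "model.out").read_text().splitlines():
        name, sep, value = line.partition("=")
        if sep:
            results[name.strip()] = float(value)
    return results

# Example: sweep one parameter without touching any file manually.
# for ratio in (0.1, 0.2, 0.5):
#     print(run_legacy_model("/path/to/mix", "run", {"MIXING_RATIO": ratio}))
```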

2021 · Vol 251 · pp. 02020
Author(s): C. Acosta-Silva, A. Delgado Peris, J. Flix, J. Frey, J.M. Hernández, ...

CMS is tackling the exploitation of CPU resources at HPC centers where compute nodes do not have network connectivity to the Internet. Pilot agents and payload jobs need to interact with external services from the compute nodes: access to the application software (CernVM-FS) and conditions data (Frontier), management of input and output data files (data management services), and job management (HTCondor). Finding an alternative route to these services is challenging. Seamless integration into the CMS production system without causing any operational overhead is a key goal. The case of the Barcelona Supercomputing Center (BSC), in Spain, is particularly challenging due to its especially restrictive network setup. This paper describes the solutions developed within CMS to overcome these restrictions and to integrate this resource into production. Singularity containers with application software releases are built and pre-placed in the HPC facility's shared file system, together with conditions data files. HTCondor has been extended to relay communications between running pilot jobs and HTCondor daemons through the HPC shared file system. This operation mode also allows piping input and output data files through the HPC file system. Results, issues encountered during the integration process, and remaining concerns are discussed.
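A minimal sketch of the file-system relay idea, message passing through a shared directory between a network-isolated node and a bridge process with outside connectivity, is shown below. The directory layout, queue names, and polling scheme are assumptions for illustration, not the actual HTCondor extension.

```python
import json
import time
from pathlib import Path

RELAY = Path("/shared/relay")  # hypothetical directory on the HPC shared file system

def send(queue, message):
    """Publish a message atomically: write to a temp file, then rename."""
    box = RELAY / queue
    box.mkdir(parents=True, exist_ok=True)
    tmp = box / f".{time.time_ns()}.tmp"
    tmp.write_text(json.dumps(message))
    tmp.rename(box / f"{time.time_ns()}.msg")  # rename is atomic on POSIX

def poll(queue):
    """Consume pending messages; a bridge daemon with Internet access
    would call this and forward the payloads to the external service."""
    for path in sorted((RELAY / queue).glob("*.msg")):
        yield json.loads(path.read_text())
        path.unlink()

# On a network-isolated compute node:
# send("pilot-status", {"job": 42, "state": "running"})
# On the bridge host:
# for msg in poll("pilot-status"):
#     print(msg)
```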


2018 · Vol 1 (1)
Author(s): Alexander Andonov

On the basis of the latest developments, an improved model of the underwater communication channel is presented. A set of programs has been created to allow calculation of the channel's basic parameters over a wide range of operating conditions. Mathematical models for calculating the spreading factor are developed. The process of creating the model is reviewed so that the resulting model remains easily expandable. A user-friendly information-transfer interface connects the programs with the input and output data files.
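The abstract does not reproduce the models themselves, but the role of the spreading factor can be illustrated with the standard textbook transmission-loss formula for underwater acoustics, geometric spreading plus Thorp's empirical absorption. This is generic background, a sketch rather than the paper's improved channel model.

```python
import math

def thorp_absorption(f_khz):
    """Thorp's empirical absorption coefficient in dB/km (f in kHz)."""
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2)
            + 44 * f2 / (4100 + f2)
            + 2.75e-4 * f2
            + 0.003)

def transmission_loss(r_m, f_khz, k=1.5):
    """Path loss in dB over range r_m (metres) with spreading factor k.

    k = 1 models cylindrical spreading, k = 2 spherical spreading;
    k = 1.5 is the common 'practical' intermediate value.
    """
    spreading = k * 10 * math.log10(r_m)            # geometric spreading term
    absorption = thorp_absorption(f_khz) * r_m / 1000.0
    return spreading + absorption

# Loss over 5 km at 20 kHz with practical spreading:
# print(transmission_loss(5000, 20))  # ~76 dB
```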


Author(s): Vasily Bulatov, Wei Cai

This book presents a broad collection of models and computational methods - from atomistic to continuum - applied to crystal dislocations. Its purpose is to help students and researchers in computational materials science acquire practical knowledge of the relevant simulation methods. Because their behavior spans multiple length and time scales, crystal dislocations present a common ground for an in-depth discussion of a variety of computational approaches, including their relative strengths, weaknesses and interconnections. The details of the covered methods are presented in the form of "numerical recipes" and illustrated by case studies. A suite of simulation codes and data files is made available on the book's website to help the reader learn by doing through solving the exercise problems offered in the book.


2021 · Vol 13 (13) · pp. 7354
Author(s): Jiekun Song, Xiaoping Ma, Rui Chen

Reverse logistics is an important way to realize sustainable production and consumption. With the emergence of professional third-party reverse logistics service providers, outsourcing has become the main mode of reverse logistics. Whether the distribution of cooperative profit among the participants is fair determines the quality of the implementation of the outsourcing mode. The traditional Shapley value model is often used to distribute cooperative profit. Since its distribution basis is the marginal profit contribution of each member enterprise to the different alliances, the profit of each alliance must be estimated. However, it is difficult to ensure the accuracy of this estimation, which makes the distribution lack objectivity. Once the actual profit share deviates from a member enterprise's expectation, the sustainability of the reverse logistics alliance will be affected. This study considers the marginal efficiency contribution of each member enterprise to the alliance and uses it to replace the marginal profit contribution. As the input and output data of reverse logistics cannot be accurately separated from those of the whole enterprise, they are often uncertain. In this paper, we assume that each member enterprise's input and output data are fuzzy numbers and construct an efficiency measurement model based on fuzzy DEA. Then, we define the characteristic function of the alliance and propose a modified Shapley value model to fairly distribute cooperative profit. Finally, an example comprising two manufacturing enterprises, one sales enterprise, and one third-party reverse logistics service provider is put forward to verify the model's feasibility and effectiveness. This paper provides a reference for the profit distribution of reverse logistics alliances.
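As background to the modification, a minimal sketch of the classical Shapley value computation is given below; the coalition values in the example are invented for illustration and are not taken from the paper.

```python
from itertools import combinations
from math import factorial

def shapley(players, v):
    """Classical Shapley value:
    phi_i = sum over coalitions S not containing i of
            |S|! (n - |S| - 1)! / n! * (v(S + {i}) - v(S))."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += weight * (v(frozenset(S) | {i}) - v(frozenset(S)))
        phi[i] = total
    return phi

# Invented coalition profits for manufacturer M, seller S, logistics provider R:
profit = {frozenset(): 0, frozenset("M"): 2, frozenset("S"): 1, frozenset("R"): 1,
          frozenset("MS"): 5, frozenset("MR"): 4, frozenset("SR"): 3,
          frozenset("MSR"): 9}
print(shapley(["M", "S", "R"], lambda S: profit[frozenset(S)]))
# The shares sum to the grand-coalition profit v({M, S, R}) = 9.
```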


2011 · Vol 29 (6) · pp. 965-971
Author(s): R. J. Boynton, M. A. Balikhin, S. A. Billings, A. S. Sharma, O. A. Amariutei

Abstract. The NARMAX OLS-ERR methodology is applied to identify a mathematical model for the dynamics of the Dst index. The NARMAX OLS-ERR algorithm, which is widely used in the field of system identification, is able to identify a mathematical model for a wide class of nonlinear systems using input and output data. Solar wind-magnetosphere coupling functions, derived from analytical or data-based methods, are employed as the inputs to such models, and the outputs are geomagnetic indices. The newly deduced coupling function, $p^{1/2} V^{4/3} B_T \sin^6(\theta/2)$, has been implemented as an input to model the Dst dynamics. It was shown that the identified model has very good forecasting ability, especially for geomagnetic storms.
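A minimal sketch of evaluating the quoted coupling function from solar wind parameters is given below; the clock-angle convention (from the IMF By and Bz components) and the units are assumptions, not specifications from the paper.

```python
import numpy as np

def coupling_function(p, v, by, bz):
    """Evaluate p^(1/2) * V^(4/3) * B_T * sin^6(theta/2) from solar wind data.

    p      : solar wind dynamic pressure
    v      : solar wind speed
    by, bz : IMF components; B_T = sqrt(By^2 + Bz^2) is the transverse
             field and theta = atan2(By, Bz) the IMF clock angle.
    The unit conventions here are assumptions, not the paper's.
    """
    b_t = np.hypot(by, bz)
    theta = np.arctan2(by, bz)
    return np.sqrt(p) * v ** (4.0 / 3.0) * b_t * np.sin(theta / 2.0) ** 6

# Southward IMF (negative Bz) drives the strongest coupling:
# print(coupling_function(2.0, 450.0, 3.0, -5.0))
```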


1997 · Vol 119 (2) · pp. 271-277
Author(s): Jenq-Tzong H. Chan

In this paper, we present a modified method of data-based LQ controller design which is distinct in two major aspects: (1) one may prescribe the z-domain region within which the closed-loop poles of the LQ design are to lie, and (2) controller design is completed using only plant input and output data, and does not require explicit knowledge of a parameterized plant model.
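For context, the classical model-based discrete-time LQ design that the paper's data-based method dispenses with can be sketched as follows, together with a simple check of the closed-loop pole moduli against a prescribed z-domain radius. The system matrices, weights, and radius are arbitrary illustrative choices, not the authors' algorithm.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Arbitrary illustrative plant (the paper's method needs no such model).
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state weighting
R = np.array([[1.0]])  # input weighting

# Classical LQ: solve the discrete algebraic Riccati equation for P,
# then form the optimal state-feedback gain K.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# z-domain pole-region check, here a disc |z| < r with arbitrary r = 0.8.
poles = np.linalg.eigvals(A - B @ K)
print(poles, np.all(np.abs(poles) < 0.8))
```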


Author(s): Benjamin Röhm, Reiner Anderl

Abstract: The Department of Computer Integrated Design (DiK) at TU Darmstadt approaches the Digital Twin topic from the perspective of virtual product development. A concept for the architecture of a Digital Twin was developed that allows the administration of simulation input and output data. The concept was built in consideration of classical CAE process chains in product development. Its central part is the management of simulation input and output data in a simulation data management system within the Digital Twin (SDM-DT). The SDM-DT provides the connection between the Digital Shadow and the Digital Master for simulation data and simulation models. The concept has been prototypically implemented: real product condition data were collected via a sensor network and transmitted to the Digital Shadow. Based on the product development results, the condition data were prepared and sent as a simulation input deck to the SDM-DT in the Digital Twin. Before the simulation data and models are simulated, the new simulation input data are compared with historical input data from product development. The developed and implemented concept goes beyond existing approaches by providing central simulation data management within Digital Twins.
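The comparison between a new simulation input deck and historical input data from product development could look roughly like the sketch below; the deck representation, parameter names, and tolerance are hypothetical and not the SDM-DT's actual data model.

```python
def compare_input_decks(new_deck, historical_decks, tolerance=0.05):
    """Flag parameters of a new simulation input deck that deviate from the
    historical mean by more than `tolerance` (relative deviation).

    Decks are plain parameter dicts; the representation and the 5 %
    default tolerance are illustrative, not the SDM-DT data model.
    """
    deviations = {}
    for name, value in new_deck.items():
        history = [deck[name] for deck in historical_decks if name in deck]
        if not history:
            deviations[name] = "no historical reference"
            continue
        reference = sum(history) / len(history)
        if reference and abs(value - reference) / abs(reference) > tolerance:
            deviations[name] = (value, reference)
    return deviations

# Hypothetical decks: the sensor-derived load deviates from development history.
new = {"load_N": 1250.0, "temp_C": 81.0}
past = [{"load_N": 1000.0, "temp_C": 80.0}, {"load_N": 1040.0, "temp_C": 79.5}]
print(compare_input_decks(new, past))  # {'load_N': (1250.0, 1020.0)}
```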

