engineering best practices
Recently Published Documents


TOTAL DOCUMENTS

40
(FIVE YEARS 8)

H-INDEX

3
(FIVE YEARS 0)

2021 ◽  
Author(s):  
F. Purnawarman

Mubadala Petroleum (pipeline operator) responded to a pig stuck in a 312 km subsea gas pipeline. A bi-directional (bi-di) pig became stuck and, after being recovered, its discs were found to be dented. An investigation and assessment were conducted to determine what caused the pig to become stuck, to resolve the blockage, and to execute all processes during pipeline operation as effectively and safely as possible. The investigation began by determining what dented the bi-di pig: an object inside the pipeline or a pipeline deformation. A series of conformity techniques were applied by calculation to predict the exact stuck-pig location, using the backpressure method. Inspections then proceeded from simple to complex: from a smart pig run (which resulted in pig damage) to a side-scan sonar (SSS) inspection, and finally a visual inspection at the predicted location, given cost and time limits. As a result, the investigation and assessment to locate the dent along 312 km of seabed were executed in a time-effective and cost-reduced way, with all technical work and field execution completed in under two years. The sequence also escalated from the lowest-cost and easiest method (engineering calculation) to the most complex and expensive methods (inspection and survey). The accuracy of the prediction was confirmed by the dent location and damage found at KP 111.89, in 30 metres of water. The investigation methodology also complies with the requirements of regulations, company specifications, standards/codes, and engineering best practices. This paper serves as a reference for conducting pipeline damage investigations on long-distance, remote subsea pipelines. The investigation sequence can be applied to many cases, with accurate prediction reducing investigation cost, time, complexity, and risk.
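The backpressure method mentioned above can be illustrated in miniature. This is a hedged sketch only: it assumes a simple linear frictional pressure gradient along the line, and the function name, parameters, and all numbers are hypothetical, not values from the actual investigation.

```python
# Illustrative sketch of a backpressure-style location estimate: if pressure
# decays roughly linearly with distance at a known frictional gradient, the
# pressure measured against the blockage suggests how far away it is.
# All names and numbers here are hypothetical.

def estimate_stuck_location_km(launch_pressure_bar: float,
                               blockage_pressure_bar: float,
                               gradient_bar_per_km: float) -> float:
    """Predicted distance (km) from the launcher to the stuck pig,
    assuming a constant pressure gradient along the pipeline."""
    if gradient_bar_per_km <= 0:
        raise ValueError("pressure gradient must be positive")
    return (launch_pressure_bar - blockage_pressure_bar) / gradient_bar_per_km

# Hypothetical example values:
location = estimate_stuck_location_km(80.0, 75.0, 0.05)
print(f"Predicted stuck location: KP {location:.1f}")  # KP 100.0
```

In practice such a calculation only narrows the search window, which is why the paper's sequence then escalates to sonar and visual inspection at the predicted kilometre point.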


2021 ◽  
Author(s):  
Alexander L.R. Lubbock ◽  
Carlos F. Lopez

Abstract
Computational modeling has become an established technique to encode mathematical representations of cellular processes and gain mechanistic insights that drive testable predictions. These models are often constructed using graphical user interfaces or domain-specific languages, with SBML used for interchange. Models are typically simulated, calibrated, and analyzed either within a single application, or using import and export from various tools. Here, we describe a programmatic modeling paradigm, in which modeling is augmented with best practices from software engineering. We focus on Python, a popular, user-friendly programming language with a large scientific package ecosystem. Models themselves can be encoded as programs, adding benefits such as modularity, testing, and automated documentation generators while still being exportable to SBML. Automated version control and testing ensures models and their modules have expected properties and behavior. Programmatic modeling is a key technology to enable collaborative model development and enhance dissemination, transparency, and reproducibility.

Highlights
- Programmatic modeling combines computational modeling with software engineering best practices.
- An executable model enables users to leverage all available resources from the language.
- Community benefits include improved collaboration, reusability, and reproducibility.
- Python has multiple modeling frameworks with a broad, active scientific ecosystem.
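The "models as programs" idea can be sketched in plain Python. This is not the authors' framework; it is a minimal illustration, with hypothetical names and parameters, of how an executable model can carry its own automated test, one of the software-engineering practices the abstract advocates.

```python
# A minimal illustration of programmatic modeling: the model is an ordinary
# Python function, so it can be imported, version-controlled, unit-tested,
# and documented like any other software module.
# All names and parameter values are hypothetical.

def decay_model(y0: float, rate: float, dt: float, steps: int) -> list:
    """Forward-Euler simulation of the simple kinetics dy/dt = -rate * y."""
    ys = [y0]
    for _ in range(steps):
        ys.append(ys[-1] * (1.0 - rate * dt))
    return ys

def test_concentration_is_nonincreasing():
    """An automated test encoding an expected property of the model."""
    traj = decay_model(y0=1.0, rate=0.1, dt=0.01, steps=1000)
    assert all(b <= a for a, b in zip(traj, traj[1:]))

test_concentration_is_nonincreasing()
```

Because the model is code, a continuous-integration service can run such property tests on every change, which is how version control and testing keep a model's behavior in check as it evolves.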


2021 ◽  
Author(s):  
Rastko Ciric ◽  
Romy Lorenz ◽  
William Thompson ◽  
Mathias Goncalves ◽  
Eilidh MacNicol ◽  
...  

Abstract
Neuroimaging templates and corresponding atlases play a central role in experimental workflows and are the foundation for reporting standardised results. The proliferation of templates and atlases is one relevant source of methodological variability across studies, which has been recently brought to attention as an important challenge to reproducibility in neuroscience. Unclear nomenclature, an overabundance of template variants and options, inadequate provenance tracking and maintenance, and poor concordance between atlases introduce further unreliability into reported results. We introduce TemplateFlow, a cloud-based repository of human and nonhuman imaging templates paired with a client application for programmatically accessing resources. TemplateFlow is designed to be extensible, providing a transparent pathway for researchers to contribute and vet templates and their associated atlases. Following software engineering best practices, TemplateFlow leverages technologies for unambiguous resource identification, data management, versioning and synchronisation, programmatic extensibility, and continuous integration. By equipping researchers with a robust resource for using and evaluating templates, TemplateFlow will contribute to increasing the reliability of neuroimaging results.


2020 ◽  
Author(s):  
Raul Bardaji ◽  
Jaume Piera ◽  
Juanjo Dañobeitia ◽  
Ivan Rodero

In marine sciences, the way in which many research groups work is changing as scientists use published data to complement their field campaign data online, thanks to the large increase in the number of open-access observations. Many institutions are making great efforts to provide data following the FAIR principles (findability, accessibility, interoperability, and reusability) and are bringing together interdisciplinary teams of data scientists and data engineers.

There are different platforms for downloading marine and oceanographic data and many libraries for analysing them. However, the reality is that scientists continue to have difficulty finding the data they need. On many occasions, data platforms provide the metadata but show no plot of the underlying data available for download. Sometimes scientists cannot download only the data parameters of interest and have to download huge volumes of data containing parameters that are not useful for their studies. On other occasions, the platform allows users to download the parameters of interest but delivers the time series as many separate files, and it is the scientist who has to join the pieces into a single dataset before it can be analysed correctly. EMSO ERIC is developing a data service that reduces, as much as possible, the burden on scientists of searching for and acquiring data.

We present the EMSO ERIC DataLab web application, which provides users with capabilities to preview harmonised data from the EMSO ERIC observatories, perform basic data analyses, create or modify datasets, and download them. Use-case scenarios for the DataLab include the creation of a NetCDF file with time-series information across EMSO ERIC observatories.

The DataLab has been developed using engineering best practices and current technologies for big-data management, including specialised Python libraries for web environments and oceanographic data analysis, such as Plotly, Dash, Flask, and the Module for Ocean Observatory Data Analysis (MOODA).
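The dataset-assembly step described above, joining time-series fragments from several observatories into one table keyed on timestamp, can be sketched as follows. This is a hedged illustration only: the real DataLab uses MOODA and writes NetCDF, and the function, data structures, and observatory names here are assumptions for the example.

```python
# Illustrative sketch of harmonising per-observatory time series into a
# single dataset: one row per timestamp, one column per observatory,
# with None where an observatory has no measurement. All names and
# values are hypothetical.

def merge_observatories(series: dict) -> list:
    """series maps observatory name -> {ISO timestamp: value}.
    Returns rows sorted by timestamp, one column per observatory."""
    timestamps = sorted({t for obs in series.values() for t in obs})
    return [
        {"time": t, **{name: obs.get(t) for name, obs in series.items()}}
        for t in timestamps
    ]

rows = merge_observatories({
    "OBSEA": {"2020-01-01T00:00": 13.1, "2020-01-01T01:00": 13.0},
    "EMSO-Azores": {"2020-01-01T01:00": 4.2},
})
print(rows[1])  # {'time': '2020-01-01T01:00', 'OBSEA': 13.0, 'EMSO-Azores': 4.2}
```

A table in this shape maps naturally onto a NetCDF file with a shared time dimension and one variable per observatory, which is the kind of cross-observatory product the DataLab use case describes.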

