automated data processing
Recently Published Documents


TOTAL DOCUMENTS

152
(FIVE YEARS 51)

H-INDEX

16
(FIVE YEARS 3)

2021 ◽  
Author(s):  
Joel Ramon Chacon ◽  
Peter Dabrowski

Objectives/Scope The production technology working environment of an oil brownfield is usually an inconsistent collection of tools and spreadsheets. In this paper, we explore Wintershall Dea's digitalisation journey from a patchwork of tools and spreadsheets to a unified corporate Production Technology Workbench (PTW) solution, starting with the replacement of an existing, ageing tool on an asset on the Norwegian continental shelf and ending with the incorporation of requirements from other assets in Wintershall Dea's diverse and geographically dispersed portfolio.

Methods, Procedures, Processes The project started by selecting a low-code application platform suitable as the basis for the journey. After a proof-of-concept stage, an Agile project was launched, owned by the asset and staffed by a geographically dispersed Development Team composed of Wintershall Dea's Product Owners, IT/OT experts and UX consultants together with Eigen's Scrum Master and Development Team. After delivery of the MVP, a second Product Owner from a second asset was incorporated. The Agile project continued to deliver the enhanced functionality and requirements that would most benefit both assets.

Results, Observations, Conclusions The original production system calculations and workflows are vital for the asset. However, such patchworks are hard to work with and complex to maintain or change, which hurt efficiency because the work was time-consuming and cumbersome. Well anomalies were often detected only by actively looking for them daily in various plots, reports and platforms, so detection of and response to production events was delayed. A Production Technology dashboard with built-in, automated data processing for standard tasks gives engineers the data transparency required to identify issues and pain points in a timely manner. This helps engineers intervene proactively to mitigate unplanned losses and downtime, reducing deferred production. Investment in a corporate-wide unified (standard UX) platform helps engineers starting new assignments to spot issues more easily and quickly, independently of the asset they are assigned to. Beyond standardisation, each engineer needs to be able to create individual workflows (for effects such as scaling, slugging or sand) through the self-service capabilities of the technology. Quick access to frequently used and relevant data through one platform also makes the everyday life of the production engineer more efficient and smoother. Over the course of 15+ Sprints, the Product Owners refined and re-defined the exact functionality they wanted delivered.

Novel/Additive Information The PTW concept seeks to minimise the time engineers need to learn the tool and use it to inspect, analyse and make decisions that optimise production from the field. This is one of Wintershall Dea's first projects executed following Agile, with a geographically dispersed team, during the restrictions imposed by the pandemic. The multi-Product-Owner approach is a novel way to govern the evolution of the tool to suit multiple stakeholders. Compared with a typical E&P waterfall project management approach, applying Scrum showed clear added value in reducing risk early on, increasing visibility and transparency, and adapting to the needs of the customers (the production engineers) throughout the process.


2021 ◽  
pp. 117-131
Author(s):  
Olha VYSOCHAN ◽  
Oleh VYSOCHAN ◽  
Vasyl HYK

This work addresses the segmentation of charitable organizations in order to structure Ukraine's non-profit sector, applying cluster analysis tools in the R software environment for automated data processing. Four-cluster and five-cluster models were constructed with the K-means method, and the data's suitability for clustering was checked with the Hopkins statistic (H). The four-cluster model demonstrated a significant level of validity in terms of the correspondence between the data and the stability of their structure. Basic indicators of the financial and economic activity of charitable organizations were used as clustering criteria: the number of staff, the charitable assistance received, and the funds spent on the maintenance of the organization in the reporting period. The clusters of Ukrainian charitable organizations were found to differ in the scale of their activity, the amount of funds raised, the amount spent on their own maintenance, and the relationships among these indicators. The study demonstrated the existence in Ukraine of a highly influential cluster of local charities that address social issues exclusively at the regional level, owing to the small financial resources available to support their activities. Such organizations are system-creating for the entire non-profit sector in Ukraine; their importance is manifested in the rapid response to recipients' needs through the implementation of small charitable projects.
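
For readers unfamiliar with the method, the sketch below illustrates the two core steps the abstract describes: checking clusterability with the Hopkins statistic and fitting four- and five-cluster K-means models. The paper worked in R; this Python/scikit-learn analogue, its synthetic demo data, and the Hopkins sampling fraction are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the clustering step described above (assumptions noted).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

def hopkins(X, sample_frac=0.1, seed=0):
    """Hopkins statistic H; values well above 0.5 suggest clusterable data."""
    rng = np.random.default_rng(seed)
    n = max(1, int(sample_frac * len(X)))
    # Distances from uniform random points to their nearest real point.
    uniform = rng.uniform(X.min(axis=0), X.max(axis=0), size=(n, X.shape[1]))
    u = NearestNeighbors(n_neighbors=1).fit(X).kneighbors(uniform)[0].ravel()
    # Distances from sampled real points to their nearest *other* real point.
    sample = X[rng.choice(len(X), n, replace=False)]
    w = NearestNeighbors(n_neighbors=2).fit(X).kneighbors(sample)[0][:, 1]
    return u.sum() / (u.sum() + w.sum())

# Synthetic stand-in for the three indicators used in the paper: staff count,
# assistance received, and maintenance spending per organization.
features = np.random.default_rng(0).lognormal(2.0, 1.0, size=(200, 3))
X = StandardScaler().fit_transform(features)
print("Hopkins H:", round(hopkins(X), 3))
labels4 = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
labels5 = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
```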


2021 ◽  
Vol 13 (22) ◽  
pp. 4547
Author(s):  
Saüc Abadal ◽  
Luis Salgueiro ◽  
Javier Marcello ◽  
Verónica Vilaplana

There is a growing interest in the development of automated data processing workflows that provide reliable, high spatial resolution land cover maps. However, high-resolution remote sensing images are not always affordable. Taking advantage of the free availability of Sentinel-2 satellite data, in this work we propose a deep learning model that generates high-resolution segmentation maps from low-resolution inputs in a multi-task approach. Our proposal is a dual-network model with two branches: the Single Image Super-Resolution branch, which reconstructs a high-resolution version of the input image, and the Semantic Segmentation Super-Resolution branch, which predicts a high-resolution segmentation map with a scaling factor of 2. We performed several experiments to find the best architecture, training and testing on a subset of the S2GLC 2017 dataset. We based our model on the DeepLabV3+ architecture, enhancing it to achieve an improvement of 5% in IoU and almost 10% in recall. Furthermore, our qualitative results demonstrate the effectiveness and usefulness of the proposed approach.
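
A minimal sketch of the dual-branch idea follows: a shared encoder feeding a super-resolution branch and a segmentation branch, each upsampling by the x2 scale factor, trained with a combined multi-task loss. The paper's real model builds on DeepLabV3+; this toy PyTorch architecture, its channel widths, and the class count are illustrative assumptions only.

```python
# Toy dual-branch multi-task network: shared encoder, SISR branch, and
# super-resolved segmentation branch (x2). Not the paper's architecture.
import torch
import torch.nn as nn

NUM_CLASSES = 14  # hypothetical; S2GLC defines its own class set

class DualBranchNet(nn.Module):
    def __init__(self, in_ch=3, feat=64, scale=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )
        # SISR branch: reconstruct a high-resolution image via sub-pixel conv.
        self.sr_branch = nn.Sequential(
            nn.Conv2d(feat, in_ch * scale**2, 3, padding=1),
            nn.PixelShuffle(scale),  # -> (B, 3, 2H, 2W)
        )
        # Segmentation branch: class logits at x2 resolution.
        self.seg_branch = nn.Sequential(
            nn.Conv2d(feat, NUM_CLASSES * scale**2, 3, padding=1),
            nn.PixelShuffle(scale),  # -> (B, NUM_CLASSES, 2H, 2W)
        )

    def forward(self, x):
        f = self.encoder(x)
        return self.sr_branch(f), self.seg_branch(f)

# Multi-task loss: L1 for reconstruction + cross-entropy for segmentation.
model = DualBranchNet()
lr_img = torch.randn(1, 3, 64, 64)
hr_img = torch.randn(1, 3, 128, 128)
hr_mask = torch.randint(0, NUM_CLASSES, (1, 128, 128))
sr_out, seg_out = model(lr_img)
loss = nn.L1Loss()(sr_out, hr_img) + nn.CrossEntropyLoss()(seg_out, hr_mask)
```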


2021 ◽  
Vol 7 (2) ◽  
pp. 299-302
Author(s):  
Patricio Fuentealba ◽  
Rutuja Salvi ◽  
Jasmin Henze ◽  
Anja Burmann ◽  
Axel Boese ◽  
...  

Abstract Auscultation methods allow a non-invasive diagnosis of cardiovascular diseases such as atherosclerosis based on the blood flow sounds of the carotid arteries. Since this process is highly dependent on the clinician’s experience, it is of great interest to develop automated data processing techniques for objective assessment. We have recently proposed a computer-assisted auscultation system that we use to acquire carotid blood flow sounds. In this work, we present an approach for detecting artifacts within the blood flow sound caused by swallowing or coughing events. For this purpose, we first decompose the signal using a discrete wavelet transform (DWT). Then, we compute an energy ratio between the DWT scales associated with the signal information with and without artifacts, using a sliding window of 1 s length. Evaluation based on Kruskal-Wallis and Wilcoxon rank-sum tests shows a statistically significant difference (p-value < 0.0001) between the signal with and without artifacts. Therefore, the proposed method allows the identification of the studied signal artifacts.
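
The sketch below illustrates the detection idea under stated assumptions: decompose the sound with a DWT (here via PyWavelets), reconstruct per-scale bands, and slide a 1 s window computing the energy ratio between the scales presumed to carry artifact energy and the remaining scales. The wavelet, decomposition depth, band split, and sampling rate are hypothetical, not the paper's settings.

```python
# Hedged sketch of DWT-based artifact detection in an auscultation signal.
import numpy as np
import pywt

FS = 8000   # hypothetical sampling rate, Hz
WIN = FS    # 1 s sliding window, as in the paper

def artifact_energy_ratio(signal, wavelet="db4", level=5, artifact_bands=(1, 2)):
    """Per-window energy ratio: presumed artifact scales vs. remaining scales."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Reconstruct each detail band back to signal length so windows align.
    bands = []
    for i in range(1, len(coeffs)):
        keep = [np.zeros_like(c) for c in coeffs]
        keep[i] = coeffs[i]
        bands.append(pywt.waverec(keep, wavelet)[: len(signal)])
    ratios = []
    for start in range(0, len(signal) - WIN + 1, WIN // 2):
        e = [np.sum(b[start : start + WIN] ** 2) for b in bands]
        art = sum(e[i] for i in artifact_bands)
        ratios.append(art / (sum(e) - art + 1e-12))
    return np.asarray(ratios)

# Windows with a high ratio would be flagged as swallowing/coughing artifacts;
# the paper compared groups with Kruskal-Wallis and Wilcoxon rank-sum tests.
demo = np.random.default_rng(0).standard_normal(5 * FS)
print(artifact_energy_ratio(demo)[:5])
```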


2021 ◽  
Author(s):  
Mark Kuster

Demand for transparent digital measurement data exchange and processing continues to grow in the metrology industry. Automated data processing, however, requires metadata that adequately describe and identify the information content. In metrology, the measurand description comprises the fundamental metadata that associates meaning with measurement data. Measurement information exists in many metrological documents, including calibration certificates and instrument specifications, but in the international quality infrastructure (IQI), CMC (Calibration and Measurement Capability) descriptions define all approved and accredited measurands. These CMCs appear at the IQI’s apex in the BIPM Key Comparison Database (KCDB) and throughout the IQI in official accreditation scopes and unaccredited measurement capability statements. In the current state of the art, metrology experts develop CMCs primarily as free-form text that often lacks full information, requires subject-matter experts to interpret correctly, hinders machine processing, and does not lend itself to contextual or semantic search on the CMC technical characteristics of interest: uncertainty, range, or even the specific measurand. This paper describes a measurand taxonomy structure suitable for use as CMC metadata and provides a procedure and examples for developing the metadata taxons needed to standardize CMCs in human- and machine-readable formats.
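
To make the goal concrete, the sketch below shows one hypothetical shape a machine-readable CMC record built on a measurand taxonomy could take. The taxon string, field names, and values are invented for illustration; the paper defines its own taxonomy structure and naming procedure.

```python
# Hypothetical machine-readable CMC record; not the paper's taxonomy.
from dataclasses import dataclass, asdict
import json

@dataclass
class CMC:
    taxon: str                  # hierarchical measurand identifier
    quantity: str               # measured quantity, human-readable
    range_min: float
    range_max: float
    unit: str
    expanded_uncertainty: str   # may be a model of the measured value
    coverage_factor: float

example = CMC(
    taxon="Measure.Voltage.DC",              # invented taxon
    quantity="DC voltage",
    range_min=0.0,
    range_max=10.0,
    unit="V",
    expanded_uncertainty="5e-6 * V + 2e-6",  # invented uncertainty model
    coverage_factor=2.0,
)
# Serializing to JSON gives the machine-readable form; the dataclass
# field names double as the human-readable labels.
print(json.dumps(asdict(example), indent=2))
```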


Author(s):  
Denise Wolrab ◽  
Eva Cífková ◽  
Pavel Čáň ◽  
Miroslav Lísa ◽  
Ondřej Peterka ◽  
...  

Abstract Summary We present the LipidQuant 1.0 tool for automated data processing workflows in lipidomic quantitation based on lipid class separation coupled with high-resolution mass spectrometry. Lipid class separation workflows, such as hydrophilic interaction liquid chromatography or supercritical fluid chromatography, should be preferred in lipidomic quantitation owing to the co-ionization of lipid class internal standards with analytes from the same class. The individual steps in the LipidQuant workflow are explained, including lipid identification, quantitation, isotopic correction, and reporting of results. We show the application of LipidQuant data processing to a small cohort of human serum samples. Availability and implementation LipidQuant 1.0 is freely available at Zenodo (https://doi.org/10.5281/zenodo.5151201) and at https://holcapek.upce.cz/lipidquant.php. Supplementary information Supplementary data are available at Bioinformatics online.
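
The quantitation step rests on the class-specific internal standard principle the abstract mentions: because the internal standard co-ionizes with analytes of its own lipid class, a simple response ratio converts peak intensities into concentrations. The sketch below shows only that general principle; the class names, concentrations, and single-point linear form are assumptions, and LipidQuant's actual pipeline additionally performs identification, isotopic correction, and reporting.

```python
# General single-point internal-standard (IS) quantitation principle;
# not LipidQuant's implementation. Values are hypothetical.
IS_CONC_UM = {"PC": 5.0, "PE": 2.5}  # spiked IS concentration per class, umol/L

def quantify(analyte_intensity, is_intensity, lipid_class):
    """Concentration = (analyte signal / IS signal) x spiked IS concentration."""
    return analyte_intensity / is_intensity * IS_CONC_UM[lipid_class]

# Hypothetical peak intensities from a serum run.
print(quantify(analyte_intensity=3.2e6, is_intensity=1.6e6, lipid_class="PC"))
# -> 10.0 umol/L
```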


2021 ◽  
pp. 46-50
Author(s):  
М. KONIUSH

This article examines the importance and necessity of modernizing and adapting the right to non-discrimination in the era of Big Data and automated data processing systems. Several approaches that could potentially facilitate the solution of this problem are proposed, in particular with regard to pre- and post-processing of data. The concept of “code as law” is analyzed, which helps in understanding the impact of digital technologies on discrimination and in finding new ways to combat this phenomenon.

