High End Inspection by Filmless Radiography on LSAW Large Diameter Pipes From EUROPIPE

Author(s):  
Thomas Kersting ◽  
Andreas Liessem ◽  
Ludwig Oesterlein ◽  
Stefan Schuster ◽  
Norbert Schoenartz

Pipes for the transportation of combustible media are subject to the most severe safety requirements. To guarantee the best performance during construction and long-term service, the level of quality and productivity are continuously increased. After many years of experience with filmless radiography (FLORAD) for internal process control (detection of typical weld-seam defects such as slag and pores), EUROPIPE has now invested in digital X-ray inspection technology for the final release. The classic X-ray film has thereby been replaced, the environmental impact of processing chemicals reduced, and the complete NDT process enhanced. With secure digital images available over the computer network, it has also become much easier for third-party inspectors to monitor the release process. Furthermore, the use of a professional data storage system guarantees safe and traceable long-term archival storage with access to all data within minutes. The new installation consists of two separate X-ray chambers. In each chamber, two digital detector arrays (DDA) and two X-ray tubes are installed to inspect the weld seam at the pipe ends and in areas with indications from automated ultrasonic testing. EUROPIPE is the first company to have implemented this technology in highly automated serial production of large-diameter pipes.

2021 ◽  
Author(s):  
Min Li ◽  
Junbiao Dai ◽  
Qingshan Jiang ◽  
Yang Wang

Abstract Current research on DNA storage usually focuses on improving storage density and reducing gene-synthesis cost by developing effective encoding and decoding schemes, while lacking consideration of the uncertainty in ultra-long-term data storage and retention. Consequently, current DNA storage systems are often not self-contained, meaning that they have to resort to external tools to restore the stored gene data. This poses a high risk of data loss, since the required tools might not be available in the far future. To address this issue, we propose in this paper a self-contained DNA storage system that makes its stored data self-explanatory without relying on any external tools. To this end, we design a specific DNA file format in which a separate storage scheme reduces data redundancy while an effective index supports random read operations on the stored data file. We verified through experimental data that the proposed self-contained and self-explanatory method not only removes the reliance on external tools for data restoration but also minimizes the data redundancy introduced once the amount of data to be stored reaches a certain scale.
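The core idea of a self-contained file format with an embedded index can be illustrated with a minimal sketch. The layout below (header fields, the 2-bits-per-nucleotide mapping, and the `pack`/`unpack` names) is invented for illustration and is not the paper's actual format; it only shows how an index stored alongside the payload enables random reads without external metadata.

```python
# Toy self-describing "DNA file": header + index + payload in one strand.
# Mapping A=00, C=01, G=10, T=11 is illustrative, not the paper's scheme.

NT = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    """Map each byte to four nucleotides, two bits at a time."""
    return "".join(NT[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

def dna_to_bytes(strand: str) -> bytes:
    """Inverse mapping: four nucleotides back to one byte."""
    out = bytearray()
    for i in range(0, len(strand), 4):
        b = 0
        for ch in strand[i:i + 4]:
            b = (b << 2) | NT.index(ch)
        out.append(b)
    return bytes(out)

def pack(records: dict) -> str:
    """Build one strand: index header + payload, so a reader can
    recover every record without any external tool."""
    payload, index = b"", []
    for name, blob in records.items():
        index.append(f"{name}:{len(payload)}:{len(blob)}")
        payload += blob
    header = (";".join(index) + "|").encode()
    return bytes_to_dna(header + payload)

def unpack(strand: str) -> dict:
    raw = dna_to_bytes(strand)
    head, _, payload = raw.partition(b"|")
    out = {}
    for entry in head.decode().split(";"):
        name, off, ln = entry.split(":")
        out[name] = payload[int(off):int(off) + int(ln)]  # random read
    return out
```

A reader that knows only the delimiter conventions can decode any record directly from its offset, which is the self-explanatory property the abstract describes.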


Author(s):  
L. V. Rudikova ◽  
V. V. Danilchik

Nowadays, there is a considerable need to develop a general concept and implement a system for storing and analyzing data related to the socio-economic movements of people. Population movement, in the form of long-term and short-term migration, is steadily increasing, which directly affects various fields of activity in a single country and in the world community as a whole. This article describes the subject area associated with socio-economic movements of people and notes the key features of internal and external migration. Based on the subject area, a general architecture for a universal data storage and processing system is proposed, built on a client-server architecture. A fragment of the data model associated with the accumulation of data from external sources is provided, together with general approaches to the use of algorithms and data structures. The system architecture is described with the possibility of both vertical and horizontal scaling. The proposed system organizes the process of searching for data and filling the database from third-party sources. To do this, a module has been developed that collects and converts information from third-party Internet sources and sends it to the database. The paper also describes the client application, which provides a convenient interface for analyzing data in the form of diagrams, graphs, maps, etc. The system is intended for various users interested in analyzing economic and social transfers: for example, tourist organizations wishing to obtain statistics for a certain period, airlines planning flights in one direction or another, and state structures analyzing the migration flows of the population and developing appropriate strategies for their regulation.


2019 ◽  
Vol 13 (02) ◽  
pp. 207-227 ◽  
Author(s):  
Norman Köster ◽  
Sebastian Wrede ◽  
Philipp Cimiano

Efficient storage and querying of long-term human–robot interaction data requires application developers to have an in-depth understanding of the involved domains. Creating syntactically and semantically correct queries during development is an error-prone task that can immensely impact the interaction experience of humans with robots and artificial agents. To address this issue, we present and evaluate a model-driven software development approach to create a long-term storage system for use in highly interactive HRI scenarios. We created multiple domain-specific languages that allow us to model the domain and seamlessly embed its concepts into a query language. Along with corresponding model-to-model and model-to-text transformations, we generate a fully integrated workbench facilitating data storage and retrieval. It supports developers in the query design process and allows in-tool query execution without prior in-depth knowledge of the domain. We evaluated our work in an extensive user study and show that the generated tool yields multiple advantages over the usual query design approach.
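The benefit of embedding domain concepts into the query language is that malformed queries fail at construction time rather than at the database. A loose analogy (not the paper's generated workbench; all class and field names here are invented) is a query builder that validates against a declared domain model:

```python
# Sketch: field names are checked against a domain model when the query
# is built, so semantic errors surface early. Names are illustrative.

class Domain:
    def __init__(self, **entities):
        self.entities = entities  # entity name -> set of valid fields

class Query:
    def __init__(self, domain, entity):
        if entity not in domain.entities:
            raise ValueError(f"unknown entity: {entity}")
        self.domain, self.entity, self.filters = domain, entity, []

    def where(self, field, op, value):
        if field not in self.domain.entities[self.entity]:
            raise ValueError(f"unknown field: {field}")  # caught at build time
        self.filters.append((field, op, value))
        return self

    def to_sql(self):
        cond = " AND ".join(f"{f} {op} {v!r}" for f, op, v in self.filters)
        return f"SELECT * FROM {self.entity}" + (f" WHERE {cond}" if cond else "")

hri = Domain(utterance={"speaker", "timestamp", "text"})
q = Query(hri, "utterance").where("speaker", "=", "robot")
```

A model-driven toolchain generates this kind of validation (plus editor support) from the domain model instead of requiring it to be hand-written.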


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Min Li ◽  
Jiashu Wu ◽  
Junbiao Dai ◽  
Qingshan Jiang ◽  
Qiang Qu ◽  
...  

Abstract Current research on DNA storage usually focuses on improving storage density by developing effective encoding and decoding schemes, while lacking consideration of the uncertainty in ultra-long-term data storage and retention. Consequently, current DNA storage systems are often not self-contained, meaning that they have to resort to external tools to restore the stored DNA data. This poses a high risk of data loss, since the required tools might not be available in the far future. To address this issue, we propose in this paper a self-contained DNA storage system that makes its stored data self-explanatory without relying on any external tool. To this end, we design a specific DNA file format in which a separate storage scheme reduces data redundancy while an effective index supports random read operations on the stored data file. We verified through experimental data that the proposed self-contained and self-explanatory method not only removes the reliance on external tools for data restoration but also minimises the data redundancy introduced once the amount of data to be stored reaches a certain scale.


Author(s):  
Kamlesh Sharma* ◽  
Nidhi Garg

Exercising a collection of numerous similar, easy-to-access sources and resources over the Internet is termed cloud computing. A cloud storage system is basically a large-scale storage system that consists of many independent storage servers. Cloud computing has seen huge changes and wide adoption in recent years, so security has become one of its major concerns. Because cloud computing relies on third-party systems, security is a concern not only for customers but also for service providers. In this paper we discuss cryptography, i.e., encrypting messages into certain forms; its algorithms, including symmetric and asymmetric algorithms and hashing; its architecture; and the advantages of cryptography.
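Two of the primitive classes the paper surveys, symmetric encryption and hashing, can be contrasted in a short standard-library sketch. The XOR stream cipher below is a teaching toy standing in for real symmetric algorithms such as AES; it is not secure and is used only to show that one shared key both encrypts and decrypts, whereas a hash is one-way:

```python
import hashlib
import hmac

def digest(msg: bytes) -> str:
    """Hashing: a fixed-size, one-way fingerprint of the message."""
    return hashlib.sha256(msg).hexdigest()

def xor_cipher(key: bytes, msg: bytes) -> bytes:
    """Toy symmetric cipher: the same key encrypts and decrypts."""
    return bytes(m ^ key[i % len(key)] for i, m in enumerate(msg))

key = b"shared-secret"
ct = xor_cipher(key, b"cloud data")
assert xor_cipher(key, ct) == b"cloud data"  # decryption = same operation

# A keyed hash (HMAC) verifies integrity without being reversible:
tag = hmac.new(key, ct, hashlib.sha256).hexdigest()
```

Asymmetric algorithms (e.g. RSA) differ in that the encryption and decryption keys are distinct; the standard library has no such primitive, so they are omitted here.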


2021 ◽  
Author(s):  
Zihui Yan ◽  
Cong Liang

In recent years, DNA-based systems have become a promising medium for long-term data storage. There are two layers of errors in DNA-based storage systems. The first is the dropout of DNA strands, which has been characterized by the shuffling-sampling channel. The second is insertions, deletions, and substitutions of nucleotides in individual DNA molecules. In this paper, we describe a DNA noisy synchronization error channel to characterize the errors in individual DNA molecules. We derive non-trivial lower and upper capacity bounds for the DNA noisy synchronization error channel based on information theory. By cascading these two channels, we provide theoretical capacity limits for the DNA storage system. These results reaffirm that DNA is a reliable storage medium with high storage-density potential.
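The two error layers the abstract describes can be made concrete with a small simulation: a dropout stage over whole strands (the shuffling-sampling layer) followed by an insertion/deletion/substitution (IDS) stage over individual nucleotides. The error rates and function names below are illustrative assumptions, not values from the paper:

```python
import random

def sample_strands(strands, dropout=0.1, rng=None):
    """Layer 1: each stored strand survives sequencing with prob 1 - dropout."""
    rng = rng or random
    return [s for s in strands if rng.random() > dropout]

def noisy_synchronization(strand, p_sub=0.01, p_del=0.01, p_ins=0.01, rng=None):
    """Layer 2: IDS channel acting on individual nucleotides."""
    rng = rng or random
    out = []
    for nt in strand:
        if rng.random() < p_ins:
            out.append(rng.choice("ACGT"))            # insertion before nt
        r = rng.random()
        if r < p_del:
            continue                                   # deletion
        if r < p_del + p_sub:
            nt = rng.choice("ACGT".replace(nt, ""))    # substitution
        out.append(nt)
    return "".join(out)

rng = random.Random(0)
stored = ["ACGTACGT"] * 5
received = [noisy_synchronization(s, rng=rng)
            for s in sample_strands(stored, rng=rng)]
```

Cascading the two stages, as here, mirrors the paper's channel composition: the capacity of the combined system can be no better than that of either stage alone.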


2020 ◽  
Author(s):  
James L. Banal ◽  
Tyson R. Shepherd ◽  
Joseph Berleant ◽  
Hellen Huang ◽  
Miguel Reyes ◽  
...  

ABSTRACT DNA is an ultra-high-density storage medium that could meet the exponentially growing worldwide demand for archival data storage if DNA synthesis costs declined sufficiently and random access of files within exabyte-to-yottabyte-scale DNA data pools were feasible. To overcome the second barrier, here we encapsulate data-encoding DNA file sequences within impervious silica capsules that are surface-labeled with single-stranded DNA barcodes. Barcodes are chosen to represent file metadata, enabling efficient and direct selection of sets of files with Boolean logic. We demonstrate random access of image files from an image database using fluorescence sorting with a selection sensitivity of 1 in 10^6 files, which thereby enables 1 in 10^(6N) per N optical channels. Our strategy thus offers retrieval of random file subsets from exabyte-scale and larger long-term DNA file storage databases, a scalable solution for random access of archival files in massive molecular datasets.
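The Boolean selection over barcode metadata can be pictured as set operations: each encapsulated file carries a set of barcodes, and a query combines barcodes with AND/OR/NOT. The file names and barcode labels below are invented for the example:

```python
# Sketch of Boolean file selection over barcode metadata, as in the
# described fluorescence-sorting workflow. Labels are illustrative.

files = {
    "cat.jpg":  {"animal", "outdoor"},
    "dog.jpg":  {"animal", "indoor"},
    "tree.jpg": {"plant", "outdoor"},
}

def select(files, must=(), any_of=(), must_not=()):
    """Return file names whose barcode sets satisfy the Boolean query:
    all of `must` AND at least one of `any_of` AND none of `must_not`."""
    hits = set()
    for name, barcodes in files.items():
        if all(b in barcodes for b in must) \
           and (not any_of or any(b in barcodes for b in any_of)) \
           and not any(b in barcodes for b in must_not):
            hits.add(name)
    return hits
```

In the physical system the same logic is realized by fluorescent probes hybridizing to the surface barcodes, with sorting playing the role of the set filter.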


2020 ◽  
Vol 15 (1) ◽  
pp. 15
Author(s):  
Felix Bach ◽  
Björn Schembera ◽  
Jos Van Wezel

Research data, as a truly valuable good in science, must be saved and subsequently kept findable, accessible and reusable for several years for reasons of proper scientific conduct. However, managing long-term storage of research data is a burden for institutes and researchers. Because of the sheer size and the required retention time, apt storage providers are hard to find. Aiming to solve this puzzle, the bwDataArchive project started development of a long-term research data archive that is reliable, cost-effective and able to store multiple petabytes of data. The hardware consists of data storage on magnetic tape, interfaced with disk caches and nodes for data movement and access. On the software side, the High Performance Storage System (HPSS) was chosen for its proven ability to reliably store huge amounts of data; however, the implementation of bwDataArchive is not dependent on HPSS. For authentication, bwDataArchive is integrated into the federated identity management for educational institutions in the State of Baden-Württemberg in Germany. The archive features data protection by means of a dual copy at two distinct locations on different tape technologies, data accessibility via common storage protocols, data retention assurance for more than ten years, data preservation with checksums, and data management capabilities supported by a flexible directory structure allowing sharing and publication. As of September 2019, the bwDataArchive holds over 9 PB in 90 million files and sees a constant increase in usage and users from many communities.
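The archive's dual-copy-plus-checksum scheme can be sketched abstractly: each file is written to two sites, its checksum is catalogued at ingest, and retrieval falls back to the second copy if the first fails verification. The class, the in-memory "sites", and the catalogue layout are illustrative stand-ins, not bwDataArchive's actual implementation:

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class DualCopyArchive:
    """Toy model: two copies at distinct 'sites', verified on read."""

    def __init__(self):
        self.site_a, self.site_b, self.catalogue = {}, {}, {}

    def ingest(self, name: str, data: bytes):
        self.catalogue[name] = checksum(data)  # recorded once, at ingest
        self.site_a[name] = data               # e.g. tape copy, location A
        self.site_b[name] = data               # different tape technology, B

    def retrieve(self, name: str) -> bytes:
        """Read copy A; fall back to B if its checksum no longer matches."""
        for site in (self.site_a, self.site_b):
            data = site.get(name)
            if data is not None and checksum(data) == self.catalogue[name]:
                return data
        raise IOError(f"both copies of {name} failed verification")
```

Using different tape technologies at the two locations, as the project does, guards against correlated media failures that a same-technology mirror would not.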


Cryptography ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 37
Author(s):  
Noha E. El-Attar ◽  
Doaa S. El-Morshedy ◽  
Wael A. Awad

The need for cloud storage grows day after day due to its reliable and scalable nature. The storage and maintenance of user data at a remote location are serious issues because of the difficulty of ensuring data privacy and confidentiality. Some security issues within current cloud systems are managed by a cloud third party (CTP), who may turn into an untrustworthy insider. This paper presents an automated Encryption/Decryption System for Cloud Data Storage (AEDS) based on hybrid cryptography algorithms to improve data security and ensure confidentiality without interference from the CTP. Three encryption approaches are implemented to achieve high performance and efficiency: Automated Sequential Cryptography (ASC), Automated Random Cryptography (ARC), and Improved Automated Random Cryptography (IARC) for data blocks. In the IARC approach, we present a novel encryption strategy that converts the static S-box in the AES algorithm into a dynamic S-box. Furthermore, the RSA and Twofish algorithms are used to encrypt the generated keys to address privacy issues. We evaluated our approaches against existing symmetric-key algorithms such as DES, 3DES, and RC2. Although the two proposed ARC and ASC approaches are more complicated, they take less time than DES, 3DES, and RC2 to process the data and achieve better performance in data throughput and confidentiality. ARC outperformed all of the other algorithms in the comparison. The ARC encryption process saved time compared with the other algorithms: its encryption time was 22.58 s for a 500 MB file, while DES, 3DES, and RC2 completed the encryption process in 44.43, 135.65, and 66.91 s, respectively, for the same file size. Nevertheless, when the file size increased to 2.2 GB, the ASC proved its efficiency by completing the encryption process in less time.
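One way to read the "random cryptography for data blocks" idea is that each block is encrypted with a randomly chosen cipher and the choice is recorded for decryption. The sketch below is this reading only, hedged as an assumption about the approach; the two "ciphers" are standard-library toys standing in for real algorithms such as AES or Twofish:

```python
import random

def rot(data: bytes, k: int) -> bytes:            # toy cipher 1 (not secure)
    return bytes((b + k) % 256 for b in data)

def xor(data: bytes, k: int) -> bytes:            # toy cipher 2 (not secure)
    return bytes(b ^ k for b in data)

# name -> (encrypt, decrypt); real systems would hold AES/Twofish here
CIPHERS = {"rot": (lambda d: rot(d, 7), lambda d: rot(d, -7)),
           "xor": (lambda d: xor(d, 0x5A), lambda d: xor(d, 0x5A))}

def encrypt_blocks(data: bytes, block=4, rng=None):
    """Split data into blocks; assign each block a random cipher."""
    rng = rng or random
    out = []
    for i in range(0, len(data), block):
        name = rng.choice(list(CIPHERS))          # random per-block assignment
        out.append((name, CIPHERS[name][0](data[i:i + block])))
    return out

def decrypt_blocks(blocks):
    """Invert each block with the cipher recorded alongside it."""
    return b"".join(CIPHERS[name][1](ct) for name, ct in blocks)
```

An attacker who breaks one cipher then recovers only the blocks assigned to it, which is one motivation for mixing algorithms across blocks.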


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Sujan Saha ◽  
Sukumar Mandal

Purpose These projects aim to improve library services for users in the future by combining Linked Open Data (LOD) technology with data visualization, displaying and analysing search results in an intuitive manner. These services are enhanced by integrating various LOD technologies into the authority control system. Design/methodology/approach LOD technology is used to access, recycle, share, exchange and disseminate information, among other things. This study evaluates the applicability of Linked Data technologies for the development of library information services. Findings Apache Hadoop is used for rapidly storing and processing massive Linked Data data sets. Apache Spark is a free and open-source data processing tool. Hive is a SQL-based data warehouse that enables data scientists to write, read and manage petabytes of data. Originality/value Apache HBase is a distributed big-data storage system that does not use SQL. This study's goal is to search the geographic, authority and bibliographic databases for relevant links found on various websites. When data items are linked together, all of the data bits are linked together as well. The study observed and evaluated the tools and processes and recorded each data item's URL. As a result, data can be combined across silos, enhanced by third-party data sources and contextualized.

