Science DMZ network architecture deployment and performance evaluation to large scaled data transfer efficiency

2018 ◽  
Vol 7 (3.3) ◽  
pp. 191
Author(s):  
Jong Seon Park ◽  
Seung Hae Kim ◽  
Min Ki Noh ◽  
Bu Seung Cho

Background/Objectives: Transferring large-scale scientific data, such as that produced in high-energy physics, astronomy, and supercomputing simulation, has recently become a major issue. Solving the transfer problem and increasing transfer efficiency requires a multi-dimensional approach. Methods/Statistical analysis: To improve transfer performance, approaches targeting individual components such as network equipment, transmission protocols, and transfer applications have been suggested. Work on TCP congestion control algorithms and on parallelizing data transfer channels are representative examples. However, solutions that address each component in isolation are limited in how far they can maximize transmission efficiency. Findings: Science DMZ is a new network architecture that can maximize transfer performance. It maximizes transfer efficiency by addressing all components together: network equipment, a dedicated network path, transfer applications, and local institutional firewall policies. By combining these components, the Science DMZ network architecture can greatly improve transfer efficiency. In this paper, we design and construct a Science DMZ network architecture between two organizations that utilize supercomputing resources based on KREONET and evaluate its performance. Improvements/Applications: After configuring the experimental environment, we measured network performance with iperf and file transfer performance with SCP. The results showed roughly a 388% improvement over the existing method.
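The iperf measurement step can be sketched as follows: a minimal Python wrapper around the iperf3 client that runs a parallel-stream test and reports received throughput. The host name is a placeholder, and an iperf3 server is assumed to be listening on the remote data transfer node.

```python
import json
import subprocess

def measure_throughput(server: str, seconds: int = 10, streams: int = 4) -> float:
    """Run an iperf3 client test and return received throughput in Gbit/s.

    Assumes an iperf3 server is already running on `server` (placeholder host).
    """
    result = subprocess.run(
        ["iperf3", "-c", server, "-t", str(seconds), "-P", str(streams), "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    bits_per_second = report["end"]["sum_received"]["bits_per_second"]
    return bits_per_second / 1e9

if __name__ == "__main__":
    # "dtn.example.org" is a placeholder for a Science DMZ data transfer node.
    print(f"{measure_throughput('dtn.example.org'):.2f} Gbit/s")
```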

Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3738 ◽  
Author(s):  
Marcio Miguel ◽  
Edgard Jamhour ◽  
Marcelo Pellenz ◽  
Manoel Penna

Wireless sensor networks (WSN) are being increasingly used for data acquisition and control of remote devices. However, they present some constraints in critical and large-scale scenarios. The main limitations come from the nature of their components, such as lossy links, and devices with power supply limitations, poor processing power and limited memory. The main feature of software-defined networks (SDN) is the separation between the control plane and the data plane, making a logically unified view of the topology available in the controllers. In this way, it is possible to build network applications that take this unified view into account, which makes SDN an alternative approach to solving the mentioned limitations. This paper presents the SD6WSN (software-defined 6LoWPAN wireless sensor network) architecture, developed to control the behavior of the data traffic in 6LoWPAN according to the SDN approach. It takes into account the specific characteristics of WSN devices, such as low data transfer rate, high latency, packet loss and low processing power, and takes advantage of the flexibility provided by flow-based forwarding, allowing the development of specific networking applications based on a unified view. We provide a detailed description of how we have implemented SD6WSN in the Contiki operating system. The new architecture is assessed in two experiments. The first considers a typical advanced metering infrastructure (AMI) network and measures the overhead of SD6WSN control messages in configurations involving different path lengths. The results indicate that the overhead introduced is not excessive, given the advantages that the SDN approach can bring. The second considers a grid topology to evaluate the average latency of peer-to-peer communication. The average latency observed in SD6WSN is considerably lower than that obtained with standard 6LoWPAN, showing the potential of the proposed approach.
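The flow-based forwarding at the heart of this approach can be illustrated with a minimal sketch; the FlowEntry fields and first-match lookup below are simplified illustrative stand-ins, not the actual SD6WSN message formats or data structures.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FlowEntry:
    """Hypothetical flow entry: match on a destination prefix, forward to a next hop."""
    dst_prefix: str      # e.g. an abbreviated IPv6 prefix
    next_hop: str        # link-layer neighbour chosen by the controller

class FlowTable:
    def __init__(self) -> None:
        self.entries: list[FlowEntry] = []

    def install(self, entry: FlowEntry) -> None:
        # In SD6WSN-like designs, the controller pushes entries such as this one.
        self.entries.append(entry)

    def lookup(self, dst: str) -> Optional[str]:
        # First-match lookup; a miss would trigger a request to the controller.
        for entry in self.entries:
            if dst.startswith(entry.dst_prefix):
                return entry.next_hop
        return None  # controller involvement needed (packet-in analogue)

table = FlowTable()
table.install(FlowEntry(dst_prefix="2001:db8:1", next_hop="node-7"))
print(table.lookup("2001:db8:1::42"))  # -> node-7
```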


2021 ◽  
Vol 251 ◽  
pp. 02023
Author(s):  
Maria Arsuaga-Rios ◽  
Vladimír Bahyl ◽  
Manuel Batalha ◽  
Cédric Caffy ◽  
Eric Cano ◽  
...  

The CERN IT Storage Group ensures the symbiotic development and operations of storage and data transfer services for all CERN physics data, in particular the data generated by the four LHC experiments (ALICE, ATLAS, CMS and LHCb). In order to accomplish the objectives of the next run of the LHC (Run-3), the Storage Group has undertaken a thorough analysis of the experiments’ requirements, matching them to the appropriate storage and data transfer solutions, and undergoing a rigorous programme of testing to identify and solve any issues before the start of Run-3. In this paper, we present the main challenges presented by each of the four LHC experiments. We describe their workflows, in particular how they communicate with and use the key components provided by the Storage Group: the EOS disk storage system; its archival back-end, the CERN Tape Archive (CTA); and the File Transfer Service (FTS). We also describe the validation and commissioning tests that have been undertaken and challenges overcome: the ATLAS stress tests to push their DAQ system to its limits; the CMS migration from PhEDEx to Rucio, followed by large-scale tests between EOS and CTA with the new FTS “archive monitoring” feature; the LHCb Tier-0 to Tier-1 staging tests and XRootD Third Party Copy (TPC) validation; and the erasure coding performance in ALICE.
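As one concrete illustration of the XRootD Third Party Copy (TPC) mode mentioned above, a transfer can be driven from a third host so that data flows directly between the two storage endpoints rather than through the orchestrating machine. The sketch below simply wraps the xrdcp client from Python; the endpoint URLs are placeholders.

```python
import subprocess

def third_party_copy(src: str, dst: str) -> None:
    """Ask the source and destination servers to copy directly between
    themselves ('third party'), rather than routing data via this host."""
    subprocess.run(["xrdcp", "--tpc", "only", src, dst], check=True)

# Placeholder XRootD URLs; real endpoints would be EOS/CTA instances.
third_party_copy(
    "root://source.example.cern.ch//eos/experiment/file.root",
    "root://dest.example.cern.ch//eos/experiment/file.root",
)
```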


2020 ◽  
Vol 35 (33) ◽  
pp. 2043003
Author(s):  
Arundhati Banerjee ◽  
Ivan Kisel ◽  
Maksym Zyzak

In high-energy particle colliders, detectors record millions of data points during collision events. Good data analysis therefore depends on distinguishing collisions that produce particles of interest (signal) from those producing other particles (background). Machine learning algorithms have recently become popular and useful as the method of choice for such large-scale data analysis. In this work, we propose and implement an artificial neural network architecture for precisely identifying the parent particles among all the candidates arising from track reconstruction of collision data in the future Compressed Baryonic Matter (CBM) experiment. Our framework performs comparably to the existing computational algorithm for this task, even with a simple network architecture.
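A minimal sketch of such a signal-versus-background candidate classifier, using PyTorch; the feature count, layer sizes, and training loop are illustrative assumptions, not the architecture actually used for CBM.

```python
import torch
import torch.nn as nn

# Illustrative: each candidate is described by a few reconstructed-track
# features (e.g. invariant mass, chi^2, decay length); sizes are assumptions.
N_FEATURES = 8

model = nn.Sequential(
    nn.Linear(N_FEATURES, 32),
    nn.ReLU(),
    nn.Linear(32, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
    nn.Sigmoid(),  # probability that the candidate is signal, not background
)

loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(features: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of candidates (labels: 1 = signal)."""
    optimizer.zero_grad()
    predictions = model(features).squeeze(1)
    loss = loss_fn(predictions, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Random stand-in batch, just to show the shapes involved.
batch = torch.randn(64, N_FEATURES)
labels = torch.randint(0, 2, (64,)).float()
print(train_step(batch, labels))
```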


2020 ◽  
Vol 2020 (10) ◽  
pp. 181-1-181-7
Author(s):  
Takahiro Kudo ◽  
Takanori Fujisawa ◽  
Takuro Yamaguchi ◽  
Masaaki Ikehara

Image deconvolution has recently become an important issue. There are two kinds of approaches: non-blind and blind. Non-blind deconvolution is the classic image deblurring problem, which assumes that the PSF is known and spatially uniform. Recently, Convolutional Neural Networks (CNNs) have been used for non-blind deconvolution. Though CNNs can deal with complex changes in unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs encountered in the real world. In this paper we propose a non-blind deconvolution framework based on a CNN that can remove large-scale ringing in a deblurred image. Our method has three key points. The first is that our network architecture preserves both large and small features in the image. The second is that the training dataset is created to preserve details. The third is that we extend the images to minimize the effect of large ringing at the image borders. In our experiments, we used three kinds of large PSFs and observed high-precision results from our method, both quantitatively and qualitatively.
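The third point, extending the image to push border ringing outside the region of interest, can be sketched as follows; the reflective padding mode and the margin size are illustrative choices rather than the paper's exact procedure.

```python
import numpy as np

def extend_for_deconvolution(image: np.ndarray, psf_size: int) -> np.ndarray:
    """Pad the image so ringing from the borders falls outside the
    region of interest; reflection avoids hard edges at the boundary."""
    margin = psf_size  # illustrative: margin on the order of the PSF support
    return np.pad(image, pad_width=margin, mode="reflect")

def crop_after_deconvolution(image: np.ndarray, psf_size: int) -> np.ndarray:
    """Undo the padding once the deblurred estimate has been computed."""
    margin = psf_size
    return image[margin:-margin, margin:-margin]

blurred = np.random.rand(128, 128)          # stand-in blurred image
extended = extend_for_deconvolution(blurred, psf_size=31)
print(extended.shape)                        # (190, 190)
```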


2015 ◽  
Vol 51 (91) ◽  
pp. 16381-16384 ◽  
Author(s):  
Yuelong Xin ◽  
Liya Qi ◽  
Yiwei Zhang ◽  
Zicheng Zuo ◽  
Henghui Zhou ◽  
...  

A novel organic solvent-assisted freeze-drying pathway, which can effectively protect and uniformly distribute active particles, is developed to fabricate a free-standing Li2MnO3·LiNi1/3Co1/3Mn1/3O2 (LR)/rGO electrode on a large scale.


2021 ◽  
Vol 13 (9) ◽  
pp. 5108
Author(s):  
Navin Ranjan ◽  
Sovit Bhandari ◽  
Pervez Khan ◽  
Youn-Sik Hong ◽  
Hoon Kim

The transportation system, especially the road network, is the backbone of any modern economy. However, with rapid urbanization, congestion levels have surged drastically, directly affecting the quality of urban life, the environment, and the economy. In this paper, we propose (i) an inexpensive and efficient Traffic Congestion Pattern Analysis algorithm based on image processing, which identifies the groups of roads in a network that suffer from recurring congestion; and (ii) a deep neural network architecture, formed from a Convolutional Autoencoder, which learns both spatial and temporal relationships from a sequence of image data to predict the city-wide grid congestion index. Our experiments show that both algorithms are efficient: the pattern analysis relies only on basic arithmetic operations, while the prediction algorithm outperforms two other deep neural networks (a Convolutional Recurrent Autoencoder and ConvLSTM) in large-scale traffic network prediction. A case study was conducted on a dataset from the city of Seoul.
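A minimal convolutional-autoencoder sketch for grid-shaped congestion maps, in PyTorch; the grid size, channel counts, and the idea of stacking past time steps as input channels are assumptions made for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Illustrative: predict the next city-wide congestion grid from the last
# T frames, stacked as input channels (sizes are assumptions).
T, H, W = 4, 32, 32

class ConvAutoencoder(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(T, 16, kernel_size=3, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),   # 16 -> 32
            nn.Sigmoid(),  # congestion index normalised to [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
past_frames = torch.rand(8, T, H, W)   # batch of 8 input sequences
print(model(past_frames).shape)        # torch.Size([8, 1, 32, 32])
```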


Author(s):  
Zhiqiang Luo ◽  
Silin Zheng ◽  
Shuo Zhao ◽  
Xin Jiao ◽  
Zongshuai Gong ◽  
...  

Benzoquinone with high theoretical capacity is anchored on N-plasma engraved porous carbon as a desirable cathode for rechargeable aqueous Zn-ion batteries. Such batteries display tremendous potential in large-scale energy storage applications.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Lorenz T. Keyßer ◽  
Manfred Lenzen

1.5 °C scenarios reported by the Intergovernmental Panel on Climate Change (IPCC) rely on combinations of controversial negative emissions and unprecedented technological change, while assuming continued growth in gross domestic product (GDP). Thus far, the integrated assessment modelling community and the IPCC have neglected to consider degrowth scenarios, where economic output declines due to stringent climate mitigation. Hence, their potential to avoid reliance on negative emissions and speculative rates of technological change remains unexplored. As a first step to address this gap, this paper compares 1.5 °C degrowth scenarios with IPCC archetype scenarios, using a simplified quantitative representation of the fuel-energy-emissions nexus. Here we find that the degrowth scenarios minimize many key risks for feasibility and sustainability compared to technology-driven pathways, such as the reliance on high energy-GDP decoupling, large-scale carbon dioxide removal and large-scale and high-speed renewable energy transformation. However, substantial challenges remain regarding political feasibility. Nevertheless, degrowth pathways should be thoroughly considered.
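The fuel-energy-emissions nexus can be made concrete with a Kaya-style decomposition, emissions = GDP × (energy/GDP) × (CO2/energy). The sketch below contrasts a growth-plus-decoupling pathway with a degrowth pathway over ten years; all numbers are purely illustrative and are not the paper's scenario values.

```python
def annual_emissions(gdp: float, energy_intensity: float, carbon_intensity: float) -> float:
    """Kaya-style decomposition: emissions = GDP x (energy/GDP) x (CO2/energy)."""
    return gdp * energy_intensity * carbon_intensity

# Purely illustrative baseline (arbitrary units).
gdp, e_int, c_int = 100.0, 1.0, 1.0
base = annual_emissions(gdp, e_int, c_int)

# Technology-driven pathway: GDP grows 2%/yr, both intensities fall 5%/yr.
tech = annual_emissions(gdp * 1.02**10, e_int * 0.95**10, c_int * 0.95**10)

# Degrowth pathway: GDP shrinks 2%/yr, intensities fall only 3%/yr each.
degrowth = annual_emissions(gdp * 0.98**10, e_int * 0.97**10, c_int * 0.97**10)

print(f"baseline {base:.1f}, tech {tech:.1f}, degrowth {degrowth:.1f}")
# Both pathways cut emissions; degrowth relies less on rapid decoupling.
```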


2021 ◽  
Vol 77 (2) ◽  
pp. 98-108
Author(s):  
R. M. Churchill ◽  
C. S. Chang ◽  
J. Choi ◽  
J. Wong ◽  
S. Klasky ◽  
...  

2021 ◽  
Vol 40 (3) ◽  
pp. 1-13
Author(s):  
Lumin Yang ◽  
Jiajie Zhuang ◽  
Hongbo Fu ◽  
Xiangzhi Wei ◽  
Kun Zhou ◽  
...  

We introduce SketchGNN, a convolutional graph neural network for semantic segmentation and labeling of freehand vector sketches. We treat an input stroke-based sketch as a graph, with nodes representing the sampled points along input strokes and edges encoding the stroke structure information. To predict the per-node labels, our SketchGNN uses graph convolution and a static-dynamic branching network architecture to extract features at three levels: point-level, stroke-level, and sketch-level. SketchGNN significantly improves the accuracy of the state-of-the-art methods for semantic sketch segmentation (by 11.2% in the pixel-based metric and 18.2% in the component-based metric on the large-scale challenging SPG dataset) and has orders of magnitude fewer parameters than both image-based and sequence-based methods.
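The graph construction described above (sampled points as nodes, stroke structure as edges) can be sketched with PyTorch Geometric; the choice of raw coordinates as node features and the toy strokes are illustrative assumptions.

```python
import torch
from torch_geometric.data import Data

def strokes_to_graph(strokes: list[torch.Tensor]) -> Data:
    """Build a graph from a list of strokes, each an (N_i, 2) tensor of
    sampled 2-D points; edges connect consecutive points within a stroke."""
    points, edges, offset = [], [], 0
    for stroke in strokes:
        n = stroke.shape[0]
        points.append(stroke)
        for i in range(n - 1):  # chain edges along the stroke, both directions
            edges.append((offset + i, offset + i + 1))
            edges.append((offset + i + 1, offset + i))
        offset += n
    x = torch.cat(points, dim=0)                      # node features: x, y
    edge_index = torch.tensor(edges, dtype=torch.long).t().contiguous()
    return Data(x=x, edge_index=edge_index)

# Two toy strokes with 3 and 2 sampled points each.
graph = strokes_to_graph([torch.rand(3, 2), torch.rand(2, 2)])
print(graph.num_nodes, graph.edge_index.shape)  # 5, torch.Size([2, 6])
```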

