shared file
Recently Published Documents

Total documents: 46 (five years: 12)
H-index: 4 (five years: 1)

2021 · Vol 251 · pp. 02020
Author(s): C. Acosta-Silva, A. Delgado Peris, J. Flix, J. Frey, J.M. Hernández, ...

CMS is tackling the exploitation of CPU resources at HPC centers whose compute nodes have no network connectivity to the Internet. Pilot agents and payload jobs need to interact with external services from the compute nodes: access to application software (CernVM-FS) and conditions data (Frontier), management of input and output data files (data management services), and job management (HTCondor). Finding an alternative route to these services is challenging. Seamless integration into the CMS production system, without causing any operational overhead, is a key goal. The case of the Barcelona Supercomputing Center (BSC) in Spain is particularly challenging due to its especially restrictive network setup. This paper describes the solutions developed within CMS to overcome these restrictions and integrate this resource into production. Singularity containers with application software releases are built and pre-placed on the HPC facility's shared file system, together with conditions data files. HTCondor has been extended to relay communications between running pilot jobs and HTCondor daemons through the HPC shared file system. This mode of operation also allows input and output data files to be piped through the HPC file system. Results, issues encountered during the integration process, and remaining concerns are discussed.
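The core trick, relaying traffic between daemons over a shared file system, can be illustrated with a minimal Python sketch. Everything here (the spool directory, the file-naming convention, and the JSON payloads) is an assumption for illustration; the real CMS/HTCondor relay uses HTCondor's own on-disk protocol.

```python
import json
import time
from pathlib import Path

# Hypothetical spool directory on the HPC shared file system.
SPOOL = Path("/hpc/shared/htcondor-relay")

def pilot_write_request(job_id: str, payload: dict) -> None:
    """Inside the HPC: publish a request file for the external side to pick up."""
    tmp = SPOOL / f"{job_id}.request.tmp"
    tmp.write_text(json.dumps(payload))
    tmp.rename(SPOOL / f"{job_id}.request")  # rename is atomic on POSIX file systems

def relay_poll(interval: float = 5.0) -> None:
    """Outside the HPC: forward each request and leave a reply file behind."""
    while True:
        for req in SPOOL.glob("*.request"):
            payload = json.loads(req.read_text())
            reply = forward_to_central_pool(payload)  # stand-in for the real relay
            (SPOOL / req.name.replace(".request", ".reply")).write_text(json.dumps(reply))
            req.unlink()
        time.sleep(interval)

def forward_to_central_pool(payload: dict) -> dict:
    # Placeholder: the actual HTCondor daemons speak their own wire protocol;
    # here we only acknowledge the request.
    return {"status": "matched", "job": payload.get("job")}
```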


Author(s): N.G. Nageswari Amma, F. Ramesh Dhanaseelan

In the cloud, data retrieval processes face various privacy and security threats. In this article, the authors propose an efficient method for secure, privacy-preserving data retrieval in the cloud. Initially, the shared file is encrypted with a Vigenère cipher before uploading. To create the privacy map, an efficient classification algorithm is employed: a Modified Artificial Neural Network (MANN) generates the privacy map, and the weight values of the neural network are optimized using a Particle Swarm Optimization (PSO) algorithm. When retrieving files, the user's authorization is first verified against basic identifying information, and then a one-time password (OTP) for the requested files is checked. Since files can be retrieved only after authorization, verification, and decryption, the scheme is highly secure and privacy is preserved. The performance of the proposed method is evaluated in terms of time and accuracy.
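A textbook implementation of the Vigenère step the abstract names might look as follows; the key and plaintext are invented for illustration, and the MANN/PSO privacy-map stage is not shown.

```python
def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    """Classic Vigenère cipher over A-Z; non-letters pass through unchanged."""
    out, k = [], 0
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            shift = ord(key[k % len(key)].upper()) - ord('A')
            if decrypt:
                shift = -shift
            out.append(chr((ord(ch) - base + shift) % 26 + base))
            k += 1
        else:
            out.append(ch)
    return ''.join(out)

cipher = vigenere("shared file contents", "CLOUDKEY")
assert vigenere(cipher, "CLOUDKEY", decrypt=True) == "shared file contents"
```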


Author(s): K. Pubudu Nuwnthika Jayasena, Poddivila Marage Nimasha Ruwandi Madhunamali

The central problem addressed in this research is how blockchain technology can be used in today's food supply chains to deliver greater traceability of assets. The aim is to create a blockchain model for the dairy supply chain that can be applied to any food supply chain, and to present the advantages and limitations of its implementation. Blockchain allows all types of transactions in a supply chain to be monitored more safely and transparently. Adoption of blockchain in supply chains and logistics is currently slow because of associated risks and the lack of demonstrable models. The proposed solution removes the need for a trusted central authority and intermediaries, and provides tamper-evident records of transactions, improving integrity, reliability, security, and efficiency. All transactions are registered and maintained in the blockchain's immutable database, with access through a shared file network.
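A minimal sketch of the underlying idea, hash-chained blocks recording dairy transactions in an append-only ledger; the field names and events are assumptions for illustration, not the paper's actual data model.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """SHA-256 over everything except the hash field itself."""
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    """Chain a batch of supply-chain events to the previous block."""
    block = {"timestamp": time.time(), "transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify_chain(chain: list) -> bool:
    """Tampering with any recorded transaction breaks the hash chain."""
    for i, block in enumerate(chain):
        if block_hash(block) != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block([{"event": "milk collected", "farm": "F-01", "litres": 500}], "0" * 64)
chain = [genesis, make_block([{"event": "received at dairy", "batch": "B-17"}], genesis["hash"])]
assert verify_chain(chain)
```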


IEEE Access · 2021 · pp. 1-1
Author(s): June-Hyung Kim, Youngjae Kim, Safdar Jamil, Chang-Gyu Lee, Sungyong Park

2020 · Vol 245 · pp. 09010
Author(s): Michal Svatoš, Jiří Chudoba, Petr Vokáč

The distributed computing system of the ATLAS experiment at the LHC is allowed to opportunistically use resources at the Czech national HPC center IT4Innovations in Ostrava. Jobs are submitted via an ARC Compute Element (ARC-CE) installed at the grid site in Prague. Scripts and input files are shared between the ARC-CE and a shared file system located at the HPC centre via sshfs. This basic submission system has been in operation since the end of 2017. Several improvements were made to increase the amount of resources that ATLAS can use. The most significant change was the migration of the submission system to support pre-emptable jobs, in response to the HPC management's decision to start pre-empting opportunistic jobs. Another improvement concerned the sshfs connection, which appeared to be a limiting factor of the system: the submission system now consists of several ARC-CE machines, and various sshfs parameters were tested in an attempt to increase throughput. As a result of these improvements, the utilisation of the Czech national HPC center by ATLAS distributed computing increased.
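A hedged sketch of mounting an HPC shared file system over sshfs with throughput-oriented parameters; the host, paths, and option values are assumptions for illustration, not the actual IT4Innovations configuration.

```python
import subprocess

# Illustrative only: hostnames and paths are invented. The paper reports
# testing various sshfs parameters; these are common throughput-related knobs.
cmd = [
    "sshfs",
    "arc-ce@hpc.example.cz:/scratch/atlas",   # remote shared file system
    "/var/spool/arc/sshfs-mount",             # local mount point on the ARC-CE
    "-o", "reconnect",                        # survive dropped SSH sessions
    "-o", "ServerAliveInterval=15",           # keep the tunnel alive
    "-o", "Ciphers=aes128-ctr",               # cheaper cipher, higher throughput
    "-o", "Compression=no",                   # skip compression for bulk data
]
subprocess.run(cmd, check=True)
```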


2020 · Vol 245 · pp. 09007
Author(s): Carles Acosta-Silva, Antonio Delgado Peris, José Flix Molina, Jaime Frey, José M. Hernández, ...

In view of the increasing computing needs for the HL-LHC era, the LHC experiments are exploring new ways to access, integrate, and use non-Grid compute resources. Accessing and making efficient use of Cloud and High Performance Computing (HPC) resources presents a diversity of challenges for the CMS experiment. In particular, network limitations at the compute nodes in HPC centers prevent CMS pilot jobs from connecting to the central HTCondor pool to receive payload jobs for execution. To cope with this limitation, new features have been developed in both HTCondor and the CMS resource acquisition and workload management infrastructure. In this novel approach, a bridge node is set up outside the HPC center and communications between HTCondor daemons are relayed through a shared file system. This forms the basis of the CMS strategy to enable the exploitation of resources at the Barcelona Supercomputing Center (BSC), the main Spanish HPC site. CMS payloads are claimed by HTCondor condor_startd daemons running at the nearby PIC Tier-1 center and routed to BSC compute nodes through the bridge. This fully enables the connectivity of the CMS HTCondor-based central infrastructure to BSC resources via the PIC HTCondor pool. Other challenges include building custom Singularity images with CMS software releases, delivering conditions data to payload jobs, and custom data handling between BSC and PIC. This report describes the initial technical prototype, its deployment and tests, and future steps. A key aspect of the technique described in this contribution is that it could be employed in similar network-restrictive HPC environments elsewhere.
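The custom data handling between BSC and PIC could, in spirit, resemble the following checksummed staging through the shared file system; the directory layout and checksum convention are assumptions for illustration, not the deployed CMS workflow.

```python
import hashlib
import shutil
from pathlib import Path

# Assumed layout: a directory visible to both PIC and the BSC compute nodes.
SHARED_IN = Path("/gpfs/shared/cms/stage-in")

def sha256sum(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def stage_in(src: Path) -> Path:
    """PIC side: place an input file plus its checksum on the shared file system."""
    dst = SHARED_IN / src.name
    shutil.copy2(src, dst)
    dst.with_name(dst.name + ".sha256").write_text(sha256sum(dst))
    return dst

def fetch_input(name: str, workdir: Path) -> Path:
    """BSC side: copy the input to the node's work directory and verify it."""
    src = SHARED_IN / name
    expected = src.with_name(src.name + ".sha256").read_text().strip()
    local = workdir / name
    shutil.copy2(src, local)
    if sha256sum(local) != expected:
        raise IOError(f"checksum mismatch while staging {name}")
    return local
```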


2019 · Vol 16 (12) · pp. 5067-5072
Author(s): Shakti Arora, Surjeet Dalal

The cloud has become popular in today's IT industry because all types of services, whether hardware, software, or storage, can be obtained in one place and used efficiently. Using hardware and software carries relatively little risk, but with storage we place personal data on the cloud, which is not transparent to the user. The proposed model introduces a hybrid technique that provides security and assurance beyond the Service Level Agreement. We propose a strong integrity verification mechanism applied at the time of recovering a file. The hardness/strength of the key for the shared file is increased to a maximum level of 7.9. The integrity of the proposed system is compared with that of a standard cloud, and we obtain approximately 60% higher integrity than standard cloud nodes.
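A minimal sketch of integrity verification at file-recovery time, using an HMAC tag as a stand-in; this illustrates the verification idea only, not the paper's specific hybrid scheme, and the key handling shown is an assumption.

```python
import hashlib
import hmac

def tag_on_upload(data: bytes, key: bytes) -> str:
    """Compute an integrity tag stored alongside the file at upload time."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_on_recovery(data: bytes, key: bytes, stored_tag: str) -> bool:
    """Recompute the tag on retrieval and compare in constant time."""
    return hmac.compare_digest(tag_on_upload(data, key), stored_tag)

key = b"owner-secret-key"           # assumed to be shared owner/verifier secret
blob = b"shared file contents"
tag = tag_on_upload(blob, key)
assert verify_on_recovery(blob, key, tag)              # intact file passes
assert not verify_on_recovery(blob + b"x", key, tag)   # tampering is detected
```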


2019
Author(s): Nikolas I. Krieger, Adam T. Perzynski, Jarrod E. Dalton

The contemporary scientific community places a growing emphasis on the reproducibility of research. The projects R package is a free, open-source endeavor created to facilitate reproducible research workflows. It adds to existing software tools for reproducible research and introduces several practical features that are helpful for scientists and their collaborative research teams. For each individual project, it supplies an intuitive framework for storing raw and cleaned study data sets, and provides script templates for protocol creation, data cleaning, data analysis, and manuscript development. Internal databases of project and author information are generated and displayed, and manuscript title pages containing author lists and their affiliations are automatically generated from the internal database. File management tools allow teams to organize multiple projects. When used on a shared file system, multiple researchers can contribute to the same project harmoniously and with fewer interruptions, reducing the frequency of misunderstandings and the need for status updates.
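The projects package itself is written in R; as a language-neutral illustration of the kind of standardized scaffold such a tool creates, here is a small Python sketch. The directory and template names are assumptions, not the package's actual layout or API.

```python
from pathlib import Path

def scaffold_project(root: str, name: str) -> Path:
    """Create a standardized per-project layout on a (possibly shared) file system."""
    project = Path(root) / name
    for sub in ("data_raw", "data", "progs", "manuscript", "figures"):
        (project / sub).mkdir(parents=True, exist_ok=True)
    # Placeholder script templates, numbered in workflow order.
    for template in ("01_protocol.md", "02_datawork.R", "03_analysis.R"):
        (project / "progs" / template).touch()
    return project

scaffold_project("/shared/research", "cohort_study_01")
```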


2019 · Vol 214 · pp. 04005
Author(s): Jan Knedlik, Paul Kramp, Kilian Schwarz, Thorsten Kollegger

XRootD has been established as a standard for WAN data access in HEP and HENP. Site-specific features, like those existing at GSI, have historically been hard to implement with native methods. XRootD allows basic functionality of native XRootD functions to be replaced through custom plug-ins, which XRootD clients have supported since version 4.0. In this contribution, we present XRootD-based developments motivated by their use in the current ALICE Tier 2 Centre at GSI and the upcoming ALICE Analysis Facility. Among other things, we present an XRootD redirector plug-in which redirects local clients directly to a shared file system, as well as the required changes to the XRootD base code, which are publicly available since XRootD version 4.8.0. Furthermore, a prototype for an XRootD-based disk caching system for opportunistic resources has been developed.
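The redirector plug-in itself lives in the XRootD client (C++); the decision it implements can be sketched in Python as follows, with the shared file-system prefix and the URL layout assumed for illustration.

```python
import os

# Sketch of the redirector idea only: the real implementation is an XRootD
# client plug-in; the mount prefix and example URL are assumptions.
SHARED_FS_PREFIX = "/lustre/alice"   # shared file system mounted on local nodes

def resolve(xrootd_url: str) -> str:
    """Send local clients straight to the shared file system when the file
    is directly reachable; otherwise keep the XRootD URL."""
    _, _, rest = xrootd_url.partition("//")   # strip "root:"
    _, _, path = rest.partition("//")         # strip the redirector host
    local_path = "/" + path.lstrip("/")
    if local_path.startswith(SHARED_FS_PREFIX) and os.path.exists(local_path):
        return local_path                     # POSIX access, no data-server hop
    return xrootd_url                         # fall back to remote XRootD access

print(resolve("root://redirector.example.gsi.de//lustre/alice/sim/run123/AOD.root"))
```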

