Direct Time Domain Simulations for a FPSO Tandem Offloading Operation

Author(s):  
Bonjun Koo ◽  
Manoj Jegannathan ◽  
Johyun Kyoung ◽  
Ho-Joon Lim

Abstract: In this study, direct time domain offloading simulations are conducted without condensing the metocean data, using High Performance Computing (HPC). With rapidly growing computing power from increased CPU speeds and parallel processing capability, direct time domain simulation for offloading analyses has become a practical option. For instance, 3-hour time domain simulations covering the entire service life of a floating platform (e.g., 100,000 simulations for 35 years) can now be conducted within a day. The simulation results provide realistic offloading operational time windows that account for both the offloading operation sequence (i.e., berthing, connection, offloading duration, and disconnection) and the required criteria (i.e., relative responses, loads on the hawser and flow line, etc.). The direct time domain offloading analyses improve the prediction of offloading operability, the sizing of the FPSO tank capacity, and the selection of the shuttle tanker. In addition, this method enables accurate evaluation of the economic feasibility of field development using FPSOs.
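
A minimal sketch of how such an embarrassingly parallel campaign might be orchestrated, assuming a hypothetical `run_simulation` kernel and invented operability thresholds (the real analysis would call a coupled time-domain solver for each sea state):

```python
# Minimal sketch (not the authors' code): farming out many independent
# 3-hour time-domain offloading simulations across cores and counting
# operational windows against example criteria. The thresholds, the
# run_simulation stub, and the sea-state list are illustrative assumptions.
import random
from multiprocessing import Pool

# Hypothetical operability criteria (placeholder values).
MAX_HAWSER_LOAD_KN = 2000.0   # peak hawser tension limit
MAX_RELATIVE_SURGE_M = 5.0    # FPSO/shuttle-tanker relative motion limit

def run_simulation(seastate):
    """Stand-in for one 3-hour coupled FPSO/shuttle-tanker simulation.

    A real solver would integrate the equations of motion for the given
    sea state; here we draw synthetic response statistics so the
    orchestration pattern is runnable end to end.
    """
    hs, tp = seastate
    rng = random.Random(hash(seastate))
    peak_hawser = rng.uniform(0.3, 1.5) * hs * 300.0   # synthetic, kN
    peak_rel_surge = rng.uniform(0.2, 1.2) * hs        # synthetic, m
    return (peak_hawser <= MAX_HAWSER_LOAD_KN
            and peak_rel_surge <= MAX_RELATIVE_SURGE_M)

if __name__ == "__main__":
    # One entry per 3-hour sea state; a 35-year hindcast would hold
    # roughly 100,000 such entries.
    seastates = [(random.uniform(0.5, 8.0), random.uniform(4.0, 16.0))
                 for _ in range(10_000)]
    with Pool() as pool:                     # one worker per core
        results = pool.map(run_simulation, seastates)
    operability = sum(results) / len(results)
    print(f"offloading operability: {operability:.1%}")
```

Because each 3-hour sea state is independent, the sweep scales almost linearly with core count, which is what makes covering the full service life within a day feasible on an HPC cluster.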

2020 ◽  
Author(s):  
Hamza Ali Imran

Applications such as big data, machine learning, and deep learning, as well as other engineering and scientific research, require a lot of computing power, making high-performance computing (HPC) an important field. But access to supercomputers is out of reach for the majority. Nowadays, supercomputers are actually clusters of computers, usually made up of commodity hardware. Such clusters are called Beowulf clusters, whose history goes back to 1994, when NASA built a supercomputer by creating a cluster of commodity hardware. In recent times, a lot of effort has gone into making HPC clusters even from single-board computers (SBCs). Although the creation of clusters of commodity hardware is possible, it is a cumbersome task. Moreover, the maintenance of such systems is difficult and requires special expertise and time. The concept of the cloud is to provide on-demand resources, whether services, platforms, or even infrastructure, by sharing a big resource pool. Cloud computing has resolved problems such as hardware maintenance and the need for networking expertise. In this work, an effort is made to bring concepts from cloud computing to HPC in order to obtain the benefits of the cloud. The main target is to create a system capable of providing computing power as a service, further referred to as Supercomputer as a Service. A prototype was made using Raspberry Pi (RPi) 3B and 3B+ single-board computers. The reason for using RPi boards was the increasing popularity of ARM processors in the field of HPC.
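
The kind of job such a cluster exists to run is a message-passing program distributed across the boards. Below is a minimal sketch using mpi4py, not the paper's prototype code: each node integrates a slice of 4/(1+x²) and the partial sums are reduced to π on rank 0.

```python
# Minimal sketch of a classic Beowulf-style MPI job (illustrative only):
# every node computes part of a numerical integral and rank 0 collects
# the result. Runs unchanged on Raspberry Pi nodes or a laptop.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()       # this node's index
size = comm.Get_size()       # total processes in the cluster

n = 1_000_000                # integration steps, split across processes
h = 1.0 / n
partial = sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2)
              for i in range(rank, n, size)) * h

pi = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"pi ~= {pi:.6f} computed on {size} process(es)")
```

On a Beowulf cluster this would be launched with something like `mpirun -np 4 --hostfile hosts python pi_mpi.py`, where `hosts` lists the boards' addresses.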


Author(s):  
Jeremy Cohen ◽  
John Darlington

As computing power continues to grow and high performance computing use increases, ever bigger scientific experiments and tasks can be carried out. However, managing the computing power necessary to support these ever-growing tasks is becoming increasingly difficult. The increased power consumption, heat generation, and space costs of the larger numbers of resources required can make local hosting of resources too expensive. The emergence of utility computing platforms offers a solution. We present our recent work to develop an update to our computational markets environment that supports application deployment and brokering across multiple utility computing environments. We develop a prototype to demonstrate the potential benefits of such an environment and look at the longer-term changes in the use of computing that such developments might enable.
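
As a rough illustration of the brokering idea, and not the authors' actual marketplace, the toy sketch below picks the cheapest utility platform that can satisfy an application's resource request; all platform names and prices are invented:

```python
# Illustrative sketch only: a toy broker that selects the cheapest
# utility computing platform able to host a job. Real brokering would
# also weigh data locality, trust, and deployment constraints.
from dataclasses import dataclass

@dataclass
class Platform:
    name: str
    free_cores: int
    free_mem_gb: int
    price_per_core_hour: float

def broker(platforms, cores, mem_gb, hours):
    """Return the cheapest platform that fits the request, or None."""
    fits = [p for p in platforms
            if p.free_cores >= cores and p.free_mem_gb >= mem_gb]
    if not fits:
        return None
    return min(fits, key=lambda p: p.price_per_core_hour * cores * hours)

offers = [
    Platform("utility-a", free_cores=128, free_mem_gb=512, price_per_core_hour=0.09),
    Platform("utility-b", free_cores=64, free_mem_gb=256, price_per_core_hour=0.06),
]
choice = broker(offers, cores=32, mem_gb=128, hours=12)
print(choice.name if choice else "no platform can host the job")
```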


Symmetry ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 1029
Author(s):  
Anabi Hilary Kelechi ◽  
Mohammed H. Alsharif ◽  
Okpe Jonah Bameyi ◽  
Paul Joan Ezra ◽  
Iorshase Kator Joseph ◽  
...  

Power-consuming entities such as high performance computing (HPC) sites and large data centers are growing with the advance of information technology. In business, HPC is used to shorten product delivery time, reduce production cost, and decrease the time it takes to develop a new product. Today's high level of computing power from supercomputers comes at the expense of consuming large amounts of electric power. To minimize the energy utilized by HPC entities, it is necessary to reduce both the energy required by the computing systems and the resources needed to operate them. System energy efficiency could be improved by sampling the power consumption of all components at regular intervals and storing the information in a database; the stored information then serves as input data for energy-efficiency optimization. In addition, device workload information and other usage metrics are stored in the database. There has been strong momentum in the area of artificial intelligence (AI) as a tool for optimization and process automation that leverages existing information. This paper discusses ideas for improving the energy efficiency of HPC using AI.
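
The following sketch illustrates the database idea described above, under an assumed schema with synthetic readings; a trivial rule stands in for the AI model the paper envisions:

```python
# Minimal sketch (assumed schema, synthetic data): component power draw
# is sampled at regular intervals into SQLite, and the stored history
# then feeds a simple energy-saving decision. A real system would train
# an AI model on this history instead of the threshold rule below.
import random
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE power_samples (
                  ts INTEGER, component TEXT, watts REAL, load REAL)""")

# Stand-in sampler: synthetic CPU readings at regular (simulated) intervals.
for ts in range(3600):
    load = random.random()
    watts = 40 + 160 * load + random.gauss(0, 5)
    db.execute("INSERT INTO power_samples VALUES (?, 'cpu0', ?, ?)",
               (ts, watts, load))

# The stored history becomes input data for the optimization step: here,
# down-clock a component whose average load is low but power draw high.
avg_watts, avg_load = db.execute(
    "SELECT AVG(watts), AVG(load) FROM power_samples WHERE component='cpu0'"
).fetchone()
if avg_load < 0.3 and avg_watts > 60:
    print(f"cpu0: avg {avg_watts:.0f} W at {avg_load:.0%} load -> down-clock")
else:
    print(f"cpu0: avg {avg_watts:.0f} W at {avg_load:.0%} load -> keep frequency")
```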


Electronics ◽  
2021 ◽  
Vol 10 (18) ◽  
pp. 2251
Author(s):  
Giuseppe Di Modica ◽  
Luca Evangelisti ◽  
Luca Foschini ◽  
Assimo Maris ◽  
Sonia Melandri

In recent years, the development of broadband chirped-pulse Fourier transform microwave spectrometers has revolutionized the field of rotational spectroscopy. Currently, it is possible to experimentally obtain a large quantity of spectra that would be difficult to analyze manually, for two main reasons: first, recent instruments can acquire a considerable amount of data in a very short time, and second, it is possible to analyze complex mixtures of molecules that all contribute to the density of the spectra. AUTOFIT is a spectral assignment software application that was developed in 2013 to support and facilitate this analysis. Notwithstanding the benefits AUTOFIT brings in terms of automating the analysis of the accumulated data, it still does not guarantee good performance in terms of execution time, because it leverages the computing power of a single machine. To address this requirement, we developed a parallel version of AUTOFIT, called HS-AUTOFIT, capable of running on high-performance computing (HPC) clusters to shorten the time needed to explore and analyze spectral big data. In this paper, we report tests conducted on a real HPC cluster aimed at quantitatively assessing HS-AUTOFIT's scaling capabilities in a multi-node computing context. The collected results demonstrate the benefits of the proposed approach in terms of a significant reduction in computing time.
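
To illustrate the scan-and-score workload that such a tool parallelizes, here is a toy stand-in, not HS-AUTOFIT's real fitting kernel: candidate rotational constants are split across worker processes, each scoring predicted transitions against a synthetic experimental peak list using the simple linear-rotor model f = 2B(J+1).

```python
# Toy stand-in for a parallel spectral-assignment scan (illustrative
# only): each worker scores a slice of candidate B constants against an
# "observed" peak list. All frequencies and tolerances are invented.
from multiprocessing import Pool

PEAKS_MHZ = [4590.1, 9180.3, 13770.2, 18360.5]   # synthetic experiment
TOL_MHZ = 0.5

def score(b_mhz):
    """Count predicted lines 2B(J+1) that land on an observed peak."""
    predicted = [2.0 * b_mhz * (j + 1) for j in range(10)]
    hits = sum(any(abs(f - p) < TOL_MHZ for p in PEAKS_MHZ)
               for f in predicted)
    return hits, b_mhz

if __name__ == "__main__":
    candidates = [2200.0 + 0.01 * i for i in range(20_000)]  # B grid, MHz
    with Pool() as pool:              # on a cluster: one chunk per node
        best_hits, best_b = max(pool.map(score, candidates))
    print(f"best B ~ {best_b:.2f} MHz ({best_hits} matched lines)")
```

Since every candidate constant is scored independently, the grid can be chunked across cluster nodes, which is the property a multi-node version exploits to cut execution time.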


2018 ◽  
Vol 2018 ◽  
pp. 1-15 ◽  
Author(s):  
Álvaro Brandón ◽  
María S. Pérez ◽  
Jesus Montes ◽  
Alberto Sanchez

Monitoring has always been a key element in ensuring the performance of complex distributed systems, being a first step toward controlling quality of service, detecting anomalies, or making decisions about resource allocation and job scheduling, to name a few. Edge computing is a new type of distributed computing in which data processing is performed by a large number of heterogeneous devices close to the place where the data is generated. Some of the differences between this approach and more traditional architectures, such as cloud or high performance computing, are that these devices have low computing power, have unstable connectivity, and are geo-distributed or even mobile. These characteristics establish new requirements for monitoring tools, such as customized monitoring workflows or choosing different back-ends for the metrics depending on the device hosting them. In this paper, we present a study of the requirements that an edge monitoring tool should meet, based on motivating scenarios drawn from the literature. Additionally, we implement these requirements in a monitoring tool named FMonE. This framework allows deploying monitoring workflows that conform to the specific demands of edge computing systems. We evaluate FMonE by simulating a fog environment in the Grid'5000 testbed, and we demonstrate that it fulfills the requirements we previously enumerated.
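
As an illustration of one such requirement, choosing different back-ends for the metrics depending on the host device, the sketch below shows the general pattern; it does not reproduce FMonE's actual API, and the device fields and back-end names are assumptions:

```python
# Illustrative pattern only (not FMonE's API): an edge monitoring
# workflow that picks a metrics back-end per device, buffering locally
# on intermittently connected nodes and pushing directly from
# well-connected ones.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    mem_mb: int
    stable_link: bool

def choose_backend(dev):
    """Map a device's constraints to a metrics back-end."""
    if not dev.stable_link:
        return "local-ring-buffer"      # flush later, tolerate outages
    if dev.mem_mb < 512:
        return "lightweight-push"       # no local storage on tiny nodes
    return "timeseries-db"              # full-featured remote back-end

fleet = [Device("gateway-01", 2048, True),
         Device("sensor-17", 256, False),
         Device("camera-03", 512, True)]

for dev in fleet:
    print(f"{dev.name}: metrics -> {choose_backend(dev)}")
```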

