Deriving Large-Scale Coastal Bathymetry from Sentinel-2 Images Using a High-Performance Cluster: A Case Study Covering North Africa’s Coastal Zone

Sensors, 2021, Vol. 21(21), pp. 7006
Author(s): Mohamed Wassim Baba, Gregoire Thoumyre, Erwin W. J. Bergsma, Christopher J. Daly, Rafael Almar

Coasts are vital areas because they host numerous activities worldwide. Despite their major importance, knowledge of the main characteristics of most coastal areas (e.g., coastal bathymetry) is still very limited. This is mainly due to the scarcity of accurate measurements or observations and the sparse observational coverage of coastal waters. Moreover, the high cost of performing observations with conventional methods prevents the monitoring chain from being expanded to different coastal areas. In this study, we suggest that the advent of remote sensing data (e.g., Sentinel-2A/B) and high-performance computing could open a new perspective to overcome the lack of coastal observations. Indeed, previous research has shown that it is possible to derive large-scale coastal bathymetry from S-2 images. The large S-2 coverage, however, leads to a high computational cost when post-processing the images. Thus, we develop a methodology implemented on a high-performance computing (HPC) cluster to derive bathymetry from S-2 images over the globe. In this paper, we describe the conceptualization and implementation of this methodology. Moreover, we give a general overview of the generated bathymetry map for North Africa (NA) compared with the reference GEBCO global bathymetric product. Finally, we highlight some hotspots by looking closely at their outputs.
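As an illustration of the physics behind such satellite-derived bathymetry, the sketch below inverts the linear dispersion relation for surface gravity waves to recover depth from a locally estimated wavelength and period. The function name, the Newton iteration, and the sample values are illustrative assumptions, not the processing chain used in the paper.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def invert_depth(wavelength, period, tol=1e-4, max_iter=100):
    """Solve the linear dispersion relation (2*pi/T)^2 = g*k*tanh(k*h)
    for the depth h, given a locally estimated wavelength (m) and
    wave period (s)."""
    k = 2.0 * np.pi / wavelength        # wavenumber (rad/m)
    omega = 2.0 * np.pi / period        # angular frequency (rad/s)
    h = omega**2 / (G * k**2)           # shallow-water first guess
    for _ in range(max_iter):
        f = G * k * np.tanh(k * h) - omega**2
        df_dh = G * k**2 / np.cosh(k * h) ** 2   # derivative of f w.r.t. h
        h_next = h - f / df_dh                   # Newton-Raphson step
        if abs(h_next - h) < tol:
            return h_next
        h = h_next
    return h

# Example: a 60 m wavelength with an 8 s period maps to roughly 6.6 m depth.
print(f"{invert_depth(60.0, 8.0):.2f} m")
```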

2020, Vol. 10(7), pp. 2634
Author(s): JunWeon Yoon, TaeYoung Hong, ChanYeol Park, Seo-Young Noh, HeonChang Yu

High-performance computing (HPC) uses many distributed computing resources to solve large computational science problems through parallel computation. Such an approach can reduce overall job execution time and increase the capacity for solving large-scale and complex problems. On a supercomputer, the job scheduler, the HPC system’s flagship tool, is responsible for distributing and managing the resources of large systems. In this paper, we analyze the execution log of the job scheduler over a certain period of time and propose an optimization approach to reduce the idle time of jobs. In our experiment, we found that the main cause of delayed jobs is waiting for resources: overall execution time is significantly extended because idle resources must accumulate before a large-scale job can start. A backfilling algorithm can reduce this inefficiency and help shorten job execution time. Therefore, we propose a backfilling algorithm that can be applied to the supercomputer. The experimental results show that the overall execution time is reduced.
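To make the idea concrete, here is a minimal sketch of EASY-style backfilling on a toy cluster model. The `Job` class, node counts, and time units are illustrative assumptions, not the scheduler configuration studied in the paper.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    nodes: int     # nodes requested
    runtime: int   # user-provided runtime estimate (arbitrary time units)

def easy_backfill(queue, free_nodes, running):
    """Decide which queued jobs start now under EASY-style backfilling.

    queue      : jobs in submission order
    free_nodes : nodes idle at the current time (t = 0)
    running    : list of (finish_time, nodes) for jobs already executing
    Returns the jobs started now, including any backfilled ones.
    """
    queue, running, started = list(queue), list(running), []

    # Start jobs in order while they fit on the idle nodes.
    while queue and queue[0].nodes <= free_nodes:
        job = queue.pop(0)
        free_nodes -= job.nodes
        running.append((job.runtime, job.nodes))
        started.append(job)
    if not queue:
        return started

    # The head job is blocked. Find its "shadow" time: the earliest moment
    # enough nodes will have been released for it, and how many spare
    # ("extra") nodes it will leave unused at that moment.
    head, avail = queue.pop(0), free_nodes
    shadow, extra = float("inf"), free_nodes
    for finish, nodes in sorted(running):
        avail += nodes
        if avail >= head.nodes:
            shadow, extra = finish, avail - head.nodes
            break

    # Backfill: a waiting job may jump ahead only if it fits right now and
    # does not delay the head job's reserved start.
    for job in list(queue):
        if job.nodes > free_nodes:
            continue
        if job.runtime <= shadow:        # finishes before the head starts
            pass
        elif job.nodes <= extra:         # uses only nodes the head won't need
            extra -= job.nodes
        else:
            continue
        free_nodes -= job.nodes
        started.append(job)
        queue.remove(job)
    return started

# The 6-node head job is blocked, but the short 2-node job can be
# backfilled without delaying it (4 more nodes free again at t = 30).
queue = [Job("head", 6, 50), Job("long", 4, 100), Job("short", 2, 10)]
print([j.name for j in easy_backfill(queue, free_nodes=2, running=[(30, 4)])])
```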


Author(s): Anh Tran, Yan Wang, John Furlan, Krishnan V. Pagalthivarthi, Mohamed Garman, ...

Dedicated to the memory of John Furlan. Wear prediction is important in designing reliable machinery for the slurry industry. It usually relies on multiphase computational fluid dynamics, which is accurate but computationally expensive: each simulation can take hours or days even on a high-performance computing platform. This high computational cost prohibits running a large number of simulations during design optimization. In contrast to physics-based simulations, data-driven approaches such as machine learning can provide accurate wear predictions at a small fraction of the computational cost, provided the models are trained properly. In this paper, the recently developed WearGP framework [1] is extended to predict global wear quantities of interest by constructing Gaussian process surrogates. The effects of different operating conditions are investigated. The advantages of the WearGP framework are demonstrated by its high accuracy and low computational cost in predicting wear rates.
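For readers unfamiliar with surrogate modeling, the sketch below shows the general Gaussian process surrogate pattern using scikit-learn on synthetic operating-condition data. It is not the authors' WearGP implementation; the inputs, ranges, and response function are invented for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Synthetic stand-in for a CFD campaign: each row holds operating
# conditions (flow rate, solids concentration, particle size), and the
# "observed" wear rate comes from an arbitrary smooth function plus noise.
X_train = rng.uniform([50.0, 0.1, 0.1], [200.0, 0.4, 1.0], size=(40, 3))
y_train = (1e-3 * X_train[:, 0] * X_train[:, 1]
           + 0.05 * X_train[:, 2] ** 2
           + rng.normal(scale=1e-3, size=40))

# GP surrogate: constant * anisotropic RBF kernel, outputs normalized.
kernel = ConstantKernel(1.0) * RBF(length_scale=[1.0, 1.0, 1.0])
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True,
                              n_restarts_optimizer=5, random_state=0)
gp.fit(X_train, y_train)

# Prediction with uncertainty takes milliseconds, versus hours per CFD run.
X_new = np.array([[120.0, 0.25, 0.5]])
mean, std = gp.predict(X_new, return_std=True)
print(f"predicted wear rate: {mean[0]:.4f} +/- {std[0]:.4f}")
```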


2021, Vol. 32(8), pp. 2035-2048
Author(s): Mochamad Asri, Dhairya Malhotra, Jiajun Wang, George Biros, Lizy K. John, ...

2006, Vol. 04(03), pp. 639-647
Author(s): Eleazar Eskin, Roded Sharan, Eran Halperin

The common approaches for haplotype inference from genotype data are targeted toward phasing short genomic regions. Longer regions are often tackled in a heuristic manner due to the high computational cost. Here, we describe a novel approach for phasing genotypes over long regions, based on combining information from local predictions on short, overlapping regions. The phasing is done in a way that maximizes a natural maximum-likelihood criterion, which, among other things, takes into account the physical distance between neighboring single nucleotide polymorphisms. The approach is very efficient; it has been applied to several large-scale datasets and shown to be successful in two recent benchmarking studies (Zaitlen et al., in press; Marchini et al., in preparation). Our method is publicly available via a webserver.
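A greatly simplified sketch of the window-combining idea is shown below: per-window haplotype pairs are stitched together by choosing, in each overlap, the orientation that minimizes mismatches. This greedy mismatch count stands in for the paper's maximum-likelihood criterion (which also weighs physical distance between SNPs); all data are toy examples.

```python
def stitch_windows(windows, overlap):
    """Stitch per-window haplotype calls for one individual into a long
    haplotype by choosing, for each new window, the orientation (as-is or
    swapped) that best agrees with the current haplotype in the overlap.
    Each window is a pair of strings over {'0', '1'} (two haplotypes)."""
    hapA, hapB = windows[0]
    for winA, winB in windows[1:]:
        tailA, tailB = hapA[-overlap:], hapB[-overlap:]
        headA, headB = winA[:overlap], winB[:overlap]
        # Count overlap mismatches for the two possible orientations.
        same = (sum(a != b for a, b in zip(tailA, headA))
                + sum(a != b for a, b in zip(tailB, headB)))
        swap = (sum(a != b for a, b in zip(tailA, headB))
                + sum(a != b for a, b in zip(tailB, headA)))
        if swap < same:
            winA, winB = winB, winA
        hapA += winA[overlap:]
        hapB += winB[overlap:]
    return hapA, hapB

# Two 5-SNP windows overlapping by 2 SNPs; the second is phase-flipped,
# so the stitcher swaps it before extending the haplotypes.
windows = [("01011", "10100"), ("00110", "11001")]
print(stitch_windows(windows, overlap=2))
```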


2016, Vol. 33(4), pp. 621-634
Author(s): Jingyin Tang, Corene J. Matyas

The creation of a 3D mosaic is often the first step when using the high-spatial- and temporal-resolution data produced by ground-based radars. Efficient yet accurate methods are needed to mosaic data from dozens of radars to better understand the precipitation processes in synoptic-scale systems such as tropical cyclones. Research-grade radar mosaic methods for analyzing historical weather events should utilize data from both sides of a moving temporal window and process them in a flexible data architecture that is not available in most stand-alone software tools or real-time systems. Thus, these historical analyses require a different strategy that optimizes flexibility and scalability by removing time constraints from the design. This paper presents a MapReduce-based playback framework that uses Apache Spark’s computational engine to interpolate large volumes of radar reflectivity and velocity data onto 3D grids. Designed to run on a high-performance computing cluster, these methods may also be executed on a low-end machine. A protocol is designed to enable interoperability with GIS and spatial analysis functions in this framework. Open-source software is utilized to enhance radar usability in the nonspecialist community. Case studies during a tropical cyclone landfall show this framework’s capability of efficiently creating a large-scale, high-resolution 3D radar mosaic with the integration of GIS functions for spatial analysis.
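The sketch below illustrates the MapReduce pattern described above with PySpark: radar gates (assumed already converted to Cartesian coordinates) are keyed by 3D grid cell and reduced to a per-cell composite. The simple maximum-in-cell rule and the sample values are illustrative assumptions, not the interpolation scheme of the paper.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("radar-mosaic-sketch").getOrCreate()
sc = spark.sparkContext

CELL = 1.0  # horizontal grid spacing (km)
DZ = 0.5    # vertical grid spacing (km)

gates = sc.parallelize([
    # (x_km, y_km, z_km, reflectivity_dBZ) from several radars
    (12.3, 40.1, 1.2, 35.0),
    (12.4, 40.2, 1.3, 41.5),
    (87.9, 15.6, 2.8, 22.0),
])

def to_cell(gate):
    """Key a radar gate by the 3D grid cell that contains it."""
    x, y, z, dbz = gate
    return (int(x // CELL), int(y // CELL), int(z // DZ)), dbz

# MapReduce step: map each gate to its grid cell, keep the max dBZ per cell.
mosaic = gates.map(to_cell).reduceByKey(max)
print(mosaic.collect())
spark.stop()
```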


2021
Author(s): Mohsen Hadianpour, Ehsan Rezayat, Mohammad-Reza Dehaqani

Due to the drastic progress and improvement in neurophysiological recording technologies, neuroscientists face various complexities in dealing with unstructured large-scale neural data. In the neuroscience community, these complexities can create serious bottlenecks in storing, sharing, and processing neural datasets. In this article, we developed a distributed high-performance computing (HPC) framework called the ‘Big neuronal data framework’ (BNDF) to overcome these complexities. BNDF is based on the open-source big data frameworks Hadoop and Spark, providing a flexible and scalable structure. We examined BNDF on three different large-scale electrophysiological recording datasets from nonhuman primate brains. Our results exhibited faster runtimes with scalability due to the distributed nature of BNDF. We compared BNDF to a widely used platform, MATLAB, on equivalent computational resources. Compared with other similar methods, BNDF provides more than five times faster performance in spike sorting, a common neuroscience application.
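As a hedged illustration of the distributed pattern (not BNDF itself), the sketch below partitions synthetic channels of a recording across a Spark context and runs a simple threshold-crossing spike detector on each channel independently, which is why the workload scales out cleanly.

```python
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spike-detect-sketch").getOrCreate()
sc = spark.sparkContext

def detect_spikes(channel):
    """Simple negative threshold-crossing detector for one channel's trace."""
    chan_id, trace = channel
    trace = np.asarray(trace)
    threshold = -4.0 * np.median(np.abs(trace)) / 0.6745  # robust noise estimate
    crossings = np.flatnonzero((trace[1:] < threshold) & (trace[:-1] >= threshold))
    return chan_id, crossings.tolist()

# Synthetic stand-in for a multi-channel recording: 8 channels of noise.
rng = np.random.default_rng(0)
channels = [(c, rng.normal(size=30_000).tolist()) for c in range(8)]

# Channels are processed independently, so the work distributes cleanly.
spikes = sc.parallelize(channels, numSlices=8).map(detect_spikes).collect()
print({chan: len(idx) for chan, idx in spikes})
spark.stop()
```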


2017, Vol. 33(2), pp. 119-130
Author(s): Vinh Van Le, Hoai Van Tran, Hieu Ngoc Duong, Giang Xuan Bui, Lang Van Tran

Metagenomics is a powerful approach for studying environmental samples that does not require the isolation and cultivation of individual organisms. One of the essential tasks in a metagenomic project is to identify the origin of reads, referred to as taxonomic assignment. Because each metagenomic project has to analyze large-scale datasets, taxonomic assignment is highly computation-intensive. This study proposes a parallel algorithm for the taxonomic assignment problem, called SeMetaPL, which aims to deal with this computational challenge. The proposed algorithm is evaluated with both simulated and real datasets on a high-performance computing system. Experimental results demonstrate that the algorithm achieves good performance and utilizes system resources efficiently. The software implementing the algorithm and all test datasets can be downloaded at http://it.hcmute.edu.vn/bioinfo/metapro/SeMetaPL.html.
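The sketch below is a toy illustration of why taxonomic assignment parallelizes well (it is not SeMetaPL): each read is scored independently against small k-mer signatures, so reads can be distributed across worker processes. The reference signatures, k-mer length, and reads are all invented for illustration.

```python
from multiprocessing import Pool

K = 4  # k-mer length (illustrative)

# Toy reference signatures: taxon -> set of k-mers (in practice these would
# be built from reference genomes).
REFERENCES = {
    "taxonA": {"ACGT", "CGTA", "GTAC", "TACG"},
    "taxonB": {"GGGC", "GGCC", "GCCA", "CCAT"},
}

def kmers(seq, k=K):
    """All k-mers occurring in a read."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def assign_read(read):
    """Assign a read to the taxon sharing the most k-mers with it."""
    read_kmers = kmers(read)
    scores = {t: len(read_kmers & sig) for t, sig in REFERENCES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unassigned"

if __name__ == "__main__":
    reads = ["ACGTACGTAC", "GGGCCATGGC", "TTTTTTTTTT"]
    # Reads are independent, so assignment parallelizes trivially across cores.
    with Pool(processes=4) as pool:
        assignments = pool.map(assign_read, reads)
    print(list(zip(reads, assignments)))
```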

