An objective and efficient method for estimating probabilistic coastal inundation hazards

2019 ◽  
Vol 99 (2) ◽  
pp. 1105-1130 ◽  
Author(s):  
Kun Yang ◽  
Vladimir Paramygin ◽  
Y. Peter Sheng

Abstract The joint probability method (JPM) is the traditional way to determine the base flood elevation due to storm surge, and it usually requires simulation of the storm surge response from tens of thousands of synthetic storms. The simulated storm surge is combined with probabilistic storm rates to create flood maps with various return periods. However, map production requires enormous computational cost if state-of-the-art hydrodynamic models with high-resolution numerical grids are used; hence, optimal sampling (JPM-OS) with a small number (~100–200) of optimal (representative) storms is preferred. This paper presents a significantly improved JPM-OS, in which a small number of optimal storms are objectively selected and the simulated storm surge responses of tens of thousands of storms are accurately interpolated from those of the optimal storms using a highly efficient kriging surrogate model. This study focuses on Southwest Florida and considers ~150 optimal storms that are selected based on simulations using either the low-fidelity (low resolution, simple physics) SLOSH model or the high-fidelity (high resolution, comprehensive physics) CH3D model. Surge responses to the optimal storms are simulated using both SLOSH and CH3D, and the flood elevations are calculated using JPM-OS with highly efficient kriging interpolations. For verification, the probabilistic inundation maps are compared to those obtained by the traditional JPM and by variations of JPM-OS that employ different interpolation schemes, and the computed probabilistic water levels are compared to those calculated by historical storm methods. The inundation maps obtained with the JPM-OS differ by less than 10% from those obtained with the JPM for 20,625 storms, while requiring only 4% of the computational time.
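The kriging surrogate is the key efficiency step: surge responses simulated for the ~150 optimal storms are used to interpolate responses over the full synthetic storm set. A minimal sketch of this idea is shown below, using scikit-learn's Gaussian-process regressor as the kriging interpolator; the storm parameterization, data, and kernel choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical storm parameters for ~150 optimal (training) storms, e.g. central
# pressure deficit, radius to maximum winds, forward speed, heading, landfall location.
rng = np.random.default_rng(0)
X_optimal = rng.uniform(0.0, 1.0, size=(150, 5))   # normalized parameter vectors
surge_optimal = rng.uniform(0.5, 4.0, size=150)    # peak surge (m) at one grid point,
                                                   # stand-in for CH3D/SLOSH output

# Kriging (Gaussian-process) surrogate trained on the optimal storms.
kernel = ConstantKernel(1.0) * RBF(length_scale=np.full(5, 0.3))
surrogate = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
surrogate.fit(X_optimal, surge_optimal)

# Interpolate surge for the full synthetic storm set (tens of thousands of storms)
# without running the hydrodynamic model for each one.
X_full = rng.uniform(0.0, 1.0, size=(20625, 5))
surge_full = surrogate.predict(X_full)
```

The interpolated responses would then feed the probabilistic (JPM) aggregation exactly as the fully simulated responses would, which is where the reported factor-of-25 reduction in computational time comes from.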

Buildings ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 302
Author(s):  
Hafiz Suliman Munawar ◽  
Riya Aggarwal ◽  
Zakria Qadir ◽  
Sara Imran Khan ◽  
Abbas Z. Kouzani ◽  
...  

Detecting buildings from high-resolution satellite imagery is beneficial for mapping, environmental preparation, disaster management, military planning, urban planning, and research purposes. Differentiating buildings from the images is possible; however, it may be a time-consuming and complicated process. Therefore, the detection of buildings from high-resolution satellite imagery needs to be automated. Additionally, buildings exhibit several different characteristics, and their appearance in these images is unplanned. Moreover, buildings in the metropolitan environment are typically crowded and complicated, which makes them challenging to identify and hard to locate. To resolve this situation, a novel probabilistic method has been suggested using local features and probabilistic approaches. A local feature extraction technique was implemented and used to calculate the probability density function. The locations in the image were represented as joint probability distributions and were used to estimate their probability distribution function (pdf), from which the density of building locations in the image was extracted. Kernel density estimation was also used to find the density flow for different metropolitan cities such as Sydney (Australia), Tokyo (Japan), and Mumbai (India), which is useful for studying the distribution intensity and pattern of facility points of interest (POI). The proposed system can detect buildings/rooftops; to test it, we chose crops of panchromatic high-resolution satellite images from Australia, and the results look promising, with high efficiency and minimal computational time for feature extraction. We were able to detect buildings with shadows and buildings without shadows in 0.4468 s and 0.5126 s, respectively.
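As an illustration of the kernel-density step described above, the sketch below estimates a density surface from hypothetical building centroids using a Gaussian kernel; the coordinates and evaluation grid are assumptions, not the paper's data or exact procedure.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical detected building centroids (x, y) in image coordinates;
# in the paper these would come from the local-feature detection step.
rng = np.random.default_rng(1)
centroids = rng.normal(loc=[500, 500], scale=120, size=(300, 2))

# Kernel density estimate of building locations (the "density flow").
kde = gaussian_kde(centroids.T)

# Evaluate the estimated pdf on a regular grid covering the image.
xs, ys = np.meshgrid(np.linspace(0, 1000, 200), np.linspace(0, 1000, 200))
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

# High-density cells indicate crowded built-up areas / clusters of POIs.
print("peak density:", density.max())
```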


Author(s):  
Taylor G. Asher ◽  
Jennifer L. Irish ◽  
Donald T. Resio

Probabilistic flood hazard assessments have advanced substantially, with modern methods for dealing with the risk from tropical cyclones utilizing either a variation of the joint probability method with optimal sampling (JPM-OS) [2,3] or the statistical deterministic track method (SDTM) [1,4]. In the JPM-OS, tropical cyclones are reduced to a set of 5 to 9 parameters, whose characteristics are analyzed statistically to develop a joint probability distribution for tropical cyclones of given characteristics. In the SDTM, cyclogenesis of a large number of storms is seeded via a statistical model from historical data; the storms are then propagated using one of several different methods, incorporating varying degrees of the physics of cyclone transformation as the storms propagate. Due to the significant cost of storm surge simulations, some form of optimization or selection is then performed to reduce the number of synthetic storms that must be simulated to determine the flood elevation corresponding to a given recurrence interval (e.g., the so-called 100-year flood). In both methods, substantial uncertainties exist, which tend to increase the estimated flooding risk. Efforts to account for these uncertainties have varied, and significant work remains to be done. Here, we demonstrate how these uncertainties tend to increase the flood risk and show that additional sources of uncertainty remain to be accounted for.
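As a rough illustration of how a discretized joint probability method turns synthetic-storm surges into a recurrence-interval estimate, the sketch below sums annualized storm rates over storms that exceed a given surge level and inverts that rate to a return period; the rates, surge values, and target return period are hypothetical, not drawn from the studies cited above.

```python
import numpy as np

# Hypothetical parameterized synthetic storms with annualized occurrence rates
# (storms/yr) and simulated or surrogate peak surge (m), as in a JPM-style analysis.
rng = np.random.default_rng(2)
n_storms = 5000
annual_rate = rng.uniform(1e-5, 1e-3, size=n_storms)
peak_surge = rng.gamma(shape=2.0, scale=0.8, size=n_storms)

def exceedance_rate(level, surge, rate):
    """Annual rate at which 'level' is exceeded: sum of rates of exceeding storms."""
    return rate[surge > level].sum()

# Find the surge level whose return period is ~100 years (rate of ~0.01 per year).
levels = np.linspace(0.0, peak_surge.max(), 500)
rates = np.array([exceedance_rate(l, peak_surge, annual_rate) for l in levels])
eta_100 = levels[np.argmin(np.abs(rates - 0.01))]
print(f"approximate 100-year surge: {eta_100:.2f} m")
```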


Author(s):  
Andrew Kennedy ◽  
Damrongsak Wirasaet ◽  
Diogo Bolster ◽  
J. Casey Dietrich

Modern storm surge models for predicting hurricane water levels have gone in two opposite directions: (1) low-resolution, fast models that may be run thousands of times as a storm approaches land; and (2) high-resolution, more accurate models that are largely used for planning and hindcasts and are too slow for real-time ensemble forecasts. Differences in predictions between the two types of models are particularly large over flooded ground, which is the most important region for human activities.


MAUSAM ◽  
2021 ◽  
Vol 48 (4) ◽  
pp. 587-594
Author(s):  
WANG XIUQIN ◽  
WANG JINGYONG

In the present paper, the maximum storm surge elevations for given return periods were calculated using a joint probability method. Based on analyses of the typhoons that historically affected the coastal zone of Guangdong Province, a group of model typhoons was established. A number of parameters describing the typhoons were selected. The data for each parameter were graded into a few sub-groups according to their values, in accordance with the historical observations. The probability of each value of a parameter was calculated from the historical records, so that the probability of a typhoon with a given combination of parameter values could be calculated. The simulated storm surges caused by the model typhoons, together with their probabilities, were analysed statistically; thus an accumulated probability curve and the maximum elevations for given return periods were obtained. A number of spots were selected, some with tidal stations and some without. The maximum elevations for given return periods at these spots were calculated, and the results were found satisfactory. With this method, all of the available meteorological and hydrological data can be fully utilized. The method is most suitable for calculating the maximum elevations at a place where there is no tidal station, or at many places simultaneously.
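A minimal sketch of this workflow is given below, under the assumption that the graded parameter probabilities are treated as independent and multiplied to give each model typhoon's probability; the parameter grades, probabilities, annual typhoon rate, and surge elevations are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from itertools import product

# Hypothetical graded parameter values with probabilities estimated from historical records.
central_pressure = {935: 0.1, 955: 0.3, 975: 0.6}   # hPa : probability
radius_max_wind  = {30: 0.4, 50: 0.4, 80: 0.2}      # km  : probability
track_bearing    = {290: 0.5, 315: 0.3, 340: 0.2}   # deg : probability

def model_typhoons():
    """Enumerate model typhoons and their joint probabilities (independence assumed)."""
    for (p, pp), (r, pr), (b, pb) in product(central_pressure.items(),
                                             radius_max_wind.items(),
                                             track_bearing.items()):
        yield {"pressure": p, "radius": r, "bearing": b}, pp * pr * pb

# Stand-in for the surge simulation: peak elevation (m) at one coastal spot per model typhoon.
rng = np.random.default_rng(3)
storms = [(rng.uniform(0.5, 4.5), prob) for _, prob in model_typhoons()]

# Accumulated (exceedance) probability curve and the corresponding return periods.
elev = np.array([e for e, _ in storms]); prob = np.array([p for _, p in storms])
order = np.argsort(elev)[::-1]
exceedance = np.cumsum(prob[order])          # P(elevation >= sorted value)
annual_typhoon_rate = 3.0                    # hypothetical typhoons per year on this coast
return_period = 1.0 / (annual_typhoon_rate * exceedance)
print(list(zip(elev[order][:3].round(2), return_period[:3].round(1))))
```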


Author(s):  
Tu Huynh-Kha ◽  
Thuong Le-Tien ◽  
Synh Ha ◽  
Khoa Huynh-Van

This research work develops a new method to detect forgery in images by combining the wavelet transform and modified Zernike moments (MZMs), in which the features are defined from more pixels than in traditional Zernike moments. The tested image is first converted to grayscale, and a one-level Discrete Wavelet Transform (DWT) is applied to halve the image size in both dimensions. The approximation sub-band (LL), which is used for processing, is then divided into overlapping blocks, and modified Zernike moments are calculated in each block as feature vectors. The more pixels are considered, the more informative the extracted features are. Lexicographical sorting and computation of correlation coefficients on the feature vectors are the next steps to find similar blocks. The purpose of applying the DWT to reduce the dimension of the image before using Zernike moments with updated coefficients is to improve the computational time and increase the accuracy of detection. Copied or duplicated parts are detected as traces of copy-move forgery based on a threshold of the correlation coefficients and confirmed by a Euclidean distance constraint. Comparison results between the proposed method and related ones demonstrate the feasibility and efficiency of the proposed algorithm.
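A minimal sketch of this block-matching pipeline follows (one-level DWT, overlapping blocks on the LL sub-band, lexicographic sorting, correlation threshold, Euclidean distance constraint). The block feature used here is a simple placeholder for the modified Zernike moments, and all thresholds are illustrative assumptions.

```python
import numpy as np
import pywt

def block_features(block):
    """Placeholder feature vector; the paper uses modified Zernike moments here."""
    return np.array([block.mean(), block.std(),
                     np.abs(np.fft.fft2(block))[:2, :2].ravel().mean()])

def detect_duplicates(gray, block=16, step=4, corr_thresh=0.98, min_dist=20):
    # One-level DWT halves the image; only the approximation sub-band (LL) is processed.
    ll, _ = pywt.dwt2(gray.astype(float), "haar")

    feats, coords = [], []
    for i in range(0, ll.shape[0] - block + 1, step):
        for j in range(0, ll.shape[1] - block + 1, step):
            feats.append(block_features(ll[i:i + block, j:j + block]))
            coords.append((i, j))
    feats, coords = np.array(feats), np.array(coords)

    # Lexicographic sorting brings similar feature vectors next to each other.
    order = np.lexsort(feats.T[::-1])
    matches = []
    for a, b in zip(order[:-1], order[1:]):
        corr = np.corrcoef(feats[a], feats[b])[0, 1]
        dist = np.linalg.norm(coords[a] - coords[b])   # Euclidean-distance constraint
        if corr > corr_thresh and dist > min_dist:
            matches.append((tuple(coords[a]), tuple(coords[b])))
    return matches
```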


Author(s):  
Erik Paul ◽  
Holger Herzog ◽  
Sören Jansen ◽  
Christian Hobert ◽  
Eckhard Langer

Abstract This paper presents an effective device-level failure analysis (FA) method which uses a high-resolution low-kV scanning electron microscope (SEM) in combination with an integrated state-of-the-art nanomanipulator to locate and characterize single defects in failing CMOS devices. The presented case studies utilize several FA techniques in combination with SEM-based nanoprobing for nanometer node technologies and demonstrate how these methods are used to investigate the root cause of IC device failures. The methodology represents a highly efficient physical failure analysis flow for 28 nm and larger technology nodes.


2020 ◽  
Vol 20 (2) ◽  
pp. 489-504 ◽  
Author(s):  
Anaïs Couasnon ◽  
Dirk Eilander ◽  
Sanne Muis ◽  
Ted I. E. Veldkamp ◽  
Ivan D. Haigh ◽  
...  

Abstract. The interaction between physical drivers from oceanographic, hydrological, and meteorological processes in coastal areas can result in compound flooding. Compound flood events, like Cyclone Idai and Hurricane Harvey, have revealed the devastating consequences of the co-occurrence of coastal and river floods. A number of studies have recently investigated the likelihood of compound flooding at the continental scale based on simulated variables of flood drivers, such as storm surge, precipitation, and river discharges. At the global scale, this has only been performed based on observations, thereby excluding a large extent of the global coastline. The purpose of this study is to fill this gap and identify regions with a high compound flooding potential from river discharge and storm surge extremes in river mouths globally. To do so, we use daily time series of river discharge and storm surge from state-of-the-art global models driven with consistent meteorological forcing from reanalysis datasets. We measure the compound flood potential by analysing both variables with respect to their timing, joint statistical dependence, and joint return period. Our analysis indicates many regions that deviate from statistical independence and could not be identified in previous global studies based on observations alone, such as Madagascar, northern Morocco, Vietnam, and Taiwan. We report possible causal mechanisms for the observed spatial patterns based on existing literature. Finally, we provide preliminary insights on the implications of the bivariate dependence behaviour on the flood hazard characterisation using Madagascar as a case study. Our global and local analyses show that the dependence structure between flood drivers can be complex and can significantly impact the joint probability of discharge and storm surge extremes. These findings emphasise the need to refine global flood risk assessments and emergency planning to account for these potential interactions.
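As a minimal sketch of how dependence between the two flood drivers can be quantified, the example below computes Kendall's tau and an empirical joint exceedance on hypothetical daily discharge and surge series; this is one common way to measure the statistical dependence discussed above, not necessarily the study's exact procedure.

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical daily series at one river mouth: simulated discharge (m^3/s)
# and storm surge (m), standing in for output of global hydrological/surge models.
rng = np.random.default_rng(4)
z = rng.normal(size=10000)
discharge = np.exp(0.6 * z + rng.normal(scale=0.8, size=z.size)) * 100
surge = 0.3 * z + rng.normal(scale=0.4, size=z.size)

# Rank correlation between the two drivers (a simple dependence measure).
tau, p_value = kendalltau(discharge, surge)
print(f"Kendall's tau = {tau:.2f}, p = {p_value:.1e}")

# Empirical joint exceedance of the two 95th percentiles vs. the independence baseline.
q_d, q_s = np.quantile(discharge, 0.95), np.quantile(surge, 0.95)
joint = np.mean((discharge > q_d) & (surge > q_s))
print(f"joint exceedance: {joint:.3f} (independence would give {0.05 * 0.05:.4f})")
```

A joint exceedance frequency well above the independence baseline signals compound flood potential of the kind the study maps globally.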


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Israel F. Araujo ◽  
Daniel K. Park ◽  
Francesco Petruccione ◽  
Adenilton J. da Silva

Abstract Advantages in several fields of research and industry are expected with the rise of quantum computers. However, the computational cost of loading classical data into quantum computers can impose restrictions on possible quantum speedups. Known algorithms for creating arbitrary quantum states require quantum circuits with depth O(N) to load an N-dimensional vector. Here, we show that it is possible to load an N-dimensional vector with an exponential time advantage using a quantum circuit with polylogarithmic depth and entangled information in ancillary qubits. The results show that we can efficiently load data into quantum devices using a divide-and-conquer strategy to exchange computational time for space. We demonstrate a proof of concept on a real quantum device and present two applications for quantum machine learning. We expect that this new loading strategy allows the quantum speedup of tasks that require loading a significant volume of information into quantum devices.
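For intuition, the sketch below computes the binary tree of Ry rotation angles that underlies amplitude-encoding circuits, assuming a real, non-negative, normalized input vector; it illustrates the data being loaded and its tree decomposition, not the authors' polylogarithmic-depth divide-and-conquer circuit.

```python
import numpy as np

def angle_tree(amplitudes):
    """Binary tree of Ry rotation angles that encodes a real, non-negative,
    normalized vector as quantum amplitudes (length must be a power of two)."""
    levels = []
    current = np.asarray(amplitudes, dtype=float)
    while current.size > 1:
        left, right = current[0::2], current[1::2]
        parent = np.sqrt(left**2 + right**2)
        # cos(theta/2) = left / parent; guard against zero-norm branches.
        with np.errstate(divide="ignore", invalid="ignore"):
            theta = 2 * np.arccos(np.where(parent > 0, left / parent, 1.0))
        levels.append(theta)
        current = parent
    return levels[::-1]   # root-level angle first

# Example: decompose a normalized 8-dimensional vector (3 qubits' worth of amplitudes).
v = np.sqrt(np.arange(1, 9, dtype=float)); v /= np.linalg.norm(v)
for depth, angles in enumerate(angle_tree(v)):
    print(f"tree level {depth}: {np.round(angles, 3)}")
```

Sequential schemes apply these rotations level by level on the same register (depth growing with N), whereas the divide-and-conquer strategy described above trades ancillary qubits for depth.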


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 645
Author(s):  
Muhammad Farooq ◽  
Sehrish Sarfraz ◽  
Christophe Chesneau ◽  
Mahmood Ul Hassan ◽  
Muhammad Ali Raza ◽  
...  

Expectiles have gained considerable attention in recent years due to their wide applications in many areas. In this study, the k-nearest neighbours approach, together with the asymmetric least squares loss function, called ex-kNN, is proposed for computing expectiles. First, the effect of various distance measures on ex-kNN in terms of test error and computational time is evaluated. It is found that the Canberra, Lorentzian, and Soergel distance measures lead to minimum test error, whereas Euclidean, Canberra, and the average of (L1, L∞) lead to a low computational cost. Second, the performance of ex-kNN is compared with the existing packages er-boost and ex-svm for computing expectiles on nine real-life examples. Depending on the nature of the data, ex-kNN showed two to ten times better performance than er-boost and comparable performance with ex-svm regarding test error. Computationally, ex-kNN is found to be two to five times faster than ex-svm and much faster than er-boost, particularly in the case of high-dimensional data.
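A minimal sketch of the ex-kNN idea as described above: find the k nearest neighbours under a chosen distance metric and return the tau-expectile of their responses, computed by asymmetric least squares; the dataset, k, tau, and metric below are illustrative assumptions, not the package's interface.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def expectile(y, tau=0.5, tol=1e-8, max_iter=100):
    """tau-expectile of a sample via iteratively reweighted (asymmetric) least squares."""
    mu = y.mean()
    for _ in range(max_iter):
        w = np.where(y > mu, tau, 1.0 - tau)   # asymmetric weights
        mu_new = np.sum(w * y) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

def ex_knn_predict(X_train, y_train, X_test, k=15, tau=0.75, metric="canberra"):
    """Predict the tau-expectile of the response among the k nearest neighbours."""
    nn = NearestNeighbors(n_neighbors=k, metric=metric).fit(X_train)
    _, idx = nn.kneighbors(X_test)
    return np.array([expectile(y_train[i], tau) for i in idx])

# Toy usage with a hypothetical regression dataset.
rng = np.random.default_rng(5)
X = rng.normal(size=(500, 4))
y = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(size=500)
print(ex_knn_predict(X[:400], y[:400], X[400:405]))
```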


2021 ◽  
Vol 11 (2) ◽  
pp. 813
Author(s):  
Shuai Teng ◽  
Zongchao Liu ◽  
Gongfa Chen ◽  
Li Cheng

This paper compares the crack detection performance (in terms of precision and computational cost) of YOLO_v2 using 11 feature extractors, which provides a basis for realizing fast and accurate crack detection on concrete structures. Cracks on concrete structures are an important indicator for assessing their durability and safety, and real-time crack detection is an essential task in structural maintenance. Object detection algorithms, especially the YOLO series networks, have significant potential in crack detection, and the feature extractor is the most important component of YOLO_v2. Hence, this paper employs 11 well-known CNN models as the feature extractor of YOLO_v2 for crack detection. The results confirm that different feature extractor models of the YOLO_v2 network lead to different detection results: the AP value is 0.89, 0, and 0 for ‘resnet18’, ‘alexnet’, and ‘vgg16’, respectively, while ‘googlenet’ (AP = 0.84) and ‘mobilenetv2’ (AP = 0.87) also demonstrate comparable AP values. In terms of computing speed, ‘alexnet’ takes the least computational time, with ‘squeezenet’ and ‘resnet18’ ranked second and third, respectively; therefore, ‘resnet18’ is the best feature extractor model in terms of both precision and computational cost. Additionally, a parametric study of the training epoch, feature extraction layer, and testing image size shows that these parameters indeed affect the detection results. It is demonstrated that excellent crack detection results can be achieved by the YOLO_v2 detector when an appropriate feature extractor model, training epoch, feature extraction layer, and testing image size are chosen.

