Implementation of a numerical methodology for the stochastic characterization of the Valdivia 1960 9.5 Mw tsunami source.

Author(s):  
Rodrigo Cifuentes-Lobos
Ignacia Calisto
Cristian Saavedra
Franchesca Ormeño
Javiera San Martín
...  

Probabilistic Tsunami Hazard Assessment (PTHA) provides a variety of mathematical and numerical tools for evaluating the long-term exposure of coastal communities to tsunami-related hazards. Among these, the logic tree method stands out for its usefulness in generating random slip models and in dealing with epistemic and aleatory uncertainties, both key elements of the stochastic study of tsunami scenarios. By combining the parameters that define a source model (such as magnitude and rupture limits), this method allows a vast number of random source models to be created; these can be used to assess future, long-term hazard, and they can also be used, in conjunction with data and observations from past tsunamis and earthquakes, to study those events.

This study proposes a numerical methodology, based on the logic tree method, for generating random tsunami source models for the study of paleotsunamis and historical tsunamis; here it is tested with data from the great Valdivia 1960 9.5 Mw earthquake and tsunami. The random source models are first filtered using empirical relations between magnitude and rupture dimensions or rupture aspect ratios. The models that pass this filter are used to compute deformation with the Okada (1985) method. These deformation fields are in turn filtered using geodetic data and observations associated with the event of interest, eliminating all models that do not satisfy these observations. The models that pass this filter are used as inputs for tsunami modelling in a staggered scheme: first with low-resolution topobathymetry grids, to assess whether tsunami waves are registered at locations known to have been inundated and to eliminate the models that do not show this behaviour; and second, using the deformation models that satisfy the previous filter as input, with high-resolution grids to estimate inundation run-up and compare it with reliable historical accounts and sedimentological observations. The models that pass all of the filters above are then subjected to statistical analysis and compared with existing models of the Valdivia 1960 earthquake.

As stated above, and given the large number of published studies, data, historical accounts and source models available, the Valdivia 1960 9.5 Mw earthquake is used as a benchmark to test this methodology and to appraise the convergence of the random models that pass every filter towards the existing source models. It is important to specify that, although this methodology was designed to study historical tsunamis and paleotsunamis, it is for now tested only with modern, well-documented events such as Valdivia 1960.
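As a rough illustration of how such a staged generate-and-filter pipeline can be organized, the sketch below draws random source models and applies a scaling filter and a geodetic filter in sequence; the scaling relation, thresholds and the subsidence proxy standing in for the Okada (1985) deformation stage are illustrative assumptions, not the implementation described in this abstract.

# Hypothetical sketch of a generate-and-filter pipeline for random source
# models; every relation, threshold and proxy below is an assumption made
# for the sake of a self-contained example.
import random

def random_source_model(magnitude_range=(9.0, 9.6)):
    """Draw one random source model from a simple logic tree:
    magnitude, rupture length/width (km) and mean slip (m)."""
    return {
        "Mw": random.uniform(*magnitude_range),
        "L": random.uniform(600.0, 1100.0),   # along-strike length, km
        "W": random.uniform(80.0, 200.0),     # down-dip width, km
        "slip": random.uniform(10.0, 40.0),   # mean slip, m
    }

def passes_scaling_filter(model, tol=0.3):
    """Keep models whose length agrees with a generic empirical
    magnitude-length relation (log10 L = 0.5 Mw - 1.9, illustrative only)."""
    predicted = 10 ** (0.5 * model["Mw"] - 1.9)
    return abs(model["L"] - predicted) / predicted < tol

def passes_geodetic_filter(model, observed_subsidence_m=-1.5, tol_m=1.0):
    """Stand-in for the Okada (1985) deformation stage: a toy proxy links
    mean slip to coastal subsidence so the example stays self-contained."""
    predicted = -0.1 * model["slip"]
    return abs(predicted - observed_subsidence_m) < tol_m

models = (random_source_model() for _ in range(100_000))
survivors = [m for m in models
             if passes_scaling_filter(m) and passes_geodetic_filter(m)]
print(f"{len(survivors)} models pass the scaling and geodetic filters")
# the surviving models would then feed the low- and high-resolution tsunami runs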

2014
Vol 9 (3)
pp. 272-280
Author(s):  
Kenji Satake
Yushiro Fujii

Numerous source models of the 2011 Tohoku earthquake have been proposed based on seismic, geodetic and tsunami data. Common features include a seismic moment of ∼4×10²² Nm, a duration of up to ∼160 s, and a maximum slip of about 50 m east of the epicenter. The exact location of this maximum slip differs among models, but all show considerable slip near the trench axis, where plate coupling was considered to be weak, and also at the deeper part of the interface, where M∼7 earthquakes repeatedly occurred at average intervals of 37 years. The long-term forecast of large earthquakes made by the Earthquake Research Committee was based on earthquakes occurring in the last few centuries and did not consider such a giant earthquake. Among the several issues remaining unsolved is the tsunami source model. The coastal tsunami height distribution requires a tsunami source delayed by a few minutes and extending north of the epicenter, but seismic data do not indicate such a delayed rupture, and there is no clear evidence of additional sources such as submarine landslides along the trench axis. Long-term forecasts of giant earthquakes must incorporate non-characteristic models such as earthquake occurrence supercycles, assessments of maximum earthquake size independent of past data, and plate coupling based on marine geodetic data. To assess ground shaking and tsunami in presumed M∼9 earthquakes, characterization and scaling relations from global earthquakes must be used.
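For orientation (a standard conversion, not part of the abstract above), a seismic moment of about 4×10²² Nm corresponds to a moment magnitude of

Mw = (2/3)(log10 M0 − 9.1) = (2/3)(log10(4×10²²) − 9.1) ≈ (2/3)(22.6 − 9.1) ≈ 9.0,

consistent with the Mw ≈ 9.0 commonly reported for the 2011 Tohoku earthquake.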


Oryx
2021
pp. 1-9
Author(s):  
Harry Olgun
Mzee Khamis Mohammed
Abbas Juma Mzee
M. E. Landry Green
Tim R. B. Davenport
...  

Abstract Roads affect wildlife in a variety of negative ways. Road ecology studies have mostly concentrated on areas in the northern hemisphere despite the potentially greater impact of roads on biodiversity in tropical habitats. Here, we examine 4 years (January 2016–December 2019) of opportunistic observations of mammalian roadkill along a road intersecting Jozani-Chwaka Bay National Park, Unguja, Zanzibar. In particular, we assess the impact of collisions on the population of an endemic primate, the Endangered Zanzibar red colobus Piliocolobus kirkii. Primates accounted for the majority of roadkill in this dataset. Monthly rainfall was not associated with roadkill frequency for mammals generally, nor for the Zanzibar red colobus. No single age–sex class of colobus was found dead more often than expected given their occurrence in the local population. The overall effect of roadkill on colobus populations in habitats fragmented by roads is unknown given the lack of accurate, long-term life history data for this species. Our findings suggest that mortality from collisions with vehicles in some groups of colobus is within the range of mortality rates other primates experience under natural predation. Unlike natural predators, however, vehicles do not kill selectively, so their impact on populations may differ. Although a comparison with historical accounts suggests that the installation of speedbumps along the road near the Park's entrance has led to a significant decrease in colobus roadkill, further actions to mitigate the impact of the road could bring substantial conservation benefits.


Geosciences
2021
Vol 11 (3)
pp. 147
Author(s):  
Benjamin R. Jordan

Kukuiho’olua Island is an islet that lies 164 m due north of Laie Point, a peninsula of cemented, coastal, Pleistocene and Holocene sand dunes. Kukuiho’olua Island consists of the same dune deposits as Laie Point and is cut by a sea arch which, documented here for the first time, may have formed during the 1 April 1946 “April Fools’ Day Tsunami.” A tsunami origin for the arch is supported by previous modeling by other authors, which indicated that the geometry of overhanging sea cliffs can greatly strengthen and focus the force of tsunami waves. Additional changes occurred to the island and arch during the 2015–2016 El Niño event, one of the strongest on record, during which anomalous wave heights and reversed wind directions occurred across the Pacific. On the night of 24–25 February 2016, large storm waves resulting from these unusual El Niño conditions washed out a large boulder that had lain within the arch since its initial formation, significantly increasing the open area beneath the arch. Large waves also rose high enough for seawater to flow over the peninsula at Laie Point, causing significant erosion of its upper surface. These changes at Laie Point and Kukuiho’olua Island serve as examples of long-term, intermittent change to a coastline: changes that, although infrequent, can occur quickly and dramatically, potentially making them geologic hazards.


Author(s):  
Steven W. Burd
Terrence W. Simon

The vast majority of turbine cascade studies in the literature have been performed in straight-endwall, high-aspect-ratio, linear cascades. As a result, there has been little appreciation for the role of, and the added complexity imposed by, reduced aspect ratios. There has also been little documentation of endwall profiling at these reduced spans. To examine the role of these factors in cascade hydrodynamics, a large-scale nozzle guide vane simulator was constructed at the Heat Transfer Laboratory of the University of Minnesota. This cascade comprises three airfoils between one contoured and one flat endwall. The geometries of the airfoils and endwalls, as well as the experimental conditions in the simulator, are representative of those in commercial operation. Measurements with hot-wire anemometry were taken to characterize the flow approaching the cascade. These measurements show that the flow field in this cascade is highly elliptic and influenced by pressure gradients that are established within the cascade. Exit flow field measurements with triple-sensor anemometry and pressure measurements within the cascade indicate that the acceleration imposed by endwall contouring and airfoil turning is able to suppress the size and strength of key secondary flow features. In addition, the flow field near the contoured endwall differs significantly from that adjacent to the straight endwall.


2021
Author(s):  
Matthew W. Hayward
Colin N. Whittaker
Emily M. Lane
William Power
Stéphane Popinet
...  

Abstract. Theoretical source models of underwater explosions are often applied in studying tsunami hazards associated with submarine volcanism; however, their use in numerical codes based on the shallow water equations can neglect the significant dispersion of the generated wavefield. A non-hydrostatic multilayer method is validated against a laboratory-scale experiment of wave generation from instantaneous disturbances and against field-scale submarine explosions at Mono Lake, California, utilising the relevant theoretical models. The numerical method accurately reproduces the range of observed wave characteristics for positive disturbances and suggests a previously unreported relationship of extended initial troughs for negative disturbances at low dispersivity and high nonlinearity parameters. Satisfactory amplitudes and phase velocities within the initial wave group are found using underwater explosion models at Mono Lake. The scheme is then applied to modelling tsunamis generated by volcanic explosions at Lake Taupō, New Zealand, for a magnitude range representing ejecta volumes between 0.04 and 0.4 km³. Waves reach all shores within 15 minutes, with maximum incident crest amplitudes of around 4 m at shores near the source. This work shows that the multilayer scheme used is computationally efficient and able to capture a wide range of wave characteristics, including dispersive effects, which is necessary when investigating submarine explosions. This research therefore provides the foundation for future studies involving a rigorous probabilistic hazard assessment to quantify the risks and relative significance of this tsunami source mechanism.
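For reference, the nonlinearity and dispersivity parameters referred to above are commonly formed from a characteristic wave amplitude a, water depth h and wavelength λ; the sketch below uses one common long-wave convention (ε = a/h, μ = (h/λ)²) with illustrative numbers, which may differ from the exact definitions adopted in this study.

# One common long-wave convention for the nonlinearity and dispersivity
# parameters; definitions and example values are illustrative assumptions.
def nonlinearity(amplitude_m, depth_m):
    return amplitude_m / depth_m           # epsilon = a / h

def dispersivity(depth_m, wavelength_m):
    return (depth_m / wavelength_m) ** 2   # mu = (h / lambda)^2

# e.g. a 4 m crest in 100 m of water with a 2 km wavelength
print(nonlinearity(4.0, 100.0), dispersivity(100.0, 2000.0))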


2020
Vol 34 (04)
pp. 3405-3413
Author(s):  
Zhaohui Che
Ali Borji
Guangtao Zhai
Suiyi Ling
Jing Li
...  

Deep neural networks are vulnerable to adversarial attacks. More importantly, some adversarial examples crafted against an ensemble of pre-trained source models can transfer to other, new target models and thus pose a security threat to black-box applications (in which the attacker has no access to the target models). Despite adopting diverse architectures and parameters, source and target models often share similar decision boundaries. Therefore, if an adversary is capable of fooling several source models concurrently, it can potentially capture intrinsic transferable adversarial information that may allow it to fool a broad class of other black-box target models. Current ensemble attacks, however, consider only a limited number of source models when crafting an adversary and achieve poor transferability. In this paper, we propose a novel black-box attack, dubbed Serial-Mini-Batch-Ensemble-Attack (SMBEA). SMBEA divides a large number of pre-trained source models into several mini-batches. For each batch, we design three new ensemble strategies to improve the intra-batch transferability. In addition, we propose a new algorithm that recursively accumulates the “long-term” gradient memories of the previous batch into the following batch. In this way, the learned adversarial information is preserved and the inter-batch transferability is improved. Experiments indicate that our method outperforms state-of-the-art ensemble attacks on multiple pixel-to-pixel vision tasks, including image translation and salient region prediction. Our method successfully fools two online black-box saliency prediction systems, DeepGaze-II (Kummerer 2017) and SALICON (Huang et al. 2017). Finally, we also contribute a new repository to promote research on adversarial attack and defense for pixel-to-pixel tasks: https://github.com/CZHQuality/AAA-Pix2pix.
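As a loose illustration of the serial mini-batch idea with an accumulated gradient memory, the sketch below averages the loss over each mini-batch of source models and carries a momentum-like memory of gradients from one batch to the next; the function, its arguments and the simple loss-averaging ensemble are assumptions for illustration and do not reproduce the paper's three intra-batch ensemble strategies or its recursive accumulation algorithm.

# Loose PyTorch-style sketch of serial mini-batch ensemble attacking with an
# accumulated "long-term" gradient memory; names, hyperparameters and the
# loss-averaging ensemble are illustrative assumptions, not the SMBEA code.
import torch

def serial_minibatch_attack(x, target, source_models, loss_fn,
                            batch_size=4, steps=10,
                            eps=8 / 255, alpha=2 / 255, decay=0.9):
    """Perturb x against successive mini-batches of source models,
    carrying a gradient memory from batch to batch."""
    x_adv = x.clone().detach()
    memory = torch.zeros_like(x)          # inter-batch gradient memory
    batches = [source_models[i:i + batch_size]
               for i in range(0, len(source_models), batch_size)]
    for batch in batches:
        for _ in range(steps):
            x_adv.requires_grad_(True)
            # simple intra-batch ensemble: average the loss over the batch
            loss = sum(loss_fn(m(x_adv), target) for m in batch) / len(batch)
            grad, = torch.autograd.grad(loss, x_adv)
            # fold the previous batches' gradients into the current update
            memory = decay * memory + grad / (grad.abs().mean() + 1e-12)
            x_adv = (x_adv.detach() + alpha * memory.sign()).clamp(x - eps, x + eps)
    return x_adv.detach()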


Author(s):  
Takuya MIYASHITA
Kazuki KURATA
Tomohiro YASUDA
Nobuhito MORI
Tomoya SHIMURA

Author(s):  
Mireille Rosello

This particular attempt at imagining a site of memory made of words may appear irreverent at first, but it has been crafted as an homage to a formidable woman: Jeanne Duval. I have taken the liberty of fictionalizing a first-person narrator who will talk about ‘herself’, at the risk of usurping her voice and her identity. Jeanne (whose name was or was not Duval) was a woman of colour and she had a long-term turbulent relationship with the enfant terrible of French nineteenth-century poetry, Charles Baudelaire. As a result, historical accounts both magnify and marginalize her. Trying to do justice to a historical character who was so much more than a muse but may not have been happy to embrace the role of exemplary black foremother, this text puts together the numerous and often incompatible portraits of Jeanne Duval. She appears and disappears in biographies (Emmanuel Richon), novels (Fabienne Pasquet), short stories (Angela Carter), academic studies (Claude Pichois). She is both present and absent, celebrated and erased in the so-called ‘Black Venus cycle’ of Baudelaire’s Flowers of Evil as well as in paintings by Edouard Manet (Baudelaire’s Mistress, Reclining) and Gustave Courbet (The Painter’s Studio). The objective was to question the process of memorialization that might silence or appropriate her instead of providing her with a safe space of memory. It remains to be seen to what extent Jeanne is here celebrated or betrayed.


1996
Vol 427
Author(s):  
M. Eizenberg

Abstract TiN has been recognized as an excellent barrier material for W as well as for Al planarization gap filling of contacts and vias. The need for conformality over extreme topography necessitates the use of CVD rather than sputtering for the deposition of TiN. In this paper we first review the various deposition techniques for CVD TiN. We then present a recently developed approach: thermal decomposition of TDMAT followed by nitrogen-based RF plasma treatments for resistivity reduction. This approach utilizes the advantages of thermal decomposition: excellent step coverage, good barrier properties, and low particle content. The resistivity reduction from the post-deposition plasma treatment is followed by excellent stability upon long-term air exposure. Vias and salicide contacts fabricated with this process exhibit resistance values equivalent to those obtained with sputtered TiN. Conformal films as thin as 200 Å can be used as excellent barriers for deep sub-0.5 μm devices with large aspect ratios, where sputtered TiN can no longer be used.

