extreme scale
Recently Published Documents


TOTAL DOCUMENTS

476
(FIVE YEARS 100)

H-INDEX

31
(FIVE YEARS 4)

2022 ◽  
pp. 0309524X2110693
Author(s):  
Alejandra S Escalera Mendoza ◽  
Shulong Yao ◽  
Mayank Chetan ◽  
Daniel Todd Griffith

Extreme-scale wind turbines face logistical challenges due to their sheer size. One solution, segmentation, is examined here for an extreme-scale 50 MW wind turbine with 250 m blades using a systematic approach. Segmentation poses challenges in minimizing joint mass, transferring loads between segments, and handling logistics. We investigate the feasibility of segmenting a 250 m blade by developing design methods and analyzing the impact of segmentation on blade mass and blade frequencies. The investigation considers variables such as joint type (bolted and bonded), adhesive material, joint location, number of joints, and taper ratio (ply dropping). Segmentation increases blade mass by 4.1%–62% with bolted joints and by 0.4%–3.6% with bonded joints for taper ratios up to 1:10. Cases with large mass growth significantly reduce blade frequencies, potentially challenging the control design. We show that segmentation of an extreme-scale blade is possible, but mass reduction is necessary to improve its feasibility.
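As a rough back-of-the-envelope illustration (not taken from the paper), the reported frequency reductions follow the usual scaling of a natural frequency with modal mass m and effective stiffness k:

\[
f_n = \frac{1}{2\pi}\sqrt{\frac{k}{m}}, \qquad
\frac{f_{\text{segmented}}}{f_{\text{baseline}}} \approx \frac{1}{\sqrt{1 + \Delta m / m_{\text{baseline}}}} .
\]

Under the simplifying assumption of unchanged stiffness, the worst-case 62% mass increase would lower blade frequencies by roughly \(1 - 1/\sqrt{1.62} \approx 21\%\), which is consistent with the authors' concern about the control design.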


Author(s):  
Emmanuel Agullo ◽  
Mirco Altenbernd ◽  
Hartwig Anzt ◽  
Leonardo Bautista-Gomez ◽  
Tommaso Benacchio ◽  
...  

This work is based on the seminar titled 'Resiliency in Numerical Algorithm Design for Extreme Scale Simulations', held March 1–6, 2020, at Schloss Dagstuhl and attended by all the authors. Advanced supercomputing is characterized by very high computation speeds, achieved at the cost of an enormous amount of resources and energy. A typical large-scale computation running for 48 h on a system consuming 20 MW, as predicted for exascale systems, would consume a million kWh, corresponding to about 100k Euro in energy cost for executing 10^23 floating-point operations. It is clearly unacceptable to lose the whole computation if any of the several million parallel processes fails during the execution. Moreover, if a single operation suffers from a bit-flip error, should the whole computation be declared invalid? What about the notion of reproducibility itself: should this core paradigm of science be revised and refined for results that are obtained by large-scale simulation?

Naive versions of conventional resilience techniques will not scale to the exascale regime: with a main memory footprint of tens of petabytes, synchronously writing checkpoint data all the way to background storage at frequent intervals will create intolerable overheads in runtime and energy consumption. Forecasts show that the mean time between failures could be lower than the time to recover from such a checkpoint, so that large calculations at scale might not make any progress if robust alternatives are not investigated. More advanced resilience techniques must be devised. The key may lie in exploiting both advanced system features and specific application knowledge. Research will face two essential questions: (1) what are the reliability requirements for a particular computation, and (2) how do we best design the algorithms and software to meet these requirements? While the analysis of use cases can help in understanding the particular reliability requirements, the construction of remedies is currently wide open. One avenue would be to refine and improve on system- or application-level checkpointing and rollback strategies in case an error is detected. Developers might use fault notification interfaces and flexible runtime systems to respond to node failures in an application-dependent fashion. Novel numerical algorithms or more stochastic computational approaches may be required to meet accuracy requirements in the face of undetectable soft errors. These ideas constituted an essential topic of the seminar.

The goal of this Dagstuhl Seminar was to bring together a diverse group of scientists with expertise in exascale computing to discuss novel ways to make applications resilient against detected and undetected faults. In particular, participants explored the role that algorithms and applications play in the holistic approach needed to tackle this challenge. This article gathers a broad range of perspectives on the role of algorithms, applications and systems in achieving resilience for extreme-scale simulations. The ultimate goal is to spark novel ideas and encourage the development of concrete solutions for achieving such resilience holistically.
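As a quick sanity check of the figures quoted above (not part of the abstract; the electricity price of roughly 0.10 EUR/kWh is an assumption chosen to match the stated total), the energy, cost and operation-count numbers are mutually consistent:

\[
E = 20\,\text{MW} \times 48\,\text{h} = 960\,\text{MWh} \approx 10^{6}\,\text{kWh}, \qquad
\text{Cost} \approx 10^{6}\,\text{kWh} \times 0.10\,\text{EUR/kWh} \approx 10^{5}\,\text{EUR},
\]
\[
N_{\text{flop}} \approx 10^{18}\,\text{FLOP/s} \times 48 \times 3600\,\text{s} \approx 1.7 \times 10^{23}\,\text{FLOP},
\]

i.e. a machine sustaining on the order of one exaflop/s for two days performs roughly the 10^23 floating-point operations cited.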


2021 ◽  
Author(s):  
Honghui Shang ◽  
Fang Li ◽  
Yunquan Zhang ◽  
Libo Zhang ◽  
You Fu ◽  
...  

Author(s):  
Alec T. Nabb ◽  
Marvin Bentley

Neurons are polarized cells of extreme scale and compartmentalization. To fulfill their role in electrochemical signaling, axons must maintain a specific complement of membrane proteins. Despite being the subject of considerable attention, the trafficking pathways of axonal membrane proteins are not well understood. Two pathways, direct delivery and transcytosis, have been proposed. Previous studies reached contradictory conclusions about which of these mediates delivery of axonal membrane proteins to their destination, in part because they evaluated long-term distribution changes rather than vesicle transport. We developed a novel strategy to selectively label vesicles in different trafficking pathways and determined the trafficking of two canonical axonal membrane proteins, NgCAM and VAMP2. Results from detailed quantitative analyses of transporting vesicles differed substantially from those of previous studies and showed that axonal membrane proteins overwhelmingly undergo direct delivery; transcytosis plays only a minor role in their axonal delivery. In addition, we identified a novel pathway by which wayward axonal proteins that reach the dendritic plasma membrane are targeted to lysosomes. These results redefine how axonal proteins achieve their polarized distribution, a crucial requirement for elucidating the underlying molecular mechanisms.


2021 ◽  
Vol 64 (11) ◽  
pp. 60-63
Author(s):  
Yiming Zhang ◽  
Kai Lu ◽  
Wenguang Chen

2021 ◽  
Vol 925 (1) ◽  
pp. 012050
Author(s):  
Ariviana Vilda ◽  
Lee Jung Lyul

Abstract Sea level rise (SLR) is becoming more serious on a global scale and has become one of the main causes of shoreline change and erosion; at an extreme scale it can cause the sinking of coastal areas and islands. Many large coastal cities have already been recorded as damaged by SLR. The Bruun rule is the most widely used method for predicting the horizontal translation of the shoreline associated with a given rise in sea level. In this study, however, we investigate the change in the average shoreline at a convex beach, which is more vulnerable to erosion due to sea level rise. The increase in water depth caused by sea level rise changes the wave crestline, ultimately leading to a straightening of the shoreline. In general, it is assumed that the annual average shoreline is parallel to the annual mean wave crestline. Moreover, assuming that the equilibrium depth contour is formed according to the crestline, the retreat of the shoreline is predicted. The shoreline change is thus predicted indirectly through the wave-crestline deformation obtained from a wave model, and this method is applied to the convex beach. Our results show that for a 1 km long convex beach with open ends and free littoral drift at both ends, a sea level rise of 1 m causes 10 m of erosion in the protruding area, and a sea level rise of 2 m causes 23 m of erosion. However, if the convex beach is blocked at both ends, a sea level rise of 1 m causes 6.3 m of erosion in the convex area but a shoreline advance of 3.8 m at both ends, and a sea level rise of 2 m causes 14.3 m of erosion in the convex area and a shoreline advance of 8.6 m at both ends.
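For context (the abstract does not restate it), the Bruun rule estimates the shoreline retreat R produced by a sea level rise S from the geometry of the active beach profile; in its common form,

\[
R = \frac{L_*}{B + h_*}\, S ,
\]

where L_* is the cross-shore width of the active profile, h_* the depth of closure and B the berm height. With purely illustrative values L_* = 500 m, B + h_* = 10 m and S = 1 m, this would predict R = 50 m of retreat; the study above instead infers the retreat indirectly from wave-crestline deformation computed with a wave model, which accounts for the planform of the convex beach.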

