The 21st Century Decline in Damaging European Windstorms

2016 ◽  
Vol 16 (8) ◽  
pp. 1999-2007 ◽  
Author(s):  
Laura C. Dawkins ◽  
David B. Stephenson ◽  
Julia F. Lockwood ◽  
Paul E. Maisey

Abstract. A decline in damaging European windstorms has led to a reduction in insured losses in the 21st century. This decline is explored by identifying a damaging windstorm characteristic and investigating how and why this characteristic has changed in recent years. This novel exploration is based on 6103 high-resolution model-generated historical footprints (1979–2014), representing the whole European domain. The footprint of a windstorm is defined as the maximum wind gust speed to occur at a set of spatial locations over the duration of the storm. The area of the footprint exceeding 20 m s−1 over land, A20, is shown to be a good predictor of windstorm damage. This damaging characteristic has decreased in the 21st century, due to a statistically significant decrease in the relative frequency of windstorms exceeding 20 m s−1 in north-western Europe, although an increase is observed in southern Europe. This is explained by a decrease in the quantiles of the footprint wind gust speed distribution above approximately 18 m s−1 at locations in this region. In addition, an increased variability in the number of windstorm events is observed in the 21st century. Much of the change in A20 is explained by the North Atlantic Oscillation (NAO). The correlation between winter total A20 and winter-averaged mean sea-level pressure resembles the NAO pattern, shifted eastwards over Europe, and a strong positive relationship (correlation of 0.715) exists between winter total A20 and winter-averaged NAO. The shifted correlation pattern, however, suggests that other modes of variability may also play a role in the variation in windstorm losses.
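
The A20 metric described above is straightforward to compute from a gridded footprint. The sketch below is a minimal illustration, assuming the footprint is a 2-D array of maximum gust speeds with a matching land mask and known grid-cell areas; the function name, variables and synthetic data are purely illustrative and are not the authors' code or data.

```python
import numpy as np

def footprint_a20(max_gust, land_mask, cell_area_km2, threshold=20.0):
    """Area (km^2) of a windstorm footprint exceeding `threshold` m/s over land.

    max_gust      : 2-D array of maximum gust speed (m/s) per grid cell
    land_mask     : 2-D boolean array, True where the cell is land
    cell_area_km2 : 2-D array of grid-cell areas (km^2)
    """
    exceed = (max_gust > threshold) & land_mask
    return float(np.sum(cell_area_km2[exceed]))

# Illustrative use with a synthetic footprint (values are made up):
rng = np.random.default_rng(0)
gust = rng.gamma(shape=9.0, scale=2.0, size=(100, 120))   # mean ~ 18 m/s
land = rng.random((100, 120)) > 0.4                       # fake land mask
area = np.full((100, 120), 25.0)                          # 5 km x 5 km cells
print(f"A20 = {footprint_a20(gust, land, area):.0f} km^2")
```

Summing such footprint areas over a winter and correlating the totals against a winter-averaged NAO index is then a one-line computation with, for example, np.corrcoef.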


2008 ◽  
Vol 21 (15) ◽  
pp. 3872-3889 ◽  
Author(s):  
Jesse Kenyon ◽  
Gabriele C. Hegerl

Abstract The influence of large-scale modes of climate variability on worldwide summer and winter temperature extremes has been analyzed, namely, that of the El Niño–Southern Oscillation, the North Atlantic Oscillation, and Pacific interdecadal climate variability. Monthly indices of temperature extremes from worldwide land areas are used to describe moderate extremes, such as the number of exceedances of the 90th and 10th climatological percentiles, and more extreme events, such as the annual most extreme temperature. This study examines which extremes show a statistically significant (5%) difference between the positive and negative phases of a circulation regime. Results show that temperature extremes are substantially affected by large-scale circulation patterns, and they show distinct regional patterns of response to modes of climate variability. The effects of the El Niño–Southern Oscillation are seen throughout the world but most clearly around the Pacific Rim and throughout all of North America. Likewise, the influence of Pacific interdecadal variability is strongest in the Northern Hemisphere, especially around the Pacific region and North America, but it extends to the Southern Hemisphere. The North Atlantic Oscillation has a strong continent-wide effect for Eurasia, with a clear but weaker effect over North America. Modes of variability influence the shape of the daily temperature distribution beyond a simple shift, often affecting cold and warm extremes and sometimes daytime and nighttime temperatures differently. Therefore, for reliable attribution of changes in extremes, as well as prediction of future changes, changes in modes of variability need to be accounted for.
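
As an illustration of the kind of phase comparison described above, the sketch below counts exceedances of a climatological 90th percentile and tests whether monthly counts differ between positive and negative phases of a circulation index at the 5% level. It is a minimal example using a Welch t test on invented counts; the paper's actual indices and significance-testing procedure may well differ.

```python
import numpy as np
from scipy import stats

def monthly_exceedances(daily_tmax, q90):
    """Number of days in one month whose maximum temperature exceeds the
    climatological 90th percentile q90 (both in deg C)."""
    return int(np.sum(np.asarray(daily_tmax) > q90))

# Hypothetical monthly exceedance counts at one station, split by the sign
# of a circulation index (e.g. NAO); the numbers are invented:
rng = np.random.default_rng(1)
pos_phase = rng.poisson(5, size=60)   # months with index > 0
neg_phase = rng.poisson(3, size=60)   # months with index < 0

# Welch two-sample t test of the phase difference at the 5% level:
t_stat, p_val = stats.ttest_ind(pos_phase, neg_phase, equal_var=False)
print(f"p = {p_val:.4f}; significant at 5%: {p_val < 0.05}")
```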


2016 ◽  
Author(s):  
Emmanuele Russo ◽  
Ulrich Cubasch

Abstract. The improvement in the resolution of climate models has always been mentioned as one of the most important factors when investigating past climatic conditions, especially in order to evaluate and compare the results against proxy data. In this paper we present, for the first time, a set of high-resolution simulations for different time slices of the mid-to-late Holocene performed over Europe using a regional climate model. Through a validation against a new pollen-based climate reconstruction dataset, covering almost all of Europe, we test the model's performance for paleoclimatic applications and investigate the response of temperature to variations in the seasonal cycle of insolation, with the aim of clarifying earlier debated uncertainties and giving physically plausible interpretations of both the pollen data and the model results. The results reinforce previous findings showing that summertime temperatures were driven mainly by changes in insolation and that the model is too sensitive to such changes over southern Europe, resulting in drier and warmer conditions. In winter, by contrast, the model does not correctly reproduce the amplitude of the changes, even though it captures the main pattern of the pollen dataset over most of the domain for the time periods under investigation. Through the analysis of variations in atmospheric circulation we suggest that, even though in some areas the discrepancies between the two datasets are most likely due to high pollen uncertainties, in general the model seems to underestimate the changes in the amplitude of the North Atlantic Oscillation, overestimating the contribution of secondary modes of variability.
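
A validation of the kind described above amounts to comparing model seasonal-mean anomalies with the pollen-based reconstruction at the reconstruction sites. The sketch below shows one minimal way to summarise such a comparison as a mean bias and spread; the site values are invented for illustration and are not the paper's data.

```python
import numpy as np

def seasonal_bias(model_anom, recon_anom):
    """Mean and spread of the model-minus-reconstruction difference (deg C)
    at the reconstruction sites, ignoring missing values."""
    diff = np.asarray(model_anom, dtype=float) - np.asarray(recon_anom, dtype=float)
    return float(np.nanmean(diff)), float(np.nanstd(diff))

# Hypothetical summer (JJA) temperature anomalies at five pollen sites:
model_jja = [1.8, 2.1, 1.5, 2.4, 1.9]    # regional model anomalies
pollen_jja = [1.1, 1.3, 1.6, 1.2, 1.0]   # pollen-based reconstruction

bias, spread = seasonal_bias(model_jja, pollen_jja)
print(f"summer bias: {bias:+.2f} +/- {spread:.2f} deg C")
```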


2007 ◽  
Vol 135 (10) ◽  
pp. 3587-3598 ◽  
Author(s):  
William M. Frank ◽  
George S. Young

Abstract This paper examines the interannual variability of tropical cyclones in each of the earth’s cyclone basins using data from 1985 to 2003. The data are first analyzed using a Monte Carlo technique to investigate the long-standing myth that the global number of tropical cyclones is less variable than would be expected from examination of the variability in each basin. This belief is found to be false. Variations in the global number of all tropical cyclones are indistinguishable from those that would be expected if each basin was examined independently of the others. Furthermore, the global number of the most intense storms (Saffir–Simpson categories 4–5) is actually more variable than would be expected because of an observed tendency for storm activity to be correlated between basins, and this raises important questions as to how and why these correlations arise. Interbasin correlations and factor analysis of patterns of tropical cyclone activity reveal that there are several significant modes of variability. The largest three factors together explain about 70% of the variance, and each of these factors shows significant correlation with ENSO, the North Atlantic Oscillation (NAO), or both, with ENSO producing the largest effects. The results suggest that patterns of tropical cyclone variability are strongly affected by large-scale modes of interannual variability. The temporal and spatial variations in storm activity are quite different for weaker tropical cyclones (tropical storm through category 2 strength) than for stronger storms (categories 3–5). The stronger storms tend to show stronger interbasin correlations and stronger relationships to ENSO and the NAO than do the weaker storms. This suggests that the factors that control tropical cyclone formation differ in important ways from those that ultimately determine storm intensity.
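
The Monte Carlo test described above can be sketched as follows: permute each basin's annual counts independently, which destroys any inter-basin correlation while preserving each basin's own variability, recompute the variance of the global total, and compare the observed variance with this null distribution. The counts below are invented and the procedure is a simplified reading of the approach, not the authors' exact implementation.

```python
import numpy as np

def global_variance_null(basin_counts, n_sim=10000, seed=0):
    """Null distribution of the variance of the global annual storm count,
    obtained by permuting each basin's yearly counts independently, which
    removes inter-basin correlation while keeping each basin's variability.

    basin_counts : array of shape (n_years, n_basins) of annual storm counts
    """
    rng = np.random.default_rng(seed)
    counts = np.asarray(basin_counts, dtype=float)
    n_years, n_basins = counts.shape
    null_vars = np.empty(n_sim)
    for i in range(n_sim):
        shuffled = np.column_stack(
            [rng.permutation(counts[:, b]) for b in range(n_basins)]
        )
        null_vars[i] = shuffled.sum(axis=1).var(ddof=1)
    return null_vars

# Invented counts for 19 years x 6 basins, roughly matching typical basin sizes:
rng = np.random.default_rng(42)
counts = rng.poisson(lam=[25, 17, 9, 16, 10, 5], size=(19, 6))
observed_var = counts.sum(axis=1).var(ddof=1)
null = global_variance_null(counts)
p_more_variable = np.mean(null >= observed_var)   # one-sided exceedance fraction
print(f"observed variance {observed_var:.1f}, exceedance p = {p_more_variable:.3f}")
```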


Ocean Science ◽  
2020 ◽  
Vol 16 (4) ◽  
pp. 831-845
Author(s):  
Ric Crocker ◽  
Jan Maksymczuk ◽  
Marion Mittermaier ◽  
Marina Tonani ◽  
Christine Pequignet

Abstract. The Met Office currently runs two operational ocean forecasting configurations for the North West European Shelf: an eddy-permitting model with a resolution of 7 km (AMM7) and an eddy-resolving model at 1.5 km (AMM15). Whilst qualitative assessments have demonstrated the benefits brought by the increased resolution of AMM15, particularly in the ability to resolve finer-scale features, it has been difficult to show this quantitatively, especially in forecast mode. Applications of typical assessment metrics such as the root mean square error have been inconclusive, as the high-resolution model tends to be penalised more severely, referred to as the double-penalty effect. This effect occurs in point-to-point comparisons whereby features correctly forecast but misplaced with respect to the observations are penalised twice: once for not occurring at the observed location, and secondly for occurring at the forecast location, where they have not been observed. An exploratory assessment of sea surface temperature (SST) has been made at in situ observation locations using a single-observation neighbourhood-forecast (SO-NF) spatial verification method known as the High-Resolution Assessment (HiRA) framework. The primary focus of the assessment was to capture important aspects of methodology to consider when applying the HiRA framework. Forecast grid points within neighbourhoods centred on the observing location are considered as pseudo ensemble members, so that typical ensemble and probabilistic forecast verification metrics such as the continuous ranked probability score (CRPS) can be utilised. It is found that through the application of HiRA it is possible to identify improvements in the higher-resolution model which were not apparent using typical grid-scale assessments. This work suggests that future comparative assessments of ocean models with different resolutions would benefit from using HiRA as part of the evaluation process, as it gives a more equitable and appropriate reflection of model performance at higher resolutions.
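
The pseudo-ensemble idea behind HiRA can be sketched as follows: take the forecast grid points in a neighbourhood centred on the observation location, treat them as ensemble members, and score them against the observation with the empirical CRPS. The sketch below uses the plain empirical CRPS (no "fair" adjustment) and a square neighbourhood on a synthetic SST field; it illustrates the concept only and is not the Met Office implementation.

```python
import numpy as np

def ensemble_crps(members, obs):
    """Empirical CRPS of an ensemble against a single observation:
    CRPS = E|X - y| - 0.5 * E|X - X'| (no 'fair' adjustment applied)."""
    x = np.asarray(members, dtype=float)
    term_obs = np.mean(np.abs(x - obs))
    term_spread = 0.5 * np.mean(np.abs(x[:, None] - x[None, :]))
    return term_obs - term_spread

def hira_crps(sst_field, i, j, obs_sst, halfwidth=1):
    """Score one SST observation at grid indices (i, j) using the
    (2*halfwidth + 1)^2 surrounding forecast grid points as pseudo
    ensemble members (assumes the point is away from the grid edge)."""
    nb = sst_field[i - halfwidth:i + halfwidth + 1,
                   j - halfwidth:j + halfwidth + 1]
    return ensemble_crps(nb.ravel(), obs_sst)

# Illustrative use on a synthetic SST field (deg C):
rng = np.random.default_rng(3)
sst = 12.0 + rng.normal(scale=0.3, size=(50, 50))
print(f"CRPS = {hira_crps(sst, 25, 25, obs_sst=12.1, halfwidth=1):.3f}")
```

Larger neighbourhoods give the higher-resolution model more chance to place a correctly forecast feature near, rather than exactly at, the observing location, which is how the double-penalty effect is mitigated.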


2020 ◽  
Author(s):  
Lucia Pineau-Guillou ◽  
Pascal Lazure ◽  
Guy Wöppelmann

Abstract. We investigated the long-term changes of the principal tidal component M2 along the North Atlantic coasts, from 1846 to 2018. We analysed nine tide gauges with time series starting no later than 1920. The longest record is at Brest, with 165 years of observations. We carefully processed the data, in particular to remove the 18.6-year nodal modulation. We found that M2 variations are consistent at all the stations in the North-East Atlantic (Newlyn, Brest, Cuxhaven), whereas some discrepancies appear in the North-West Atlantic. The changes started long before the 20th century and are not linear. The trends vary from one station to another; they are overall positive, up to 0.7 mm/yr. Since 1990, the trends have switched from positive to negative values. Concerning the possible causes of the observed changes, the similarity between the North Atlantic Oscillation and M2 variations in the North-East Atlantic suggests a possible influence of the large-scale atmospheric circulation on the tide. We discuss a possible underlying mechanism: a different spatial distribution of water heights from one year to another, depending on the low-frequency sea-level pressure patterns, could impact the propagation of the tide in the North Atlantic basin. However, this hypothesis is at present unproven.
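
Removing the 18.6-year nodal modulation, as mentioned above, is commonly done by dividing the observed M2 amplitude by the nodal factor f ≈ 1 − 0.037 cos N, where N is the longitude of the Moon's ascending node. The sketch below applies this standard approximation to synthetic annual amplitudes and fits a linear trend; it is illustrative only and is not the processing chain used in the paper.

```python
import numpy as np

def m2_nodal_factor(year):
    """Approximate M2 nodal factor f = 1 - 0.037*cos(N), with N the longitude
    of the Moon's ascending node (18.61-year cycle), from a standard
    astronomical approximation."""
    t = np.asarray(year, dtype=float) - 2000.0
    n_deg = 125.04 - 19.341 * t          # node regresses ~19.34 deg per year
    return 1.0 - 0.037 * np.cos(np.deg2rad(n_deg))

def nodal_corrected_trend(years, m2_amplitude_mm):
    """Divide annual M2 amplitudes by the nodal factor and fit a linear
    trend; returns the trend in mm/yr and the corrected amplitudes."""
    corrected = np.asarray(m2_amplitude_mm, dtype=float) / m2_nodal_factor(years)
    slope, _intercept = np.polyfit(years, corrected, 1)
    return slope, corrected

# Illustrative use with synthetic annual M2 amplitudes (mm):
years = np.arange(1950, 2019)
true_amp = 2050.0 + 0.5 * (years - 1950)           # underlying 0.5 mm/yr trend
observed = true_amp * m2_nodal_factor(years)       # add the 18.6-year modulation
trend, _ = nodal_corrected_trend(years, observed)
print(f"recovered trend: {trend:.2f} mm/yr")
```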


2021 ◽  
Vol 20 (4) ◽  
pp. 37-52
Author(s):  
Andrey V. Varenov

In China, rock art is spread mainly in the border regions – carvings and engravings in the north of the country and paintings in the south. Before the beginning of the 21st century, research books and albums of petroglyphs were published in four administrative units at provincial level in the north-west of the country: Inner Mongolia, Ningxia, Xinjiang and Qinghai. Petroglyphs of Inner Mongolia were studied and published by Gai Shanlin, Liang Zhenhua and N. Dalengurib. The earliest and the latest books by Gai Shanlin available to us (published in 1985 and 2002 respectively) were entirely devoted to the interpretation of rock carvings and the search for their analogies. All four monographs on Ningxia rock art – by Zhou Xinghua, Li Xiangshi and Zhu Cunshi, Xu Cheng and Wei Zhong – were published almost simultaneously, at the beginning of the 1990s. The ancient rock art of Xinjiang was published in albums by Zhao Yangfeng, Wang Linshan and Wang Bo, and in books by Wang Binghua and Su Beihai. The monograph by Tang Huisheng and Zhang Wenhua was devoted to the description of Qinghai petroglyphs and the problems of their interpretation. The album of photos “The Rock Arts of China” is a kind of guide to the main rock art sites known by 1993 in all the Chinese provinces. Generally, it can be stated that modern Chinese scientific rock art research was born in the first half of the 1980s, when the first articles on rock carvings started to appear in Chinese archaeological periodicals, and flourished in the second half of the 1980s and the 1990s, when quite a number of monographs were published.

