Detecting Synoptic Patterns related to Freezing Rain in Montréal using Deep Learning

Author(s):  
Magdalena Mittermeier ◽  
Émilie Bresson ◽  
Dominique Paquin ◽  
Ralf Ludwig

<p>Climate change is altering the Earth’s atmospheric circulation and the dynamic drivers of extreme events. Extreme weather events pose a great potential risk to infrastructure and human security. In Southern Québec, freezing rain is among the rare, yet high-impact events that remain particularly difficult to detect, describe or predict.</p><p>Large climate model ensembles are instrumental for a profound analysis of extreme events, as they can provide a sufficient number of model years. Owing to the physical nature and the high spatiotemporal resolution of regional climate models (RCMs), large ensembles can not only be employed to investigate the intensity and frequency of extreme events, but also to analyze the synoptic drivers of freezing rain events and to explore the respective dynamic alterations under climate change conditions. However, several challenges remain for the analysis of large RCM ensembles, chiefly the high computational costs and the resulting data volume, which require novel statistical methods for efficient screening and analysis, such as deep neural networks (DNNs). Further, to date, only the Canadian Regional Climate Model version 5 (CRCM5) simulates freezing rain in-line using a diagnostic method. For the analysis of freezing rain in other RCMs, computationally intensive, off-line diagnostic schemes have to be applied to archived data. Another approach to freezing rain analysis focuses on the relation between the synoptic drivers at 500 hPa and at sea level pressure, respectively, and the occurrence of freezing rain in the study area of Montréal.</p><p>Here, we explore the capability of training a deep neural network to detect the synoptic patterns associated with the occurrence of freezing rain in Montréal. This climate pattern detection task is a visual image classification problem that is addressed with supervised machine learning. Labels for the training set are derived from CRCM5 in-line simulations of freezing rain. 
This study aims to provide a trained network that can be applied to large multi-model ensembles over the North American domain of the Coordinated Regional Climate Downscaling Experiment (CORDEX) in order to efficiently filter the climate datasets for the current and future large-scale drivers of freezing rain.</p><p>We present the setup of the deep learning approach, including the network architecture, the training set statistics, and the optimization and regularization methods. Additionally, we present the classification results of the deep neural network in the form of a single-number evaluation metric as well as confusion matrices. Furthermore, we show analyses of our training set regarding false positives and false negatives.</p>
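The evaluation tools the abstract names (a confusion matrix plus a single-number metric) can be sketched for a generic classifier. This is a minimal illustration, not the study's evaluation code, and the choice of overall accuracy as the single-number metric is an assumption; the authors may have used another summary such as F1.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Count matrix: rows index the true class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def accuracy(cm):
    """One possible single-number metric: fraction of correct predictions."""
    return np.trace(cm) / cm.sum()
```

For example, `confusion_matrix([0, 0, 1, 1], [0, 1, 1, 1], 2)` yields `[[1, 1], [0, 2]]`, i.e. one false positive for class 1, and an accuracy of 0.75.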

2021 ◽  
Author(s):  
Magdalena Mittermeier ◽  
Émilie Bresson ◽  
Dominique Paquin ◽  
Ralf Ludwig

<p>Climate change is altering the Earth’s atmospheric circulation and the dynamic drivers of extreme events. Extreme weather events pose a great potential risk to infrastructure and human security. In Montréal (Québec, Canada), long-duration mixed precipitation events (freezing rain and/or ice pellets) are high-impact cold-season hazards, and an understanding of how climate change alters their occurrence is of high societal interest.</p><p>Here, we introduce a two-stage deep learning approach that uses the synoptic-scale drivers of mixed precipitation to identify these extreme events in archived climate model data. The approach is designed for application to regional climate model (RCM) data over the Montréal area. The dominant dynamic mechanism leading to mixed precipitation in Montréal is pressure-driven channeling of winds along the St. Lawrence river valley. The identification of the synoptic-scale pressure pattern related to pressure-driven channeling is a visual image classification task that is addressed with supervised machine learning. A convolutional neural network (CNN) is trained on the classification of the synoptic-scale pressure patterns using a large training database derived from an ensemble of the Canadian Regional Climate Model version 5 (CRCM5). The CRCM5 is, to our knowledge, the only RCM available so far that employs the Bourgouin diagnostic method to simulate mixed precipitation in-line, and it thus delivers training examples and labels for this supervised classification task.</p><p>The CNN correctly identifies 90 % of the Bourgouin mixed precipitation cases in the test set. The weak point of the approach is a high type I error rate, which is addressed in a second stage by applying a temperature condition. The evaluation on a CRCM5 run driven by ERA-Interim reanalysis reveals a still-low precision of 21 % and thus a Matthews correlation coefficient of 0.39. 
The deep learning approach can be applied to ensembles of regional climate models on the North American grid of the Coordinated Regional Downscaling Experiment (CORDEX-NA).</p>
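The precision and Matthews correlation coefficient quoted in the abstract are both standard functions of the four counts of a binary confusion matrix. A minimal sketch (the counts in the usage note are made up for illustration, not taken from the study):

```python
import math

def precision(tp, fp):
    """Fraction of predicted positives that are true positives."""
    return tp / (tp + fp)

def matthews_cc(tp, fp, fn, tn):
    """Matthews correlation coefficient from binary confusion-matrix counts.
    Ranges from -1 (total disagreement) to +1 (perfect prediction)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0  # all-one-class edge case
```

With hypothetical counts of 21 true positives and 79 false positives, `precision(21, 79)` gives the 21 % figure; the MCC additionally penalizes false negatives and rewards true negatives, which is why it is a stricter summary than precision alone.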


2020 ◽  
Author(s):  
Jason A. Lowe ◽  
Carol McSweeney ◽  
Chris Hewitt

<p>There is clear evidence that, even with the most favourable emission pathways over the coming decades, society will need to adapt to the impacts of climate variability and change. To do this, regional, national and local actors need up-to-date information on the changing climate, with clear accompanying detail on the robustness of that information. This needs to be communicated to both public and private sector organisations, ideally as part of a process of co-developing solutions.</p><p>EUCP is an H2020 programme that began in December 2017 with the aim of researching and testing the provision of improved climate predictions and projections for Europe for the next 40+ years, drawing on the expertise of researchers from a number of major climate research institutes across Europe. It is also engaging with users of climate change information through a multi-user forum (MUF) to ensure that what we learn matches the needs of the people who need it for decision making and planning.</p><p>The first big issue that EUCP seeks to address is how to make better use of ensembles of climate model projections, moving beyond the one-model-one-vote philosophy. Here, the aim is to better understand how model ensembles might be constrained or sub-selected, and how multiple strands of information might be combined into improved climate change narratives or storylines. The second area where EUCP is making progress is the use of very high-resolution regional climate simulations that are capable of resolving aspects of atmospheric convection. Present-day and future simulations from a new generation of regional models are being analysed in EUCP and will be used in a number of relevant case studies. The third issue that EUCP will consider is how to make future simulations more seamless across the time scales that are most relevant to user decision making. 
This includes generating a better understanding of predictability over time and its sources in initialised forecasts, and also how to transition from the initialised forecasts to longer-term, boundary-forced climate projections.</p><p>This presentation will provide an overview of the challenges being addressed by EUCP and the approaches the project is using.</p>


2021 ◽  
Author(s):  
Daichi Kitaguchi ◽  
Toru Fujino ◽  
Nobuyoshi Takeshita ◽  
Hiro Hasegawa ◽  
Kensaku Mori ◽  
...  

Abstract. Clarifying the scalability of deep-learning-based surgical instrument segmentation networks in diverse surgical environments is important in recognizing the challenges of overfitting in surgical device development. This study comprehensively evaluated deep neural network scalability for surgical instrument segmentation, using 5238 images randomly extracted from 128 intraoperative videos. The video dataset contained 112 laparoscopic colorectal resection, 5 laparoscopic distal gastrectomy, 5 laparoscopic cholecystectomy, and 6 laparoscopic partial hepatectomy cases. Deep-learning-based surgical instrument segmentation was performed for test sets with (1) the same conditions as the training set; (2) the same recognition target surgical instrument and surgery type but different laparoscopic recording systems; (3) the same laparoscopic recording system and surgery type but slightly different recognition target laparoscopic surgical forceps; (4) the same laparoscopic recording system and recognition target surgical instrument but different surgery types. The mean average precision and mean intersection over union for test sets 1, 2, 3, and 4 were 0.941 and 0.887, 0.866 and 0.671, 0.772 and 0.676, and 0.588 and 0.395, respectively. Recognition accuracy therefore decreased even under slightly different conditions. To enhance the generalization of deep neural networks in surgery, it is crucial to construct a training set that covers diverse surgical environments under real-world conditions. Trial Registration Number: 2020–315; date of registration: October 5, 2020.
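The intersection-over-union metric reported per test set above can be computed for a single binary segmentation mask as follows. This is a generic sketch of the standard definition, not the study's evaluation pipeline; the mean IoU they report would average this quantity over images (and classes).

```python
import numpy as np

def iou(pred, target):
    """Intersection over union of two boolean segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0  # two empty masks agree perfectly
```

For example, a predicted mask that covers the true instrument pixels plus an equal number of spurious pixels scores 0.5, which makes the drop from 0.887 to 0.395 across test sets a substantial loss of overlap.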


Author(s):  
Seung-Geon Lee ◽  
Jaedeok Kim ◽  
Hyun-Joo Jung ◽  
Yoonsuck Choe

Estimating the relative importance of each sample in a training set has important practical and theoretical value, such as in importance sampling or curriculum learning. This kind of focus on individual samples invokes the concept of sample-wise learnability: how easy is it to correctly learn each sample (cf. PAC learnability)? In this paper, we approach the sample-wise learnability problem within a deep learning context. We propose a measure of the learnability of a sample with a given deep neural network (DNN) model. The basic idea is to train the given model on the training set and, for each sample, aggregate the hits and misses over all training epochs. Our experiments show that the sample-wise learnability measure collected this way is highly linearly correlated across different DNN models (ResNet-20, VGG-16, and MobileNet), suggesting that such a measure can provide deep general insights into the data’s properties. We expect our method to help develop better curricula for training, and to help us better understand the data itself.
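The aggregation described above — tallying hits and misses for each sample across the training epochs — might be sketched as follows. This is a minimal interpretation (fraction of epochs in which a sample was classified correctly); the paper's exact measure may weight or normalize differently.

```python
import numpy as np

def samplewise_learnability(correct_history):
    """Per-sample learnability estimate.

    correct_history: boolean array of shape (n_epochs, n_samples);
    True where the model classified that sample correctly after that epoch.
    Returns the fraction of epochs each sample was learned (higher = easier).
    """
    return np.asarray(correct_history, dtype=float).mean(axis=0)
```

A sample that is correct from epoch 1 onward scores near 1.0, while a sample the model keeps misclassifying scores near 0.0; comparing these vectors across architectures is what the reported cross-model correlation measures.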


2017 ◽  
Vol 10 (5) ◽  
pp. 1849-1872 ◽  
Author(s):  
Benoit P. Guillod ◽  
Richard G. Jones ◽  
Andy Bowery ◽  
Karsten Haustein ◽  
Neil R. Massey ◽  
...  

Abstract. Extreme weather events can have large impacts on society and, in many regions, are expected to change in frequency and intensity with climate change. Owing to the relatively short observational record, climate models are useful tools as they allow for generation of a larger sample of extreme events, to attribute recent events to anthropogenic climate change, and to project changes in such events into the future. The modelling system known as weather@home, consisting of a global climate model (GCM) with a nested regional climate model (RCM) and driven by sea surface temperatures, allows one to generate a very large ensemble with the help of volunteer distributed computing. This is a key tool for understanding many aspects of extreme events. Here, a new version of the weather@home system (weather@home 2) with a higher-resolution RCM over Europe is documented and a broad validation of the climate is performed. The new model includes a more recent land-surface scheme in both GCM and RCM, where subgrid-scale land-surface heterogeneity is newly represented using tiles, and an increase in RCM resolution from 50 to 25 km. The GCM performs similarly to the previous version, with some improvements in the representation of mean climate. The European RCM temperature biases are overall reduced, in particular the warm bias over eastern Europe, but large biases remain. Precipitation is improved over the Alps in summer, with mixed changes in other regions and seasons. The model is shown to represent the main classes of regional extreme events reasonably well and shows a good sensitivity to its drivers. In particular, given the improvements in this version of the weather@home system, it is likely that more reliable impact statements can be made, especially at more localized scales.


2016 ◽  
Author(s):  
Benoit P. Guillod ◽  
Andy Bowery ◽  
Karsten Haustein ◽  
Richard G. Jones ◽  
Neil R. Massey ◽  
...  

Abstract. Extreme weather events can have large impacts on society and, in many regions, are expected to change in frequency and intensity with climate change. Owing to the relatively short observational record, climate models are useful tools, as they allow for the generation of a larger sample of extreme events, the attribution of recent events to anthropogenic climate change, and the projection of changes in such events into the future. The modelling system known as weather@home, consisting of a global climate model (GCM) with a nested regional climate model (RCM) and driven by sea surface temperatures, allows very large ensembles to be generated with the help of volunteer distributed computing. This is a key tool for understanding many aspects of extreme events. Here, a new version of the weather@home system (weather@home 2) with a higher-resolution RCM over Europe is documented and a broad validation of the climate is performed. The new model includes a more recent land-surface scheme in both GCM and RCM, where subgrid-scale land-surface heterogeneity is newly represented using tiles, and an increase in RCM resolution from 50 km to 25 km. The GCM performs similarly to the previous version, with some improvements in the representation of mean climate. The European RCM biases are overall reduced, in particular the warm and dry bias over eastern Europe, but large biases remain. The model is shown to represent the main classes of regional extreme events reasonably well and shows a good sensitivity to its drivers. In particular, given the improvements in this version of the weather@home system, it is likely that more reliable impact statements can be made, especially at more localized scales.


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Zhili Wang ◽  
Lei Lin ◽  
Yangyang Xu ◽  
Huizheng Che ◽  
Xiaoye Zhang ◽  
...  

Abstract. Anthropogenic aerosol (AA) forcing has been shown to be a critical driver of climate change over Asia since the mid-20th century. Here we show that almost all Coupled Model Intercomparison Project Phase 6 (CMIP6) models fail to capture the observed dipole pattern of aerosol optical depth (AOD) trends over Asia during 2006–2014, the last decade of the CMIP6 historical simulation, due to an opposite trend over eastern China compared with observations. The incorrect AOD trend over China is attributed to problematic AA emissions adopted by CMIP6. There are obvious differences in simulated regional aerosol radiative forcing and temperature responses over Asia when two different emissions inventories (one adopted by CMIP6; the other, a more trustworthy inventory, from Peking University) are used separately to drive a global aerosol–climate model. We further show that some widely adopted CMIP6 pathways (after 2015) also significantly underestimate the more recent decline in AA emissions over China. These flaws may introduce errors into CMIP6-based regional climate attribution over Asia for the last two decades and projections for the next few decades, which were anticipated to inform a wide range of impact analyses.

