Using deep learning to nowcast the spatial coverage of convection from Himawari-8 satellite data

Author(s):  
Ryan Lagerquist
Jebb Q. Stewart
Imme Ebert-Uphoff
Christina Kumler

Abstract: Predicting the timing and location of thunderstorms (“convection”) allows for preventive actions that can save both lives and property. We have applied U-nets, a type of deep neural network, to forecast convection on a grid at lead times up to 120 minutes. The goal is to make skillful forecasts with only present and past satellite data as predictors. Specifically, predictors are multispectral brightness-temperature images from the Himawari-8 satellite, while targets (ground truth) are provided by weather radars in Taiwan. U-nets are becoming popular in atmospheric science due to their advantages for gridded prediction. Furthermore, we use three novel approaches to advance U-nets in atmospheric science. First, we compare three architectures – vanilla, temporal, and U-net++ – and find that vanilla U-nets are best for this task. Second, we train U-nets with the fractions skill score, which is spatially aware, as the loss function. Third, because we do not have adequate ground truth over the full Himawari-8 domain, we train the U-nets on small radar-centered patches, then apply the trained U-nets to the full domain. We also find that the best predictions are given by U-nets trained with satellite data from multiple lag times, not only the present. We evaluate the U-nets in detail – by time of day, month, and geographic location – and compare them to persistence models. The U-nets outperform persistence at lead times ≥ 60 minutes, and at all lead times the U-nets provide a more realistic climatology than persistence. Our code is available publicly.
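The fractions skill score (FSS) rewards forecasts that place convection in roughly the right neighbourhood rather than at the exact pixel. It can be made differentiable as a loss by computing neighbourhood fractions with average pooling. Below is a minimal PyTorch sketch, not the authors' published (Keras-based) implementation; the window size is a placeholder:

```python
import torch
import torch.nn.functional as F

def fss_loss(pred, target, window=9):
    """1 - FSS: a spatially aware loss for gridded convection forecasts.

    pred, target: (batch, 1, H, W) tensors of forecast probabilities and
    binary radar-derived labels. `window` (grid cells) is a placeholder.
    """
    pad = window // 2
    # Neighbourhood fractions via a box filter (average pooling).
    pf = F.avg_pool2d(pred, window, stride=1, padding=pad)
    po = F.avg_pool2d(target, window, stride=1, padding=pad)
    mse = ((pf - po) ** 2).mean()
    ref = (pf ** 2 + po ** 2).mean()
    return mse / (ref + 1e-8)  # perfect forecast -> 0, no skill -> 1
```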

2020
Vol 12 (15)
pp. 2368
Author(s):
Annett Bartsch
Georg Pointner
Thomas Ingeman-Nielsen
Wenjun Lu

Infrastructure is expanding rapidly in the Arctic due to industrial development. At the same time, climate change impacts are pronounced in the Arctic: ground temperatures are increasing, for example, and coastal erosion is intensifying. A consistent account of the current human footprint is needed in order to evaluate the impact on the environment as well as the risk to infrastructure. Identifying roads and settlements with satellite data is challenging due to the small size of individual features and the low density of clusters. The spatial resolution and spectral characteristics of satellite data are the main issues regarding their separation. The Copernicus Sentinel-1 and -2 missions recently began providing good spatial coverage with comparably fine pixel spacing, starting at 10 m for modes available across the entire Arctic. The purpose of this study was to assess the capabilities of both Sentinel-1 C-band Synthetic Aperture Radar (SAR) and Sentinel-2 multispectral imagery for Arctic-focused mapping. Settings differ across the Arctic (historic settlements versus industrial ones, locations on bedrock versus tundra landscapes), and reference data are scarce and inconsistent. The type of features and the data scarcity demand specific classification approaches. Two machine learning approaches were tested: Gradient Boosting Machines (GBM) and deep learning (DL)-based semantic segmentation. Records for the Alaskan North Slope, Western Greenland, and Svalbard, in addition to high-resolution satellite data, were used for validation and calibration. Deep learning is superior to GBM with respect to user's accuracy; GBM therefore requires comprehensive postprocessing. SAR provides added value in the GBM case: VV polarization benefits road identification, and HH benefits building detection. Unfortunately, the Sentinel-1 acquisition strategy varies across the Arctic, and the majority of the region is covered in VV+VH only. DL is of benefit for road and building detection but misses large proportions of other human-impacted areas, such as the gravel pads that are typical of gas and oil fields. A combination of results from both GBM (Sentinel-1 and -2 combined) and DL (Sentinel-2; Sentinel-1 optional) is therefore suggested for circumpolar mapping.
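A pixelwise GBM classification of the kind described can be sketched with scikit-learn. All arrays below are synthetic placeholders standing in for co-registered Sentinel-1 backscatter and Sentinel-2 band rasters; the hyperparameters are assumptions, not the study's values:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
s1_vv = rng.normal(-12, 3, (64, 64))    # placeholder Sentinel-1 VV backscatter (dB)
s1_vh = rng.normal(-18, 3, (64, 64))    # placeholder Sentinel-1 VH backscatter (dB)
s2 = rng.random((64, 64, 4))            # placeholder Sentinel-2 reflectance bands
labels = rng.integers(0, 2, (64, 64))   # 1 = human-impacted pixel (placeholder)

# One feature row per 10 m pixel: SAR channels plus optical bands.
layers = [s1_vv, s1_vh] + [s2[..., i] for i in range(s2.shape[-1])]
X = np.stack([layer.ravel() for layer in layers], axis=1)
y = labels.ravel()

gbm = GradientBoostingClassifier(n_estimators=200, max_depth=4)
gbm.fit(X, y)
pred_map = gbm.predict(X).reshape(labels.shape)  # would still need postprocessing
```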


Author(s):  
William E. Chapman
Luca Delle Monache
Stefano Alessandrini
Aneesh C. Subramanian
F. Martin Ralph
...  

Abstract: Deep learning (DL) post-processing methods are examined to obtain reliable and accurate probabilistic forecasts from single-member numerical weather predictions of integrated vapor transport (IVT). Using a 34-year reforecast of North American West Coast IVT, based on the Center for Western Weather and Water Extremes' West-WRF mesoscale model, dynamically/statistically derived 0–120-hour probabilistic forecasts of IVT under atmospheric river (AR) conditions are tested. These predictions are compared to the Global Ensemble Forecast System (GEFS) dynamic model and to the GEFS calibrated with a neural network. Additionally, the DL methods are tested against an established, but more rigid, statistical-dynamical ensemble method (the Analog Ensemble). Using the continuous ranked probability skill score and the Brier skill score as verification metrics, the findings show that the DL methods compete with or outperform the calibrated GEFS system at lead times of 0–48 hours and again from 72–120 hours for AR vapor transport events. Additionally, the DL methods generate reliable and skillful probabilistic forecasts. The implications of varying the length of the training dataset are examined, and the results show that the DL methods learn relatively quickly: roughly 10 years of hindcast data are required to compete with the GEFS ensemble.
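The abstract does not state the networks' training objective, but a common choice when post-processing a single-member forecast into a full predictive distribution is the standard closed-form CRPS of a Gaussian. A PyTorch sketch, with all tensor shapes and the Gaussian output head assumed:

```python
import math
import torch

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS of N(mu, sigma^2) against observations y (all tensors)."""
    z = (y - mu) / sigma
    pdf = torch.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)  # standard normal pdf
    cdf = 0.5 * (1 + torch.erf(z / math.sqrt(2.0)))          # standard normal cdf
    return (sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))).mean()

# A network head would output mu and a positive sigma per IVT forecast:
mu, sigma = torch.randn(32), torch.rand(32) + 0.1
y = torch.randn(32)                  # placeholder verifying IVT observations
loss = crps_gaussian(mu, sigma, y)   # minimised during training
```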


2020
Vol 35 (5)
pp. 1845-1863
Author(s):
Shawn L. Handler
Heather D. Reeves
Amy McGovern

Abstract: In this study, a machine learning algorithm for generating a gridded, CONUS-wide probabilistic road temperature forecast is presented. A random forest is used to tie a combination of HRRR model surface variables and information about the geographic location and time of day and year to observed road temperatures. This approach differs from its predecessors in that the forecast is not deterministic (i.e., a specific road temperature), but rather probabilistic, providing a 0%–100% probability that the road temperature is subfreezing. This approach can account for the varying controls on road temperature that are not easily known or accounted for in physical models, such as the amount of traffic, road composition, and differential shading by surrounding buildings and terrain. The algorithm is trained using road temperature observations from one winter season (October 2016–March 2017) and calibrated/evaluated using observations from the following winter season (October 2017–March 2018). Case-study analyses show the algorithm performs well in various scenarios and reliably captures the temporal and spatial evolution of the probability of subfreezing roads. Statistical evaluation of the predicted probabilities shows good skill: the mean area under the receiver operating characteristic curve is 0.96 and the Brier skill score is 0.66 for a 2-h forecast, and these degrade only slightly as lead time increases. Additionally, the algorithm produces well-calibrated probabilities and discriminates consistently between clearly above-freezing and subfreezing environments.
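A probabilistic forecast of this kind falls out naturally from a random forest's class-membership fractions. A minimal scikit-learn sketch, with synthetic placeholders standing in for the HRRR predictors and road observations (the forest hyperparameters are also assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import brier_score_loss, roc_auc_score

rng = np.random.default_rng(0)
# Placeholder predictors: HRRR surface variables plus location/time encodings.
X_train = rng.normal(size=(5000, 10))
y_train = rng.integers(0, 2, 5000)   # 1 = observed subfreezing road (placeholder)
X_val = rng.normal(size=(1000, 10))
y_val = rng.integers(0, 2, 1000)

rf = RandomForestClassifier(n_estimators=200, min_samples_leaf=5)
rf.fit(X_train, y_train)
p_subfreezing = rf.predict_proba(X_val)[:, 1]   # P(road temperature < 0 degC)
print(roc_auc_score(y_val, p_subfreezing),      # discrimination (AUC)
      brier_score_loss(y_val, p_subfreezing))   # calibration + sharpness
```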


2017
Author(s):
Philippe Poulin
Marc-Alexandre Côté
Jean-Christophe Houde
Laurent Petit
Peter F. Neher
...  

Abstract: We show that deep learning techniques can be applied successfully to fiber tractography. Specifically, we use feed-forward and recurrent neural networks to learn the generation process of streamlines directly from diffusion-weighted imaging (DWI) data. Furthermore, we empirically study the behavior of the proposed models on a realistic white matter phantom with known ground truth. We show that their performance is competitive with that of commonly used techniques, even when the models are applied to DWI data unseen at training time. We also show that our models are able to recover high spatial coverage of the ground-truth white matter pathways while better controlling the number of false connections. In fact, our experiments suggest that exploiting past information within a streamline's trajectory during tracking helps predict the following direction.
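The recurrent variant can be pictured as a sequence model that consumes the DWI signal sampled along the streamline so far and emits a unit step direction at each point. A hedged PyTorch sketch; the layer sizes and the cosine-similarity training objective are assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class StreamlineRNN(nn.Module):
    """GRU mapping DWI signal along a streamline to the next step direction."""
    def __init__(self, n_dwi=64, hidden=128):
        super().__init__()
        self.gru = nn.GRU(n_dwi, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)

    def forward(self, x):            # x: (batch, steps, n_dwi)
        h, _ = self.gru(x)
        d = self.head(h)             # one direction per visited point
        return d / d.norm(dim=-1, keepdim=True).clamp(min=1e-8)

model = StreamlineRNN()
dwi_along_track = torch.randn(8, 20, 64)   # placeholder DWI samples
directions = model(dwi_along_track)        # unit vectors, shape (8, 20, 3)
# Training would minimise, e.g., negative cosine similarity with the
# ground-truth step directions of the phantom streamlines.
```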


2021
Vol 11 (1)
Author(s):
Christian Crouzet
Gwangjin Jeong
Rachel H. Chae
Krystal T. LoPresti
Cody E. Dunn
...  

Abstract: Cerebral microhemorrhages (CMHs) are associated with cerebrovascular disease, cognitive impairment, and normal aging. One method to study CMHs is to analyze histological sections (5–40 μm) stained with Prussian blue. Currently, users manually and subjectively identify and quantify Prussian blue-stained regions of interest, which is prone to inter-individual variability and can lead to significant delays in data analysis. To improve this labor-intensive process, we developed and compared three digital pathology approaches to identify and quantify CMHs from Prussian blue-stained brain sections: (1) ratiometric analysis of RGB pixel values, (2) phasor analysis of RGB images, and (3) deep learning using a mask region-based convolutional neural network. We applied these approaches to a preclinical mouse model of inflammation-induced CMHs. One hundred CMHs were imaged using a 20× objective and an RGB color camera. To determine the ground truth, four users independently annotated Prussian blue-labeled CMHs. Compared to this ground truth, the deep learning and ratiometric approaches performed better than the phasor analysis approach. The deep learning approach had the highest precision of the three methods; the ratiometric approach was the most versatile and maintained accuracy, albeit with lower precision. Our data suggest that implementing these methods to analyze CMH images can drastically increase processing speed while maintaining precision and accuracy.
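Of the three approaches, the ratiometric one is the simplest to sketch: flag pixels whose blue channel sufficiently dominates red. The thresholds and pixel pitch below are placeholders, not values from the paper:

```python
import numpy as np

def prussian_blue_mask(rgb, ratio=1.3, min_blue=60):
    """Flag candidate Prussian blue pixels; thresholds are placeholders."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float)
    return (b / (r + 1e-6) > ratio) & (b > min_blue)

rgb = (np.random.default_rng(0).random((512, 512, 3)) * 255).astype(np.uint8)
mask = prussian_blue_mask(rgb)          # placeholder image in, binary mask out
area_um2 = mask.sum() * 0.25            # e.g. 0.5 um pixel pitch (placeholder)
```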


2020
Vol 22 (Supplement_3)
pp. iii359-iii359
Author(s):
Lydia Tam
Edward Lee
Michelle Han
Jason Wright
Leo Chen
...  

Abstract: BACKGROUND: Brain tumors are the most common solid malignancies in childhood, many of which develop in the posterior fossa (PF). Manual tumor measurements are frequently required to optimize registration into surgical navigation systems or for surveillance of nonresectable tumors after therapy. With recent advances in artificial intelligence (AI), automated MRI-based tumor segmentation is now feasible without requiring manual measurements. Our goal was to create a deep learning model for automated PF tumor segmentation that can register into navigation systems and provide volume output. METHODS: 720 pre-surgical MRI scans from five pediatric centers were divided into training, validation, and testing datasets. The study cohort comprised four PF tumor types: medulloblastoma, diffuse midline glioma, ependymoma, and brainstem or cerebellar pilocytic astrocytoma. Manual segmentation of the tumors by an attending neuroradiologist served as “ground truth” labels for model training and evaluation. We used a 2D U-Net, an encoder-decoder convolutional neural network architecture, with a pre-trained ResNet50 encoder. We assessed tumor segmentation accuracy on a held-out test set using the Dice similarity coefficient (0–1) and compared tumor volume calculations between manual and model-derived segmentations using linear regression. RESULTS: Compared to the ground-truth expert human segmentation, the overall Dice score for model accuracy was 0.83 for automatic delineation of the 4 tumor types. CONCLUSIONS: In this multi-institutional study, we present a deep learning algorithm that automatically delineates PF tumors and outputs volumetric information. Our results demonstrate applied AI that is clinically applicable, potentially augmenting radiologists, neuro-oncologists, and neurosurgeons for tumor evaluation, surveillance, and surgical planning.
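A 2D U-Net with a pre-trained ResNet50 encoder can be instantiated in a few lines with the segmentation_models_pytorch package; this is only one way to realize the described architecture, and the input channel count and binary output head below are assumptions, since the abstract does not specify them:

```python
import torch
import segmentation_models_pytorch as smp

# Encoder-decoder U-Net with an ImageNet-pretrained ResNet50 encoder.
model = smp.Unet(encoder_name="resnet50", encoder_weights="imagenet",
                 in_channels=1, classes=1)

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient (0-1) between binarized masks."""
    pred, truth = (pred > 0.5).float(), (truth > 0.5).float()
    inter = (pred * truth).sum()
    return (2 * inter / (pred.sum() + truth.sum() + eps)).item()

x = torch.randn(1, 1, 256, 256)         # placeholder MRI slice
prob = torch.sigmoid(model(x))          # per-pixel tumor probability
tumor_voxels = (prob > 0.5).sum()       # volume follows from voxel spacing
```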


2021
pp. 1-14
Author(s):
Waqas Yousaf
Arif Umar
Syed Hamad Shirazi
Zakir Khan
Imran Razzak
...  

Automatic logo detection and recognition is growing significantly in importance due to the increasing demands of intelligent document analysis and retrieval. The main challenge in logo detection is intra-class variation, which is generated by variation in image quality and by degradation. Misclassification also occurs when a tiny logo sits in a large image among other objects. To address these problems, Patch-CNN is proposed for logo recognition; it is trained on small patches of logos to reduce misclassification. Classification is accomplished by dividing the logo images into small patches, and a threshold is applied to drop non-logo areas according to the ground truth. The architectures of AlexNet and ResNet are also used for logo detection. We propose a segmentation-free architecture for logo detection and recognition. In the literature, region proposal generation has been used to solve logo detection, but such techniques suffer in the case of tiny logos. The proposed CNN is specifically designed to extract detailed features from logo patches. So far, the technique has attained an accuracy of 0.9901 with acceptable training and testing loss on the dataset used in this work.
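The patch-and-threshold step can be sketched as follows; the patch size, stride, and coverage threshold are placeholders, since the paper's exact values are not given in the abstract:

```python
import numpy as np

def logo_patches(image, gt_mask, patch=32, stride=32, thresh=0.2):
    """Slice an image into patches, keeping those whose ground-truth logo
    coverage clears the threshold (all values here are placeholders)."""
    kept = []
    h, w = gt_mask.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            if gt_mask[y:y + patch, x:x + patch].mean() >= thresh:
                kept.append(image[y:y + patch, x:x + patch])
    return np.array(kept)

img = np.zeros((128, 128, 3), np.uint8)           # placeholder document image
gt = np.zeros((128, 128)); gt[40:80, 40:80] = 1   # placeholder logo mask
print(logo_patches(img, gt).shape)                # patches fed to the CNN
```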


Algorithms
2021
Vol 14 (7)
pp. 212
Author(s):
Youssef Skandarani
Pierre-Marc Jodoin
Alain Lalande

Deep learning methods are the de facto solutions to a multitude of medical image analysis tasks. Cardiac MRI segmentation is one such application, which, like many others, requires a large amount of annotated data so that a trained network can generalize well. Unfortunately, the process of having a large number of images manually curated by medical experts is both slow and extremely expensive. In this paper, we set out to explore whether expert knowledge is a strict requirement for the creation of annotated data sets on which machine learning can successfully be trained. To do so, we gauged the performance of three segmentation models, namely U-Net, Attention U-Net, and ENet, trained with different loss functions on expert and non-expert ground truth for cardiac cine-MRI segmentation. Evaluation was done with classic segmentation metrics (Dice index and Hausdorff distance) as well as clinical measurements, such as the ventricular ejection fractions and the myocardial mass. The results reveal that the generalization performance of a segmentation neural network trained on non-expert ground truth is, for all practical purposes, as good as that of one trained on expert ground truth, particularly when the non-expert receives a decent level of training. This highlights an opportunity for the efficient and cost-effective creation of annotations for cardiac data sets.
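Both segmentation metrics used here are standard and straightforward to compute from binary masks; a sketch with NumPy and SciPy, where the toy masks stand in for an expert and a non-expert segmentation:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b, eps=1e-8):
    """Dice index: overlap of two binary masks, 1 = perfect agreement."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * (a & b).sum() / (a.sum() + b.sum() + eps)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between mask boundaries (in pixels)."""
    pa, pb = np.argwhere(a), np.argwhere(b)   # foreground pixel coordinates
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

expert = np.zeros((64, 64), bool); expert[20:40, 20:40] = True
nonexpert = np.zeros((64, 64), bool); nonexpert[22:42, 21:41] = True
print(dice(expert, nonexpert), hausdorff(expert, nonexpert))
```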

