Calibration of solar radiation ensemble forecasts using convolutional neural network

Author(s):  
Florian Dupuy ◽  
Yen-Sen Lu ◽  
Garrett Good ◽  
Michaël Zamo

Ensemble forecast approaches have become state-of-the-art for the quantification of weather forecast uncertainty. However, ensemble forecasts from numerical weather prediction models (NWPs) still tend to be biased and underdispersed, hence justifying the use of statistical post-processing techniques to improve forecast skill.

In this study, ensemble forecasts are post-processed using a convolutional neural network (CNN). CNNs are the most popular machine learning tool for dealing with images; in our case, they allow us to integrate information from the spatial patterns contained in NWP outputs.

We focus on solar radiation forecasts for 48 hours ahead over Europe from the 35-member ARPEGE (Météo-France global NWP) ensemble and a 512-member WRF (Weather Research and Forecasting) ensemble. We use a U-Net (a particular type of CNN) designed to produce a probabilistic forecast (quantiles), using as ground truth the CAMS (Copernicus Atmosphere Monitoring System) radiation service dataset with a spatial resolution of 0.2°.
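As a rough illustration of this setup, the sketch below (PyTorch) pairs a one-level U-Net-style encoder/decoder with a pinball (quantile) loss. The channel counts, quantile set, and tensor shapes are our assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch: quantile post-processing of ensemble fields with a U-Net-style CNN.
# Architecture details and variable names are illustrative assumptions.
import torch
import torch.nn as nn

QUANTILES = [0.1, 0.5, 0.9]  # assumed set of output quantiles

class TinyUNet(nn.Module):
    """One-level encoder/decoder; real U-Nets stack several such levels."""
    def __init__(self, in_ch, n_quantiles):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, n_quantiles, 1))
    def forward(self, x):
        e = self.enc(x)
        m = self.up(self.mid(self.down(e)))
        return self.dec(torch.cat([e, m], dim=1))  # skip connection

def pinball_loss(pred, target, quantiles=QUANTILES):
    """Quantile (pinball) loss averaged over quantiles and grid points."""
    losses = []
    for i, q in enumerate(quantiles):
        err = target - pred[:, i]
        losses.append(torch.maximum(q * err, (q - 1) * err).mean())
    return torch.stack(losses).mean()

# Example: 8 ensemble-derived predictor fields on a 64x64 grid.
x = torch.randn(4, 8, 64, 64)   # batch of NWP predictor fields
y = torch.rand(4, 64, 64)       # radiation "ground truth" field
model = TinyUNet(in_ch=8, n_quantiles=len(QUANTILES))
loss = pinball_loss(model(x), y)
loss.backward()
```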

2020 ◽  
Author(s):  
Stephan Hemri ◽  
Christoph Spirig ◽  
Jonas Bhend ◽  
Lionel Moret ◽  
Mark Liniger

Over the last decades, ensemble approaches have become state-of-the-art for the quantification of weather forecast uncertainty. Despite ongoing improvements, ensemble forecasts issued by numerical weather prediction models (NWPs) still tend to be biased and underdispersed. Statistical postprocessing has proven to be an appropriate tool to correct biases and underdispersion, and hence to improve forecast skill. Here we focus on multi-model postprocessing of cloud cover forecasts in Switzerland. In order to issue postprocessed forecasts at any point in space, ensemble model output statistics (EMOS) models are trained and verified against EUMETSAT CM SAF satellite data with a spatial resolution of around 2 km over Switzerland. Training with a minimal record length of the past 45 days of forecast and observation data already produced an EMOS model improving upon direct model output (DMO). Training on a 3-year record of the corresponding season further improved the performance. We evaluate how well postprocessing corrects the most severe forecast errors, such as missed fog and low-level stratus in winter. For such conditions, postprocessing of cloud cover benefits strongly from incorporating additional predictors into the postprocessing suite. A quasi-operational prototype has been set up and was used to explore meteogram-like visualizations of probabilistic cloud cover forecasts.
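For readers unfamiliar with EMOS, a minimal sketch is given below: a Gaussian predictive distribution whose mean and variance are affine in the ensemble mean and variance, fitted by CRPS minimization in the spirit of the original EMOS formulation of Gneiting et al. (2005). The Gaussian form and variable names are illustrative simplifications; an operational cloud-cover EMOS would use a distribution adapted to a bounded variable.

```python
# Minimal sketch of Gaussian EMOS fitted by CRPS minimization.
# The Gaussian predictive law and toy data are assumptions for illustration.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def crps_normal(y, mu, sigma):
    """Closed-form CRPS of a normal predictive distribution."""
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

def fit_emos(ens_mean, ens_var, obs):
    """Predictive law: N(a + b*mean, c + d*var); returns fitted (a, b, c, d)."""
    def objective(p):
        a, b, c, d = p
        sigma = np.sqrt(np.maximum(c + d * ens_var, 1e-6))  # keep variance positive
        return crps_normal(obs, a + b * ens_mean, sigma).mean()
    return minimize(objective, x0=[0.0, 1.0, 1.0, 1.0], method="Nelder-Mead").x

# Toy training data standing in for a 45-day rolling window of forecasts/observations.
rng = np.random.default_rng(0)
ens_mean = rng.uniform(0, 100, 500)        # ensemble mean cloud cover (%)
ens_var = rng.uniform(1, 50, 500)          # ensemble variance
obs = ens_mean + rng.normal(5, 10, 500)    # biased, noisy "observations"
a, b, c, d = fit_emos(ens_mean, ens_var, obs)
```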


Author(s):  
Rochelle P. Worsnop ◽  
Michael Scheuerer ◽  
Francesca Di Giuseppe ◽  
Christopher Barnard ◽  
Thomas M. Hamill ◽  
...  

Abstract Wildfire guidance two weeks ahead is needed for strategic planning of fire mitigation and suppression. However, fire forecasts driven by meteorological forecasts from numerical weather prediction models inherently suffer from systematic biases. This study uses several statistical post-processing methods to correct these biases and increase the skill of ensemble fire forecasts over the contiguous United States 8–14 days ahead. We train and validate the post-processing models on 20 years of European Centre for Medium-Range Weather Forecasts (ECMWF) reforecasts and ERA5 reanalysis data for 11 meteorological variables related to fire, such as surface temperature, wind speed, relative humidity, cloud cover, and precipitation. The calibrated variables are then input to the Global ECMWF Fire Forecast (GEFF) system to produce probabilistic forecasts of daily fire indicators, which characterize the relationships between fuels, weather, and topography. Skill scores show that the post-processed forecasts overall have greater positive skill at Days 8–14 relative to raw and climatological forecasts. It is shown that the post-processed forecasts are more reliable at predicting above- and below-normal probabilities of various fire indicators than the raw forecasts, and that the greatest skill for Days 8–14 is achieved by aggregating forecast days together.
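The abstract does not name the calibration methods; as one generic example of correcting a forecast variable against a reanalysis reference, the sketch below implements empirical quantile mapping. It illustrates the general reforecast-based calibration idea, not necessarily the methods used in the study.

```python
# Illustrative sketch of empirical quantile mapping against a reanalysis
# climatology. This is a generic approach, not the study's specific method.
import numpy as np

def quantile_map(forecasts_train, reference_train, forecasts_new, n_q=99):
    """Map new forecasts through the training-period quantile correspondence."""
    probs = np.linspace(0.01, 0.99, n_q)
    fc_q = np.quantile(forecasts_train, probs)   # forecast climatology (e.g. reforecasts)
    ref_q = np.quantile(reference_train, probs)  # reference climatology (e.g. ERA5)
    return np.interp(forecasts_new, fc_q, ref_q) # interpolate between mapped quantiles

rng = np.random.default_rng(1)
fc_train = rng.gamma(2.0, 2.0, 5000) + 1.5       # biased reforecast sample
ref_train = rng.gamma(2.0, 2.0, 5000)            # reanalysis-like reference sample
calibrated = quantile_map(fc_train, ref_train, rng.gamma(2.0, 2.0, 10) + 1.5)
```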


Author(s):  
Florian Dupuy ◽  
Olivier Mestre ◽  
Mathieu Serrurier ◽  
Valentin Kivachuk Burdá ◽  
Michaël Zamo ◽  
...  

Abstract Cloud cover provides crucial information for many applications, such as planning land observation missions from space. It nevertheless remains a challenging variable to forecast, and Numerical Weather Prediction (NWP) models suffer from significant biases, hence justifying the use of statistical post-processing techniques. In this study, ARPEGE (Météo-France global NWP) cloud cover is post-processed using a convolutional neural network (CNN). CNNs are the most popular machine learning tool for dealing with images; in our case, they allow the integration of spatial information contained in NWP outputs. We use a gridded cloud cover product derived from satellite observations over Europe as ground truth, and the predictors are spatial fields of various variables produced by ARPEGE at the corresponding lead time. We show that a simple U-Net architecture (a particular type of CNN) produces significant improvements over Europe. Moreover, the U-Net outclasses more traditional machine learning methods used operationally, such as random forests and logistic quantile regression. When using a large number of predictors, a first step toward interpretation is to rank the predictors by importance. Traditional ranking methods (permutation importance, sequential selection, etc.) require substantial computational resources. We introduce a weighting predictor layer prior to the traditional U-Net architecture in order to produce such a ranking. The small number of additional weights to train (equal to the number of predictors) does not increase the computational time, a major advantage over traditional methods.
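A minimal sketch of such a weighting layer is given below (PyTorch): one trainable scalar per predictor channel, applied before the U-Net, whose trained magnitudes yield the importance ranking. The implementation details are our assumptions, not the authors' exact code.

```python
# Sketch of a per-predictor weighting layer placed in front of a U-Net:
# one trainable scalar per input channel, used as an importance score.
import torch
import torch.nn as nn

class PredictorWeighting(nn.Module):
    """Multiplies each input channel (predictor field) by a trainable weight."""
    def __init__(self, n_predictors):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(n_predictors))
    def forward(self, x):                    # x: (batch, predictors, H, W)
        return x * self.weights.view(1, -1, 1, 1)

n_predictors = 12
weighting = PredictorWeighting(n_predictors)
# unet = ...  # any U-Net taking n_predictors input channels
x = torch.randn(2, n_predictors, 64, 64)
weighted = weighting(x)                      # feed this tensor to the U-Net
# After joint training, rank predictors by the magnitude of their weights:
ranking = torch.argsort(weighting.weights.abs(), descending=True)
```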


2020 ◽  
Author(s):  
Sam Allen ◽  
Chris Ferro ◽  
Frank Kwasniok

Raw output from deterministic numerical weather prediction models is typically subject to systematic biases. Although ensemble forecasts provide invaluable information regarding the uncertainty in a prediction, they themselves often misrepresent the weather that occurs. Given their widespread use, the need for high-quality wind speed forecasts is well documented. Several statistical approaches have therefore been proposed to recalibrate ensembles of wind speed forecasts, including a heteroscedastic censored regression approach. An extension to this method that utilises the prevailing atmospheric flow is implemented here in a quasigeostrophic simulation study and on reforecast data. It is hoped that this regime-dependent framework can alleviate errors owing to changes in the synoptic-scale atmospheric state. When the wind speed strongly depends on the underlying weather regime, the resulting forecasts have the potential to provide substantial improvements in skill upon conventional post-processing techniques. This is particularly pertinent at longer lead times, where there is more improvement to be gained upon current methods, and in weather regimes associated with wind speeds that differ greatly from climatology. In order to realise this potential, however, an accurate prediction of the future atmospheric regime is required.
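A bare-bones sketch of the baseline technique, heteroscedastic regression with a Gaussian distribution censored at zero wind speed and fitted by maximum likelihood, is shown below; a regime-dependent variant would fit separate coefficients per atmospheric regime. The exact parameterization is our assumption.

```python
# Sketch of heteroscedastic zero-censored Gaussian regression for wind speed.
# One common form of "censored regression"; details are assumptions.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_lik(params, ens_mean, ens_std, obs):
    a, b, c, d = params
    mu = a + b * ens_mean                       # location from ensemble mean
    sigma = np.maximum(c + d * ens_std, 1e-6)   # scale from ensemble spread
    censored = obs <= 0.0
    ll = np.where(censored,
                  norm.logcdf(-mu / sigma),     # probability mass at zero
                  norm.logpdf(obs, mu, sigma))  # density above zero
    return -ll.sum()

rng = np.random.default_rng(2)
ens_mean = rng.uniform(0, 15, 1000)
ens_std = rng.uniform(0.5, 3, 1000)
obs = np.maximum(ens_mean + rng.normal(0, ens_std), 0)   # censor at zero
res = minimize(neg_log_lik, x0=[0.0, 1.0, 0.5, 0.5],
               args=(ens_mean, ens_std, obs), method="Nelder-Mead")
```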


Author(s):  
Liang Kim Meng ◽  
Azira Khalil ◽  
Muhamad Hanif Ahmad Nizar ◽  
Maryam Kamarun Nisham ◽  
Belinda Pingguan-Murphy ◽  
...  

Background: Bone Age Assessment (BAA) refers to a clinical procedure that aims to identify a discrepancy between the biological and chronological age of an individual by assessing bone age growth. Currently, there are two main methods of performing BAA, known as the Greulich-Pyle and Tanner-Whitehouse techniques. Both techniques involve a manual and qualitative assessment of hand and wrist radiographs, resulting in intra- and inter-operator variability and a time-consuming process. Automatic segmentation can be applied to the radiographs, providing the physician with a more accurate delineation of the carpal bones and accurate quantitative analysis. Methods: In this study, we propose an image feature extraction technique based on image segmentation with a fully convolutional neural network with eight-pixel stride (FCN-8). A total of 290 radiographic images of female and male subjects aged 0 to 18 were manually segmented and used to train the FCN-8. Results and Conclusion: The results exhibit a high training accuracy of 99.68% and a loss of 0.008619 after 50 epochs of training. The experiments compared 58 images against gold-standard ground truth images. The accuracy of our fully automated segmentation technique is 0.78 ± 0.06, 1.56 ± 0.30 mm, and 98.02% in terms of Dice coefficient, Hausdorff distance, and overall qualitative carpal recognition accuracy, respectively.
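For reference, the sketch below computes the two reported segmentation metrics, Dice coefficient and symmetric Hausdorff distance, for a pair of binary masks on toy data (the Hausdorff distance here is in pixels, whereas the study reports millimetres).

```python
# Sketch of the Dice coefficient and symmetric Hausdorff distance between
# a predicted and a ground-truth binary segmentation mask.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, truth):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def hausdorff(pred, truth):
    """Symmetric Hausdorff distance between the masks' pixel point sets."""
    p = np.argwhere(pred)
    t = np.argwhere(truth)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])

pred = np.zeros((64, 64), bool); pred[10:30, 10:30] = True
truth = np.zeros((64, 64), bool); truth[12:32, 12:32] = True
print(dice(pred, truth), hausdorff(pred, truth))
```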


2021 ◽  
Vol 18 (1) ◽  
pp. 172988142199332
Author(s):  
Xintao Ding ◽  
Boquan Li ◽  
Jinbao Wang

Indoor object detection is a demanding and important task for robot applications. Object knowledge, such as two-dimensional (2D) shape and depth information, may be helpful for detection. In this article, we focus on region-based convolutional neural network (CNN) detectors and propose a geometric-property-based Faster R-CNN method (GP-Faster) for indoor object detection. GP-Faster incorporates geometric properties into Faster R-CNN to improve detection performance. In detail, we first use mesh grids that are the intersections of direct and inverse proportion functions to generate appropriate anchors for indoor objects. After the anchors are regressed to the regions of interest produced by a region proposal network (RPN-RoIs), we then use 2D geometric constraints to refine the RPN-RoIs, in which the 2D constraint of each class is a convex hull region enclosing the width and height coordinates of the ground-truth boxes on the training set. Comparison experiments are conducted on two indoor datasets, SUN2012 and NYUv2. Since depth information is available in NYUv2, we incorporate depth constraints into GP-Faster and propose a 3D geometric-property-based Faster R-CNN (DGP-Faster) on NYUv2. The experimental results show that both GP-Faster and DGP-Faster improve mean average precision.
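Our reading of the mesh-grid construction is sketched below: anchor shapes (w, h) lie at the intersections of direct proportion lines h = k·w (fixed aspect ratios) and inverse proportion curves h = c/w (fixed areas). This interpretation and the specific k and c values are assumptions on our part.

```python
# Sketch of anchor (w, h) generation at intersections of h = k*w (direct
# proportion) and h = c/w (inverse proportion). Values are illustrative.
import numpy as np

def mesh_grid_anchors(ratios, areas):
    """Solve h = k*w and h = c/w jointly: w = sqrt(c/k), h = sqrt(c*k)."""
    anchors = []
    for k in ratios:        # slope of the direct proportion function
        for c in areas:     # constant of the inverse proportion function
            anchors.append((np.sqrt(c / k), np.sqrt(c * k)))
    return np.array(anchors)

anchors = mesh_grid_anchors(ratios=[0.5, 1.0, 2.0],
                            areas=[32**2, 64**2, 128**2])
print(anchors)  # 9 candidate (w, h) pairs for RPN anchor boxes
```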


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Peter M. Maloca ◽  
Philipp L. Müller ◽  
Aaron Y. Lee ◽  
Adnan Tufail ◽  
Konstantinos Balaskas ◽  
...  

Abstract Machine learning has greatly facilitated the analysis of medical data, while its internal operations usually remain opaque. To better comprehend these opaque procedures, a convolutional neural network for optical coherence tomography image segmentation was enhanced with a Traceable Relevance Explainability (T-REX) technique. The proposed application was based on three components: ground truth generation by multiple graders, calculation of Hamming distances among graders and the machine learning algorithm, and a smart data visualization (‘neural recording’). An overall average variability of 1.75% between the human graders and the algorithm was found, slightly lower than the 2.02% among human graders. The ambiguity in the ground truth had a noteworthy impact on machine learning results, which could be visualized. The convolutional neural network balanced between graders and allowed for modifiable predictions dependent on the compartment. Using the proposed T-REX setup, machine learning processes could be rendered more transparent and understandable, possibly leading to optimized applications.
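The sketch below illustrates the pixel-wise (normalized) Hamming distance used to compare segmentations between graders and the algorithm, on toy label maps; the label count and disagreement simulation are assumptions.

```python
# Sketch of a normalized Hamming distance between two segmentation label maps:
# the fraction of pixels on which the two segmentations disagree.
import numpy as np

def hamming(seg_a, seg_b):
    """Fraction of pixels where two label maps disagree."""
    return np.mean(seg_a != seg_b)

rng = np.random.default_rng(3)
grader1 = rng.integers(0, 3, (128, 128))   # e.g. three assumed OCT compartment labels
grader2 = grader1.copy()
flip = rng.random((128, 128)) < 0.02       # simulate ~2% disagreement
grader2[flip] = (grader2[flip] + 1) % 3
print(f"{100 * hamming(grader1, grader2):.2f}% disagreement")
```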


2020 ◽  
Author(s):  
Florian Dupuy ◽  
Olivier Mestre ◽  
Léo Pfitzner

Cloud cover is crucial information for many applications, such as planning land observation missions from space. However, cloud cover remains a challenging variable to forecast, and Numerical Weather Prediction (NWP) models suffer from significant biases, hence justifying the use of statistical post-processing techniques. In our application, the ground truth is a gridded cloud cover product derived from satellite observations over Europe, and the predictors are spatial fields of various variables produced by ARPEGE (Météo-France global NWP) at the corresponding lead time.

In this study, ARPEGE cloud cover is post-processed using a convolutional neural network (CNN). CNNs are the most popular machine learning tool for dealing with images; in our case, they allow us to integrate the spatial information contained in NWP outputs. We show that a simple U-Net architecture produces significant improvements over Europe. Compared to the raw ARPEGE forecasts, the MAE drops from 25.1% to 17.8% and the RMSE decreases from 37.0% to 31.6%. Considering specific needs for earth observation, special interest was put on forecasts with low cloud cover conditions (< 10%). For this particular nebulosity class, we show that the hit rate jumps from 40.6 to 70.7 (the order of magnitude of what can be achieved using classical machine learning algorithms such as random forests) while the false alarm rate decreases from 38.2 to 29.9. This is an excellent result, since improving hit rates by means of random forests usually also results in a slight increase in false alarms.
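For the categorical scores quoted for the low-cloud class, the sketch below computes hit rate and false alarm ratio from a 2x2 contingency table. The exact score definitions used in the study are our assumption (here, FAR = false alarms / forecast events).

```python
# Sketch of hit rate and false alarm ratio for the event "cloud cover < 10%".
# Definitions assumed; toy data used for illustration.
import numpy as np

def hit_and_false_alarm(forecast, observed, threshold=10.0):
    """Returns (hit rate, false alarm ratio) in %, event = value below threshold."""
    fc_event = forecast < threshold
    ob_event = observed < threshold
    hits = np.sum(fc_event & ob_event)
    misses = np.sum(~fc_event & ob_event)
    false_alarms = np.sum(fc_event & ~ob_event)
    hit_rate = 100 * hits / (hits + misses)
    far = 100 * false_alarms / (hits + false_alarms)
    return hit_rate, far

rng = np.random.default_rng(4)
obs = rng.uniform(0, 100, 10000)                       # "observed" cloud cover (%)
fc = np.clip(obs + rng.normal(0, 15, 10000), 0, 100)   # noisy forecast
print(hit_and_false_alarm(fc, obs))
```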


This paper presents brain tumor detection and segmentation using image processing techniques. Convolutional neural networks can be applied in medical research for brain tumor analysis. The tumor in the MRI scans is segmented using the K-means clustering algorithm, which is applied to every scan, and the result is then fed to the convolutional neural network for training and testing. In our CNN, we propose to use ReLU and sigmoid activation functions to determine the end result. Training is done using only CPU power; no GPU is used. The research is carried out in two phases: image processing and applying the neural network.
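A minimal sketch of the K-means step is shown below: intensities of a scan are clustered and the brightest cluster is kept as a candidate tumor mask before the CNN stage. The cluster count and the brightest-cluster heuristic are our assumptions.

```python
# Sketch of K-means intensity clustering as a pre-segmentation step.
# Parameter choices and the brightest-cluster heuristic are assumptions.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(scan, n_clusters=4):
    """Cluster pixel intensities; return mask of the highest-intensity cluster."""
    pixels = scan.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    labels = km.labels_.reshape(scan.shape)
    brightest = np.argmax(km.cluster_centers_.ravel())
    return labels == brightest

rng = np.random.default_rng(5)
scan = rng.normal(100, 20, (128, 128))   # synthetic MRI-like slice
scan[40:60, 50:80] += 120                # synthetic hyperintense region
mask = kmeans_segment(scan)              # candidate region passed to the CNN
```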


Author(s):  
Oguz Akbilgic ◽  
Liam Butler ◽  
Ibrahim Karabayir ◽  
Patricia P Chang ◽  
Dalane W Kitzman ◽  
...  

Abstract Aims: Heart failure (HF) is a leading cause of death. Early intervention is the key to reducing HF-related morbidity and mortality. This study assesses the utility of electrocardiograms (ECGs) in HF risk prediction. Methods and results: Data from the baseline visits (1987–89) of the Atherosclerosis Risk in Communities (ARIC) study were used. Incident hospitalized HF events were ascertained by ICD codes. Participants with good-quality baseline ECGs were included; participants with prevalent HF were excluded. An ECG-artificial intelligence (AI) model to predict HF was created as a deep residual convolutional neural network (CNN) utilizing the standard 12-lead ECG. The area under the receiver operating characteristic curve (AUC) was used to evaluate prediction models, including the CNN, light gradient boosting machines (LGBM), and Cox proportional hazards regression. A total of 14 613 participants (45% male, 73% white, mean age ± standard deviation of 54 ± 5) were eligible. A total of 803 (5.5%) participants developed HF within 10 years from baseline. The convolutional neural network utilizing solely the ECG achieved an AUC of 0.756 (0.717–0.795) on the hold-out test data. The ARIC and Framingham Heart Study (FHS) HF risk calculators yielded AUCs of 0.802 (0.750–0.850) and 0.780 (0.740–0.830), respectively. The highest AUC of 0.818 (0.778–0.859) was obtained when the ECG-AI model output, age, gender, race, body mass index, smoking status, prevalent coronary heart disease, diabetes mellitus, systolic blood pressure, and heart rate were used as predictors of HF within the LGBM. The ECG-AI model output was the most important predictor of HF. Conclusions: The ECG-AI model, based solely on information extracted from the ECG, independently predicts HF with accuracy comparable to the existing FHS and ARIC risk calculators.
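As an illustration of the model family, the sketch below builds a tiny deep residual 1D CNN over a 12-lead ECG tensor (PyTorch). The depth, kernel sizes, sampling rate, and prediction head are our assumptions, not the study's architecture.

```python
# Sketch of a residual 1D-convolution network over a 12-lead ECG.
# All hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    """Two same-length convolutions with an identity shortcut."""
    def __init__(self, channels, kernel=15):
        super().__init__()
        pad = kernel // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel, padding=pad),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=pad),
            nn.BatchNorm1d(channels))
    def forward(self, x):
        return torch.relu(x + self.body(x))   # residual connection

# Assumed input: 12-lead ECG, 10 s at 500 Hz -> (batch, 12, 5000)
model = nn.Sequential(
    nn.Conv1d(12, 64, 15, padding=7), nn.ReLU(),
    ResBlock1d(64), ResBlock1d(64),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, 1))                          # logit of 10-year HF risk
prob = torch.sigmoid(model(torch.randn(2, 12, 5000)))
```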

