forecast validation
Recently Published Documents

TOTAL DOCUMENTS: 17 (five years: 5)
H-INDEX: 7 (five years: 0)

2022 ◽  
Vol 16 (1) ◽  
pp. 61-85
Author(s):  
Emma K. Fiedler ◽  
Matthew J. Martin ◽  
Ed Blockley ◽  
Davi Mignac ◽  
Nicolas Fournier ◽  
...  

Abstract. The feasibility of assimilating sea ice thickness (SIT) observations derived from CryoSat-2 along-track measurements of sea ice freeboard is successfully demonstrated using a 3D-Var assimilation scheme, NEMOVAR, within the Met Office's global, coupled ocean–sea-ice model, Forecast Ocean Assimilation Model (FOAM). The CryoSat-2 Arctic freeboard measurements are produced by the Centre for Polar Observation and Modelling (CPOM) and are converted to SIT within FOAM using modelled snow depth. This is the first time along-track observations of SIT have been used in this way, with other centres assimilating gridded and temporally averaged observations. The assimilation leads to improvements in the SIT analysis and forecast fields generated by FOAM, particularly in the Canadian Arctic. Arctic-wide observation-minus-background assimilation statistics for 2015–2017 show improvements of 0.75 m mean difference and 0.41 m root-mean-square difference (RMSD) in the freeze-up period and 0.46 m mean difference and 0.33 m RMSD in the ice break-up period. Validation of the SIT analysis against independent springtime in situ SIT observations from NASA Operation IceBridge (OIB) shows improvement in the SIT analysis of 0.61 m mean difference (0.42 m RMSD) compared to a control without SIT assimilation. Similar improvements are seen in the FOAM 5 d SIT forecast. Validation of the SIT assimilation with independent Beaufort Gyre Exploration Project (BGEP) sea ice draft observations does not show an improvement, since the assimilated CryoSat-2 observations compare similarly to the model without assimilation in this region. Comparison with airborne electromagnetic induction (Air-EM) combined measurements of SIT and snow depth shows poorer results for the assimilation compared to the control, despite covering similar locations to the OIB and BGEP datasets. 
This may be evidence of sampling uncertainty in the matchups with the Air-EM validation dataset, owing to the limited number of observations available over the time period of interest. This may also be evidence of noise in the SIT analysis or uncertainties in the modelled snow depth, in the assimilated SIT observations, or in the data used for validation. The SIT analysis could be improved by upgrading the observation uncertainties used in the assimilation. Despite the lack of CryoSat-2 SIT observations available for assimilation over the summer due to the detrimental effect of melt ponds on retrievals, it is shown that the model is able to retain improvements to the SIT field throughout the summer months due to prior, wintertime SIT assimilation. This also results in regional improvements to the July modelled sea ice concentration (SIC) of 5 % RMSD in the European sector, due to slower melt of the thicker sea ice.
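The mean-difference and RMSD figures quoted above are standard observation-minus-background diagnostics. A minimal sketch of how such statistics are computed (illustrative only, with made-up arrays; not the FOAM/NEMOVAR implementation):

```python
import numpy as np

def omb_stats(obs, background):
    """Mean difference and RMSD of observation-minus-background values."""
    d = np.asarray(obs, float) - np.asarray(background, float)
    return float(d.mean()), float(np.sqrt(np.mean(d ** 2)))

# Toy example: two SIT observations vs. model background values (metres)
mean_diff, rmsd = omb_stats([2.0, 2.0], [1.0, 3.0])
```

In practice these statistics would be accumulated over all Arctic match-ups in the freeze-up or break-up window before comparing assimilation and control runs.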



2021 ◽  
Author(s):  
Xinjia Hu ◽  
Jan Eichner ◽  
Eberhard Faust ◽  
Holger Kantz

Abstract. Reliable El Niño Southern Oscillation (ENSO) prediction at seasonal-to-interannual lead times is critical for stakeholders to plan suitable management. In recent years, new methods combining climate network analysis with El Niño prediction have claimed to predict El Niño up to 1 year in advance by overcoming the spring predictability barrier (SPB). Typically, such a method develops an index representing the relationship between different nodes in El Niño-related basins, and the index crossing a certain threshold is taken as a warning of an El Niño event in the next few months. How well the prediction performs should be measured in order to estimate any improvement. However, the number of El Niño events recorded in the available data is limited, so it is difficult to validate whether these methods are truly predictive or whether their success is merely a result of chance. We propose a benchmarking method based on surrogate data for quantitative forecast validation with small data sets. We apply this method to a naïve prediction of El Niño events based on the Oceanic Niño Index (ONI) time series, where we build a data-based prediction scheme using the index series itself as input. In order to assess the network-based El Niño prediction methods, we reproduce two different climate network-based forecasts and apply our method to compare the prediction skill of all of these. Our benchmark shows that using the ONI itself as input to the forecast does not work for moderate lead times, while at least one of the two climate network-based methods has predictive skill well above chance at lead times of about one year.
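The surrogate-data idea can be sketched in a few lines: score a threshold-crossing predictor on the real index, then on many shuffled copies, and report the share of surrogates that match or beat the real skill. This is a deliberately simplified stand-in for the paper's benchmark (shuffling destroys all temporal structure; the authors' surrogates are constructed more carefully):

```python
import numpy as np

rng = np.random.default_rng(0)

def hit_rate(index, events, thresh):
    """Fraction of events (at lead time 1 step) preceded by index > thresh."""
    index, events = np.asarray(index), np.asarray(events, bool)
    warn = index[:-1] > thresh          # warning issued at time t
    ev = events[1:]                     # event occurring at time t + 1
    return float(np.mean(warn[ev])) if ev.any() else 0.0

def surrogate_pvalue(index, events, thresh, n_surr=200):
    """Share of shuffled-index surrogates that match or beat the real skill."""
    real = hit_rate(index, events, thresh)
    surr = [hit_rate(rng.permutation(index), events, thresh)
            for _ in range(n_surr)]
    return float(np.mean([s >= real for s in surr]))
```

A small p-value indicates skill unlikely to arise by chance; with only a handful of El Niño events, this kind of explicit null distribution is more honest than a single score.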



2021 ◽  
Author(s):  
Karma Tsering ◽  
Manish Shrestha ◽  
Kiran Shakya ◽  
Birendra Bajracharya ◽  
Mir Matin ◽  
...  

Abstract. The Hindu Kush Himalayan region is extremely susceptible to periodic monsoon floods. Early warning systems able to predict floods in advance can benefit tens of millions of people living in the region. Two web-based flood forecasting tools (ECMWF-SPT and HIWAT-SPT) were therefore developed and deployed jointly by SERVIR-HKH and NASA-AST to provide early warning to Bangladesh, Bhutan, and Nepal. ECMWF-SPT provides ensemble forecasts with up to 15-day lead time, whereas HIWAT-SPT provides deterministic forecasts with up to 3-day lead time covering almost 100% of the rivers. Hydrological models in conjunction with forecast validation contribute not only to advancing the processes of a forecasting system, but also to objectively assessing the joint distribution of forecasts and observations when quantifying forecast accuracy. The validation of forecast products has emerged as a priority need to evaluate the worth of the predictive information in terms of quality and consistency. This paper describes the effort made in developing the hydrological forecast systems, the current state of the flood forecast services, and the results of the forecast evaluation. Both tools are validated using a selection of appropriate metrics in both probabilistic and deterministic space. The numerical metrics are further complemented by graphical representations of scores and probabilities. The models were found to perform well in capturing high flood events. Evaluation across multiple locations indicates that model performance and forecast goodness vary on spatiotemporal scales. The resulting information is used to support decision-making in risk and resource management.
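Two representative metrics of the kind used here, one deterministic and one probabilistic, can be written compactly. The specific metric choices below (Nash-Sutcliffe efficiency and the Brier score) are illustrative assumptions, not a claim about which scores the paper actually reports:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE for a deterministic streamflow forecast: 1 is perfect,
    0 means no better than forecasting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def brier_score(prob, occurred):
    """Brier score for probabilistic flood-exceedance forecasts:
    mean squared error of probabilities against 0/1 outcomes (lower is better)."""
    prob, occurred = np.asarray(prob, float), np.asarray(occurred, float)
    return float(np.mean((prob - occurred) ** 2))
```

An ensemble system like ECMWF-SPT would be scored with the probabilistic metric; a deterministic system like HIWAT-SPT with the first.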





2021 ◽  
Author(s):  
Mark Fuglem ◽  
Paul Stuckey ◽  
Ian Turnbull ◽  
Jan Thijssen ◽  
Yujian Huang

Abstract. When planning oil and gas exploration and production operations off the east coast of Canada, the potential for iceberg impacts must be considered. Environmental conditions in this region can be very harsh, and iceberg trajectories are notably unpredictable. When an iceberg has the potential to impact a Floating Production, Storage, and Offloading (FPSO) platform, ice management through towing will be attempted; if this fails, the production system will be shut down, the lines flushed, the mooring and riser systems disconnected, and the platform moved off site. If trajectory forecasting were highly accurate, only icebergs passing very close to the platform would require ice management and possible shutdown of the platform. Given natural variations in wind, currents, and waves, and the challenges of measuring and forecasting these parameters, there is considerable forecast uncertainty. This results in added expense for extra ice management and unnecessary shutdowns. Improvements in trajectory forecasting accuracy, characterization of forecast uncertainty, and methods to account for these uncertainties in operations would all be beneficial. This paper outlines an approach for simulating large numbers of iceberg trajectories in varied and realistic environmental conditions from hindcast met-ocean data, in conjunction with a forecasting uncertainty model derived from forecast validation studies. A model, named BergCast, was developed so that proposed strategies for improving ice management operations can be evaluated and the value of reducing forecasting uncertainty quantified.
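The core idea of simulating many trajectories under forecast uncertainty can be sketched as a Monte Carlo drift model. This is a toy sketch under strong assumptions (straight-line drift plus independent Gaussian hourly forecast error, endpoint-only encounter check), not the BergCast model:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_tracks(start, velocity, err_std, hours, n=1000):
    """Monte Carlo drift tracks: mean hourly drift plus Gaussian forecast error.

    start: (x, y) in km; velocity: (vx, vy) in km/h; err_std: forecast-error
    standard deviation in km/h per component. Returns final positions, (n, 2).
    """
    steps = rng.normal(velocity, err_std, size=(n, hours, 2))
    return np.asarray(start, float) + steps.sum(axis=1)

def encounter_probability(final_pos, platform, radius):
    """Fraction of simulated tracks ending inside the alert radius.

    (A fuller model would use the closest approach along each track,
    not just the endpoint.)
    """
    d = np.linalg.norm(final_pos - np.asarray(platform, float), axis=1)
    return float(np.mean(d < radius))
```

Sweeping `err_std` in such a simulation is one way to quantify the operational value of reducing forecast uncertainty, e.g. how much the alert radius (and hence the number of tows and shutdowns) could shrink.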



Energies ◽  
2019 ◽  
Vol 12 (23) ◽  
pp. 4409 ◽  
Author(s):  
Mari R. Tye ◽  
Sue Ellen Haupt ◽  
Eric Gilleland ◽  
Christina Kalb ◽  
Tara Jensen

With electricity representing around 20% of global energy demand, and increasing support for renewable sources of electricity, there is an escalating need to improve solar forecasts to support power management. While considerable research has been directed to statistical methods to improve solar power forecasting, few have employed finite mixture distributions. A statistically objective classification of the overall sky condition may lead to improved forecasts. Combining information from the synoptic driving conditions for daily variability with local processes controlling subdaily fluctuations could assist with forecast validation and enhancement where few observations are available. Gaussian mixture models provide a statistical learning approach to automatically identify prevalent sky conditions (clear, semi-cloudy, and cloudy) and explore associated weather patterns. Here a first stage in the development of such a model is presented: examining whether there is sufficient information in the large-scale environment to identify days with clear, semi-cloudy, or cloudy conditions. A three-component Gaussian mixture distribution is developed that reproduces the observed multimodal peaks in sky clearness indices, and their temporal distribution. Posterior probabilities from the fitted mixture distributions are used to identify periods of clear, partially-cloudy, and cloudy skies. Composites of low-level (850 hPa) humidity and winds for each of the mixture components reveal three patterns associated with the typical synoptic conditions governing the sky clarity, and hence, potential solar power.



2017 ◽  
Vol 10 (2) ◽  
pp. 409-429 ◽  
Author(s):  
Tobias Sirch ◽  
Luca Bugliaro ◽  
Tobias Zinner ◽  
Matthias Möhrlein ◽  
Margarita Vazquez-Navarro

Abstract. A novel approach for the nowcasting of clouds and direct normal irradiance (DNI) based on the Spinning Enhanced Visible and Infrared Imager (SEVIRI) aboard the geostationary Meteosat Second Generation (MSG) satellite is presented for a forecast horizon up to 120 min. The basis of the algorithm is an optical flow method to derive cloud motion vectors for all cloudy pixels. To facilitate forecasts over a relevant time period, a classification of clouds into objects and a weighted triangular interpolation of clear-sky regions are used. Low- and high-level clouds are forecast separately because they show different velocities and motion directions. Additionally, a distinction between advective and convective clouds, together with an intensity correction for quickly thinning convective clouds, is integrated. The DNI is calculated from the forecasted optical thickness of the low- and high-level clouds. In order to quantitatively assess the performance of the algorithm, a forecast validation against MSG/SEVIRI observations is performed for a period of 2 months. Error rates and Hanssen–Kuiper skill scores are derived for the forecasted cloud masks. For a 5 min forecast, more than 95 % of all pixels are predicted correctly as cloudy or clear in most cloud situations. This number decreases to 80–95 % for a forecast of 2 h depending on cloud type and vertical cloud level. Hanssen–Kuiper skill scores for the cloud mask decrease to 0.6–0.7 for a 2 h forecast. Compared to persistence, an improvement of the forecast horizon by a factor of 2 is reached for all forecasts up to 2 h. A comparison of forecasted optical thickness distributions and DNI against observations yields correlation coefficients larger than 0.9 for 15 min forecasts and around 0.65 for 2 h forecasts.
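The Hanssen–Kuiper skill score used for the cloud-mask validation is a standard contingency-table score: probability of detection minus probability of false detection. A minimal implementation over boolean masks (illustrative; it assumes both cloudy and clear pixels occur in the sample):

```python
import numpy as np

def hanssen_kuiper(forecast_cloudy, observed_cloudy):
    """Hanssen-Kuiper skill score (KSS) from boolean cloud masks.

    KSS = POD - POFD; 1 is a perfect forecast, 0 is no skill.
    """
    f = np.asarray(forecast_cloudy, bool)
    o = np.asarray(observed_cloudy, bool)
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    correct_negatives = np.sum(~f & ~o)
    pod = hits / (hits + misses)                      # probability of detection
    pofd = false_alarms / (false_alarms + correct_negatives)
    return float(pod - pofd)
```

Because KSS rewards detections and penalizes false alarms symmetrically, it is well suited to comparing cloud-mask forecasts against a persistence baseline as done here.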



2016 ◽  
Vol 28 (3) ◽  
pp. 225-234
Author(s):  
Xinjun Lai ◽  
Jun Li ◽  
Zhi Li

A subpath-based methodology is proposed to capture travellers' route choice behaviour and the perceptual correlation of routes, because the original link-based approach may not be suitable in application: (1) travellers do not process road network information and construct their chosen route link by link; (2) observations from questionnaires and GPS data, moreover, are not always link-specific. Subpaths are defined as important portions of a route, such as major roads and landmarks. The cross-nested logit (CNL) structure is used for its tractable closed form and its capability to explicitly capture route correlation. Nests represent subpaths rather than links, so the number of nests is significantly reduced. Moreover, the proposed method simplifies the original link-based CNL model and therefore alleviates estimation and computational difficulties. Estimation and forecast validation with real data are presented, and the results suggest that the new method is practical.
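The CNL choice probabilities have a closed form: each route is allocated across nests (here, subpaths) by allocation parameters, and the probability sums nest-conditional logit terms weighted by nest probabilities. The sketch below uses one common normalization (overall scale fixed to 1); the paper's exact parameterization may differ, and `v`, `alpha`, `mu` are illustrative names:

```python
import numpy as np

def cnl_probabilities(v, alpha, mu):
    """Cross-nested logit choice probabilities (illustrative form).

    v: deterministic route utilities, shape (J,); alpha: allocation of route
    j to nest m, shape (J, M), rows summing to 1 and every nest non-empty;
    mu: nest scale parameters, shape (M,).
    """
    y = alpha * np.exp(v)[:, None]        # allocated exp-utilities, (J, M)
    ym = y ** mu                          # raised to the nest scale
    denom = ym.sum(axis=0)                # per-nest sums
    iv = denom ** (1.0 / mu)              # nest inclusive-value terms
    p_nest = iv / iv.sum()                # marginal nest probabilities
    p_in_nest = ym / denom                # conditional choice within a nest
    return (p_in_nest * p_nest).sum(axis=1)
```

With all nest scales equal to 1 the model collapses to multinomial logit, which is a useful sanity check on any implementation.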



Author(s):  
Richard Perez ◽  
James Schlemmer ◽  
Karl Hemker ◽  
Sergey Kivalov ◽  
Adam Kankiewicz ◽  
...  


2016 ◽  
Vol 31 (3) ◽  
pp. 811-825 ◽  
Author(s):  
Nathan Snook ◽  
Youngsun Jung ◽  
Jerald Brotzge ◽  
Bryan Putnam ◽  
Ming Xue

Abstract Despite recent advances in storm-scale ensemble NWP, short-term (0–90 min) explicit forecasts of severe hail remain a major challenge as a result of the fast evolution and short time scales of hail-producing convective storms and the substantial uncertainty associated with the microphysical representation of hail. In this study, 0–90-min ensemble hail forecasts for the supercell storms of 20 May 2013 over central Oklahoma are examined and verified, with the goals of 1) evaluating ensemble forecast performance, 2) comparing the advantages and limitations of different forecast fields potentially suitable for the prediction of hail and severe hail in a Warn-on-Forecast setting, and 3) evaluating the use of dual-polarization radar observations for hail forecast validation. To address the challenges of hail prediction and to produce skillful forecasts, the ensemble uses a two-moment microphysics scheme that explicitly predicts a hail-like rimed-ice category and is run with a grid spacing of 500 m. Radar reflectivity factor and radial velocity, along with surface observations, are assimilated every 5 min for 1 h as the storms were developing to maturity, followed by a 90-min ensemble forecast. Several methods of hail prediction and hail forecast verification are then examined, including the prediction of the maximum hail size compared to Storm Prediction Center (SPC) and Meteorological Phenomena Identification Near the Ground (mPING) hail observations, and verification of model data against single- and dual-polarization radar-derived fields including hydrometeor classification algorithm (HCA) output and the maximum estimated size of hail (MESH). The 0–90-min ensemble hail predictions are found to be marginally to moderately skillful depending on the verification method used.
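A basic building block for Warn-on-Forecast-style hail products from such an ensemble is the gridded exceedance probability: the fraction of members forecasting hail above a size threshold at each point. This is a generic sketch of that idea, not the verification code used in the study:

```python
import numpy as np

def exceedance_probability(ensemble_hail, threshold):
    """Gridded probability that forecast hail size exceeds a threshold,
    estimated as the fraction of ensemble members exceeding it.

    ensemble_hail: array (n_members, ny, nx) of forecast maximum hail size.
    """
    return np.mean(np.asarray(ensemble_hail) > threshold, axis=0)
```

Such probability fields can then be verified against point reports (SPC, mPING) or radar-derived fields like MESH using probabilistic scores.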


