Global long-term sub-daily reanalysis of fluvial floods through high-resolution modeling

Author(s):  
Yuan Yang ◽  
Ming Pan ◽  
Peirong Lin ◽  
Hylke Beck ◽  
Dai Yamazaki ◽  
...  

Flooding is one of the most devastating natural disasters, with severe societal, economic, and environmental consequences. Understanding the characteristics of floods, especially at fine spatial and short temporal scales, is critical for improving forecasting and risk management. Because of their limited availability, in-situ observations have been inadequate for meeting this challenge at the global extent. Existing global flood modeling efforts also lack sufficient spatial/temporal resolution to capture rapid, local flood events, e.g., those that develop in less than a day. Here we implement a carefully designed modeling framework to reconstruct global river discharge at very high resolution (5-km and 3-hourly for runoff calculation, and ~2.94 million river reaches derived from a 90-m DEM for river routing) over 40 years (1979-2018). The Variable Infiltration Capacity (VIC) model with calibrated parameters, coupled with the Routing Application for Parallel computation of Discharge (RAPID), serves as the core of the modeling framework. The state-of-the-art merged precipitation product, Multi-Source Weighted-Ensemble Precipitation (MSWEP), and flowlines vectorized from MERIT Hydro are used. Pixel-level model calibration and distributional bias correction are performed against global runoff characteristics derived from observations and machine learning. Skill assessments are carried out both globally at the daily scale and over the contiguous U.S. (CONUS) at the 3-hourly scale, using both general discharge performance metrics (Kling-Gupta Efficiency and its three components) and sub-daily flood-specific metrics (probability of detection, false alarm rate, flood volume error, peak magnitude error, timing error, etc.). This work aims to provide a first understanding of local-scale rapid flooding over the global domain. We also expect to learn more about the modeling tools developed for analyzing/monitoring fine-scale flooding globally: their efficacy and shortcomings, why, and where to improve.
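The Kling-Gupta Efficiency and its three components mentioned in the abstract can be made concrete; the following is a minimal sketch assuming the standard 2009 formulation (correlation r, variability ratio alpha, bias ratio beta), not the authors' exact implementation:

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency and its three components:
    r (linear correlation), alpha (variability ratio), beta (bias ratio)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    # Euclidean distance from the ideal point (1, 1, 1)
    kge_val = 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)
    return kge_val, r, alpha, beta

# a perfect simulation scores KGE = 1
obs = np.array([1.0, 2.0, 3.0, 4.0])
print(kge(obs, obs))
```

A KGE of 1 indicates a perfect match; decomposing it into r, alpha, and beta shows whether timing, variability, or bias dominates the error.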

2021 ◽  
Vol 13 (7) ◽  
pp. 1247
Author(s):  
Bowen Zhu ◽  
Xianhong Xie ◽  
Chuiyu Lu ◽  
Tianjie Lei ◽  
Yibing Wang ◽  
...  

Extreme hydrologic events are becoming more frequent under a changing climate, and a reliable hydrological modeling framework is important for understanding their mechanisms. However, existing hydrological modeling frameworks are mostly constrained by relatively coarse resolution, unrealistic input information, and insufficient evaluation, especially over large domains, and they are therefore unable to address and reconstruct many water-related issues (e.g., flooding and drought). In this study, a 0.0625-degree (~6 km) resolution variable infiltration capacity (VIC) model developed for China from 1970 to 2016 was extensively evaluated against remote sensing and ground-based observations. A unique feature of this modeling framework is the incorporation of a new remotely sensed vegetation and soil parameter dataset. To our knowledge, this constitutes the first application of VIC with such a long record and fine resolution over a large domain and, more importantly, with a holistic system evaluation leveraging the best available earth data. Evaluations using in-situ observations of streamflow, evapotranspiration (ET), and soil moisture (SM) indicate a substantial improvement. The simulations are also consistent with satellite remote sensing products of ET and SM: the mean differences between the VIC ET and the remote sensing ET range from −2 to 2 mm/day, and the differences for SM of the top thin layer range from −2 to 3 mm. This continental-scale hydrological modeling framework is therefore reliable and accurate and can be used for various applications, including the detection of extreme hydrological events.


Author(s):  
Toby Fore ◽  
Stefan Klein ◽  
Chris Yoxall ◽  
Stan Cone

Managing the threat of Stress Corrosion Cracking (SCC) in natural gas pipelines continues to be an area of focus for many operating companies with potentially susceptible pipelines. This paper describes the validation process of the high-resolution Electro-Magnetic Acoustic Transducer (EMAT) In-Line Inspection (ILI) technology for detecting SCC prior to scheduled pressure tests of inspected line pipe valve sections. The validation of the EMAT technology covered the application of high-resolution EMAT ILI and the determination of the Probability of Detection (POD) and Probability of Identification (POI). The ILI verification process follows an API 1163 Level 3 validation and is described in detail for 30″ and 36″ pipeline segments, both known to have an SCC history. Correlation of EMAT ILI calls with manual non-destructive measurements and destructively tested SCC samples led to a comprehensive understanding of the capabilities of the EMAT technology and the associated process for managing the SCC threat. Based on the data gathered, the dimensional tool tolerances in terms of length and depth are derived.


2018 ◽  
Vol 33 (6) ◽  
pp. 1501-1511 ◽  
Author(s):  
Harold E. Brooks ◽  
James Correia

Abstract Tornado warnings are one of the flagship products of the National Weather Service. We update the time series of various metrics of performance in order to provide baselines over the 1986–2016 period for lead time, probability of detection, false alarm ratio, and warning duration. We have used metrics (mean lead time for tornadoes warned in advance, fraction of tornadoes warned in advance) that work in a consistent way across the official changes in policy for warning issuance, as well as across points in time when unofficial changes took place. The mean lead time for tornadoes warned in advance was relatively constant from 1986 to 2011, while the fraction of tornadoes warned in advance increased through about 2006, and the false alarm ratio slowly decreased. The largest changes in performance take place in 2012 when the default warning duration decreased, and there is an apparent increased emphasis on reducing false alarms. As a result, the lead time, probability of detection, and false alarm ratio all decrease in 2012. Our analysis is based, in large part, on signal detection theory, which separates the quality of the warning system from the threshold for issuing warnings. Threshold changes lead to trade-offs between false alarms and missed detections. Such changes provide further evidence for changes in what the warning system as a whole considers important, as well as highlighting the limitations of measuring performance by looking at metrics independently.
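The probability of detection and false alarm ratio discussed above come from a standard 2x2 warning/event contingency table; a minimal sketch with hypothetical counts chosen purely for illustration (not the paper's 1986-2016 statistics):

```python
def warning_metrics(hits, misses, false_alarms):
    """Contingency-table verification metrics for warning systems.
    hits: warned events; misses: unwarned events;
    false_alarms: warnings with no event."""
    pod = hits / (hits + misses)               # probability of detection
    far = false_alarms / (hits + false_alarms)  # false alarm ratio
    return pod, far

# hypothetical counts, for illustration only
pod, far = warning_metrics(hits=700, misses=300, false_alarms=1800)
print(f"POD = {pod:.2f}, FAR = {far:.2f}")  # POD = 0.70, FAR = 0.72
```

As the abstract notes, these two metrics trade off against each other through the warning threshold: lowering the threshold raises POD but raises FAR as well, which is why signal detection theory is used to separate system quality from threshold choice.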


2021 ◽  
Author(s):  
Nithin G R ◽  
Nitish Kumar M ◽  
Venkateswaran Narasimhan ◽  
Rajanikanth Kakani ◽  
Ujjwal Gupta ◽  
...  

Pansharpening is the task of creating a High-Resolution Multi-Spectral (HRMS) image by extracting pixel details from the High-Resolution Panchromatic image and infusing them into the Low-Resolution Multi-Spectral (LRMS) image. With the boom in the amount of satellite image data, researchers have replaced traditional approaches with deep learning models. However, existing deep learning models are not built to capture intricate pixel-level relationships. Motivated by the recent success of self-attention mechanisms in computer vision tasks, we propose Pansformers, a transformer-based self-attention architecture that computes band-wise attention. The attention network is further improved by a Multi-Patch Attention mechanism, which operates on non-overlapping, local patches of the image. Our model successfully infuses relevant local details from the Panchromatic image while preserving the spectral integrity of the MS image. We show that our Pansformer model significantly improves the performance metrics and the output image quality on imagery from two satellites, IKONOS and LANDSAT-8.
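The idea of attention restricted to non-overlapping local patches can be sketched in plain numpy; the feature-map size, patch size, and single-head formulation below are illustrative assumptions, not the Pansformers architecture itself:

```python
import numpy as np

def scaled_dot_attention(q, k, v):
    """Standard scaled dot-product attention over one set of tokens."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def multi_patch_attention(feat, patch=4):
    """Apply self-attention independently within each non-overlapping
    patch of a (H, W, C) feature map; H and W must divide by `patch`.
    Cost per patch is O(patch^4) instead of O((H*W)^2) for the full map."""
    h, w, c = feat.shape
    out = np.empty_like(feat)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            tokens = feat[i:i+patch, j:j+patch].reshape(-1, c)
            out[i:i+patch, j:j+patch] = scaled_dot_attention(
                tokens, tokens, tokens).reshape(patch, patch, c)
    return out

x = np.random.default_rng(0).normal(size=(8, 8, 16))
print(multi_patch_attention(x).shape)  # (8, 8, 16)
```

Restricting attention to local windows is what keeps the quadratic token-interaction cost tractable for large satellite images while still capturing pixel-level relationships within each patch.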


2016 ◽  
Author(s):  
Dhanyalekshmi Pillai ◽  
Michael Buchwitz ◽  
Christoph Gerbig ◽  
Thomas Koch ◽  
Maximilian Reuter ◽  
...  

Abstract. Currently 52 % of the world's population resides in urban areas, and as a consequence approximately 70 % of fossil fuel emissions of CO2 arise from cities. This fact, combined with the large uncertainties in quantifying urban emissions due to a lack of appropriate measurements, makes it crucial to obtain new measurements suited to identifying and quantifying urban emissions. Such measurements are required, for example, to assess emission mitigation strategies and their effectiveness. Here we investigate the potential of a satellite mission like the Carbon Monitoring Satellite (CarbonSat), proposed to the European Space Agency (ESA), to retrieve city emissions globally, taking into account a realistic description of the expected retrieval errors, the spatiotemporal distribution of CO2 fluxes, and atmospheric transport. To achieve this we use (i) a high-resolution modeling framework consisting of the Weather Research and Forecasting model with a greenhouse gas module (WRF-GHG), which is used to simulate the atmospheric observations of column-averaged CO2 dry air mole fractions (XCO2), and (ii) a Bayesian inversion method to derive anthropogenic CO2 emissions and their errors from the CarbonSat XCO2 observations. We focus our analysis on Berlin, Germany, using CarbonSat's cloud-free overpasses for one reference year. The dense (wide-swath) CarbonSat simulated observations with high spatial resolution (approx. 2 km × 2 km) permit mapping of the city CO2 emission plume, with a peak enhancement of typically 0.8–1.35 ppm relative to the background. Through the Bayesian inversion, it is shown that the random error (RE) of the retrieved Berlin CO2 emission for a single overpass is typically less than 8 to 10 MtCO2 yr−1 (about 15 to 20 % of the total city emission). The range of systematic errors (SE) of the retrieved fluxes due to various sources of error (measurement, modeling, and inventories) is also quantified. Depending on the assumptions made, the SE is less than about 6 to 10 MtCO2 yr−1 for most cases. We find that systematic modeling-related errors in particular can be quite high during the summer months, due to substantial XCO2 variations caused by biogenic CO2 fluxes at and around the target region. Under the extreme worst-case assumption that biospheric XCO2 variations cannot be modeled at all (which is overly pessimistic), the SE of the retrieved emission is larger than 10 MtCO2 yr−1 for about half of the sufficiently cloud-free overpasses, and for some overpasses the SE may even be of the same order of magnitude as the anthropogenic emission. This indicates that biogenic XCO2 variations cannot be neglected but must be considered during forward and/or inverse modeling. Overall, we conclude that CarbonSat is well suited to obtaining city-scale CO2 emissions, as needed to enhance our current understanding of anthropogenic carbon fluxes, and that CarbonSat or CarbonSat-like satellites should be an important component of a future global carbon emission monitoring system.
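The Bayesian inversion step can be illustrated with the standard linear-Gaussian update; all numbers below are toy values chosen for illustration and are not taken from the paper:

```python
import numpy as np

def bayesian_inversion(x_prior, S_prior, y, H, S_obs):
    """Linear Gaussian Bayesian inversion:
    x_post = x_prior + G (y - H x_prior),
    G = S_prior H^T (H S_prior H^T + S_obs)^-1,
    S_post = S_prior - G H S_prior."""
    G = S_prior @ H.T @ np.linalg.inv(H @ S_prior @ H.T + S_obs)
    x_post = x_prior + G @ (y - H @ x_prior)
    S_post = S_prior - G @ H @ S_prior
    return x_post, S_post

# illustrative toy numbers: one emission scalar, three XCO2 observations
x_prior = np.array([40.0])                # prior city emission, MtCO2/yr
S_prior = np.array([[400.0]])             # loose prior (sigma = 20)
H = np.array([[0.02], [0.03], [0.025]])   # ppm enhancement per MtCO2/yr
y = np.array([1.0, 1.4, 1.2])             # observed XCO2 enhancements, ppm
S_obs = 0.1**2 * np.eye(3)                # 0.1 ppm measurement noise
x_post, S_post = bayesian_inversion(x_prior, S_prior, y, H, S_obs)
print(x_post, np.sqrt(S_post))
```

The posterior covariance S_post quantifies the random error of the retrieved emission, while unmodeled effects in H (e.g., biogenic XCO2 variations) map into the systematic error the abstract discusses.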


2009 ◽  
Vol 6 (5) ◽  
pp. 807-817 ◽  
Author(s):  
R. Ahmadov ◽  
C. Gerbig ◽  
R. Kretschmer ◽  
S. Körner ◽  
C. Rödenbeck ◽  
...  

Abstract. In order to better understand the effects that mesoscale transport has on atmospheric CO2 distributions, we have used the atmospheric WRF model coupled to the diagnostic biospheric model VPRM, which provides high-resolution biospheric CO2 fluxes based on MODIS satellite indices. We have run WRF-VPRM for the period from 16 May to 15 June 2005, covering the intensive period of the CERES experiment, using the CO2 fields from the global model LMDZ for initialization and lateral boundary conditions. The comparison of modeled CO2 concentration time series against observations at the Biscarosse tower and against output from two global models – LMDZ and TM3 – clearly reveals that WRF-VPRM can capture the measured CO2 signal much better than the lower-resolution global models. The diurnal variability of the atmospheric CO2 field caused by recirculation of nighttime respired CO2 is also simulated reasonably well by WRF-VPRM. Analysis of the nighttime data indicates that with high-resolution modeling tools such as WRF-VPRM, a large fraction of the time periods that are impossible to utilize in global models can be used quantitatively and may help to constrain respiratory fluxes. The paper concludes that a high-resolution model such as WRF-VPRM is needed to use continental observations of CO2 concentration data with more spatial and temporal coverage and to link them to global inversion models.


Water ◽  
2020 ◽  
Vol 12 (6) ◽  
pp. 1717 ◽  
Author(s):  
Antonio Annis ◽  
Fernando Nardi ◽  
Andrea Petroselli ◽  
Ciro Apollonio ◽  
Ettore Arcangeletti ◽  
...  

Devastating floods are observed every year globally, from upstream mountainous to coastal regions. Increasing flood frequency and impacts affect both major rivers and their tributaries. Nonetheless, at the small scale, the lack of distributed topographic and hydrologic data means that tributaries are often missing from inundation modeling and mapping studies. Advances in Unmanned Aerial Vehicle (UAV) technologies and Digital Elevation Model (DEM)-based hydrologic modeling can address this crucial knowledge gap. UAVs provide very high-resolution and accurate DEMs at lower surveying cost and time than DEMs obtained by Light Detection and Ranging (LiDAR), satellite, or GPS field campaigns. In this work, we selected a LiDAR DEM as a benchmark for comparing the performance of a UAV-derived DEM and a nation-scale high-resolution DEM (TINITALY) in representing floodplain topography for flood simulations. The different DEMs were processed to provide inputs to a hydrologic-hydraulic modeling chain, including the DEM-based EBA4SUB (Event-Based Approach for Small and Ungauged Basins) hydrologic modeling framework for design hydrograph estimation in ungauged basins and the 2D hydraulic model FLO-2D for flood wave routing and hazard mapping. The results of this research provide quantitative analyses demonstrating the consistent performance of the UAV-derived DEM in supporting affordable distributed simulations of flood extent and depth.


2018 ◽  
Author(s):  
Vladimir V. Kalmykov ◽  
Rashit A. Ibrayev ◽  
Maxim N. Kaurkin ◽  
Konstantin V. Ushakov

Abstract. We present a new version of the Compact Modeling Framework (CMF3.0), developed to provide the software environment for stand-alone and coupled models of global geophysical fluids. CMF3.0 is designed for implementing high- and ultra-high-resolution models on massively parallel supercomputers. The key features of the previous CMF version (2.0) are summarized to reflect the progress of our research. In CMF3.0, the pure MPI approach with a high-level abstract driver, optimized coupler interpolation, and I/O algorithms is replaced with a PGAS-paradigm communication scheme, while the central-hub architecture evolves into a set of simultaneously working services. Performance tests for both versions are carried out. In addition, a parallel implementation of the EnOI (Ensemble Optimal Interpolation) data assimilation method as a program service of CMF3.0 is presented.


2020 ◽  
Vol 13 (4) ◽  
pp. 1975-1998 ◽  
Author(s):  
Mariko Oue ◽  
Aleksandra Tatarevic ◽  
Pavlos Kollias ◽  
Dié Wang ◽  
Kwangmin Yu ◽  
...  

Abstract. Ground-based observatories use multisensor observations to characterize cloud and precipitation properties. One of the challenges is how to design strategies to best use these observations to understand these properties and evaluate weather and climate models. This paper introduces the Cloud-resolving model Radar SIMulator (CR-SIM), which uses output from high-resolution cloud-resolving models (CRMs) to emulate multiwavelength, zenith-pointing, and scanning radar observables and multisensor (radar and lidar) products. CR-SIM allows for direct comparison between an atmospheric model simulation and remote-sensing products using a forward-modeling framework consistent with the microphysical assumptions used in the atmospheric model. CR-SIM has the flexibility to easily incorporate additional microphysical modules, such as microphysical schemes and scattering calculations, and to expand its applications to simulate multisensor retrieval products. In this paper, we present several applications of CR-SIM for evaluating the representativeness of cloud microphysics and dynamics in a CRM, quantifying uncertainties in radar–lidar integrated cloud products and multi-Doppler wind retrievals, and optimizing radar sampling strategy using observing system simulation experiments. These applications demonstrate CR-SIM as a virtual observatory operator on high-resolution model output for a consistent comparison between model results and observations, aiding interpretation of the differences and improving understanding of the representativeness errors due to the sampling limitations of ground-based measurements. CR-SIM is licensed under the GNU GPL, and both the software and the user guide are publicly available to the scientific community.
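The core idea of a radar forward operator is to map model microphysics to a radar observable; the following is a deliberately simplified Rayleigh-regime sketch (a stand-in for the full scattering calculations a simulator like CR-SIM performs, with illustrative drop-size bins):

```python
import numpy as np

def rayleigh_reflectivity(diameters_mm, number_conc):
    """Radar reflectivity factor Z = sum(N_i * D_i^6) over drop-size bins,
    valid in the Rayleigh scattering regime (drops much smaller than the
    radar wavelength), converted to dBZ.
    diameters_mm: bin diameters in mm; number_conc: drops per m^3 per bin."""
    z = np.sum(number_conc * diameters_mm**6)   # mm^6 m^-3
    return 10.0 * np.log10(z)

# illustrative drop-size distribution: many small drops, few large ones
dbz = rayleigh_reflectivity(np.array([0.5, 1.0, 2.0]),
                            np.array([8000.0, 1000.0, 50.0]))
print(round(dbz, 1))
```

Because Z weights each drop by the sixth power of its diameter, the few largest drops dominate the signal, which is why a forward operator must be consistent with the size-distribution assumptions of the driving model's microphysics scheme.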


2020 ◽  
Author(s):  
Sudershan Gangrade ◽  
Mario Morales-Hernandez ◽  
Ahmad A. Tavakoly ◽  
Kristi R. Arsenault ◽  
Jerry Wegiel ◽  
...  

This work provides an envisioned overview of scientific collaboration among multiple United States agencies, including the National Aeronautics and Space Administration (NASA), U.S. Army Engineer Research and Development Center (ERDC), Oak Ridge National Laboratory (ORNL), and National Geospatial-Intelligence Agency (NGA), for the integration of existing data and model capabilities to support global-scale water security applications. The primary objective is to develop a high-resolution, operational streamflow and flood forecasting system at the global scale, leveraging multiple process-based models, remote sensing data assimilation, and high-performance computing techniques. We present a preliminary case study that demonstrates the integration of the modeling framework using NASA's Land Information System (LIS), ERDC's Streamflow Prediction Tool (SPT), and ORNL's GPU-accelerated 2D flood model (TRITON). Using high-resolution terrain data from NGA, a historic flood event that occurred in March 2019 at Offutt Air Force Base in Nebraska, USA, was simulated on ORNL's supercomputer, Summit. This benchmark test case is used to validate the modeling framework and to help establish a roadmap for the expanded modeling efforts at the global scale. In a broader sense, the proposed infrastructure will enable decision-makers to address issues such as transboundary water conflicts, flood and drought monitoring, and sustainable water resources management, and to study their impacts on human, water-energy, and natural systems in the short, medium, and long term.

