Measuring Flood Discharge

Author(s):  
Marian Muste ◽  
Ton Hoitink

With a continuous global increase in flood frequency and intensity, there is an immediate need for new science-based solutions for flood mitigation, resilience, and adaptation that can be quickly deployed in any flood-prone area. An integral part of these solutions is the availability of river discharge measurements delivered in real time with high spatiotemporal density and over large-scale areas. Stream stages and the associated discharges are the most perceivable variables of the water cycle and the ones that eventually determine the levels of hazard during floods. Consequently, the availability of discharge records (a.k.a. streamflows) is paramount for flood-risk management because they provide actionable information for organizing the activities before, during, and after floods, and they supply the data for planning and designing floodplain infrastructure. Moreover, the discharge records represent the ground-truth data for developing and continuously improving the accuracy of the hydrologic models used for forecasting streamflows. Acquiring discharge data for streams is critically important not only for flood forecasting and monitoring but also for many other practical uses, such as monitoring water abstractions for supporting decisions in various socioeconomic activities (from agriculture to industry, transportation, and recreation) and for ensuring healthy ecological flows. All these activities require knowledge of past, current, and future flows in rivers and streams. Given its importance, an ability to measure the flow in channels has preoccupied water users for millennia. Starting with the simplest volumetric methods to estimate flows, the measurement of discharge has evolved through continued innovation to sophisticated methods so that today we can continuously acquire and communicate the data in real time. There is no essential difference between the instruments and methods used to acquire streamflow data during normal conditions versus during floods. 
The measurements during floods are, however, complex, hazardous, and of limited accuracy compared with those acquired during normal flows. The essential differences in the configuration and operation of the instruments and methods for discharge estimation stem from the type of measurements they acquire—that is, discrete and autonomous measurements (i.e., measurements that can be taken at any time and in any place) versus those acquired continuously (i.e., estimates based on indirect methods developed for fixed locations). Regardless of the measurement situation and approach, the main concern of the data providers for flooding (as well as for other areas of water resource management) is the timely delivery of accurate discharge data at flood-prone locations across river basins.

2018 ◽  
Vol 22 (12) ◽  
pp. 6435-6448 ◽  
Author(s):  
Jiawei Hou ◽  
Albert I. J. M. van Dijk ◽  
Luigi J. Renzullo ◽  
Robert A. Vertessy

Abstract. River discharge measurements have proven invaluable for monitoring the global water cycle, assessing flood risk, and guiding water resource management. However, gauging data become available only after a delay, their availability is in ongoing decline, and stations are distributed highly unevenly around the globe. While not a substitute for river discharge measurement, remote sensing is a cost-effective technology for acquiring information on river dynamics where ground-based measurements are unavailable. The general approach has been to relate satellite observations to discharge measured in situ, which prevents its use for ungauged rivers. Alternatively, hydrological models are now available that can be used to estimate river discharge globally. While subject to greater errors and biases than measurements, model estimates of river discharge do expand the options for applying satellite-based discharge monitoring to ungauged rivers. Our aim was to test whether satellite gauging reaches (SGRs), similar to virtual stations in satellite altimetry, can be constructed from Moderate Resolution Imaging Spectroradiometer (MODIS) optical or Global Flood Detection System (GFDS) passive microwave-derived surface water extent fractions and simulated discharge from the World-Wide Water (W3) model version 2. We designed and tested two methods to develop SGRs across the Amazon Basin and found that the optimal grid cell selection method performed best for relating MODIS and GFDS water extent to simulated discharge. The number of potential river reaches for developing SGRs increases from upstream to downstream as rivers widen. MODIS SGRs are feasible for more river reaches than GFDS SGRs because of MODIS's higher spatial resolution. However, where they could be constructed, GFDS SGRs predicted discharge more accurately because the observations were less affected by cloud and vegetation.
We conclude that SGRs are suitable for automated large-scale application and offer a possibility to predict river discharge variations from satellite observations alone, for both gauged and ungauged rivers.
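The SGR idea can be illustrated with a toy rating-style fit between water-extent fraction and modelled discharge. The log-linear functional form, the function names, and the data in the test are assumptions for illustration only; the paper's actual grid-cell selection and regression procedure differ.

```python
import math

def fit_sgr(extent_frac, model_q):
    """Fit an illustrative log-linear rating Q = a * exp(b * f) between
    satellite water-extent fraction f and simulated discharge Q by
    least squares on log(Q)."""
    n = len(extent_frac)
    logq = [math.log(q) for q in model_q]
    mean_f = sum(extent_frac) / n
    mean_lq = sum(logq) / n
    b = (sum((f - mean_f) * (lq - mean_lq) for f, lq in zip(extent_frac, logq))
         / sum((f - mean_f) ** 2 for f in extent_frac))
    a = math.exp(mean_lq - b * mean_f)
    return a, b

def predict_discharge(a, b, f):
    """Predict discharge from a new water-extent observation alone."""
    return a * math.exp(b * f)
```

Once fitted against model output at a reach, the rating is driven by satellite observations only, which is what makes the approach usable on ungauged rivers.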


Author(s):  
Suppawong Tuarob ◽  
Conrad S. Tucker

The acquisition and mining of product feature data from online sources such as customer review websites and large-scale social media networks is an emerging area of research. In many existing design methodologies that acquire product feature preferences from online sources, the underlying assumption is that the product features expressed by customers are explicitly stated and readily observable, to be mined using product feature extraction tools. In many scenarios, however, the product feature preferences expressed by customers are implicit in nature and do not map directly to engineering design targets. For example, a customer may implicitly state "wow, I have to squint to read this on the screen," when the explicit product feature desired may be a larger screen. The authors of this work propose an inference model that automatically assigns the most probable explicit product feature desired by a customer, given an implicit preference expressed. The algorithm iteratively refines its inference model by presenting a hypothesis and determining its statistical validity against ground truth data. A case study involving smartphone product features expressed through Twitter networks is presented to demonstrate the effectiveness of the proposed methodology.
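As a sketch of the idea (not the authors' actual model), a minimal co-occurrence scorer can map the wording of an implicit utterance to the explicit feature it most often accompanied in labeled ground-truth examples. The training pairs in the test are invented for illustration.

```python
from collections import defaultdict

def train(pairs):
    """Count word -> explicit-feature co-occurrences from ground-truth
    (implicit utterance, explicit feature) pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for text, feature in pairs:
        for word in text.lower().split():
            counts[word][feature] += 1
    return counts

def infer(counts, text):
    """Return the explicit feature with the highest total co-occurrence
    score for the words of an implicit utterance, or None if no word
    was seen in training."""
    scores = defaultdict(int)
    for word in text.lower().split():
        for feature, c in counts[word].items():
            scores[feature] += c
    return max(scores, key=scores.get) if scores else None
```

The paper's model additionally refines itself iteratively by testing hypotheses for statistical validity, which this sketch omits.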


Author(s):  
Zhongxiang Wang ◽  
Masoud Hamedi ◽  
Stanley Young

Crowdsourced GPS probe data have been gaining popularity in recent years as a source of real-time traffic information for applications such as travel times on changeable-message signs and incident detection, supporting driver operations and transportation systems management and operations. Efforts have been made to evaluate the quality of such data from different perspectives. Although such crowdsourced data are already in widespread use in many states, particularly in the high-traffic areas of the Eastern seaboard, concerns about latency—the time between traffic being perturbed as a result of an incident and the reflection of that disturbance in the outsourced data feed—have escalated in importance. Latency is critical for the accuracy of real-time operations, emergency response, and traveler information systems. This paper offers a methodology for measuring probe data latency relative to a selected reference source. Although Bluetooth reidentification data are used as the reference source, the methodology can be applied to any other ground truth data source of choice. The core of the methodology is a maximum pattern matching algorithm that works with three fitness objectives. To test the methodology, sample field reference data were collected on multiple freeway segments for a 2-week period using portable Bluetooth sensors as ground truth. Equivalent GPS probe data were obtained from a private vendor, and their latency was evaluated. Latency at different times of the day, the impact of the road segmentation scheme on latency, and the sensitivity of the latency to both speed-slowdown and recovery-from-slowdown episodes are also discussed.
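The latency estimate can be sketched as finding the shift that best aligns the probe series with the reference series. This mean-squared-error search is a simplified stand-in for the paper's maximum pattern matching with its three fitness objectives, which are not detailed here.

```python
def estimate_latency(reference, probe, max_lag):
    """Return the lag (in samples) that minimizes the mean squared error
    between the reference series (e.g. Bluetooth speeds) and the probe
    series shifted back by that lag."""
    best_lag, best_err = 0, float("inf")
    for lag in range(max_lag + 1):
        pairs = list(zip(reference, probe[lag:]))
        if not pairs:
            continue
        err = sum((r - p) ** 2 for r, p in pairs) / len(pairs)
        if err < best_err:
            best_lag, best_err = lag, err
    return best_lag
```

Multiplying the best lag by the sampling interval converts it to latency in seconds.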


2021 ◽  
Author(s):  
Francesco Silvestro ◽  
Giulia Ercolani ◽  
Simone Gabellani ◽  
Pietro Giordano ◽  
Marco Falzacappa

Abstract Reducing errors in streamflow simulations is one of the main issues for a reliable forecast system aimed at managing floods and water resources. Data assimilation is a powerful tool for reducing model errors. Unfortunately, its use in operational chains with distributed, physically based models is challenging, since many methodologies require computational times that are hardly compatible with operational needs. The implemented methodology corrects modelled water level in channels and root-zone soil moisture using real-time water level gauge stations. The model's variables are corrected locally, and the updates are then propagated upstream with a simple approach that accounts for sub-basin contributions. The overfitting issue, which arises when updating a spatially distributed model with sparse streamflow data, is thus addressed in the context of a large-scale operational implementation working in real time, thanks to the simplicity of the strategy. To test the method, a hindcast of daily simulations covering 18 months was performed on the Italian Tevere basin, and the modelling results with and without assimilation were compared. In both cases, the setup was the one currently in place in the operational framework. The analysis shows a clear overall benefit of applying the proposed method, even outside the assimilation time window.
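A heavily simplified sketch of the two steps, local correction at a gauge and upstream propagation: the gain value and the area-fraction weighting are assumptions for illustration, whereas the operational scheme derives both from the model state and sub-basin contributions.

```python
def nudge(model_level, gauge_level, gain=0.5):
    """Pull the modelled water level toward the observed gauge reading.
    gain = 0 leaves the model untouched; gain = 1 replaces it with the
    observation."""
    return model_level + gain * (gauge_level - model_level)

def propagate_upstream(correction, area_fractions):
    """Distribute a downstream correction to upstream sub-basins in
    proportion to their contributing-area fractions (simplified)."""
    return [correction * f for f in area_fractions]
```

Keeping both steps this cheap is what makes the scheme compatible with real-time operational deadlines.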


2017 ◽  
Author(s):  
Norel Rimbu ◽  
Monica Ionita ◽  
Markus Czymzik ◽  
Achim Brauer ◽  
Gerrit Lohmann

Abstract. We investigate the relationship between variability in the frequency of River Ammer floods (southern Germany) and temperature/precipitation extremes over Europe, using observational River Ammer discharge data back to 1926 and the 5500-year-long flood layer record from varved Lake Ammersee sediments. We show that observed River Ammer flood frequency variability is related not only to local extreme precipitation but also to large-scale extreme temperature anomalies. Less (more) extreme high temperatures over central and western (northeastern) Europe are recorded during periods of increased River Ammer flood frequency. We argue that changing radiative forcing due to the cloudiness anomaly patterns associated with River Ammer floods induces these extreme temperature anomalies. Consistent patterns are obtained using observed discharge and proxy flood layer frequency data. Furthermore, a higher frequency of observed River Ammer floods and flood layers is associated with enhanced blocking activity over northeastern Europe. A blocking high over this region increases the probability of wave breaking and associated heavy precipitation over western Europe. A similar blocking pattern is associated with periods of reduced solar activity. Consequently, solar-modulated changes in blocking frequency over northeastern Europe could explain the connection between River Ammer floods and solar activity, as also identified in previous studies. We argue that the multi-decadal to millennial flood frequency variations in the Mid- to Late Holocene flood layer record from Lake Ammersee also characterize extreme temperatures in northeastern Europe.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ranjit Mahato ◽  
Gibji Nimasow ◽  
Oyi Dai Nimasow ◽  
Dhoni Bushi

Abstract Sonitpur and Udalguri districts of Assam possess rich tropical forests with equally important faunal species. The Nameri National Park, Sonai-Rupai Wildlife Sanctuary, and other Reserved Forests are areas of attraction for tourists and wildlife lovers. However, these protected areas reportedly face the problems of encroachment and large-scale deforestation. Therefore, this study attempts to estimate forest cover change in the area by integrating remotely sensed data from 1990, 2000, 2010, and 2020 with a Geographic Information System. The Maximum Likelihood algorithm-based supervised classification shows acceptable agreement between the classified images and the ground truth data, with an overall accuracy of about 96% and a Kappa coefficient of 0.95. The results reveal a forest cover loss of 7.47% from 1990 to 2000 and 7.11% from 2000 to 2010. However, there was a slight gain of 2.34% in forest cover from 2010 to 2020. The net change from forest to non-forest was 195.17 km2 over the last forty years. The forest transition map shows a declining trend of forest remaining forest until 2010 and a slight increase thereafter. There was a considerable decline in forest-to-non-forest conversion (from 11.94% to 3.50%) between 2000–2010 and 2010–2020. Further, a perceptible gain was also observed in non-forest-to-forest conversion during the last four decades. The overlay analysis of forest cover maps shows an area of 460.76 km2 (28.89%) as forest (unchanged), 764.21 km2 (47.91%) as non-forest (unchanged), 282.67 km2 (17.72%) as deforestation, and 87.50 km2 (5.48%) as afforestation. The study found hotspots of deforestation in the areas closest to the National Park, Wildlife Sanctuary, and Reserved Forests, due to encroachment for human habitation, agriculture, and timber/fuelwood extraction. Therefore, the study suggests an early declaration of these protected areas as Eco-Sensitive Zones to control the increasing trend of deforestation.
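Accuracy figures like these come from a confusion matrix between classified pixels and ground-truth samples; the standard computation can be sketched as follows (the 2×2 matrix in the test is invented, not the study's).

```python
def accuracy_and_kappa(matrix):
    """Overall accuracy and Cohen's Kappa from a square confusion matrix
    (rows = classified class, columns = ground-truth class)."""
    k = len(matrix)
    total = sum(sum(row) for row in matrix)
    observed = sum(matrix[i][i] for i in range(k)) / total
    # Chance agreement: products of matching row and column marginals.
    expected = sum(sum(matrix[i]) * sum(row[i] for row in matrix)
                   for i in range(k)) / total ** 2
    return observed, (observed - expected) / (1 - expected)
```

Kappa discounts the agreement expected by chance, which is why it is reported alongside overall accuracy.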


Author(s):  
Zhihan Fang ◽  
Yu Yang ◽  
Guang Yang ◽  
Yikuan Xian ◽  
Fan Zhang ◽  
...  

Data from cellular networks have proven to be one of the most promising ways to understand large-scale human mobility for various ubiquitous computing applications, owing to the high penetration of cellphones and low collection cost. Existing mobility models driven by cellular network data suffer from sparse spatial-temporal observations because user locations are recorded only with cellphone activities, e.g., calls, texts, or internet access. In this paper, we design a human mobility recovery system called CellSense that takes sparse cellular billing records (CBR) as input and outputs dense continuous records, closing the sensing gap that arises when cellular networks are used as sensing systems for human mobility. There is limited work on this kind of recovery system at large scale because, even though it is straightforward to design a recovery system based on regression models, it is very challenging to evaluate such models at large scale given the lack of ground truth data. In this paper, we explore a new opportunity, based on the upgrade of cellular infrastructures, to obtain cellular network signaling data as ground truth; these data log the interactions between cellphones and cellular towers at the signal level (e.g., attaching, detaching, paging) even without billable activities. Based on the signaling data, we design CellSense for human mobility recovery by integrating collective mobility patterns with individual mobility modeling, achieving a 35.3% improvement over the state-of-the-art models. The key application of our recovery model is to take the regular sparse CBR data that a researcher already has and recover the data missing due to the sensing gaps of CBR data, producing a dense cellular dataset on which to train a machine learning model for their use cases, e.g., next-location prediction.
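As a baseline for what such a recovery system does, the sketch below densifies sparse billing-record fixes by linear interpolation between consecutive observations. CellSense itself integrates collective mobility patterns with individual mobility modeling, which this baseline does not reproduce.

```python
def recover(sparse_fixes):
    """sparse_fixes: list of (timestamp, position) pairs from billing
    records, with integer timestamps in increasing order. Returns one
    (timestamp, position) pair per time step, filling gaps by linear
    interpolation between consecutive fixes."""
    dense = []
    for (t0, x0), (t1, x1) in zip(sparse_fixes, sparse_fixes[1:]):
        for t in range(t0, t1):
            dense.append((t, x0 + (x1 - x0) * (t - t0) / (t1 - t0)))
    dense.append(sparse_fixes[-1])
    return dense
```

Signaling data provide the dense ground truth against which the recovered trajectory of any such model can be scored.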


2018 ◽  
Vol 10 (11) ◽  
pp. 1726 ◽  
Author(s):  
Lillian Petersen

Developing countries often have poor monitoring and reporting of weather and crop health, leading to slow responses to droughts and food shortages. Here, I develop satellite analysis methods and software tools to predict crop yields two to four months before the harvest. The method measures relative vegetation health based on pixel-level monthly anomalies of the NDVI, EVI, and NDWI indices. Because no crop mask, tuning, or subnational ground truth data are required, the method can be applied to any location, crop, or climate, making it ideal for African countries with small fields and poor ground observations. Testing began in Illinois, where there is reliable county-level crop data. Correlations were computed between corn, soybean, and sorghum yields and monthly vegetation health anomalies for every county and year. A multivariate regression using every index and month (up to 1600 values) produced a correlation of 0.86 for corn, 0.74 for soybeans, and 0.65 for sorghum, all with p-values less than 10⁻⁶. The high correlations in Illinois show that this model has good forecasting skill for crop yields. Next, the method was applied to every country in Africa for each country's main crops. Crop production was then predicted for the 2018 harvest and compared to actual production values. Twenty percent of the predictions had less than 2% error, and 40% had less than 5% error. This method is unique because of its simplicity and versatility: it shows that a single user on a laptop computer can produce reasonable real-time estimates of crop yields across an entire continent.
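The core of the method, standardized monthly index anomalies regressed against yields, can be sketched in univariate form (the paper's regression is multivariate over many index/month combinations, and the numbers in the test are invented):

```python
def anomalies(values):
    """Standardized anomalies of one month's index (e.g. NDVI) across
    years: (value - mean) / standard deviation."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

def fit_yield(anoms, yields):
    """Ordinary least-squares slope and intercept relating an index
    anomaly to crop yield."""
    n = len(anoms)
    ma, my = sum(anoms) / n, sum(yields) / n
    slope = (sum((a - ma) * (y - my) for a, y in zip(anoms, yields))
             / sum((a - ma) ** 2 for a in anoms))
    return slope, my - slope * ma
```

Because the anomalies are relative to each pixel's own climatology, no crop mask or absolute calibration is needed before the regression step.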


2013 ◽  
Vol 30 (10) ◽  
pp. 2452-2464 ◽  
Author(s):  
J. H. Middleton ◽  
C. G. Cooke ◽  
E. T. Kearney ◽  
P. J. Mumford ◽  
M. A. Mole ◽  
...  

Abstract Airborne scanning laser technology provides an effective method to systematically survey surface topography and its changes over time. In this paper, the authors describe the capability of a rapid-response lidar system, presenting results from a set of surveys of Narrabeen–Collaroy Beach, Sydney, New South Wales, Australia, conducted over a short period of time during which significant erosion and deposition of the subaerial beach occurred. The airborne lidar data were obtained using a Riegl Q240i lidar coupled with a NovAtel SPAN-CPT integrated Global Navigation Satellite System (GNSS) and inertial unit, flown at various altitudes. A set of the airborne lidar data is compared with ground-truth data acquired from the beach using a GNSS/real-time kinematic (RTK) system mounted on an all-terrain vehicle. The comparison shows consistency between the systems, with the airborne lidar data differing from the ground-truth data by less than 0.02 m when four surveys are undertaken, provided a method of removing outliers—developed here and designated "weaving"—is used. The combination of airborne lidar data with ground-truth data provides an excellent means of obtaining high-quality topographic data. Using the results from this analysis, it is shown that airborne lidar data alone produce results that can be used for ongoing large-scale surveys of beaches with reliable accuracy, and that the enhanced accuracy resulting from multiple airborne surveys can be assessed quantitatively.
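A toy version of the comparison step: compute lidar-minus-ground residuals at matched points and average them after discarding gross outliers. The median-threshold filter below merely stands in for the paper's "weaving" method, whose details are not given here, and the threshold value is an assumption.

```python
def mean_offset(airborne, ground, threshold=0.1):
    """Mean vertical offset (m) between airborne lidar and ground-truth
    elevations at matched points, keeping only residuals within a
    threshold of the median residual."""
    residuals = [a - g for a, g in zip(airborne, ground)]
    median = sorted(residuals)[len(residuals) // 2]
    kept = [r for r in residuals if abs(r - median) <= threshold]
    return sum(kept) / len(kept)
```

Averaging such offsets over repeated surveys is what drives the reported sub-0.02 m agreement.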

