Application of Low to Medium Resolution Data for Hydrological Modeling in Malawi

Author(s):  
Natalia Dambe ◽  
Julian Smit
2021 ◽  
Vol 13 (19) ◽  
pp. 3870
Author(s):  
Hilma S. Nghiyalwa ◽  
Marcel Urban ◽  
Jussi Baade ◽  
Izak P. J. Smit ◽  
Abel Ramoelo ◽  
...  

Reliable estimates of savanna vegetation constituents (i.e., woody and herbaceous vegetation) are essential, as they are both responders to and drivers of global change. The savanna is a highly heterogeneous biome with high variability in land cover types, and it is dynamic at both temporal and spatial scales. To understand the spatio-temporal dynamics of savannas, mixed-pixel analysis of Earth Observation (EO) data is crucial. Mixed-pixel analysis provides detailed land cover data at the sub-pixel level, which are essential for conservation, for understanding food supply for herbivores, for quantifying environmental change such as bush encroachment, for estimating fuel availability essential for understanding fire dynamics, and for accurate estimation of savanna biomass. This review paper consulted 197 studies employing mixed-pixel analysis in savanna ecosystems. The review indicates that studies have so far attempted to resolve the savanna mixed-pixel problem mainly with coarse-resolution data, such as Terra/Aqua MODIS and AVHRR, and medium-resolution Landsat data, to provide fractional cover. Hence, spatio-temporal mixed-pixel analysis of savannas at high spatial resolution is lacking. Methods used for mixed-pixel analysis include parametric and non-parametric approaches, ranging from pixel-unmixing models such as linear spectral mixture analysis (SMA) and time series decomposition, to empirical methods linking green vegetation parameters with Vegetation Indices (VIs), and machine learning methods such as regression trees (RT) and random forests (RF). Most studies were undertaken at local and regional scales, highlighting a research gap for savanna mixed-pixel studies at national, continental, and global levels. Parametric methods for modeling spatio-temporal mixed-pixel analysis were preferred for coarse- to medium-resolution remote sensing data, while non-parametric methods were preferred for very high to high spatial resolution data.
The review indicates a gap in long time series spatio-temporal mixed-pixel analysis of savannas using high-resolution data at various scales. There is potential to harmonize available low-resolution EO data with new high-resolution sensors to provide the long time series of the savanna mixed pixel that, according to this review, is currently missing.
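Of the pixel-unmixing models surveyed above, linear spectral mixture analysis is the most common: each pixel spectrum is modeled as a linear combination of endmember spectra weighted by their fractional cover. A minimal sketch, using illustrative (not measured) endmember reflectances for woody vegetation, herbaceous vegetation, and bare soil:

```python
import numpy as np

# Hypothetical endmember spectra (reflectance in 4 bands) for woody
# vegetation, herbaceous vegetation, and bare soil -- illustrative values only.
endmembers = np.array([
    [0.05, 0.08, 0.04, 0.40],   # woody
    [0.06, 0.10, 0.07, 0.55],   # herbaceous
    [0.20, 0.25, 0.30, 0.35],   # soil
]).T                            # shape: (bands, endmembers)

def unmix(pixel, E):
    """Unconstrained least-squares unmixing followed by a simple
    clip-and-renormalize step to enforce non-negative, sum-to-one fractions."""
    f, *_ = np.linalg.lstsq(E, pixel, rcond=None)
    f = np.clip(f, 0.0, None)
    return f / f.sum()

# A mixed pixel that is 60% woody, 30% herbaceous, 10% soil.
true_f = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ true_f
est = unmix(pixel, endmembers)
print(np.round(est, 3))
```

With noise-free input, the least-squares solution recovers the true fractions exactly; real SMA implementations typically impose the sum-to-one and non-negativity constraints inside the solver rather than by post-hoc clipping.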


2020 ◽  
Author(s):  
Etienne Foulon ◽  
Alain N. Rousseau ◽  
Eduardo J. Scarpari Spolidorio ◽  
Kian Abbasnezhadi

High-resolution data are readily available and used more than ever in hydrological modeling, despite few investigations demonstrating their added value. Nonetheless, a few studies have looked into the benefits of using increased spatial resolution data with the widely used, semi-distributed SWAT model. Meanwhile, far too little attention has been paid to the physically based, semi-distributed hydrological model HYDROTEL, which is widely used for hydrological forecasting and hydroclimatic studies in Quebec, Canada. In a preliminary study, we demonstrated that increasing the spatial resolution of the digital elevation model (DEM) had a significant impact on the discretization of a watershed into hillslopes (i.e., the computational units of HYDROTEL) and on their topographic attributes (slope, elevation, and area). Accordingly, values of the calibration parameters were also substantially affected, whereas model performance was slightly improved for high and low flows only. Hence, we propose a systematic assessment of HYDROTEL with respect to the resolution of the spatiotemporal computational domain for a specific physiographic scale. This investigation was conducted for the 350-km² St. Charles River watershed, Quebec, Canada. The DEM used was derived from LiDAR data and aggregated at 20 m. Due to a lack of accurate precipitation information at time scales of less than 24 hr, data from the high-resolution deterministic precipitation analysis system, CaPA-HRDPA, were used to generate various time steps (6, 8, 12, and 24 hr) and to control results obtained from observed data. This approach, recently applied to three watersheds in Yukon, proved to be an excellent alternative for calibrating a hydrological model in a region known as a hydrometeorological desert (see the EGU 2020 presentation of Abbasnezhadi and Rousseau). The number of computational units ranged from 5 to 684 hillslopes, with mean areas ranging from 75 km² to 0.5 km².
HYDROTEL was automatically calibrated over the 2013-2018 period using PADDS. We combined the Kling-Gupta Efficiency and the log-transformed Nash-Sutcliffe Efficiency to ensure good seasonal and annual representations of the hydrographs. The 12 most sensitive calibration parameters were adjusted using 150 optimisation trials with 150 repetitions each. Behavioral parameters were used to assess uncertainty and the ensuing equifinality. All scenarios were evaluated using flow duration curves, performance indicators (RMSE, % bias), and hydrograph analyses. In addition, quantitative analyses were done with respect to physiographic features such as the length of river segments, hillslopes, and sub-watershed boundaries for each resolution. We believe this study provides the needed systematic framework to assess the trade-offs between spatiotemporal resolution and modeling performance that can be achieved with HYDROTEL. Moreover, the use of various numbers of CaPA-HRDPA stations for model calibration has allowed us to determine the number of precipitation stations needed to achieve a given performance threshold.
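The combined calibration objective can be sketched as follows; the equal weighting of the two criteria is an assumption, as the abstract does not state how the Kling-Gupta Efficiency and the log-transformed Nash-Sutcliffe Efficiency were combined:

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency: 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2)."""
    r = np.corrcoef(sim, obs)[0, 1]        # linear correlation
    alpha = np.std(sim) / np.std(obs)      # variability ratio
    beta = np.mean(sim) / np.mean(obs)     # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def log_nse(sim, obs, eps=1e-6):
    """Nash-Sutcliffe Efficiency on log-transformed flows (low-flow emphasis)."""
    ls, lo = np.log(sim + eps), np.log(obs + eps)
    return 1.0 - np.sum((ls - lo) ** 2) / np.sum((lo - np.mean(lo)) ** 2)

def objective(sim, obs, w=0.5):
    # Equal weighting (w=0.5) is an assumption made for illustration only.
    return w * kge(sim, obs) + (1 - w) * log_nse(sim, obs)

obs = np.array([1.0, 2.0, 4.0, 8.0, 4.0, 2.0])
print(objective(obs, obs))   # a perfect simulation scores 1
```

Both criteria equal 1 for a perfect simulation, so the combined objective is maximized at 1 regardless of the weighting.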


Author(s):  
Teerapong Panboonyuen ◽  
Kulsawasd Jitkajornwanich ◽  
Siam Lawawirojwong ◽  
Panu Srestasathiern ◽  
Peerapon Vateekul

In the remote sensing domain, it is crucial to automatically annotate semantics, e.g., river, building, forest, etc., on raster images. The Deep Convolutional Encoder-Decoder (DCED) network is the state of the art in semantic segmentation for remotely sensed images. However, its accuracy is still limited, since the network is not designed for remotely sensed images and the training data in this domain are deficient. In this paper, we propose a novel CNN for semantic segmentation, tailored to remote sensing corpora, with three main contributions. First, we apply a recent CNN called the ''Global Convolutional Network (GCN)'', since it can capture different resolutions by extracting multi-scale features from different stages of the network. We further enhance the network by improving its backbone with a larger number of layers, which is suitable for medium-resolution remotely sensed images. Second, ''Channel Attention'' is incorporated into our network in order to select the most discriminative filters (features). Third, ''Domain-Specific Transfer Learning'' is introduced to alleviate the data scarcity issue by utilizing other remotely sensed corpora with different resolutions as pre-training data. The experiments were conducted on two data sets: ($i$) medium-resolution data collected from the Landsat-8 satellite and ($ii$) very high resolution data called the ''ISPRS Vaihingen Challenge Data Set''. The results show that our networks outperformed DCED in terms of $F1$ by 17.48% and 2.49% on the medium and very high resolution corpora, respectively.
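The ''Channel Attention'' mechanism described here resembles a squeeze-and-excitation block: channel descriptors are computed by global average pooling, passed through a small bottleneck, and turned into sigmoid gates that reweight the filters. A minimal NumPy sketch with random weights, illustrating the mechanism rather than the authors' trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(feat, W1, W2):
    """Squeeze-and-excitation style channel attention on a (C, H, W) feature map:
    global-average-pool per channel, two small dense layers, sigmoid gate."""
    squeeze = feat.mean(axis=(1, 2))              # (C,) channel descriptors
    hidden = np.maximum(0.0, W1 @ squeeze)        # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))   # sigmoid weights in (0, 1)
    return feat * gate[:, None, None]             # rescale each channel (filter)

C, H, W, r = 8, 4, 4, 2                           # r: bottleneck reduction ratio
feat = rng.standard_normal((C, H, W))
W1 = rng.standard_normal((C // r, C))             # random weights for illustration
W2 = rng.standard_normal((C, C // r))
out = channel_attention(feat, W1, W2)
```

Because the gates lie in (0, 1), discriminative channels are preserved while less informative ones are attenuated; in a trained network, W1 and W2 are learned end to end.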


Author(s):  
Nemanja Dobrota ◽  
Aleksandar Stevanovic ◽  
Nikola Mitrovic

Current signal retiming policies are deficient in recognizing the potential of emerging traffic datasets and simulation tools to improve signal timings. Consequently, current practice advocates the use of periodically collected (low-resolution) traffic datasets and deterministic (low-fidelity) simulation tools. When deployed in the field, such signal timings require excessive fine-tuning. The most recent trends promote the use of high-resolution data collected at 10 Hz. While such an approach shows promise, the process heavily relies on specific data sets that are neither widely available nor clearly integrated into existing signal retiming practices and procedures. Interestingly, data collected in an ongoing fashion and aggregated in several-minute bins (referred to here as medium-resolution) have not received much attention in traditional retiming procedures. This study examines traditional signal retiming practices to provide a contextual framework for the other retiming alternatives. The authors define and classify the different resolutions of traffic data used in the signal retiming process and propose a signal retiming procedure based on widely available medium-resolution data and high-fidelity simulation modeling. The authors apply the traditional (low-resolution and low-fidelity) and the proposed (medium-resolution and high-fidelity) approach to a 28-intersection corridor in southeastern Florida. Signal timing plans developed with the proposed approach outperformed both the current field plans and the plans developed with the traditional approach, reducing the average delay by between 6.5% and 26%. With regard to the number of stops, the changes under both the traditional and the proposed approach were much less significant when compared with the field signal timings.
The proposed signal timings increased travel speeds by 4.1% to 18%, and delay was not transferred onto neighboring streets, as was the case for plans developed with the traditional approach. Development, calibration, and validation of models within the proposed approach are more time-consuming and challenging than the modeling needs of the traditional approach. One direction of future research should address the automation of the calibration and validation procedures; another should be the field evaluation of the proposed signal timing plans.
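Producing the medium-resolution input described above is essentially a binning step: ongoing detector actuations are aggregated into fixed several-minute counts. A minimal sketch with hypothetical timestamps:

```python
from collections import Counter

def aggregate_counts(event_times_s, bin_s=300):
    """Aggregate per-vehicle detector actuation timestamps (in seconds) into
    fixed bins, e.g. 5-minute volume counts ('medium-resolution' data)."""
    return Counter(int(t // bin_s) for t in event_times_s)

# Hypothetical actuations over the first 10 minutes of observation.
events = [12.3, 45.0, 150.7, 299.9, 310.2, 480.5]
volumes = aggregate_counts(events)
print(dict(volumes))   # bin index -> vehicles per 5 minutes; here {0: 4, 1: 2}
```

The same binning applies to any continuously logged quantity (occupancy, speed), trading the event-level detail of 10 Hz data for wide availability and modest storage.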


Author(s):  
Tim G. J. Rudner ◽  
Marc Rußwurm ◽  
Jakub Fil ◽  
Ramona Pelich ◽  
Benjamin Bischke ◽  
...  

We propose a novel approach for rapid segmentation of flooded buildings by fusing multiresolution, multisensor, and multitemporal satellite imagery in a convolutional neural network. Our model significantly expedites the generation of satellite imagery-based flood maps, crucial for first responders and local authorities in the early stages of flood events. By incorporating multitemporal satellite imagery, our model allows for rapid and accurate post-disaster damage assessment and can be used by governments to better coordinate medium- and long-term financial assistance programs for affected areas. The network consists of multiple streams of encoder-decoder architectures that extract spatiotemporal information from medium-resolution images and spatial information from high-resolution images before fusing the resulting representations into a single medium-resolution segmentation map of flooded buildings. We compare our model to state-of-the-art methods for building footprint segmentation as well as to alternative fusion approaches for the segmentation of flooded buildings and find that our model performs best on both tasks. We also demonstrate that our model produces highly accurate segmentation maps of flooded buildings using only publicly available medium-resolution data instead of significantly more detailed but sparsely available very high-resolution data. We release the first open-source dataset of fully preprocessed and labeled multiresolution, multispectral, and multitemporal satellite images of disaster sites along with our source code.
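One simple way to realize the fusion step described above, i.e., combining streams at different resolutions into a single medium-resolution representation, is to pool the high-resolution features onto the medium-resolution grid and concatenate along the channel axis. This sketch illustrates only that scheme; the paper's actual architecture fuses learned encoder-decoder representations:

```python
import numpy as np

def fuse(medium_feat, high_feat, factor):
    """Fuse a medium-resolution feature map (C1, H, W) with a high-resolution
    one (C2, H*factor, W*factor) by average-pooling the latter onto the
    medium-resolution grid and concatenating along the channel axis."""
    c2, hh, ww = high_feat.shape
    pooled = high_feat.reshape(
        c2, hh // factor, factor, ww // factor, factor
    ).mean(axis=(2, 4))                       # (C2, H, W) after pooling
    return np.concatenate([medium_feat, pooled], axis=0)

medium = np.ones((3, 8, 8))    # e.g. temporal features from medium-res imagery
high = np.ones((2, 32, 32))    # e.g. spatial features from high-res imagery
fused = fuse(medium, high, factor=4)
```

A segmentation head applied to `fused` would then predict the flooded-building map on the medium-resolution grid, matching the output resolution described in the abstract.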


CATENA ◽  
2008 ◽  
Vol 75 (1) ◽  
pp. 93-101 ◽  
Author(s):  
Thomas Schmid ◽  
Magaly Koch ◽  
Michael DiBlasi ◽  
Miruts Hagos

2016 ◽  
Vol 44 (4) ◽  
pp. 657-664 ◽  
Author(s):  
R. H. Rizvi ◽  
Ram Newaj ◽  
P. S. Karmakar ◽  
A. Saxena ◽  
S. K. Dhyani

2014 ◽  
Vol 5 (6) ◽  
pp. 539-547 ◽  
Author(s):  
Jacqueline Long ◽  
Chuanmin Hu ◽  
Lisa Robbins


