HESS Opinions: The complementary merits of top-down and bottom-up modelling philosophies in hydrology

Author(s): Markus Hrachowitz, Martyn Clark

Abstract. In hydrology, the two somewhat competing modelling philosophies of bottom-up and top-down approaches form the basis of most process-based models. Differing mostly (1) in their respective degree of detail in resolving the modelling domain and (2) in the degree to which they explicitly treat conservation laws, these two philosophies suffer from similar limitations. Nevertheless, a better understanding of their respective bases (i.e. micro-scale vs. macro-scale) as well as their respective shortcomings has the potential to identify the complementary value of the two philosophies for improving our models. In this manuscript we analyse several frequently communicated beliefs and assumptions to identify, discuss and emphasize the functional similarity of the two modelling philosophies. We argue that deficiencies in model applications largely do not depend on the modelling philosophy but rather on the way a model is implemented. Based on the premises that top-down models can be implemented at any desired degree of detail and that any type of model remains to some degree conceptual, we argue that a convergence of the two modelling strategies may hold some value for progressing the development of hydrological models.

2018, Vol 22 (8), pp. 4425–4447
Author(s): Manuel Antonetti, Massimiliano Zappa

Abstract. Both modellers and experimentalists agree that using expert knowledge can improve the realism of conceptual hydrological models. However, their use of expert knowledge differs for each step in the modelling procedure, which involves hydrologically mapping the dominant runoff processes (DRPs) occurring on a given catchment, parameterising these processes within a model, and allocating its parameters. Modellers generally use very simplified mapping approaches, applying their knowledge in constraining the model by defining parameter and process relational rules. In contrast, experimentalists usually prefer to invest all their detailed and qualitative knowledge about processes in obtaining as realistic a spatial distribution of DRPs as possible, and in defining narrow value ranges for each model parameter. Runoff simulations are affected by equifinality and numerous other uncertainty sources, which challenge the assumption that the more expert knowledge is used, the better the results obtained. To test the extent to which expert knowledge can improve simulation results under uncertainty, we therefore applied a total of 60 modelling chain combinations forced by five rainfall datasets of increasing accuracy to four nested catchments in the Swiss Pre-Alps. These datasets include hourly precipitation data from automatic stations interpolated with Thiessen polygons and with the inverse distance weighting (IDW) method, as well as different spatial aggregations of Combiprecip, a combination between ground measurements and radar quantitative estimations of precipitation. To map the spatial distribution of the DRPs, three mapping approaches with different levels of involvement of expert knowledge were used to derive so-called process maps.
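As a minimal illustration of the inverse distance weighting step mentioned above (the gauge coordinates and precipitation values below are hypothetical, not the study data):

```python
# Inverse distance weighting (IDW): estimate precipitation at a target
# point as a weighted mean of gauge values, with weights 1 / d**power.
def idw(stations, target, power=2.0):
    """stations: list of ((x, y), value); target: (x, y)."""
    num, den = 0.0, 0.0
    for (x, y), value in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:                 # target coincides with a gauge
            return value
        w = d2 ** (-power / 2.0)      # equivalent to 1 / d**power
        num += w * value
        den += w
    return num / den

# Illustrative hourly precipitation (mm) at three hypothetical gauges
gauges = [((0.0, 0.0), 2.0), ((10.0, 0.0), 4.0), ((0.0, 10.0), 6.0)]
print(round(idw(gauges, (1.0, 1.0)), 2))  # → 2.14, dominated by the nearest gauge
```

The estimate always stays within the range of the gauge values, which is one reason IDW (like Thiessen polygons) cannot extrapolate localised convective peaks.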
Finally, both a typical modellers' top-down set-up relying on parameter and process constraints and an experimentalists' set-up based on bottom-up thinking and on field expertise were implemented using a newly developed process-based runoff generation module (RGM-PRO). To quantify the uncertainty originating from forcing data, process maps, model parameterisation, and parameter allocation strategy, an analysis of variance (ANOVA) was performed. The simulation results showed that (i) the modelling chains based on the most complex process maps performed slightly better than those based on less expert knowledge; (ii) the bottom-up set-up performed better than the top-down one when simulating short-duration events, but similarly to the top-down set-up when simulating long-duration events; (iii) the differences in performance arising from the different forcing data were due to compensation effects; and (iv) the bottom-up set-up can help identify uncertainty sources, but is prone to overconfidence problems, whereas the top-down set-up seems to accommodate uncertainties in the input data best. Overall, modellers' and experimentalists' concepts of model realism differ. This means that the level of detail a model should have to accurately reproduce the DRPs expected must be agreed in advance.
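The ANOVA step can be sketched as a plain variance decomposition over the factors of the modelling chain; the two factors, their levels, and the skill scores below are invented for illustration and are not the paper's 60-chain results.

```python
from statistics import mean

# Hypothetical skill scores (e.g. Nash–Sutcliffe efficiency) for each
# combination of process map and set-up; values are illustrative only.
scores = {
    ("simple", "top-down"): 0.62, ("simple", "bottom-up"): 0.58,
    ("intermediate", "top-down"): 0.66, ("intermediate", "bottom-up"): 0.64,
    ("complex", "top-down"): 0.70, ("complex", "bottom-up"): 0.71,
}

def main_effect_ss(scores, factor_index):
    """Sum of squares attributable to one factor (levels along factor_index)."""
    grand = mean(scores.values())
    ss = 0.0
    for level in {key[factor_index] for key in scores}:
        group = [v for k, v in scores.items() if k[factor_index] == level]
        ss += len(group) * (mean(group) - grand) ** 2
    return ss

grand = mean(scores.values())
total_ss = sum((v - grand) ** 2 for v in scores.values())
for name, idx in [("process map", 0), ("set-up", 1)]:
    share = main_effect_ss(scores, idx) / total_ss
    print(f"{name}: {share:.0%} of variance")
```

Each main-effect sum of squares divided by the total gives the fraction of performance variance attributable to that factor, which is how ANOVA separates, e.g., forcing-data uncertainty from parameterisation uncertainty.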


2021, pp. 1–33
Author(s): Albert Patterson, Yong Hoon Lee, James T. Allison

Abstract Design-for-manufacturing (DFM) concepts have traditionally focused on design simplification; this is highly effective for relatively simple, mass-produced products, but tends to be too restrictive for more complex designs. Effort in recent decades has focused on creating methods for generating and imposing specific, process-derived technical manufacturability constraints for some common problems. This paper presents an overview of the problem and its design implications, a discussion of the nature of the manufacturability constraints, and a survey of the existing approaches and methods for generating/enforcing the minimally restrictive manufacturability constraints within several design domains. Five major design perspectives were considered: the system design (top-down) perspective, the product/component design (bottom-up) perspective, the manufacturing-process-dominant case (product/component design under a specific process), the part-redesign perspective, and the sustainability perspective. Manufacturability constraints within four design levels or scales were explored as well, ranging from macro-scale to sub-micro-scale design. Very little previous work was found in many areas, but it is clear from the existing literature that the problem and a general solution to it are very important to explore further in future DFM efforts.


Author(s): Albert E. Patterson, Yong Hoon Lee, James T. Allison

Abstract Design-for-manufacturing (DFM) concepts have traditionally focused on design simplification; this is highly effective for relatively simple, mass-produced products, but tends to be too restrictive for more complex designs. Effort in recent decades has focused on creating methods for generating and imposing specific, process-derived technical manufacturability constraints for some common problems. This paper presents an overview of the problem and its design implications, a discussion of the nature of the manufacturability constraints, and a survey of the existing approaches and methods for generating/enforcing the minimally restrictive manufacturability constraints within several design domains. Four major design perspectives were considered: the system design (top-down) perspective, the product design (bottom-up) perspective, the manufacturing-process-dominant approach (specific process required), and the part-redesign approach. Manufacturability constraints within four design levels were explored as well, ranging from macro-scale to sub-micro-scale design. Very little previous work was found in many areas, but it is clear from the existing literature that the problem and a general solution to it are very important to explore further in future DFM and design automation work.


2009, Vol 6 (1), pp. 1317–1343
Author(s): C. Gerbig, A. J. Dolman, M. Heimann

Abstract. Estimating carbon exchange at regional scales is paramount to understanding feedbacks between climate and the carbon cycle, but also to verifying climate change mitigation such as emission reductions and strategies compensating for emissions such as carbon sequestration. This paper discusses evidence for a number of important shortcomings of current generation modelling frameworks designed to provide regional scale budgets. Current top-down and bottom-up approaches targeted at deriving consistent regional scale carbon exchange estimates for biospheric and anthropogenic sources and sinks are hampered by a number of issues: we show that top-down constraints using point measurements made from tall towers, although sensitive to larger spatial scales, are influenced by local areas much more strongly than previously thought. On the other hand, classical bottom-up approaches using process information collected at the local scale, such as from eddy covariance data, need up-scaling and validation on larger scales. We therefore argue for a combination of both approaches, implicitly providing the important local scale information for the top-down constraint, and providing the atmospheric constraint for up-scaling of flux measurements. Combining these data streams necessitates quantifying their respective representation errors, which are discussed. The impact of these findings on future network design is highlighted, and some recommendations are given.


2003, Vol 33 (3), pp. 480–489
Author(s): Boris Zeide

So far, process-based models have largely used the bottom-up approach. They start by describing physiological processes in a single plant element and then integrate the constituent processes to predict growth and dimensions of the tree and stand. Although bottom-up process models are praised for their contribution to knowledge of growth processes, their predictions are poor. The complementary top-down approach begins where the bottom-up model ends: with measurable variables such as height or diameter. This approach intends to uncover the ecophysiological processes responsible for the observed tree dimensions rather than to provide growth information for forest management. As foresters, we would like to utilize measurable variables to uncover the inner mechanisms of growth in the hope of predicting future diameter, number of trees, volume, and other practical variables. This means that we need to combine the top-down and bottom-up approaches. Examples of the united U-approach (so called because of its descending and ascending branches) are described. They demonstrate that growth models can be both meaningful and accurate.
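The top-down starting point, measurable stand variables described by an empirical growth function, can be illustrated with the widely used Chapman–Richards form; the parameter values below are generic placeholders, not fitted to any data from this paper.

```python
import math

# Chapman–Richards growth function, a common top-down form in forestry:
# H(t) = A * (1 - exp(-k * t)) ** m, with asymptote A (e.g. maximum
# height in metres) and shape parameters k and m. Values are illustrative.
def chapman_richards(t, A=30.0, k=0.05, m=1.5):
    return A * (1.0 - math.exp(-k * t)) ** m

for age in (10, 50, 200):
    print(age, round(chapman_richards(age), 1))  # growth slows toward A = 30
```

A top-down analysis fits such a curve to observed heights or diameters first, and only then asks which ecophysiological processes could explain the fitted parameters.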


2017, Vol 21 (8), pp. 3953–3973
Author(s): Markus Hrachowitz, Martyn P. Clark

Abstract. In hydrology, two somewhat competing philosophies form the basis of most process-based models. At one endpoint of this continuum are detailed, high-resolution descriptions of small-scale processes that are numerically integrated to larger scales (e.g. catchments). At the other endpoint of the continuum are spatially lumped representations of the system that express the hydrological response via, in the extreme case, a single linear transfer function. Many other models, developed starting from these two contrasting endpoints, plot along this continuum with different degrees of spatial resolution and process complexity. A better understanding of the respective basis as well as the respective shortcomings of different modelling philosophies has the potential to improve our models. In this paper we analyse several frequently communicated beliefs and assumptions to identify, discuss and emphasize the functional similarity of the seemingly competing modelling philosophies. We argue that deficiencies in model applications largely do not depend on the modelling philosophy (although some models may be more suitable for specific applications than others), but rather on the way a model is implemented. Based on the premises that any model can be implemented at any desired degree of detail and that any type of model remains to some degree conceptual, we argue that a convergence of modelling strategies may hold some value for advancing the development of hydrological models.
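The lumped endpoint of this continuum, a single linear transfer function, can be sketched as a linear-reservoir convolution of effective rainfall with an exponential unit hydrograph; the storage constant and rainfall series are illustrative.

```python
import math

# Lumped "single linear transfer function": catchment discharge as the
# convolution of effective rainfall with an exponential unit hydrograph
# h(t) = (1/K) * exp(-t/K), discretised mass-conservatively at step dt.
def linear_reservoir_response(rain, K=5.0, dt=1.0):
    n = len(rain)
    # Discrete kernel: fraction of input released in each subsequent step
    h = [math.exp(-i * dt / K) * (1.0 - math.exp(-dt / K)) for i in range(n)]
    return [sum(rain[j] * h[i - j] for j in range(i + 1)) for i in range(n)]

rain = [0, 10, 0, 0, 0, 0, 0, 0]          # illustrative rainfall pulse (mm)
q = linear_reservoir_response(rain)
print([round(v, 2) for v in q])            # peak just after the pulse, then recession
```

Because the kernel sums to one over infinite time, the discretisation conserves mass; everything about the catchment is collapsed into the single storage constant K, which is exactly the simplification the abstract contrasts with high-resolution small-scale process descriptions.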


2020, Vol 12 (3), pp. 1561–1623
Author(s): Marielle Saunois, Ann R. Stavert, Ben Poulter, Philippe Bousquet, Josep G. Canadell, ...

Abstract. Understanding and quantifying the global methane (CH4) budget is important for assessing realistic pathways to mitigate climate change. Atmospheric emissions and concentrations of CH4 continue to increase, making CH4 the second most important human-influenced greenhouse gas in terms of climate forcing, after carbon dioxide (CO2). The relative importance of CH4 compared to CO2 depends on its shorter atmospheric lifetime, stronger warming potential, and variations in atmospheric growth rate over the past decade, the causes of which are still debated. Two major challenges in reducing uncertainties in the atmospheric growth rate arise from the variety of geographically overlapping CH4 sources and from the destruction of CH4 by short-lived hydroxyl radicals (OH). To address these challenges, we have established a consortium of multidisciplinary scientists under the umbrella of the Global Carbon Project to synthesize and stimulate new research aimed at improving and regularly updating the global methane budget. Following Saunois et al. (2016), we present here the second version of the living review paper dedicated to the decadal methane budget, integrating results of top-down studies (atmospheric observations within an atmospheric inverse-modelling framework) and bottom-up estimates (including process-based models for estimating land surface emissions and atmospheric chemistry, inventories of anthropogenic emissions, and data-driven extrapolations). For the 2008–2017 decade, global methane emissions are estimated by atmospheric inversions (a top-down approach) to be 576 Tg CH4 yr−1 (range 550–594, corresponding to the minimum and maximum estimates of the model ensemble). Of this total, 359 Tg CH4 yr−1 or ∼ 60 % is attributed to anthropogenic sources, that is emissions caused by direct human activity (i.e. anthropogenic emissions; range 336–376 Tg CH4 yr−1 or 50 %–65 %). 
The mean annual total emission for the new decade (2008–2017) is 29 Tg CH4 yr−1 larger than our estimate for the previous decade (2000–2009), and 24 Tg CH4 yr−1 larger than the one reported in the previous budget for 2003–2012 (Saunois et al., 2016). Since 2012, global CH4 emissions have been tracking the warmest scenarios assessed by the Intergovernmental Panel on Climate Change. Bottom-up methods suggest almost 30 % larger global emissions (737 Tg CH4 yr−1, range 594–881) than top-down inversion methods. Indeed, bottom-up estimates for natural sources such as natural wetlands, other inland water systems, and geological sources are higher than top-down estimates. The atmospheric constraints on the top-down budget suggest that at least some of these bottom-up emissions are overestimated. The latitudinal distribution of atmospheric observation-based emissions indicates a predominance of tropical emissions (∼ 65 % of the global budget, < 30° N) compared to mid-latitudes (∼ 30 %, 30–60° N) and high northern latitudes (∼ 4 %, 60–90° N). The most important source of uncertainty in the methane budget is attributable to natural emissions, especially those from wetlands and other inland waters. Some of our global source estimates are smaller than those in previously published budgets (Saunois et al., 2016; Kirschke et al., 2013). In particular, wetland emissions are about 35 Tg CH4 yr−1 lower due to an improved partitioning between wetlands and other inland waters. Emissions from geological sources and wild animals are also found to be smaller, by 7 and 8 Tg CH4 yr−1, respectively. However, the overall discrepancy between bottom-up and top-down estimates has been reduced by only 5 % compared to Saunois et al. (2016), due to a higher estimate of emissions from inland waters, highlighting the need for more detailed research on emission factors.
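The headline percentages in the abstract can be cross-checked with simple arithmetic on the reported totals:

```python
# Cross-check of the reported budget figures (Tg CH4 per year, 2008–2017).
top_down = 576       # global emissions from atmospheric inversions
anthropogenic = 359  # portion attributed to direct human activity
bottom_up = 737      # sum of bottom-up estimates

anthro_share = anthropogenic / top_down    # ~0.62, i.e. the "~60 %" in the text
bu_excess = bottom_up / top_down - 1.0     # ~0.28, i.e. "almost 30 %" larger
print(f"anthropogenic share: {anthro_share:.0%}, bottom-up excess: {bu_excess:.0%}")
```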
Priorities for improving the methane budget include (i) a global, high-resolution map of water-saturated soils and inundated areas emitting methane based on a robust classification of different types of emitting habitats; (ii) further development of process-based models for inland-water emissions; (iii) intensification of methane observations at local scales (e.g., FLUXNET-CH4 measurements) and urban-scale monitoring to constrain bottom-up land surface models, and at regional scales (surface networks and satellites) to constrain atmospheric inversions; (iv) improvements of transport models and the representation of photochemical sinks in top-down inversions; and (v) development of a 3D variational inversion system using isotopic and/or co-emitted species such as ethane to improve source partitioning. The data presented here can be downloaded from https://doi.org/10.18160/GCP-CH4-2019 (Saunois et al., 2020) and from the Global Carbon Project.


2009, Vol 6 (10), pp. 1949–1959
Author(s): C. Gerbig, A. J. Dolman, M. Heimann

Abstract. Estimating carbon exchange at regional scales is paramount to understanding feedbacks between climate and the carbon cycle, but also to verifying climate change mitigation such as emission reductions and strategies compensating for emissions such as carbon sequestration. This paper discusses evidence for a number of important shortcomings of current generation modelling frameworks designed to provide regional scale budgets from atmospheric observations. Current top-down and bottom-up approaches targeted at deriving consistent regional scale carbon exchange estimates for biospheric and anthropogenic sources and sinks are hampered by a number of issues: we show that top-down constraints using point measurements made from tall towers, although sensitive to larger spatial scales, are however influenced by local areas much more strongly than previously thought. On the other hand, classical bottom-up approaches using process information collected at the local scale, such as from eddy covariance data, need up-scaling and validation on larger scales. We therefore argue for a combination of both approaches, implicitly providing the important local scale information for the top-down constraint, and providing the atmospheric constraint for up-scaling of flux measurements. Combining these data streams necessitates quantifying their respective representation errors, which are discussed. The impact of these findings on future network design is highlighted, and some recommendations are given.


2017
Author(s): Manuel Antonetti, Massimiliano Zappa

Abstract. Both modellers and experimentalists agree that using expert knowledge can improve the realism of conceptual hydrological models. However, their use of expert knowledge differs for each step in the modelling procedure, which involves hydrologically mapping the dominant runoff processes (DRPs) occurring on a given catchment, parameterising these processes within a model, and allocating its parameters. Modellers generally use very simplified mapping approaches, applying their knowledge in constraining the model by defining parameter and process relational rules. In contrast, experimentalists usually prefer to invest all their detailed and qualitative knowledge about processes in obtaining as realistic a spatial distribution of DRPs as possible, and in defining narrow value ranges for each model parameter. Runoff simulations are affected by equifinality and numerous other uncertainty sources, which challenge the assumption that the more expert knowledge is used, the better the results obtained. To test the extent to which expert knowledge can improve simulation results under uncertainty, we therefore applied a total of 60 modelling chain combinations forced by five rainfall datasets of increasing accuracy to four nested catchments in the Swiss Pre-Alps. These datasets include hourly precipitation data from automatic stations interpolated with Thiessen polygons and with the Inverse Distance Weighting (IDW) method, as well as different spatial aggregations of Combiprecip, a combination between ground measurements and radar quantitative estimations of precipitation. To map the spatial distribution of the DRPs, three mapping approaches with different levels of involvement of expert knowledge were used to derive so-called process maps.
Finally, both a typical modellers' top-down setup relying on parameter and process constraints, and an experimentalists' setup based on bottom-up thinking and on field expertise were implemented using a newly developed process-based runoff generation module (RGM-PRO). To quantify the uncertainty originating from forcing data, process maps, model parameterisation, and parameter allocation strategy, an analysis of variance (ANOVA) was performed. The simulation results showed that: (i) the modelling chains based on the most complex process maps performed slightly better than those based on less expert knowledge; (ii) the bottom-up setup performed better than the top-down one when simulating short-duration events, but similarly to the top-down setup when simulating long-duration events; (iii) the differences in performance arising from the different forcing data were due to compensation effects; and (iv) the bottom-up setup can help identify uncertainty sources, but is prone to overconfidence problems, whereas the top-down setup seems to accommodate uncertainties in the input data best. Overall, modellers' and experimentalists' concepts of "model realism" differ. This means that the level of detail a model should have to accurately reproduce the DRPs expected must be agreed in advance.

