Supplementary material to "Global evaluation of runoff from ten state-of-the-art hydrological models"

Author(s):  
Hylke E. Beck ◽  
Albert I. J. M. van Dijk ◽  
Ad de Roo ◽  
Emanuel Dutra ◽  
Gabriel Fink ◽  
...  
2013 ◽  
Vol 176 ◽  
pp. 38-49 ◽  
Author(s):  
Theodore J. Bohn ◽  
Ben Livneh ◽  
Jared W. Oyler ◽  
Steve W. Running ◽  
Bart Nijssen ◽  
...  

2020 ◽  
Vol 34 (04) ◽  
pp. 5700-5708 ◽  
Author(s):  
Jianghao Shen ◽  
Yue Wang ◽  
Pengfei Xu ◽  
Yonggan Fu ◽  
Zhangyang Wang ◽  
...  

While increasingly deep networks are still in general desired for achieving state-of-the-art performance, for many specific inputs a simpler network might already suffice. Existing works exploited this observation by learning to skip convolutional layers in an input-dependent manner. However, we argue their binary decision scheme, i.e., either fully executing or completely bypassing one layer for a specific input, can be enhanced by introducing finer-grained, “softer” decisions. We therefore propose a Dynamic Fractional Skipping (DFS) framework. The core idea of DFS is to hypothesize layer-wise quantization (to different bitwidths) as intermediate “soft” choices to be made between fully utilizing and skipping a layer. For each input, DFS dynamically assigns a bitwidth to both weights and activations of each layer, where fully executing and skipping could be viewed as two “extremes” (i.e., full bitwidth and zero bitwidth). In this way, DFS can “fractionally” exploit a layer's expressive power during input-adaptive inference, enabling finer-grained accuracy-computational cost trade-offs. It presents a unified view to link input-adaptive layer skipping and input-adaptive hybrid quantization. Extensive experimental results demonstrate the superior tradeoff between computational cost and model expressive power (accuracy) achieved by DFS. More visualizations also indicate a smooth and consistent transition in the DFS behaviors, especially the learned choices between layer skipping and different quantizations when the total computational budgets vary, validating our hypothesis that layer quantization could be viewed as intermediate variants of layer skipping. Our source code and supplementary material are available at https://github.com/Torment123/DFS.
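The "fractional" execution idea can be sketched as per-input bitwidth selection, where bitwidth 0 collapses to skipping the layer and full precision recovers full execution. The following minimal NumPy sketch is illustrative only, not the paper's implementation: the `quantize` and `fractional_layer` helpers, the toy linear layer, and the clipping range are all assumptions.

```python
import numpy as np

def quantize(x, bits, x_max=1.0):
    """Uniformly quantize x to the given bitwidth: one of the
    intermediate 'soft' choices between skip and full execution."""
    if bits == 0:
        return np.zeros_like(x)
    levels = 2 ** bits - 1
    x_clipped = np.clip(x, -x_max, x_max)
    return np.round(x_clipped / x_max * levels) / levels * x_max

def fractional_layer(x, weight, bits):
    """Execute a linear layer at a chosen bitwidth.

    bits=0  -> complete bypass: pass the input through unchanged,
               as a residual shortcut would.
    larger bits -> quantized weights AND activations, approaching
               full-precision execution as bits grows.
    """
    if bits == 0:
        return x
    wq = quantize(weight, bits)
    xq = quantize(x, bits)
    return xq @ wq
```

In the actual framework a learned gating module would pick `bits` per input; here the choice is left to the caller so the sketch stays self-contained.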


2017 ◽  
Author(s):  
Roye Rozov ◽  
Gil Goldshlager ◽  
Eran Halperin ◽  
Ron Shamir

Motivation: We present Faucet, a 2-pass streaming algorithm for assembly graph construction. Faucet builds an assembly graph incrementally as each read is processed. Thus, reads need not be stored locally, as they can be processed while downloading data and then discarded. We demonstrate this functionality by performing streaming graph assembly of publicly available data, and observe that the ratio of disk use to raw data size decreases as coverage increases.

Results: Faucet pairs the de Bruijn graph obtained from the reads with additional metadata derived from them. We show that these metadata (coverage counts collected at junction k-mers and connections bridging junction pairs) contain the most salient information needed for assembly, and demonstrate that they enable cleaning of metagenome assembly graphs, greatly improving contiguity while maintaining accuracy. We compared Faucet's resource use and assembly quality to state-of-the-art metagenome assemblers, as well as to leading resource-efficient genome assemblers. Faucet used orders of magnitude less time and disk space than the specialized metagenome assemblers MetaSPAdes and Megahit, while also improving on their memory use; this broadly matched the performance of other assemblers that optimize resource efficiency, namely Minia and LightAssembler. However, on the metagenomes tested, Faucet's outputs had 14-110% higher mean NGA50 lengths than Minia, and 2-11-fold higher mean NGA50 lengths than LightAssembler, the only other streaming assembler available.

Availability: Faucet is available at https://github.com/Shamir-Lab/Faucet

Contact: @post.tau.ac.il, [email protected]

Supplementary information: Supplementary data are available at Bioinformatics online.
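The junction-centric metadata described above can be illustrated with a small two-pass sketch: a first pass records each k-mer's observed one-base extensions and flags k-mers with more than one as junctions, and a second pass keeps coverage counts only at those junction k-mers. Function names and the plain-dict storage are hypothetical; Faucet itself streams reads through compact on-disk structures rather than Python dicts.

```python
from collections import defaultdict

def kmers(read, k):
    """Yield all k-mers of a read, left to right."""
    for i in range(len(read) - k + 1):
        yield read[i:i + k]

def find_junctions(reads, k):
    """Pass 1: record the set of observed one-base extensions of each
    k-mer; a k-mer with more than one distinct outgoing or incoming
    extension is a junction (a branch point in the de Bruijn graph)."""
    out_ext = defaultdict(set)
    in_ext = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k):
            out_ext[read[i:i + k]].add(read[i + k])
            in_ext[read[i + 1:i + 1 + k]].add(read[i])
    return {km for km in set(out_ext) | set(in_ext)
            if len(out_ext[km]) > 1 or len(in_ext[km]) > 1}

def junction_coverage(reads, k, junctions):
    """Pass 2: collect coverage counts only at junction k-mers, the
    kind of metadata Faucet stores alongside the graph (simplified)."""
    cov = defaultdict(int)
    for read in reads:
        for km in kmers(read, k):
            if km in junctions:
                cov[km] += 1
    return dict(cov)
```

For example, the reads `AACGT` and `AACTT` diverge after `AAC`, so `AAC` is the only junction, and only its count needs to be kept.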


2019 ◽  
Author(s):  
Jamie Towner ◽  
Hannah L. Cloke ◽  
Ervin Zsoter ◽  
Zachary Flamig ◽  
Jannis M. Hoch ◽  
...  

2021 ◽  
Author(s):  
Robert Reinecke ◽  
Francesca Pianosi ◽  
Thorsten Wagener

<div> <p>Global hydrologic models have become an important research tool for assessing global water resources and hydrologic hazards in a changing environment, and for improving our understanding of how the water cycle is affected by climatic changes worldwide. These complex models have been developed over more than 20 years by multiple research groups, and valuable efforts like ISIMIP (Inter-Sectoral Impact Model Intercomparison Project) contribute to our growing understanding of model uncertainties and differences. However, due to their complexity and vast data outputs, they remain to some extent a black box. Especially for processes that are poorly constrained by available observations, such as groundwater recharge, model results vary widely, and it is unclear which processes dominate where and when. With the inclusion of ever more sophisticated implementations, e.g., coupled global gradient-based groundwater simulations, it is becoming increasingly challenging to understand and attribute these models' results.</p> </div><div> <p>In this talk, we argue that we need to intensify efforts to investigate uncertainties within these models, including where they originate and how they propagate. We need to carefully and extensively examine where different processes drive the model results by applying state-of-the-art sensitivity analysis methods. To this end, we discuss development needs and describe pathways to foster the application of sensitivity analysis methods to global hydrological models.</p> </div>
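One concrete family of methods meant by "sensitivity analysis" here is variance-based (Sobol) analysis, which attributes output variance to individual model parameters. Below is a minimal NumPy sketch of first-order Sobol index estimation using a pick-freeze (Saltelli-style) scheme; the three-parameter toy model, uniform priors, and sample size are illustrative stand-ins for a real global hydrological model and its parameter space.

```python
import numpy as np

def toy_model(x):
    """Hypothetical stand-in for a scalar model output (e.g., mean
    recharge) as a function of three parameters in [0, 1]."""
    return 4.0 * x[:, 0] + 2.0 * x[:, 1] + 0.1 * x[:, 2]

def first_order_sobol(model, d, n=4096, seed=0):
    """Estimate first-order Sobol indices S_i with a pick-freeze scheme:
    draw two independent sample matrices A and B, then for each
    parameter i evaluate a hybrid matrix where column i of A is
    replaced by column i of B."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(size=(n, d))
    b = rng.uniform(size=(n, d))
    fa, fb = model(a), model(b)
    var_y = np.var(np.concatenate([fa, fb]))
    s = np.empty(d)
    for i in range(d):
        ab = a.copy()
        ab[:, i] = b[:, i]                      # freeze all but parameter i
        s[i] = np.mean(fb * (model(ab) - fa)) / var_y
    return s
```

For the linear toy model the analytic indices are roughly 0.80, 0.20 and 0.0005, so the estimator should rank parameter 1 as dominant; applying such a ranking per grid cell is one way to map where different processes drive global model results.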

