Bottom-up synthesis of recursive functional programs using angelic execution

2022 ◽  
Vol 6 (POPL) ◽  
pp. 1-29
Author(s):  
Anders Miltner ◽  
Adrian Trejo Nuñez ◽  
Ana Brendel ◽  
Swarat Chaudhuri ◽  
Isil Dillig

We present a novel bottom-up method for the synthesis of functional recursive programs. While bottom-up synthesis techniques can work better than top-down methods in certain settings, there is no prior technique for synthesizing recursive programs from logical specifications in a purely bottom-up fashion. The main challenge is that effective bottom-up methods need to execute sub-expressions of the code being synthesized, but it is impossible to execute a recursive sub-expression of a program that has not been fully constructed yet. In this paper, we address this challenge using the concept of angelic semantics. Specifically, our method finds a program that satisfies the specification under angelic semantics (we refer to this as angelic synthesis), analyzes the assumptions made during its angelic execution, uses this analysis to strengthen the specification, and finally reattempts synthesis with the strengthened specification. Our proposed angelic synthesis algorithm is based on version space learning and therefore deals effectively with the many incremental synthesis calls made during the overall algorithm. We have implemented this approach in a prototype called Burst and evaluate it on synthesis problems from prior work. Our experiments show that Burst is able to synthesize a solution to 94% of the benchmarks in our benchmark suite, outperforming prior work.
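
To make the synthesize / analyze / strengthen loop described in the abstract concrete, the following is a small, self-contained Python sketch. It is not the Burst implementation (which synthesizes programs in a functional DSL and uses version space learning): the "synthesized" candidate is hard-coded as list summation and the helper names are purely illustrative, so that only the angelic execution, assumption collection, and specification strengthening are shown.

```python
# A toy, self-contained illustration of the loop described in the abstract; it is
# NOT the Burst algorithm or its DSL. The "synthesized" candidate is hard-coded
# (list summation) so the interesting parts -- angelic execution, assumption
# collection, and specification strengthening -- stay short and visible.

def candidate(xs, recurse):
    """A recursive candidate body: 'if xs is empty then 0 else head + recurse(tail)'."""
    return 0 if not xs else xs[0] + recurse(xs[1:])


def angelic_check(spec):
    """Execute the candidate angelically against every input/output example in `spec`.

    The recursive call may return whatever value makes the example succeed; each
    such choice is recorded as an assumption (input -> output). Returns the
    assumptions, or None if no angelic choice can satisfy `spec`.
    """
    assumptions = {}
    for inp, expected in spec.items():
        def recurse(sub, _head=inp[0] if inp else None, _expected=expected):
            # For this particular candidate, the only helpful angelic choice is
            # the value that makes head + result == expected.
            needed = _expected - _head
            assumptions[tuple(sub)] = needed
            return needed
        if candidate(list(inp), recurse) != expected:
            return None
    return assumptions


def strengthen_until_fixpoint(spec):
    """Fold the assumptions made during angelic execution back into the spec."""
    spec = dict(spec)
    while True:
        assumptions = angelic_check(spec)
        if assumptions is None:
            return None  # angelically unrealizable: the real algorithm would re-synthesize
        new = {k: v for k, v in assumptions.items() if k not in spec}
        if not new:
            return spec  # every assumption already covered: candidate is correct as-is
        # Conflicting assumptions (same input, different outputs) would likewise
        # trigger re-synthesis in the real algorithm; this toy only strengthens.
        spec.update(new)


if __name__ == "__main__":
    initial_spec = {(1, 2, 3): 6, (4,): 4, (): 0}
    print("strengthened spec:", strengthen_until_fixpoint(initial_spec))

    def run(xs):  # concrete, non-angelic execution of the candidate
        return candidate(xs, run)
    print("sum [1, 2, 3] =", run([1, 2, 3]))
```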

2018 ◽  
Vol 22 (8) ◽  
pp. 4425-4447 ◽  
Author(s):  
Manuel Antonetti ◽  
Massimiliano Zappa

Abstract. Both modellers and experimentalists agree that using expert knowledge can improve the realism of conceptual hydrological models. However, their use of expert knowledge differs at each step of the modelling procedure, which involves hydrologically mapping the dominant runoff processes (DRPs) occurring on a given catchment, parameterising these processes within a model, and allocating its parameters. Modellers generally use very simplified mapping approaches, applying their knowledge to constrain the model by defining parameter and process relational rules. In contrast, experimentalists usually prefer to invest all their detailed and qualitative knowledge about processes in obtaining as realistic a spatial distribution of DRPs as possible, and in defining narrow value ranges for each model parameter. Runoff simulations are affected by equifinality and numerous other uncertainty sources, which challenge the assumption that the more expert knowledge is used, the better the results will be. To test the extent to which expert knowledge can improve simulation results under uncertainty, we therefore applied a total of 60 modelling chain combinations, forced by five rainfall datasets of increasing accuracy, to four nested catchments in the Swiss Pre-Alps. These datasets include hourly precipitation data from automatic stations interpolated with Thiessen polygons and with the inverse distance weighting (IDW) method, as well as different spatial aggregations of Combiprecip, a combination of ground measurements and quantitative radar estimations of precipitation. To map the spatial distribution of the DRPs, three mapping approaches with different levels of involvement of expert knowledge were used to derive so-called process maps. Finally, both a typical modellers' top-down set-up relying on parameter and process constraints and an experimentalists' set-up based on bottom-up thinking and on field expertise were implemented using a newly developed process-based runoff generation module (RGM-PRO). To quantify the uncertainty originating from forcing data, process maps, model parameterisation, and parameter allocation strategy, an analysis of variance (ANOVA) was performed. The simulation results showed that (i) the modelling chains based on the most complex process maps performed slightly better than those based on less expert knowledge; (ii) the bottom-up set-up performed better than the top-down one when simulating short-duration events, but similarly to the top-down set-up when simulating long-duration events; (iii) the differences in performance arising from the different forcing data were due to compensation effects; and (iv) the bottom-up set-up can help identify uncertainty sources, but is prone to overconfidence problems, whereas the top-down set-up seems to accommodate uncertainties in the input data best. Overall, modellers' and experimentalists' concepts of model realism differ. This means that the level of detail a model should have in order to accurately reproduce the expected DRPs must be agreed on in advance.
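
As a rough illustration of the kind of variance decomposition the abstract mentions, the sketch below scores every combination of four hypothetical factors and attributes the variance in a performance score to each factor with a multi-way ANOVA. The factor levels and scores are placeholders, not the study's data or its exact experimental design.

```python
# Sketch of a multi-way ANOVA over modelling-chain combinations; all factor
# levels and scores below are placeholders, not the study's data or design.
import itertools

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
forcing = [f"rain_{i}" for i in range(1, 6)]    # e.g. five precipitation datasets
process_map = ["simple", "medium", "detailed"]  # e.g. three DRP mapping approaches
setup = ["top_down", "bottom_up"]               # model parameterisation
allocation = ["constrained", "field_based"]     # parameter allocation strategy

rows = []
for f, m, s, a in itertools.product(forcing, process_map, setup, allocation):
    # Placeholder performance score (e.g. a Nash-Sutcliffe efficiency) for one chain.
    rows.append({"forcing": f, "process_map": m, "setup": s,
                 "allocation": a, "score": rng.normal(0.7, 0.1)})
df = pd.DataFrame(rows)

model = smf.ols("score ~ C(forcing) + C(process_map) + C(setup) + C(allocation)",
                data=df).fit()
# The sum of squares attributed to each factor indicates its share of the
# variability in simulation performance.
print(anova_lm(model, typ=2))
```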


2015 ◽  
Vol 28 (1) ◽  
pp. 237-240
Author(s):  
Gary Chartier
Keyword(s):  
Top Down ◽  

Peter T. Leeson's Anarchy Unbound offers an interesting collection of historical and theoretical arguments for the view that bottom-up social order is perfectly possible and at least sometimes preferable to order imposed from the top down.


Author(s):  
Joran W. Booth ◽  
Abihnav K. Bhasin ◽  
Tahira Reid ◽  
Karthik Ramani

The purpose of this study is to continue exploring which function identification methods work best for specific design tasks. Prior literature describes the top-down and bottom-up approaches as equivalent methods for functional decomposition. Building on our prior work, this study tests the bottom-up method against the top-down and enumeration methods. We used a three-factor within-subjects study (n = 136). While most of our diagram-oriented metrics were not statistically different, we found statistical support that (1) students reported that the dissection activity was more useful when using the bottom-up method, and (2) student engineers committed many more syntax errors when using the bottom-up method (by listing parts instead of functions). We believe that both of these results are due to the increased focus on individual parts. We do not know whether increased attention to parts would increase novelty or fixation, and we recommend future studies to investigate this.


2017 ◽  
Author(s):  
Manuel Antonetti ◽  
Massimiliano Zappa

Abstract. Both modellers and experimentalists agree that using expert knowledge can improve the realism of conceptual hydrological models. However, their use of expert knowledge differs at each step of the modelling procedure, which involves hydrologically mapping the dominant runoff processes (DRPs) occurring on a given catchment, parameterising these processes within a model, and allocating its parameters. Modellers generally use very simplified mapping approaches, applying their knowledge to constrain the model by defining parameter and process relational rules. In contrast, experimentalists usually prefer to invest all their detailed and qualitative knowledge about processes in obtaining as realistic a spatial distribution of DRPs as possible, and in defining narrow value ranges for each model parameter. Runoff simulations are affected by equifinality and numerous other uncertainty sources, which challenge the assumption that the more expert knowledge is used, the better the results will be. To test the extent to which expert knowledge can improve simulation results under uncertainty, we therefore applied a total of 60 modelling chain combinations, forced by five rainfall datasets of increasing accuracy, to four nested catchments in the Swiss Pre-Alps. These datasets include hourly precipitation data from automatic stations interpolated with Thiessen polygons and with the Inverse Distance Weighting (IDW) method, as well as different spatial aggregations of Combiprecip, a combination of ground measurements and quantitative radar estimations of precipitation. To map the spatial distribution of the DRPs, three mapping approaches with different levels of involvement of expert knowledge were used to derive so-called process maps. Finally, both a typical modellers' top-down setup relying on parameter and process constraints and an experimentalists' setup based on bottom-up thinking and on field expertise were implemented using a newly developed process-based runoff generation module (RGM-PRO). To quantify the uncertainty originating from forcing data, process maps, model parameterisation, and parameter allocation strategy, an analysis of variance (ANOVA) was performed. The simulation results showed that (i) the modelling chains based on the most complex process maps performed slightly better than those based on less expert knowledge; (ii) the bottom-up setup performed better than the top-down one when simulating short-duration events, but similarly to the top-down setup when simulating long-duration events; (iii) the differences in performance arising from the different forcing data were due to compensation effects; and (iv) the bottom-up setup can help identify uncertainty sources, but is prone to overconfidence problems, whereas the top-down setup seems to accommodate uncertainties in the input data best. Overall, modellers' and experimentalists' concepts of "model realism" differ. This means that the level of detail a model should have in order to accurately reproduce the expected DRPs must be agreed on in advance.


Author(s):  
Edyta Sasin ◽  
Daryl Fougnie

Abstract. Does the strength of representations in long-term memory (LTM) depend on which type of attention is engaged? We tested participants' memory for objects seen during visual search. We compared implicit memory for two types of objects: related-context nontargets, which grabbed attention because they matched the target-defining feature (i.e., color; top-down attention), and salient distractors, which captured attention only because they were perceptually distracting (bottom-up attention). In Experiment 1, the salient distractor flickered, while in Experiment 2, the luminance of the salient distractor was alternated. Critically, salient and related-context nontargets produced equivalent attentional capture, yet related-context nontargets were remembered far better than salient distractors (and salient distractors were not remembered better than unrelated distractors). These results suggest that LTM depends not only on the amount of attention but also on the type of attention. Specifically, top-down attention is more effective than bottom-up attention in promoting the formation of memory traces.


2006 ◽  
Vol 23 (5) ◽  
pp. 377-405 ◽  
Author(s):  
Marcus T. Pearce ◽  
Geraint A. Wiggins

The Implication-Realization (IR) theory (Narmour, 1990) posits two cognitive systems involved in the generation of melodic expectations: The first consists of a limited number of symbolic rules that are held to be innate and universal; the second reflects the top-down influences of acquired stylistic knowledge. Aspects of both systems have been implemented as quantitative models in research which has yielded empirical support for both components of the theory (Cuddy & Lunny, 1995; Krumhansl, 1995a, 1995b; Schellenberg, 1996, 1997). However, there is also evidence that the implemented bottom-up rules constitute too inflexible a model to account for the influence of the musical experience of the listener and the melodic context in which expectations are elicited. A theory is presented, according to which both bottom-up and top-down descriptions of observed patterns of melodic expectation may be accounted for in terms of the induction of statistical regularities in existing musical repertoires. A computational model that embodies this theory is developed and used to reanalyze existing experimental data on melodic expectancy. The results of three experiments with increasingly complex melodic stimuli demonstrate that this model is capable of accounting for listeners’ expectations as well as or better than the two-factor model of Schellenberg (1997).
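
As a minimal, purely illustrative counterpart to the statistical-learning idea described above (the Pearce & Wiggins model is far richer, using variable-order models over multiple melodic viewpoints with long- and short-term components), the sketch below trains a first-order Markov model over pitch intervals on a toy corpus and reports the resulting expectations for a melodic context.

```python
# Toy illustration of melodic expectations induced from statistical regularities
# in a corpus; this is not the Pearce & Wiggins model, only a first-order Markov
# model over pitch intervals trained on placeholder data.
from collections import Counter, defaultdict

# Toy corpus: melodies as MIDI pitch sequences (placeholder data).
corpus = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],
    [60, 64, 67, 72, 67, 64, 60],
    [67, 65, 64, 62, 60, 62, 64],
]

# Count how often each pitch interval follows each preceding interval.
transitions = defaultdict(Counter)
for melody in corpus:
    intervals = [b - a for a, b in zip(melody, melody[1:])]
    for prev, nxt in zip(intervals, intervals[1:]):
        transitions[prev][nxt] += 1


def expectation(context):
    """Probability distribution over the next interval, given a melodic context."""
    intervals = [b - a for a, b in zip(context, context[1:])]
    if not intervals:
        return {}
    counts = transitions.get(intervals[-1], Counter())
    total = sum(counts.values())
    return {nxt: count / total for nxt, count in counts.items()} if total else {}


# After hearing 60, 62, 64 (two ascending major seconds), which interval is expected next?
print(expectation([60, 62, 64]))
```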


2004 ◽  
Vol 58 (5) ◽  
pp. 594-599 ◽  
Author(s):  
G. Weinstock-Zlotnick ◽  
J. Hinojosa
Keyword(s):  
Top Down ◽  

PsycCRITIQUES ◽  
2005 ◽  
Vol 50 (19) ◽  
Author(s):  
Michael Cole
Keyword(s):  
Top Down ◽  
