Heuristics and optimal solutions to the breadth–depth dilemma

2020 ◽  
Vol 117 (33) ◽  
pp. 19799-19808
Author(s):  
Rubén Moreno-Bote ◽  
Jorge Ramírez-Ruiz ◽  
Jan Drugowitsch ◽  
Benjamin Y. Hayden

In multialternative risky choice, we are often faced with the opportunity to allocate our limited information-gathering capacity between several options before receiving feedback. In such cases, we face a natural trade-off between breadth—spreading our capacity across many options—and depth—gaining more information about a smaller number of options. Despite its broad relevance to daily life, including in many naturalistic foraging situations, the optimal strategy in the breadth–depth trade-off has not been delineated. Here, we formalize the breadth–depth dilemma through a finite-sample capacity model. We find that, if capacity is small (∼10 samples), it is optimal to draw one sample per alternative, favoring breadth. However, for larger capacities, a sharp transition is observed, and it becomes best to deeply sample a very small fraction of alternatives, which roughly decreases with the square root of capacity. Thus, ignoring most options, even when capacity is large enough to shallowly sample all of them, is a signature of optimal behavior. Our results also provide a rich casuistic for metareasoning in multialternative decisions with bounded capacity using close-to-optimal heuristics.
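The finite-sample capacity model is stated only at a high level above. A minimal Monte Carlo sketch, assuming Bernoulli options with a flat Beta(1, 1) prior (an illustrative choice, not necessarily the authors' exact setup), shows the key qualitative result: at large capacity, deeply sampling roughly sqrt(capacity) options beats spreading one sample across all of them.

```python
import random

def expected_reward(capacity, n_alternatives, n_trials=2000, seed=0):
    """Split `capacity` Bernoulli samples evenly among `n_alternatives`
    options whose success probabilities are drawn uniformly from [0, 1],
    pick the option with the highest posterior mean, and return the
    average true value of the chosen option."""
    rng = random.Random(seed)
    per_option = capacity // n_alternatives
    total = 0.0
    for _ in range(n_trials):
        probs = [rng.random() for _ in range(n_alternatives)]
        best, best_score = 0, -1.0
        for i, p in enumerate(probs):
            hits = sum(rng.random() < p for _ in range(per_option))
            score = (hits + 1) / (per_option + 2)  # Beta(1, 1) posterior mean
            if score > best_score:
                best, best_score = i, score
        total += probs[best]  # payoff is the true value of the chosen option
    return total / n_trials

# Capacity 100: sampling ~sqrt(100) = 10 options deeply versus
# one shallow sample for each of 100 options.
deep = expected_reward(100, 10)
broad = expected_reward(100, 100)
```

In this toy setting `deep` reliably exceeds `broad`, mirroring the square-root-of-capacity rule described in the abstract.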

2020 ◽  
Author(s):  
Rubén Moreno-Bote ◽  
Jorge Ramírez-Ruiz ◽  
Jan Drugowitsch ◽  
Benjamin Y. Hayden

Decision-makers are often faced with limited information about the outcomes of their choices. Current formalizations of uncertain choice, such as the explore–exploit dilemma, do not apply well to decisions in which search capacity can be allocated to each option in variable amounts. Such choices confront decision-makers with the need to trade off between breadth (allocating a small amount of capacity to each of many options) and depth (focusing capacity on a few options). We formalize the breadth–depth dilemma through a finite-sample capacity model. We find that, if capacity is smaller than 4–7 samples, it is optimal to draw one sample per alternative, favoring breadth. However, for larger capacities, a sharp transition is observed, and it becomes best to deeply sample a very small fraction of alternatives, one that decreases with the square root of capacity. Thus, ignoring most options, even when capacity is large enough to shallowly sample all of them, reflects a signature of optimal behavior. Our results also provide a rich casuistic for metareasoning in multi-alternative decisions with bounded capacity.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jeonghyuk Park ◽  
Yul Ri Chung ◽  
Seo Taek Kong ◽  
Yeong Won Kim ◽  
Hyunho Park ◽  
...  

There have been substantial efforts to use deep learning (DL) to diagnose cancer from digital images of pathology slides. Existing algorithms typically train deep neural networks either specialized in specific cohorts or on an aggregate of all cohorts when only a few images are available for the target cohort. A trade-off between decreasing the number of models and their cancer-detection performance was evident in our experiments with The Cancer Genome Atlas dataset, with the former approach achieving higher performance at the cost of having to acquire large datasets from the cohort of interest. Constructing annotated datasets for individual cohorts is extremely time-consuming, and the acquisition cost of such datasets grows linearly with the number of cohorts. Another issue with developing cohort-specific models is maintenance: all cohort-specific models may need to be adjusted when a new DL algorithm is to be used (training even a single model may require a non-negligible amount of computation) or when more data are added to some cohorts. To resolve the sub-optimal behavior of a universal cancer detection model trained on an aggregate of cohorts, we investigated how cohorts can be grouped to augment a dataset without increasing the number of models linearly with the number of cohorts. This study introduces several metrics that measure the morphological similarities between cohort pairs and demonstrates how these metrics can be used to control the trade-off between performance and the number of models.
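The abstract does not spell out its similarity metrics or grouping procedure. Purely as an illustration, once a pairwise similarity matrix between cohorts is in hand, grouping could proceed by single-linkage clustering over a threshold; the cohort names, similarity values, and threshold below are all hypothetical.

```python
def group_cohorts(names, similarity, threshold):
    """Single-linkage grouping: cohorts i and j share a group whenever they
    are connected by a chain of pairwise similarities >= threshold.
    Implemented with a small union-find over cohort indices."""
    parent = list(range(len(names)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if similarity[i][j] >= threshold:
                parent[find(i)] = find(j)

    groups = {}
    for i, name in enumerate(names):
        groups.setdefault(find(i), []).append(name)
    return sorted(groups.values())

# Hypothetical TCGA-style cohorts and made-up morphological similarities.
names = ["BRCA", "COAD", "LUAD", "LUSC"]
sim = [[1.00, 0.20, 0.30, 0.25],
       [0.20, 1.00, 0.20, 0.30],
       [0.30, 0.20, 1.00, 0.90],
       [0.25, 0.30, 0.90, 1.00]]
groups = group_cohorts(names, sim, threshold=0.7)
```

Lowering the threshold merges more cohorts into shared models (fewer models, potentially lower per-cohort performance); raising it moves back toward one model per cohort, which is the trade-off the study aims to control.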


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Mariana Souza Rocha ◽  
Luiz Célio Souza Rocha ◽  
Marcia Barreto da Silva Feijó ◽  
Paula Luiza Limongi dos Santos Marotta ◽  
Samanta Cardozo Mourão

Purpose: The mucilage of the Linum usitatissimum L. (linseed) seed is a natural mucilage with great potential to provide a food hydrocolloid applicable in both the food and pharmaceutical industries. To increase the yield and quality of linseed oil during production, its polysaccharides must first be extracted. Flax mucilage production can therefore be made viable as a byproduct of the oil extraction process, whose main product already holds a consolidated, high commercial value in the market. Thus, the purpose of this work is to optimize the mucilage extraction process of L. usitatissimum using the normal-boundary intersection (NBI) multiobjective optimization method.
Design/methodology/approach: Currently, the variables of polysaccharide extraction processes from different sources are optimized using response surface methodology. However, when the optimal points of the responses conflict, the best conditions must balance these conflicting objectives (trade-offs), which requires formulating an optimization problem with multiple objectives. The multiobjective optimization method used in this work was NBI, developed to find uniformly distributed, continuous Pareto-optimal solutions for a nonlinear multiobjective problem.
Findings: The optimum extraction point for maximum fiber concentration in the extracted material was pH 3.81, a temperature of 46°C and a time of 13.46 h; the maximum extraction yield of flaxseed was reached at pH 6.45, a temperature of 65°C and a time of 14.41 h. This result confirms the trade-off relationship between the objectives. The NBI approach found uniformly distributed Pareto-optimal solutions, which allows the behavior of the trade-off relationship to be analyzed. Thus, the decision-maker can set extraction conditions to achieve the desired characteristics in the mucilage.
Originality/value: The novelty of this paper is confirming the existence of a trade-off relationship between the productivity parameter (yield) and the quality parameter (fiber concentration in the extracted material) during the flaxseed mucilage extraction process. Because the NBI approach finds uniformly distributed Pareto-optimal solutions, the decision-maker can choose extraction conditions according to the desired characteristics of the final product, directing the extraction toward the best applicability of the mucilage.
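The NBI idea can be sketched on a toy bi-objective problem (not the paper's extraction model): the anchor points are the individual minima, and each subproblem searches along the quasi-normal to the line joining them. With a single decision variable, each subproblem reduces to a one-dimensional root-find.

```python
# Toy bi-objective minimisation: f1(x) = x^2, f2(x) = (x - 2)^2 on x in [0, 2].
def f1(x): return x ** 2
def f2(x): return (x - 2) ** 2

F_a = (f1(0.0), f2(0.0))  # anchor: minimiser of f1 -> (0, 4)
F_b = (f1(2.0), f2(2.0))  # anchor: minimiser of f2 -> (4, 0)

def nbi_point(beta, lo=0.0, hi=2.0, tol=1e-10):
    """One NBI subproblem: find x whose objective image lies on the line
    through the convex-hull point Phi(beta) along the quasi-normal -(1, 1).
    Along that direction f1 - f2 is constant, so we bisect on
    g(x) = f1(x) - f2(x) - (Phi1 - Phi2), which is increasing here."""
    phi1 = beta * F_a[0] + (1 - beta) * F_b[0]
    phi2 = beta * F_a[1] + (1 - beta) * F_b[1]
    target = phi1 - phi2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f1(mid) - f2(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Evenly spaced beta values yield evenly spread Pareto-optimal solutions,
# the property the Findings section relies on: x = 2.0, 1.5, 1.0, 0.5, 0.0.
front = [nbi_point(b / 4) for b in range(5)]
```

The uniform spacing of the resulting solutions along the front is what lets a decision-maker survey the trade-off and pick conditions matching the desired product characteristics.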


2021 ◽  
Vol 17 (9) ◽  
pp. e1009217
Author(s):  
David Meder ◽  
Finn Rabe ◽  
Tobias Morville ◽  
Kristoffer H. Madsen ◽  
Magnus T. Koudahl ◽  
...  

Ergodicity describes an equivalence between the expectation value and the time average of observables. Applied to human behaviour, ergodic theories of decision-making reveal how individuals should tolerate risk in different environments. To optimise wealth over time, agents should adapt their utility function according to the dynamical setting they face. Linear utility is optimal for additive dynamics, whereas logarithmic utility is optimal for multiplicative dynamics. Whether humans approximate time-optimal behavior across different dynamics is unknown. Here we compare the effects of additive versus multiplicative gamble dynamics on risky choice. We show that utility functions are modulated by gamble dynamics in ways not explained by prevailing decision theories. Instead, as predicted by time optimality, risk aversion increases under multiplicative dynamics, distributing close to the values that maximise the time-average growth of in-game wealth. We suggest that our findings motivate a need for explicitly grounding theories of decision-making on ergodic considerations.
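The additive/multiplicative distinction can be made concrete with a standard textbook gamble (the numbers are illustrative, not taken from the study): the ensemble average can grow while almost every individual trajectory decays.

```python
import math

# Multiplicative gamble: each round, wealth is multiplied by 1.5 or 0.6,
# each with probability 1/2.
up, down = 1.5, 0.6

# Ensemble (expectation) growth factor per round, which a linear-utility
# agent optimises: 0.5 * 1.5 + 0.5 * 0.6 = 1.05, so the expectation grows.
ensemble_growth = 0.5 * up + 0.5 * down

# Time-average growth factor for a single trajectory: the geometric mean of
# the multipliers, which a log-utility agent optimises.
# exp(0.5 * (ln 1.5 + ln 0.6)) = sqrt(0.9) ~ 0.949, so wealth decays over time.
time_growth = math.exp(0.5 * (math.log(up) + math.log(down)))
```

A linear-utility agent accepts this gamble; a log-utility agent, which is time-optimal under multiplicative dynamics, rejects it. This divergence between the expectation and the time average is exactly the failure of ergodicity the abstract refers to.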


Author(s):  
Tipwimol Sooktip ◽  
Naruemon Wattanapongsakorn

In a multi-objective optimization problem, an optimization algorithm yields a set of trade-off optimal solutions. In practice, however, a decision maker or user needs only one or very few solutions to implement, and these are difficult to single out from the optimal set of a complex system. Therefore, a trade-off method for multi-objective optimization is proposed for identifying the preferred solutions according to the decision maker's preference. The preference is expressed as a trade-off between any two objectives: the decision maker is willing to worsen one objective value in order to gain an improvement in the other. The trade-off method is demonstrated on well-known two-objective and three-objective benchmark problems. Furthermore, a system design problem with component allocation is considered to illustrate the applicability of the proposed method. The results show that the trade-off method can be applied to practical problems to identify the final solution(s), and that it is easy to use even when the decision maker lacks knowledge of, or is not an expert in, the problem being solved: the decision maker only provides preference information, and the corresponding optimal solutions are then obtained accordingly.
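The abstract describes the method only in outline. As a minimal sketch of the general idea (our own simplification, not the authors' exact procedure): walk along a Pareto front of two minimisation objectives and stop where the marginal improvement rate drops below the decision maker's stated acceptable trade-off.

```python
def select_by_tradeoff(front, acceptable_rate):
    """front: Pareto-optimal (f1, f2) pairs for a minimisation problem,
    sorted by increasing f1 (hence decreasing f2).  The decision maker
    accepts worsening f1 by one unit only if f2 improves by at least
    `acceptable_rate` units.  Walk along the front and stop at the first
    point where the next step's improvement rate falls below that bar."""
    for (a1, a2), (b1, b2) in zip(front, front[1:]):
        rate = (a2 - b2) / (b1 - a1)  # f2 gained per unit of f1 given up
        if rate < acceptable_rate:
            return (a1, a2)
    return front[-1]

# Hypothetical two-objective Pareto front.
front = [(0, 10), (1, 6), (2, 3), (3, 1), (4, 0)]
strict = select_by_tradeoff(front, 2.5)   # (2, 3): later steps gain too little f2
lenient = select_by_tradeoff(front, 0.5)  # (4, 0): every step is worth taking
```

Only the acceptable rate has to come from the decision maker, which matches the abstract's claim that the method needs nothing beyond preference information.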

