heuristic strategies: Recently Published Documents

Total documents: 138 (last five years: 50)
H-index: 12 (last five years: 2)

2021
Author(s): Qianli Yang, Zhongqiao Lin, Wenyi Zhang, Jianshu Li, Xiyuan Chen, ...

Humans can often handle daunting tasks with ease by developing a set of strategies that reduce complex decision-making to simpler problems. The ability to use heuristic strategies demands an advanced level of intelligence and has not been demonstrated in animals. Here, we trained macaque monkeys to play the classic video game Pac-Man. The monkeys' decision-making can be described by a strategy-based hierarchical decision-making model with over 90% accuracy. The model reveals that the monkeys adopted the take-the-best heuristic, relying on a single dominant strategy at a time, and formed compound strategies by assembling the basis strategies to handle particular game situations. With the model, the computationally complex but fully quantifiable Pac-Man behavior paradigm provides a new approach to understanding animals' advanced cognition.
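
As a rough illustration of how a take-the-best scheme can drive action selection, the sketch below ranks a set of basis strategies and lets the highest-ranked strategy that expresses a preference decide the action alone. The toy strategies and state representation are assumptions for illustration, not the authors' fitted model.

```python
# Minimal take-the-best sketch: strategies are ranked, and the first one
# that discriminates between the available actions decides alone.

def take_the_best(strategies, game_state, actions):
    """Return the action chosen by the highest-ranked strategy that
    expresses a preference in this game state."""
    for strategy in strategies:            # assumed sorted by dominance
        scores = {a: strategy(game_state, a) for a in actions}
        best = max(scores, key=scores.get)
        # a strategy "discriminates" if it is not indifferent
        if scores[best] > min(scores.values()):
            return best
    return actions[0]                      # fall back to a default action

# Illustrative basis strategies for a Pac-Man-like task.
def chase_pellet(state, action):
    return -state["pellet_dist"][action]   # prefer shorter distance to food

def avoid_ghost(state, action):
    return state["ghost_dist"][action]     # prefer larger distance to ghost

state = {"pellet_dist": {"left": 2, "right": 5},
         "ghost_dist": {"left": 1, "right": 8}}
print(take_the_best([avoid_ghost, chase_pellet], state, ["left", "right"]))
# -> 'right': the dominant (safety) strategy decides alone
```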


Computers, 2021, Vol. 10 (10), pp. 122
Author(s): Ruslan Kuchumov, Vladimir Korkhov

Applications in high-performance computing (HPC) may not use all available computational resources, leaving some of them underutilized. By co-scheduling, i.e., running more than one application on the same computational node, it is possible to improve resource utilization and overall throughput. However, some applications have conflicting resource requirements, and co-scheduling them may cause performance degradation, so such conflicts must be taken into account in scheduling decisions. In this paper, we formalize the co-scheduling problem and propose multiple scheduling strategies to solve it: an optimal strategy, an online strategy, and heuristic strategies. These strategies vary in the optimality of the solutions they produce and in the a priori information about the system they require. We show theoretically that the online strategy provides schedules with a competitive ratio that has a constant upper bound, which allows us to solve the co-scheduling problem using heuristic strategies that approximate the online strategy. Numerical simulations show how the heuristic strategies compare to the optimal strategy for different input systems. We propose a method for measuring the model's input parameters in practice and evaluate it on HPC benchmark applications. The measurement method proves highly accurate, which allows the proposed scheduling strategies to be applied in a scheduler implementation.
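
To make the flavor of such heuristics concrete, here is a minimal sketch of one plausible greedy rule: pair applications with complementary CPU demands on the same node, so each node stays close to full utilization. The profiles and the pairing rule are illustrative assumptions, not the strategies proposed in the paper.

```python
# Hedged sketch of a greedy co-scheduling heuristic: co-locate the most
# CPU-hungry application with the least CPU-hungry one, repeatedly.

def pair_complementary(apps):
    """apps: list of (name, cpu_fraction) tuples. Returns node pairings."""
    ordered = sorted(apps, key=lambda a: a[1])
    pairs = []
    while len(ordered) >= 2:
        light, heavy = ordered.pop(0), ordered.pop(-1)
        pairs.append((light[0], heavy[0]))
    if ordered:                        # odd application out runs alone
        pairs.append((ordered[0][0],))
    return pairs

apps = [("fft", 0.9), ("io_bench", 0.2), ("blas", 0.8), ("crawler", 0.3)]
print(pair_complementary(apps))
# -> [('io_bench', 'fft'), ('crawler', 'blas')]
```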


2021
Author(s): Diksha Gupta, Carlos D Brody

Trial history biases in decision-making tasks are thought to reflect systematic updates of decision variables; their precise nature therefore informs conclusions about the underlying heuristic strategies and learning processes. However, random drifts in decision variables can corrupt this inference by mimicking the signatures of systematic updates, so identifying the trial-by-trial evolution of decision variables requires methods that can robustly account for such drifts. Recent studies (Lak 20, Mendonça 20) have made important advances in this direction by proposing a convenient method to correct for the influence of slow drifts in the decision criterion, a key decision variable. Here we apply this correction to a variety of updating scenarios and evaluate its performance. We show that the correction fails for a wide range of commonly assumed systematic updating strategies, distorting inference away from the veridical strategies towards a narrow subset. To address these limitations, we propose a model-based approach for disambiguating systematic updates from random drifts, and demonstrate its success on real and synthetic datasets. We show that this approach accurately recovers both the latent trajectory of drifts in the decision criterion and the generative systematic updates from simulated data. Our results offer recommendations for methods that account for the interactions between history biases and slow drifts, and highlight the advantages of incorporating assumptions about the generative process directly into models of decision-making.
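
A minimal generative sketch of the confound can help: below, the decision criterion evolves both through an error-driven systematic update and through a slow Gaussian random-walk drift, which can mimic each other's signatures. The choice rule, the update rule, and all parameter values are illustrative assumptions, not the models fit in the paper.

```python
import random

def simulate(n_trials=1000, drift_sd=0.05, update_size=0.3, seed=0):
    """Simulate choices from a criterion that both drifts and updates."""
    rng = random.Random(seed)
    criterion, choices = 0.0, []
    for _ in range(n_trials):
        stimulus = rng.gauss(0.0, 1.0)
        choice = stimulus > criterion           # simple criterion rule
        correct = choice == (stimulus > 0.0)
        # systematic update: shift the criterion after errors (assumed rule)
        if not correct:
            criterion += update_size if choice else -update_size
        # slow random drift, which can masquerade as the update above
        criterion += rng.gauss(0.0, drift_sd)
        choices.append(choice)
    return choices

choices = simulate()
print(f"P(choose right) = {sum(choices) / len(choices):.2f}")
```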


2021, Vol. 118 (37), pp. e2111190118
Author(s): Sen Pei, Fredrik Liljeros, Jeffrey Shaman

Antimicrobial-resistant organisms (AMROs) can colonize people without symptoms for long periods, during which these agents can spread unnoticed to other patients in healthcare systems. Accurate identification of asymptomatic AMRO spreaders in hospital settings is essential for supporting the design of interventions against healthcare-associated infections (HAIs). However, this task remains challenging because of limited observations of colonization and the complicated transmission dynamics occurring within hospitals and the broader community. Here, we study the transmission of methicillin-resistant Staphylococcus aureus (MRSA), a prevalent AMRO, among inpatients in 66 Swedish hospitals and healthcare facilities using a data-driven, agent-based model informed by deidentified real-world hospitalization records. Combining the transmission model, patient-to-patient contact networks, and sparse observations of colonization, we develop and validate an individual-level inference approach that estimates the colonization probability of each hospitalized patient. For both model-simulated and historical outbreaks, the proposed method identifies asymptomatic MRSA carriers more accurately than traditional approaches. In addition, in silico control experiments indicate that interventions targeting inpatients with a high colonization probability outperform heuristic strategies informed by hospitalization history and contact tracing.
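
The following toy sketch conveys the flavor of such an agent-based model: colonized patients transmit along patient-to-patient contacts with some per-contact probability and clear colonization spontaneously. The rates and the contact network here are illustrative assumptions, not the calibrated model from the study.

```python
import random

def step(colonized, contacts, beta=0.05, clearance=0.01, rng=random):
    """One day of a toy colonization model on a contact network."""
    new_state = dict(colonized)
    for a, b in contacts:                        # patient-to-patient contacts
        for src, dst in ((a, b), (b, a)):
            if colonized[src] and not colonized[dst] and rng.random() < beta:
                new_state[dst] = True            # transmission event
    for p in new_state:                          # spontaneous decolonization
        if new_state[p] and rng.random() < clearance:
            new_state[p] = False
    return new_state

patients = range(6)
colonized = {p: p == 0 for p in patients}        # one initial carrier
contacts = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
for day in range(30):
    colonized = step(colonized, contacts)
print("carriers after 30 days:", [p for p in patients if colonized[p]])
```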


2021, pp. 2796-2812
Author(s): Nishath Ansari

Feature selection, a method of dimensionality reduction, is the process of choosing appropriate feature subsets from the full set of features. This paper presents a detailed review of feature selection and its evaluation techniques. The discussion begins with a straightforward approach to handling feature selection problems using meta-heuristic strategies, which help in obtaining the best feature subsets. It then covers nature-inspired models and the computations needed to address feature selection in large, complex datasets. Algorithms such as the genetic algorithm (GA), the Non-Dominated Sorting Genetic Algorithm (NSGA-II), Particle Swarm Optimization (PSO), and other meta-heuristic strategies are examined. A comparison of these algorithms shows that feature selection benefits machine learning by improving algorithm performance. The paper also presents various real-world applications of feature selection.
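
As a concrete example of the meta-heuristic approach, here is a compact sketch of GA-based feature selection. The toy fitness function is a stand-in: in practice it would be the cross-validated score of a learner trained on the selected subset, and all parameter values are illustrative.

```python
import random

def fitness(mask, relevance):
    # assumed toy objective: total relevance minus a subset-size penalty
    return sum(r for m, r in zip(mask, relevance) if m) - 0.4 * sum(mask)

def ga_select(relevance, pop_size=20, generations=50, seed=1):
    """Evolve bit masks over features; 1 = feature selected."""
    rng = random.Random(seed)
    n = len(relevance)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, relevance), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)            # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n)                 # bit-flip mutation
            child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda m: fitness(m, relevance))

relevance = [0.9, 0.1, 0.8, 0.05, 0.7]           # per-feature usefulness
print(ga_select(relevance))                      # expect a mask like [1,0,1,0,1]
```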


Mathematics, 2021, Vol. 9 (17), pp. 2053
Author(s): Ahmed Ginidi, Abdallah Elsayed, Abdullah Shaheen, Ehab Elattar, Ragab El-Sehiemy

This paper proposes a hybrid algorithm that combines two prominent nature-inspired meta-heuristic strategies to solve the combined heat and power (CHP) economic dispatch. Specifically, an innovative hybrid heap-based and jellyfish search algorithm (HBJSA) is developed to enhance the performance of two recent algorithms: the heap-based algorithm (HBA) and the jellyfish search algorithm (JSA). The proposed hybrid HBJSA seeks to combine the explorative features of the HBA with the exploitative features of the JSA to overcome some of the problems found in their standard forms. The proposed hybrid HBJSA, HBA, and JSA are validated and statistically compared on a real-world optimization problem, the CHP economic dispatch, which aims to satisfy the power and heat demands while minimizing the whole fuel cost (WFC) of the power and heat generation units. Additionally, a series of operational and electrical constraints, such as the non-convex feasible operating regions of CHP units and the valve-point effects of power-only plants, are considered in solving this problem. The proposed hybrid HBJSA, HBA, and JSA are evaluated on two medium systems of 24 and 48 units and two large systems of 84 and 96 units. The experimental results demonstrate that the proposed hybrid HBJSA outperforms the standard HBA and JSA as well as other reported techniques when handling the CHP economic dispatch. Moreover, comparative analyses demonstrate the suggested HBJSA's strong stability and robustness in attaining the lowest minimum, average, and maximum WFC values compared to the HBA and JSA.
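
The general hybridization idea, alternating an explorative move with an exploitative refinement around the incumbent best solution, can be sketched as follows. This is a generic illustration on a toy one-dimensional cost, not the actual HBJSA update equations.

```python
import random, math

def hybrid_minimize(cost, lo, hi, iters=2000, seed=0):
    """Alternate broad random jumps (exploration, standing in for HBA's
    role) with small refinements around the best solution (exploitation,
    standing in for JSA's role)."""
    rng = random.Random(seed)
    best = rng.uniform(lo, hi)
    for t in range(iters):
        if t % 2 == 0:                         # explorative phase
            cand = rng.uniform(lo, hi)
        else:                                  # exploitative phase, shrinking
            cand = best + rng.gauss(0.0, 0.1 * (hi - lo) * (1 - t / iters))
        cand = min(max(cand, lo), hi)          # respect operating bounds
        if cost(cand) < cost(best):
            best = cand
    return best

# toy non-convex cost standing in for a whole-fuel-cost landscape
cost = lambda x: (x - 2.0) ** 2 + math.sin(8.0 * x)
x = hybrid_minimize(cost, -5.0, 5.0)
print(f"x* = {x:.3f}, cost = {cost(x):.3f}")
```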


2021
Author(s): Kleber Padovani, Roberto Xavier, André Carvalho, Anna Reali, Annie Chateau, ...

Genome assembly is one of the most relevant and computationally complex tasks in genomics projects. It aims to reconstruct a genome through the analysis of several small textual fragments of that genome, known as reads. Ideally, besides ignoring any errors contained in the reads, the reconstructed genome should combine the reads optimally, thus recovering the original genome. The quality of the assembly matters because the more reliable the genome, the more accurate our understanding of the characteristics and functions of living beings, with many positive impacts on society, including the prevention and treatment of diseases. The assembly becomes even more complex (and is termed de novo in this case) when the assembler software is not supplied with a similar genome to use as a reference. Current assemblers predominantly use heuristic strategies on computational graphs. Despite being widely used in genomics projects, there is still no irrefutably best assembler for every genome, and the proper choice of assembler and configuration depends on Bioinformatics experts. Reinforcement learning has proven very promising for solving complex tasks without human supervision during the learning process; however, its successful applications are predominantly focused on fictional and entertainment problems, such as games. This work therefore aims to shed light on the application of reinforcement learning to a relevant real-world problem, genome assembly. Expanding the only approach found in the literature that addresses this problem, we carefully explored the learning of an intelligent agent, performed by the Q-learning algorithm, to understand its suitability for scenarios whose characteristics are closer to those faced by real genome projects. The improvements proposed here include changing the previously proposed reward system and adding state-space exploration optimization strategies based on dynamic pruning and mutual collaboration with evolutionary computing. These investigations were carried out on 23 new environments with larger inputs than those used previously; all of them are freely available online so that the scientific community can build on this research. The results suggest consistent performance gains from the proposed improvements; however, they also expose their limitations, especially those related to the high dimensionality of the state and action spaces. Finally, we outline paths for tackling genome assembly efficiently in real scenarios, drawing on recent successful reinforcement learning applications, including deep reinforcement learning, from other domains that deal with high-dimensional inputs.
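
A bare-bones sketch of this kind of Q-learning formulation: a state is the sequence of reads placed so far, an action picks the next read, and the reward is the overlap gained. The tiny error-free reads and the tabular agent are illustrative assumptions; with realistic read sets the state space explodes, which is exactly the dimensionality limitation described above.

```python
import random

def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def q_learn(reads, episodes=3000, alpha=0.3, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning over read orderings, rewarded by overlap."""
    rng, Q = random.Random(seed), {}
    for _ in range(episodes):
        state, remaining = (), set(range(len(reads)))
        while remaining:
            if rng.random() < eps:               # epsilon-greedy exploration
                a = rng.choice(sorted(remaining))
            else:
                a = max(remaining, key=lambda r: Q.get((state, r), 0.0))
            r = overlap(reads[state[-1]], reads[a]) if state else 0
            nxt = state + (a,)
            future = max((Q.get((nxt, n), 0.0) for n in remaining - {a}),
                         default=0.0)
            Q[(state, a)] = ((1 - alpha) * Q.get((state, a), 0.0)
                             + alpha * (r + gamma * future))
            state, remaining = nxt, remaining - {a}
    # greedy rollout of the learned policy
    state, remaining = (), set(range(len(reads)))
    while remaining:
        a = max(remaining, key=lambda r: Q.get((state, r), 0.0))
        state, remaining = state + (a,), remaining - {a}
    return [reads[i] for i in state]

reads = ["ATGGC", "GGCTA", "CTAAC"]              # toy error-free reads
print(q_learn(reads))                            # expect an overlap-consistent order
```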

