Hide and vanish: Data sets where the most parsimonious tree is known but hard to find, and their implications for tree search methods

2014 ◽ Vol 79 ◽ pp. 118-131
Author(s): Pablo A. Goloboff

Author(s): Shogo Takeuchi ◽ Tomoyuki Kaneko ◽ Kazunori Yamaguchi

2019
Author(s): Paula Breitling ◽ Alexandros Stamatakis ◽ Olga Chernomor ◽ Ben Bettisworth ◽ Lukasz Reszczynski

Abstract
Terraces in phylogenetic tree space are, among other things, important for the design of tree-space search strategies. While the phenomenon of phylogenetic terraces is already known for unlinked partition models on partitioned phylogenomic data sets, it has not yet been studied whether an analogous structure exists under linked and scaled partition models. To this end, we analyze aspects such as the log-likelihood distributions, likelihood-based significance tests, and nearest neighbor interchanges (NNIs) on the trees residing on a terrace, and compare their distributions among unlinked, linked, and scaled partition models. Our study shows that a terrace-like structure also exists under linked and scaled partition models. We denote this phenomenon as a quasi-terrace. Therefore, quasi-terraces should be taken into account in the design of tree-search algorithms as well as when reporting results on ‘the’ final tree topology in empirical phylogenetic studies.
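
To make the notion of a (quasi-)terrace concrete, the following minimal Python sketch (not taken from the paper) groups candidate trees whose log-likelihood scores agree within a small tolerance: trees on a terrace score identically under an unlinked partition model, while a quasi-terrace shows near-identical scores under linked or scaled models. The tree IDs, scores, and tolerance below are illustrative assumptions; parsing real scores from tools such as RAxML-NG or IQ-TREE is not shown.

from collections import defaultdict

def group_by_loglik(scores, tol=1e-3):
    # scores: dict mapping tree_id -> log-likelihood (hypothetical values,
    # e.g. parsed from RAxML-NG or IQ-TREE output; parsing not shown).
    # Simple bucketing by tolerance; scores straddling a bucket boundary
    # may be split, which is acceptable for a rough screen like this.
    buckets = defaultdict(list)
    for tree_id, ll in scores.items():
        buckets[round(ll / tol)].append(tree_id)
    # Buckets holding more than one tree are candidate (quasi-)terraces.
    return [ids for ids in buckets.values() if len(ids) > 1]

# Toy example with made-up scores: t1-t3 form one candidate quasi-terrace.
example = {"t1": -12345.6781, "t2": -12345.6785, "t3": -12345.6779, "t4": -12398.2}
print(group_by_loglik(example))   # [['t1', 't2', 't3']]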


2018
Author(s): Jan H. Jensen

This paper presents a comparison of a graph-based genetic algorithm (GB-GA) and machine learning (ML) results for the optimisation of logP values with a constraint for synthetic accessibility, and shows that the GA is as good as or better than the ML approaches for this particular property. The molecules found by the GB-GA bear little resemblance to the molecules used to construct the initial mating pool, indicating that the GB-GA approach can traverse a relatively large distance in chemical space using relatively few (50) generations. The paper also introduces a new non-ML graph-based generative model (GB-GM) that can be parameterised using very small data sets and combined with a Monte Carlo tree search (MCTS) algorithm. The results are comparable to previously published results (Sci. Technol. Adv. Mater. 2017, 18, 972-976) that used a recurrent neural network (RNN) generative model, while the GB-GM-based method is orders of magnitude faster. The MCTS results appear more dependent on the composition of the training set than the GA approach for this particular property. Our results suggest that the performance of new ML-based generative models should be compared to more traditional, and often simpler, approaches such as the GA.
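
As a rough illustration of the kind of loop the GB-GA builds on, here is a minimal Python sketch (not the paper's code) that maximises RDKit's Crippen logP over a small pool of SMILES strings. The real GB-GA performs crossover and mutation directly on molecular graphs and constrains synthetic accessibility; both are replaced here by a crude placeholder mutation on the SMILES text, and the initial pool, population size, and generation count are assumptions.

import random
from rdkit import Chem
from rdkit.Chem import Descriptors

def fitness(smiles):
    # Objective: Crippen logP (the synthetic-accessibility penalty used in
    # the paper is omitted here); invalid SMILES score -inf.
    mol = Chem.MolFromSmiles(smiles)
    return Descriptors.MolLogP(mol) if mol else float("-inf")

def mutate(smiles):
    # Toy stand-in for graph-based crossover/mutation: append a carbon and
    # keep the parent if the result is not a valid molecule.
    child = smiles + "C"
    return child if Chem.MolFromSmiles(child) else smiles

def run_ga(pool, generations=50, pop_size=20, seed=0):
    random.seed(seed)
    population = list(pool)
    for _ in range(generations):
        parents = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
        population = parents + children
    best = max(population, key=fitness)
    return best, fitness(best)

# Tiny, purely illustrative initial mating pool.
print(run_ga(["CCO", "c1ccccc1", "CC(=O)O"]))

Unsurprisingly, such an unconstrained objective simply favours ever-longer carbon chains, which is one reason the paper adds a synthetic-accessibility constraint to the scoring function.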

