algorithmic complexity
Recently Published Documents

TOTAL DOCUMENTS: 269 (FIVE YEARS: 48)
H-INDEX: 23 (FIVE YEARS: 2)

2021 ◽  
Author(s):  
Andy E Williams

This comment is a reply to the paper “On the complexity of extending the convergence region for Traub’s method” [1]. In complexity science, discussions of complexity from the mathematical perspective often concern algorithmic complexity, as in the paper responded to here [2]. But this is only one of the kinds of complexity that exist, even within the mathematical domain. There is also the complexity of the behavior of a system of equations; the complexity of the reasoning or algorithm required to understand a system of equations (“understand” meaning here to define the problem that needs to be solved); and, as mentioned, the complexity of the reasoning or algorithm required to solve it.


2021 ◽  
Vol 6 (3) ◽  
pp. 11
Author(s):  
Adonney Allan de Oliveira Veras

The volume of data produced by the omic sciences has been driven by the adoption of new-generation sequencing platforms, popularly called NGS (Next-Generation Sequencing). Analyses performed with these data include mapping, genome assembly, genome annotation, pangenomic analysis, quality control, and redundancy removal, among others. Regarding redundancy removal, several tools perform this task with accuracy demonstrated in their scientific publications, but those publications lack criteria related to algorithmic complexity. This work therefore performs an empirical analysis of the algorithmic complexity of computational tools that remove redundancy from the raw reads produced by DNA sequencing. The analysis was carried out on sixteen raw-read datasets. The datasets were processed with the following tools: MarDRe, NGSReadsTreatment, ParDRe, FastUniq, and BioSeqZip, and analyzed on the R statistical platform using the GuessCompx package. The results show that BioSeqZip and ParDRe exhibit the lowest complexity in this analysis.
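
The empirical approach described here, timing a tool on inputs of growing size and fitting candidate complexity classes to the measurements, can be illustrated with a short sketch. The Python code below is not GuessCompx or any of the named tools; the deduplicate() routine, the candidate classes, and the dataset sizes are illustrative assumptions.

```python
# Minimal sketch of empirical complexity estimation (an illustration, not the
# GuessCompx implementation): time a routine on growing input sizes, then fit
# candidate complexity curves and report the best one by mean squared error.
# deduplicate() is a hypothetical stand-in for a redundancy-removal tool.

import random
import time

import numpy as np


def deduplicate(reads):
    """Hypothetical redundancy removal: keep the first copy of each read."""
    return list(dict.fromkeys(reads))


def measure_runtimes(sizes, alphabet="ACGT", read_length=100, seed=0):
    """Time deduplicate() on random read sets of the given sizes."""
    rng = random.Random(seed)
    runtimes = []
    for n in sizes:
        reads = ["".join(rng.choice(alphabet) for _ in range(read_length))
                 for _ in range(n)]
        start = time.perf_counter()
        deduplicate(reads)
        runtimes.append(time.perf_counter() - start)
    return np.array(runtimes, dtype=float)


def best_complexity_fit(sizes, runtimes):
    """Least-squares fit of t ~ a*f(n) + b for each candidate f; lowest MSE wins."""
    n = np.array(sizes, dtype=float)
    candidates = {
        "O(n)": n,
        "O(n log n)": n * np.log(n),
        "O(n^2)": n ** 2,
    }
    scores = {}
    for name, f in candidates.items():
        design = np.column_stack([f, np.ones_like(f)])
        coef, *_ = np.linalg.lstsq(design, runtimes, rcond=None)
        scores[name] = float(np.mean((design @ coef - runtimes) ** 2))
    return min(scores, key=scores.get), scores


if __name__ == "__main__":
    sizes = [10_000, 20_000, 40_000, 80_000, 160_000]
    runtimes = measure_runtimes(sizes)
    best, scores = best_complexity_fit(sizes, runtimes)
    print("Best-fitting complexity class:", best)
```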


2021 ◽  
pp. 1-25
Author(s):  
Tran Nguyen Minh-Thai ◽  
Sandhya Samarasinghe ◽  
Michael Levin

Many biological organisms regenerate structure and function after damage. Despite the long history of research on molecular mechanisms, many questions remain about the algorithms by which cells can cooperate towards the same invariant morphogenetic outcomes. Conceptual frameworks are therefore needed, not only to motivate hypotheses that advance the understanding of regeneration in living organisms, but also for regenerative medicine and synthetic biology. Inspired by planarian regeneration, this study offers a novel generic conceptual framework that hypothesizes mechanisms and algorithms by which cell collectives may internally represent an anatomical target morphology towards which they build after damage. The framework also contributes a novel nature-inspired computing method for self-repair in engineering and robotics. Based on past in vivo and in silico studies on planaria, it hypothesizes efficient novel mechanisms and algorithms to achieve complete and accurate regeneration of a simple in silico flatworm-like organism from any damage, much like the body-wide immortality of planaria, with minimal information and algorithmic complexity.

The framework, which extends our previous circular tissue repair model, integrates two levels of organization: tissue and organism. In Level 1, three individual in silico tissues (head, body, and tail, each with a large number of tissue cells and a single stem cell at the centre) repair themselves through efficient local communications. Here, the contribution extends our circular tissue model to other shapes and invests them with tissue-wide immortality through an information field holding the minimum body plan. In Level 2, the individual tissues combine to form a simple organism. Specifically, the three stem cells form a network that coordinates organism-wide regeneration with the help of Level 1. Here we contribute novel concepts for collective decision-making by stem cells for stem cell regeneration and large-scale recovery. Both levels (tissue cells and stem cells) are networks that perform simple neural computations and form a feedback control system.

With simple and limited cellular computations, our framework minimises computation and algorithmic complexity to achieve complete recovery. We report results from computer simulations of the framework to demonstrate its robustness in recovering the organism after any injury. This comprehensive hypothetical framework, which significantly extends existing biological regeneration models, offers a new way to conceptualise the information-processing aspects of regeneration and may also help design living and non-living self-repairing agents.
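
The framework is described here only at the conceptual level. As a loose illustration of one of its ingredients, a stored target morphology ("minimum body plan") that local repair rules restore after arbitrary damage, the following Python toy is offered; the Tissue/Organism classes, the tissue sizes, and the repair rule are assumptions for illustration, not the authors' model.

```python
# Toy 1-D illustration (an assumption, not the authors' model) of repair
# towards a stored target morphology: each tissue keeps a target cell count
# (its "minimum body plan"), and its stem cell regrows cells until the target
# is met again after arbitrary damage.

import random


class Tissue:
    def __init__(self, name, target_size):
        self.name = name
        self.target_size = target_size   # information field: the body plan
        self.cells = ["cell"] * target_size

    def damage(self, fraction, rng):
        """Remove roughly the given fraction of tissue cells at random."""
        self.cells = [c for c in self.cells if rng.random() > fraction]

    def regenerate(self):
        """Stem cell adds cells until the stored target size is restored."""
        while len(self.cells) < self.target_size:
            self.cells.append("cell")

    def intact(self):
        return len(self.cells) == self.target_size


class Organism:
    """Level 2: the stem cells coordinate so every tissue reaches its target."""

    def __init__(self):
        self.tissues = [Tissue("head", 30), Tissue("body", 60), Tissue("tail", 30)]

    def injure_and_recover(self, fraction, rng):
        for t in self.tissues:
            t.damage(fraction, rng)
        for t in self.tissues:           # coordinated recovery pass
            t.regenerate()
        return all(t.intact() for t in self.tissues)


if __name__ == "__main__":
    rng = random.Random(1)
    organism = Organism()
    print("Recovered:", organism.injure_and_recover(fraction=0.7, rng=rng))
```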


2021 ◽  
pp. 171-219
Author(s):  
Alberto Hernández-Espinosa ◽  
Hector Zenil ◽  
Narsis A. Kiani ◽  
Jesper Tegnér

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Morteza Noshad ◽  
Jerome Choi ◽  
Yuming Sun ◽  
Alfred Hero ◽  
Ivo D. Dinov

Data-driven innovation is propelled by recent scientific advances, rapid technological progress, substantial reductions in manufacturing costs, and significant demand for effective decision-support systems. This has led to efforts to collect massive amounts of heterogeneous and multisource data; however, not all data are of equal quality or equally informative. Previous methods to capture and quantify the utility of data include value of information (VoI), quality of information (QoI), and mutual information (MI). This manuscript introduces a new measure to quantify whether larger volumes of increasingly complex data enhance, degrade, or alter their information content and utility with respect to specific tasks. We present a new information-theoretic measure, called the Data Value Metric (DVM), that quantifies the useful information content (energy) of large and heterogeneous datasets. The DVM formulation is based on a regularized model balancing data analytical value (utility) and model complexity. DVM can be used to determine whether appending, expanding, or augmenting a dataset may be beneficial in specific application domains. Subject to the choice of data-analytic, inferential, or forecasting techniques employed to interrogate the data, DVM quantifies the information boost, or degradation, associated with increasing the data size or expanding the richness of its features.

DVM is defined as a mixture of a fidelity term and a regularization term. The fidelity term captures the usefulness of the sample data specifically in the context of the inferential task. The regularization term represents the computational complexity of the corresponding inferential method. Inspired by the concept of the information bottleneck in deep learning, the fidelity term depends on the performance of the corresponding supervised or unsupervised model.

We tested the DVM method on several alternative supervised and unsupervised regression, classification, clustering, and dimensionality-reduction tasks. Both real and simulated datasets with weak and strong signal information were used in the experimental validation. Our findings suggest that DVM effectively captures the balance between analytical value and algorithmic complexity. Changes in the DVM expose the tradeoffs between algorithmic complexity and data analytical value in terms of the sample size and the feature richness of a dataset. DVM values may be used to determine the size and characteristics of the data that optimize the relative utility of various supervised or unsupervised algorithms.
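
The abstract does not reproduce the exact DVM formula, so the sketch below only illustrates the stated structure, a fidelity term (task performance) offset by a regularization term (a complexity cost), evaluated at increasing sample sizes. The specific penalty, the weight lam, and the logistic-regression fidelity proxy are assumptions, not the authors' definition.

```python
# Sketch of a DVM-like score (an assumption, not the published DVM formula):
# fidelity = cross-validated accuracy of a model on the data;
# regularization = a crude cost proxy that grows with sample size and
# feature count. The score is fidelity minus a weighted penalty.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def dvm_like_score(X, y, lam=0.05):
    """Fidelity (mean CV accuracy) minus a hypothetical complexity penalty."""
    fidelity = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    n, p = X.shape
    complexity = np.log(n) * p / 1000.0   # hypothetical cost proxy
    return fidelity - lam * complexity


if __name__ == "__main__":
    X, y = make_classification(n_samples=4000, n_features=30,
                               n_informative=5, random_state=0)
    # Does appending more samples keep improving the utility/complexity balance?
    for n in (500, 1000, 2000, 4000):
        print(n, round(dvm_like_score(X[:n], y[:n]), 3))
```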


Algorithms ◽  
2021 ◽  
Vol 14 (6) ◽  
pp. 169
Author(s):  
Laurent Bulteau ◽  
Guillaume Fertin ◽  
Géraldine Jean ◽  
Christian Komusiewicz

A multi-cut rearrangement of a string S is a string S′ obtained from S by an operation called a k-cut rearrangement, which consists of (1) cutting S at a given number k of places, turning S into the concatenated string X1·X2·X3·…·Xk·Xk+1, where X1 and Xk+1 are possibly empty, and (2) rearranging the pieces Xi so as to obtain S′=Xπ(1)·Xπ(2)·Xπ(3)·…·Xπ(k+1), where π is a permutation of 1,2,…,k+1 satisfying π(1)=1 and π(k+1)=k+1. Given two strings S and T built on the same multiset of characters from an alphabet Σ, the Sorting by Multi-Cut Rearrangements (SMCR) problem asks whether a given number ℓ of k-cut rearrangements suffices to transform S into T. The SMCR problem generalizes several classical genome rearrangement problems, such as Sorting by Transpositions and Sorting by Block Interchanges. It may also model chromoanagenesis, a recently discovered phenomenon consisting of massive simultaneous rearrangements. In this paper, we study the SMCR problem from an algorithmic-complexity viewpoint. More precisely, we investigate its classical and parameterized complexity, as well as its approximability, both in the general case and when S and T are permutations.
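
A single k-cut rearrangement, as defined above, is straightforward to state in code. The Python sketch below is an illustrative implementation of the operation itself (not of any algorithm from the paper); the function name and the 1-based representation of π are assumptions.

```python
# Illustrative sketch of the k-cut rearrangement operation defined above:
# cut S at k positions into pieces X1..X(k+1), then reorder the pieces
# according to a permutation pi that fixes the first and last piece.

def k_cut_rearrangement(s, cuts, pi):
    """Apply one k-cut rearrangement.

    s    : input string S
    cuts : sorted list of k cut positions (indices in 0..len(s))
    pi   : permutation of 1..k+1 (1-based) with pi[0] == 1 and pi[-1] == k + 1
    """
    k = len(cuts)
    assert list(cuts) == sorted(cuts) and len(pi) == k + 1
    assert pi[0] == 1 and pi[-1] == k + 1, "pi must fix the outer pieces"
    bounds = [0] + list(cuts) + [len(s)]
    pieces = [s[bounds[i]:bounds[i + 1]] for i in range(k + 1)]  # X1..X(k+1)
    return "".join(pieces[j - 1] for j in pi)


if __name__ == "__main__":
    # A 3-cut rearrangement swapping the two middle pieces (a transposition).
    s = "ABCDEFGH"
    print(k_cut_rearrangement(s, cuts=[2, 4, 6], pi=[1, 3, 2, 4]))  # -> ABEFCDGH
```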

