Practical Algorithms: Recently Published Documents

Total documents: 211 (last five years: 48)
H-index: 29 (last five years: 3)

2021 ◽  
Vol 11 (4) ◽  
pp. 12-25
Author(s):  
E. Yu. Zakharova ◽  
S. V. Mikhailova ◽  
V. V. Zarubina ◽  
N. A. Krasnoshchekova ◽  
N. L. Pechatnikova ◽  
...  

Treatment of many of the diseases in the panel of expanded newborn screening includes dietary therapy. Glutaric aciduria type 1 (GA1) is a hereditary disorder caused by mutations in the gene GCDH, encoding glutaryl-CoA dehydrogenase, an enzyme in the amino acid metabolic pathways. Decreased activity of the enzyme leads to the accumulation of neurotoxic metabolites. The recommended treatment approaches for GA1 are the prescription of specialized nutrition products, levocarnitine, and symptomatic management. In 2021, clinical guidelines for the treatment of this rare disease were published in the Russian Federation. To provide timely treatment, it is essential for a practitioner involved in the care of patients with a disorder as rare as GA1 to know the principles of management, as well as practical algorithms for diet calculation. The article gives a detailed case-based description of management during metabolic decompensation and of the choice of dietary therapy for GA1 patients of different age groups.
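To make the idea of a diet-calculation algorithm concrete, here is a minimal Python sketch of the generic structure of such a computation: splitting a total protein target between natural protein (capped by the patient's lysine tolerance) and a lysine-free amino acid mixture. All numeric values are hypothetical placeholders, not clinical guidance; real targets come from the published guidelines and a metabolic specialist.

```python
# Illustrative sketch only: a generic structure for a GA1 diet calculation,
# with made-up placeholder values. Real targets must come from the 2021
# clinical guidelines and be set by a metabolic specialist.

def ga1_diet_plan(weight_kg: float,
                  lysine_tolerance_mg_per_kg: float,
                  protein_target_g_per_kg: float,
                  lysine_per_g_natural_protein_mg: float = 70.0):
    """Split the protein target between natural protein (limited by the
    patient's lysine tolerance) and a lysine-free amino acid mixture."""
    # Natural protein is capped by how much lysine the patient tolerates.
    max_natural_protein_g = (weight_kg * lysine_tolerance_mg_per_kg
                             / lysine_per_g_natural_protein_mg)
    total_protein_g = weight_kg * protein_target_g_per_kg
    supplement_g = max(0.0, total_protein_g - max_natural_protein_g)
    return {
        "natural_protein_g": round(max_natural_protein_g, 1),
        "amino_acid_mixture_g": round(supplement_g, 1),
        "total_protein_g": round(total_protein_g, 1),
    }

# Hypothetical example: 8 kg infant, placeholder tolerance and targets.
print(ga1_diet_plan(weight_kg=8.0,
                    lysine_tolerance_mg_per_kg=100.0,
                    protein_target_g_per_kg=2.0))
```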


2021 ◽  
Author(s):  
Alberto Vera ◽  
Siddhartha Banerjee ◽  
Samitha Samaranayake

Motivated by the needs of modern transportation service platforms, we study the problem of computing constrained shortest paths (CSP) at scale via preprocessing techniques. Our work makes two contributions in this regard: 1) We propose a scalable algorithm for CSP queries and show how its performance can be parametrized in terms of a new network primitive, the constrained highway dimension. This development extends recent work that established the highway dimension as the appropriate primitive for characterizing the performance of unconstrained shortest-path (SP) algorithms. Our main theoretical contribution is deriving conditions relating the two notions, thereby providing a characterization of networks where CSP and SP queries are of comparable hardness. 2) We develop practical algorithms for scalable CSP computation, augmenting our theory with additional network clustering heuristics. We evaluate these algorithms on real-world data sets to validate our theoretical findings. Our techniques are orders of magnitude faster than existing approaches while requiring only limited additional storage and preprocessing.
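For readers unfamiliar with the CSP problem itself, the following Python sketch shows the textbook exact baseline that preprocessing techniques of this kind aim to accelerate: a label-setting search that keeps Pareto-optimal (cost, resource) labels at each node. It is not the authors' algorithm, and the example graph is made up.

```python
# A minimal baseline for the constrained shortest path (CSP) problem:
# label-setting search with dominance pruning. Not the paper's algorithm.
import heapq

def constrained_shortest_path(graph, source, target, budget):
    """graph: {u: [(v, cost, resource), ...]}. Returns the minimum cost
    of an s-t path whose total resource consumption stays within budget."""
    pq = [(0.0, 0.0, source)]           # labels ordered by cost
    frontier = {source: [(0.0, 0.0)]}   # Pareto labels per node
    while pq:
        cost, res, u = heapq.heappop(pq)
        if u == target:
            return cost  # first target label popped has minimum cost
        for v, c, r in graph.get(u, []):
            nc, nr = cost + c, res + r
            if nr > budget:
                continue  # label violates the resource constraint
            labels = frontier.setdefault(v, [])
            # Discard the new label if an existing one dominates it.
            if any(lc <= nc and lr <= nr for lc, lr in labels):
                continue
            labels[:] = [(lc, lr) for lc, lr in labels
                         if not (nc <= lc and nr <= lr)]
            labels.append((nc, nr))
            heapq.heappush(pq, (nc, nr, v))
    return None

g = {"s": [("a", 1, 5), ("b", 4, 1)], "a": [("t", 1, 5)], "b": [("t", 4, 1)]}
print(constrained_shortest_path(g, "s", "t", budget=8))  # 8.0, via b
```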


Entropy ◽  
2021 ◽  
Vol 23 (11) ◽  
pp. 1481
Author(s):  
Yang Sun ◽  
Hangdong Zhao ◽  
Jonathan Scarlett

In recent years, neural network based image priors have been shown to be highly effective for linear inverse problems, often significantly outperforming conventional methods that are based on sparsity and related notions. While pre-trained generative models are perhaps the most common, it has additionally been shown that even untrained neural networks can serve as excellent priors in various imaging applications. In this paper, we seek to broaden the applicability and understanding of untrained neural network priors by investigating the interaction between architecture selection, measurement models (e.g., inpainting vs. denoising vs. compressive sensing), and signal types (e.g., smooth vs. erratic). We motivate the problem via statistical learning theory, and provide two practical algorithms for tuning architectural hyperparameters. Using experimental evaluations, we demonstrate that the optimal hyperparameters may vary significantly between tasks and can exhibit large performance gaps when tuned for the wrong task. In addition, we investigate which hyperparameters tend to be more important, and which are robust to deviations from the optimum.
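The paper's two tuning algorithms are not reproduced here, but the following generic Python skeleton illustrates the setting: searching over architectural hyperparameters of an untrained prior and scoring each configuration, for example by the residual on held-out measurements. The search space and scoring function below are hypothetical stand-ins.

```python
# Generic skeleton for tuning architectural hyperparameters of an untrained
# prior by random search; the paper proposes its own two algorithms, which
# this sketch does not reproduce. The score function is a stand-in for
# fitting (e.g.) a deep-decoder network and measuring held-out error.
import random

def random_search(score_fn, space, n_trials=20, seed=0):
    """space: {name: list of candidate values}. Returns the best config
    according to score_fn (lower is better)."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in space.items()}
        s = score_fn(cfg)
        if s < best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

# Hypothetical architectural search space for an untrained prior.
space = {"depth": [3, 4, 5, 6], "width": [64, 128, 256],
         "upsample": ["nearest", "bilinear"]}

def score_fn(cfg):  # stand-in: plug in measurement error of the fitted prior
    return abs(cfg["depth"] - 5) + abs(cfg["width"] - 128) / 64

print(random_search(score_fn, space))
```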


Author(s):  
Manuel Iori ◽  
Vinícius Loti de Lima ◽  
Silvano Martello ◽  
Michele Monaci

Two-dimensional cutting and packing problems model a large number of relevant industrial applications. The literature on practical algorithms for such problems is very large. We introduce 2DPackLib, a library on two-dimensional orthogonal cutting and packing problems. The library makes available, in a unified format, 25 benchmarks from the literature, for a total of over 3000 instances, provides direct links to surveys and typologies, and includes a list of relevant links.
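To make the problem class concrete, the following Python sketch implements one of the simplest classical baselines for two-dimensional orthogonal packing, the next-fit decreasing height (NFDH) shelf heuristic for strip packing. It is an illustration only and is not code from the library.

```python
# A tiny shelf heuristic (next-fit decreasing height) for 2D strip packing,
# one of the simplest baselines for this problem class.

def shelf_nfdh(items, strip_width):
    """items: list of (w, h) rectangles, each with w <= strip_width.
    Packs them onto horizontal shelves; returns (total_height, placements)
    where placements maps item index to its (x, y) lower-left corner."""
    order = sorted(range(len(items)), key=lambda i: -items[i][1])
    placements = {}
    shelf_y = 0.0      # bottom of the current shelf
    shelf_h = 0.0      # height of the current shelf (its tallest item)
    x = 0.0            # next free x position on the current shelf
    for i in order:
        w, h = items[i]
        if x + w > strip_width:      # open a new shelf above the current one
            shelf_y += shelf_h
            shelf_h, x = 0.0, 0.0
        if h > shelf_h:
            shelf_h = h              # first item on a shelf sets its height
        placements[i] = (x, shelf_y)
        x += w
    return shelf_y + shelf_h, placements

height, pos = shelf_nfdh([(4, 3), (3, 2), (5, 4), (2, 2)], strip_width=8)
print(height, pos)
```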


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Ali Mohammad Norouzzadeh Gil Molk ◽  
Mohammad Reza Aref ◽  
Reza Ramazani Khorshiddoust

The technology world is developing fast thanks to advances in hardware and software. Since the privacy and security of telemedicine applications are among the main necessities of this industry, lightweight and practical cryptographic algorithms are needed whose security mechanisms impose the least possible overhead on telemedicine applications. The distinct and conflicting requirements involved in designing and implementing a cryptographic algorithm to achieve various objectives in medicine-based applications make it a complicated system. Naturally, without identifying the components, indices, and properties of each system component, hardware and software resources are wasted and a proper algorithm cannot be designed. Accordingly, this paper presents a leveled model of cryptography algorithms using the cybernetic method. First, the main objectives and measures in the design of cryptography algorithms are extracted using measure reduction methods, and redundant and overlapping measures are eliminated. Then, three general classes are extracted: design and implementation measures for cryptography algorithms, applications of cryptography algorithms, and cryptography implementation techniques. Since the complexity of cryptography algorithm design is relatively high, the cybernetic methodology is used to present a supermodel that makes the design process objective. Such a design avoids examining unnecessary details and establishes a bidirectional relationship between the main design and implementation process and the support process, so that the support process supplies the requirements of the main process at each step. Finally, Q-analysis tools are used to analyse the proposed method, and the efficiency results are presented.
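As a concrete instance of the lightweight-cryptography trade-off the paper models, the snippet below encrypts a telemedicine-style reading with ChaCha20-Poly1305, a cipher widely chosen for low-power devices without AES hardware. It uses the Python `cryptography` package and is an illustration, not an algorithm from the paper; the payload and metadata are invented.

```python
# Lightweight authenticated encryption of a telemedicine-style payload.
# Requires the `cryptography` package; illustration only.
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

key = ChaCha20Poly1305.generate_key()
aead = ChaCha20Poly1305(key)

reading = b'{"patient": "anon-42", "spo2": 97, "pulse": 72}'  # invented
metadata = b"device-7/v1"          # authenticated but not encrypted
nonce = os.urandom(12)             # must never repeat for the same key

ciphertext = aead.encrypt(nonce, reading, metadata)
assert aead.decrypt(nonce, ciphertext, metadata) == reading
print(len(ciphertext) - len(reading), "bytes of overhead")  # 16-byte tag
```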


Author(s):  
M. V. Sprindzuk ◽  
L. P. Titov ◽  
A. P. Konchits ◽  
L. V. Mozharovskaya

The analysis of bioinformatics data is a pressing problem in modern computational biology and applied mathematics. With the development of biotechnology and of the tools for obtaining and processing such information, open questions have arisen about the development and application of new algorithms and software. The authors propose practical algorithms and methods for processing transcriptomic data that yield efficient annotation, visualization, and interpretation of bioinformatics data.
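The article does not list its code, so the following is only a generic Python illustration of one routine step in transcriptomic processing: computing log2 fold changes between two conditions from a gene-by-sample counts table. The table below is synthetic.

```python
# Generic illustration of one transcriptomic processing step: log2 fold
# changes between conditions from a synthetic gene-by-sample counts table.
import numpy as np
import pandas as pd

counts = pd.DataFrame(
    {"ctrl_1": [120, 5, 300], "ctrl_2": [100, 8, 280],
     "trt_1": [480, 6, 150], "trt_2": [520, 7, 140]},
    index=["geneA", "geneB", "geneC"])

# Library-size normalization (counts per million), then a pseudocount
# so the log is defined for zero counts.
cpm = counts / counts.sum(axis=0) * 1e6
log2fc = np.log2(cpm[["trt_1", "trt_2"]].mean(axis=1) + 1) \
       - np.log2(cpm[["ctrl_1", "ctrl_2"]].mean(axis=1) + 1)
print(log2fc.sort_values(ascending=False))
```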


2021 ◽  
Vol 26 ◽  
pp. 1-40
Author(s):  
Wolfgang Fischl ◽  
Georg Gottlob ◽  
Davide Mario Longo ◽  
Reinhard Pichler

To cope with the intractability of answering Conjunctive Queries (CQs) and solving Constraint Satisfaction Problems (CSPs), several notions of hypergraph decompositions have been proposed, giving rise to different notions of width, most notably plain, generalized, and fractional hypertree width (hw, ghw, and fhw). Given the increasing interest in using such decomposition methods in practice, a publicly accessible repository of decomposition software, as well as a large set of benchmarks, and a web-accessible workbench for inserting, analyzing, and retrieving hypergraphs are called for. We address this need by providing (i) concrete implementations of hypergraph decompositions (including new practical algorithms), (ii) a new, comprehensive benchmark of hypergraphs stemming from disparate CQ and CSP collections, and (iii) HyperBench, our new web-interface for accessing the benchmark and the results of our analyses. In addition, we describe a number of actual experiments we carried out with this new infrastructure.
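To give a flavour of what decomposition software computes, the sketch below derives a treewidth upper bound for a hypergraph's primal graph using the min-degree elimination heuristic. Hypertree width and its variants (hw, ghw, fhw) are different, hyperedge-based notions, so this is only a simple, related illustration, not one of the repository's algorithms.

```python
# Treewidth upper bound for a hypergraph's primal graph via the min-degree
# elimination heuristic. Illustration only; hw/ghw/fhw are different notions.

def primal_graph(hyperedges):
    adj = {}
    for e in hyperedges:
        for v in e:
            adj.setdefault(v, set()).update(set(e) - {v})
    return adj

def min_degree_width(adj):
    """Repeatedly eliminate a minimum-degree vertex, connecting its
    neighbours; the largest degree at elimination bounds the treewidth."""
    adj = {v: set(ns) for v, ns in adj.items()}
    width = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))
        nbrs = adj.pop(v)
        width = max(width, len(nbrs))
        for a in nbrs:               # turn the neighbourhood into a clique
            adj[a].discard(v)
            adj[a].update(nbrs - {a})
    return width

# Hyperedges of a small conjunctive query's atoms (a 4-cycle).
h = [("x", "y"), ("y", "z"), ("z", "w"), ("w", "x")]
print(min_degree_width(primal_graph(h)))  # -> 2
```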


2021 ◽  
pp. 027836492199278
Author(s):  
Luke Shimanuki ◽  
Brian Axelrod

We consider the problem of motion planning in the presence of uncertain obstacles, modeled as polytopes with Gaussian-distributed faces (PGDFs). A number of practical algorithms exist for motion planning in the presence of known obstacles by constructing a graph in configuration space, then efficiently searching the graph to find a collision-free path. We show that such an exact algorithm is unlikely to be practical in the domain with uncertain obstacles. In particular, we show that safe 2D motion planning among PGDF obstacles is NP-hard with respect to the number of obstacles, and remains NP-hard after being restricted to a graph. Our reduction is based on a path encoding of MAXQHORNSAT and uses the risk of collision with an obstacle to encode variable assignments and literal satisfactions. This implies that, unlike in the known case, planning under uncertainty is hard, even when given a graph containing the solution. We further show by reduction from 3-SAT that both safe 3D motion planning among PGDF obstacles and the related minimum constraint removal problem remain NP-hard even when restricted to cases where each obstacle overlaps with at most a constant number of other obstacles.
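The following toy Python example illustrates the structural reason such problems resist standard graph search: with uncertain obstacles, a path's risk accumulates per obstacle over the whole path rather than per edge, so it is not an edge-additive cost and Dijkstra-style dynamic programming breaks down. The graph, the obstacles, and the crude per-obstacle risk rule are all invented for illustration and are not the paper's construction.

```python
# Why search gets harder with uncertain obstacles: path risk is accumulated
# per obstacle, not per edge, so it is not edge-additive. Toy illustration.

def path_risk(path_edges, edge_risk):
    """edge_risk[e][o] = prob. edge e collides with obstacle o. Crude
    per-obstacle rule: each obstacle is 'hit' at most once no matter how
    many of the path's edges pass near it."""
    obstacles = {o for e in path_edges for o in edge_risk[e]}
    total = 0.0
    for o in obstacles:
        total += max(edge_risk[e].get(o, 0.0) for e in path_edges)
    return total

# Two s-t routes; re-using the same obstacle is cheaper than two obstacles.
edge_risk = {"s-a": {"A": 0.10}, "a-t": {"A": 0.10},
             "s-b": {"B": 0.08}, "b-t": {"C": 0.08}}
print(path_risk(["s-a", "a-t"], edge_risk))  # 0.10: one obstacle, once
print(path_risk(["s-b", "b-t"], edge_risk))  # 0.16: two obstacles add up
```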

