Computational complexity continuum within Ising formulation of NP problems

2022 ◽  
Vol 5 (1) ◽  
Author(s):  
Kirill P. Kalinin ◽  
Natalia G. Berloff

Abstract A promising approach to achieving computational supremacy over the classical von Neumann architecture explores classical and quantum hardware as Ising machines. The minimisation of the Ising Hamiltonian is known to be an NP-hard problem, yet not all problem instances are equally hard to optimise. Given that the operational principles of Ising machines are suited to the structure of some problems but not others, we propose to identify computationally simple instances with an ‘optimisation simplicity criterion’. Neuromorphic architectures based on optical, photonic, and electronic systems can naturally operate to optimise instances satisfying this criterion, which are therefore often chosen to illustrate the computational advantages of new Ising machines. As an example, we show that the Ising model on the Möbius ladder graph is ‘easy’ for Ising machines. By rewiring the Möbius ladder graph to random 3-regular graphs, we probe an intermediate computational complexity between the P and NP-hard classes with several numerical methods. Significant fractions of polynomially simple instances are further found for a wide range of small-size models, from spin glasses to maximum cut problems. A compelling approach for distinguishing easy and hard instances within the same NP-hard class of problems can be a starting point in developing a standardised procedure for the performance evaluation of emerging physical simulators and physics-inspired algorithms.
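To make the ‘easy instance’ concrete, here is a minimal sketch (not the authors' code) that builds the Möbius ladder as the circulant graph C_{2n}(1, n) and brute-forces the Ising ground state; unit antiferromagnetic couplings J_ij = 1 on every edge and the energy convention H(s) = sum over edges of s_i s_j are assumptions made here for illustration:

```python
# Minimal sketch (assumptions: J_ij = 1 on every edge, H(s) = sum_{(i,j) in E} s_i s_j).
# The Moebius ladder on 2n vertices is the circulant graph C_{2n}(1, n).
import itertools
import networkx as nx

def ising_energy(graph, spins):
    # Energy under the assumed convention: sum over edges of s_i * s_j.
    return sum(spins[i] * spins[j] for i, j in graph.edges)

def ground_state(graph):
    # Exhaustive search over all 2^n spin configurations; only viable for
    # small graphs, but enough to inspect these 'easy' instances directly.
    n = graph.number_of_nodes()
    return min(
        (ising_energy(graph, s), s)
        for s in itertools.product([-1, 1], repeat=n)
    )

moebius = nx.circulant_graph(8, [1, 4])  # Moebius ladder M_8 = C_8(1, 4)
energy, spins = ground_state(moebius)
print(energy, spins)
```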

2020 ◽  
Author(s):  
Kirill Kalinin ◽  
Natalia Berloff

Abstract A promising approach to achieving computational supremacy over the classical von Neumann architecture explores classical and quantum hardware as Ising machines. The minimisation of the Ising Hamiltonian is known to be an NP-hard problem for certain interaction matrix classes, yet not all problem instances are equally hard to optimise. We propose to identify computationally simple instances with an ‘optimisation simplicity criterion’. Such optimisation simplicity can be found for a wide range of models, from spin glasses to k-regular maximum cut problems. Many optical, photonic, and electronic systems are neuromorphic architectures that can naturally operate to optimise problems satisfying this criterion, and such problems are therefore often chosen to illustrate the computational advantages of new Ising machines. We further probe an intermediate complexity for sparse and dense models by analysing circulant coupling matrices that can be ‘rewired’ to introduce greater complexity. A compelling approach for distinguishing easy and hard instances within the same NP-hard class of problems can be a starting point in developing a standardised procedure for the performance evaluation of emerging physical simulators and physics-inspired algorithms.
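One plausible reading of the ‘rewiring’ of circulant couplings can be sketched as follows; unit couplings on the chosen offsets and degree-preserving edge swaps are assumptions made for illustration, not the paper's procedure:

```python
# Illustrative sketch: a symmetric circulant coupling matrix J (unit couplings
# on the given offsets, an assumption) and a degree-preserving 'rewiring'.
import numpy as np
import networkx as nx
from scipy.linalg import circulant

def circulant_coupling(n, offsets):
    # The first column defines the whole circulant matrix; mirror each offset
    # so that J is symmetric.
    col = np.zeros(n)
    for k in offsets:
        col[k % n] = 1.0
        col[-k % n] = 1.0
    return circulant(col)

J = circulant_coupling(8, [1, 4])        # Moebius ladder M_8 = C_8(1, 4)
G = nx.from_numpy_array(J)
nx.double_edge_swap(G, nswap=2, seed=0)  # rewire while keeping 3-regularity
J_rewired = nx.to_numpy_array(G)
print(J_rewired)
```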


2020 ◽  
Author(s):  
Eleonora Diamanti ◽  
Inda Setyawati ◽  
Spyridon Bousis ◽  
Leticia Mojas ◽  
Lotteke Swier ◽  
...  

Here, we report on the virtual screening, design, synthesis and structure–activity relationships (SARs) of the first class of selective, antibacterial agents against the energy-coupling factor (ECF) transporters. The ECF transporters are a family of transmembrane proteins involved in the uptake of vitamins in a wide range of bacteria. Inhibition of the activity of these proteins could reduce the viability of pathogens that depend on vitamin uptake. Because of their central role in the metabolism of bacteria and their absence in humans, ECF transporters are novel potential antimicrobial targets to tackle infection. The metabolic and plasma stability of the hit compound, its potency (compound 20; MIC against Streptococcus pneumoniae = 2 µg/mL), the absence of cytotoxicity and the lack of resistance development under the conditions tested here suggest that this scaffold may represent a promising starting point for the development of novel antimicrobial agents with an unprecedented mechanism of action.


1986 ◽  
Vol 9 (3) ◽  
pp. 323-342
Author(s):  
Joseph Y.-T. Leung ◽  
Burkhard Monien

We consider the computational complexity of finding an optimal deadlock recovery. It is known that for an arbitrary number of resource types the problem is NP-hard even when the total cost of deadlocked jobs and the total number of resource units are “small” relative to the number of deadlocked jobs. It is also known that for one resource type the problem is NP-hard when the total cost of deadlocked jobs and the total number of resource units are “large” relative to the number of deadlocked jobs. In this paper we show that for one resource type the problem is solvable in polynomial time when the total cost of deadlocked jobs or the total number of resource units is “small” relative to the number of deadlocked jobs. For fixed m ⩾ 2 resource types, we show that the problem is solvable in polynomial time when the total number of resource units is “small” relative to the number of deadlocked jobs. On the other hand, when the total number of resource units is “large”, the problem becomes NP-hard even when the total cost of deadlocked jobs is “small” relative to the number of deadlocked jobs. The results in this paper, together with previously known ones, give a complete delineation of the complexity of this problem under various assumptions on the input parameters.


Algorithms ◽  
2021 ◽  
Vol 14 (6) ◽  
pp. 187
Author(s):  
Aaron Barbosa ◽  
Elijah Pelofske ◽  
Georg Hahn ◽  
Hristo N. Djidjev

Quantum annealers, such as the device built by D-Wave Systems, Inc., offer a way to compute solutions of NP-hard problems that can be expressed in Ising or quadratic unconstrained binary optimization (QUBO) form. Although such solutions are typically of very high quality, problem instances are usually not solved to optimality due to imperfections of the current generation of quantum annealers. In this contribution, we aim to understand some of the factors contributing to the hardness of a problem instance, and to use machine learning models to predict the accuracy of the D-Wave 2000Q annealer for solving specific problems. We focus on the maximum clique problem, a classic NP-hard problem with important applications in network analysis, bioinformatics, and computational chemistry. By training a machine learning classification model on basic problem characteristics, such as the number of edges in the graph, or annealing parameters, such as the D-Wave's chain strength, we are able to rank certain features in order of their contribution to the solution hardness, and we present a simple decision tree that predicts whether a problem will be solvable to optimality with the D-Wave 2000Q. We extend these results by training a machine learning regression model that predicts the clique size found by D-Wave.
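The classification step can be pictured with a short scikit-learn sketch; everything below, from the feature columns (edge count, graph density, chain strength) to the labels, is a hypothetical stand-in for the paper's real training data:

```python
# Hypothetical stand-in data: each row is one problem instance with features
# [num_edges, graph_density, chain_strength]; the label says whether the
# annealer found an optimal solution. Only the pipeline shape is illustrated.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 3))                          # synthetic feature matrix
y = (X[:, 0] + 0.5 * X[:, 2] < 1.0).astype(int)   # synthetic 'solved optimally' label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
print("feature importances:", clf.feature_importances_)  # crude hardness ranking
```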


2021 ◽  
Vol 13 (3) ◽  
pp. 1589
Author(s):  
Juan Sánchez-Fernández ◽  
Luis-Alberto Casado-Aranda ◽  
Ana-Belén Bastidas-Manzano

The limitations of self-report techniques (i.e., questionnaires or surveys) in measuring consumer response to advertising stimuli have necessitated more objective and accurate tools from the fields of neuroscience and psychology for the study of consumer behavior, resulting in the creation of consumer neuroscience. This recent marketing sub-field stems from a wide range of disciplines and applies multiple types of techniques to diverse advertising subdomains (e.g., advertising constructs, media elements, or prediction strategies). Due to its complex nature and continuous growth, this area of research calls for a clear understanding of its evolution, current scope, and potential domains in the field of advertising. Thus, the current research is among the first to apply a bibliometric approach to clarify the main research streams analyzing advertising persuasion using neuroimaging. In particular, this paper combines a comprehensive review with performance-analysis tools applied to 203 papers published between 1986 and 2019 in outlets indexed by the ISI Web of Science database. Our findings describe the research tools, journals, and themes that are worth considering in future research. The current study also provides an agenda for future research and therefore constitutes a starting point for advertising academics and professionals intending to use neuroimaging techniques.


2021 ◽  
Vol 13 (2) ◽  
pp. 1-20
Author(s):  
Sushmita Gupta ◽  
Pranabendu Misra ◽  
Saket Saurabh ◽  
Meirav Zehavi

An input to the Popular Matching problem, in the roommates setting (as opposed to the marriage setting), consists of a graph G (not necessarily bipartite) where each vertex ranks its neighbors in strict order, known as its preference. In the Popular Matching problem the objective is to test whether there exists a matching M* such that there is no matching M where more vertices prefer their matched status in M (in terms of their preferences) over their matched status in M*. In this article, we settle the computational complexity of the Popular Matching problem in the roommates setting by showing that the problem is NP-complete. Thus, we resolve an open question that has been repeatedly and explicitly asked over the last decade.
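To make the popularity test concrete, here is a small sketch (a hypothetical encoding, not code from the article): each vertex votes between its partners in two matchings, with being matched assumed to beat being unmatched, and M* is popular exactly when no matching M wins this vote.

```python
# Sketch of the popularity vote (hypothetical encoding): pref[v] lists v's
# neighbors in strict preference order; a matching maps v to its partner,
# or to None when v is unmatched.
def prefers(v, a, b, pref):
    # Does v strictly prefer partner a over partner b? Being matched is
    # assumed to beat being unmatched (the usual convention).
    if a == b:
        return False
    if a is None:
        return False
    if b is None:
        return True
    return pref[v].index(a) < pref[v].index(b)

def more_popular(M, M_star, pref):
    # M defeats M_star if strictly more vertices prefer M to M_star.
    votes_for = sum(prefers(v, M.get(v), M_star.get(v), pref) for v in pref)
    votes_against = sum(prefers(v, M_star.get(v), M.get(v), pref) for v in pref)
    return votes_for > votes_against

# Tiny example on a 3-cycle of preferences:
pref = {1: [2, 3], 2: [3, 1], 3: [1, 2]}
M, M_star = {1: 2, 2: 1, 3: None}, {1: 3, 3: 1, 2: None}
print(more_popular(M, M_star, pref))  # True: M defeats M_star here
```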


2019 ◽  
Vol 35 (8) ◽  
pp. 879-915 ◽  
Author(s):  
Bona Lu ◽  
Yan Niu ◽  
Feiguo Chen ◽  
Nouman Ahmad ◽  
Wei Wang ◽  
...  

Abstract Gas-solid fluidization is intrinsically dynamic and manifests mesoscale structures spanning a wide range of length and time scales. When reactions are involved, more complex phenomena emerge and pose bigger challenges for modeling. As the mesoscale is critical to understanding multiphase reactive flows, which the conventional two-fluid model without mesoscale modeling may be inadequate to resolve even on extremely fine grids, this review attempts to demonstrate that the energy-minimization multiscale (EMMS) model could be a starting point for developing such mesoscale modeling. EMMS-based mesoscale modeling is then discussed, with emphasis on the formulation of drag coefficients for different fluidization regimes, the modification of the mass transfer coefficient, and other extensions aimed at resolving the emerging challenges. Its applications, exemplified by the development of novel fluid catalytic cracking and methanol-to-olefins processes, show that mesoscale modeling plays a remarkable role in improving the prediction of hydrodynamic behaviors and overall reaction rates. However, the product content depends primarily on the chemical kinetic model itself, suggesting the necessity of an effective coupling between chemical kinetics and flow characteristics. Mesoscale modeling can be expected to accelerate the traditional experiment-based scale-up process at much lower cost in the future.
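For orientation, the drag modification at the heart of EMMS-based models is usually written as a heterogeneity correction to a homogeneous drag law. The template below is a commonly quoted generic form, stated here as an assumption rather than as the exact correlation of any work reviewed: $$\beta_{\mathrm{EMMS}} = H_d\,\beta_0$$, where $$\beta_0$$ is a homogeneous (Wen-Yu-type) drag coefficient, e.g. $$\beta_0 = \frac{3}{4}\,C_d\,\frac{\varepsilon_g \varepsilon_s \rho_g\,|\mathbf{u}_g-\mathbf{u}_s|}{d_p}\,\varepsilon_g^{-2.65}$$, and the heterogeneity index $$H_d \le 1$$ is fitted from the EMMS model as a function of voidage (and often slip velocity) to account for unresolved mesoscale structures such as clusters.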


2002 ◽  
Vol 11 (3) ◽  
pp. 096369350201100
Author(s):  
E.M. Gravel ◽  
T.D. Papathanasiou

Dual porosity fibrous media are important in a number of applications, ranging from bioreactor design and transport in living systems to composites manufacturing. In the present study we are concerned with the development of predictive models for the hydraulic permeability (Kp) of various arrays of fibre bundles. For this we carry out extensive computations for viscous flow through arrays of fibre bundles using the Boundary Element Method (BEM) implemented on a multi-processor computer. Up to 350 individual filaments, arranged in square or hexagonal packing within bundles, which are themselves also arranged in square or hexagonal packing, are included in each simulation. These are simple but not trivial models for fibrous preforms used in composites manufacturing – dual porosity systems characterised by different inter- and intra-tow porosities. The way these porosities affect the hydraulic permeability of such media is currently unknown and is elucidated through our simulations. Following numerical solution of the governing equations, Kp is calculated from the computed flowrate through Darcy's law and is expressed as a function of the inter- and intra-tow porosities (φi, φt) and of the filament radius (Rf). Numerical results are also compared to analytical models. The latter form the starting point in the development of a dimensionless correlation for the permeability of such dual porosity media. It is found that the numerically computed permeabilities follow that correlation for a wide range of φi, φt and Rf.
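In the notation above, the permeability extraction follows directly from Darcy's law: for a computed flowrate $$Q$$ through a cross-section of area $$A$$, a pressure drop $$\Delta P$$ over a length $$L$$, and a fluid of viscosity $$\mu$$, $$K_p = \frac{\mu\,L\,Q}{A\,\Delta P}$$.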


Author(s):  
Carlo Alberto De Bernardi ◽  
Enrico Miglierina

Abstract The 2-sets convex feasibility problem aims at finding a point in the nonempty intersection of two closed convex sets A and B in a Hilbert space H. The method of alternating projections is the simplest iterative procedure for finding a solution and it goes back to von Neumann. In the present paper, we study some stability properties for this method in the following sense: we consider two sequences of closed convex sets $$\{A_n\}$$ and $$\{B_n\}$$, each of them converging, with respect to the Attouch-Wets variational convergence, respectively, to A and B. Given a starting point $$a_0$$, we consider the sequences of points obtained by projecting on the “perturbed” sets, i.e., the sequences $$\{a_n\}$$ and $$\{b_n\}$$ given by $$b_n=P_{B_n}(a_{n-1})$$ and $$a_n=P_{A_n}(b_n)$$. Under appropriate geometrical and topological assumptions on the intersection of the limit sets, we ensure that the sequences $$\{a_n\}$$ and $$\{b_n\}$$ converge in norm to a point in the intersection of A and B. In particular, we consider both the case in which the intersection $$A\cap B$$ reduces to a singleton and the case in which the interior of $$A \cap B$$ is nonempty. Finally we consider the case in which the limit sets A and B are subspaces.
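The iteration is short enough to sketch numerically. In the sketch below (an illustration only), the sets $$A_n$$ and $$B_n$$ are closed balls whose radii shrink to the limit radii, so that $$A_n \to A$$ and $$B_n \to B$$ in the Attouch-Wets sense, and the projections are explicit:

```python
# Illustration only: A_n and B_n are closed balls (explicit projections),
# shrinking to limit balls A and B with nonempty intersection.
import numpy as np

def project_ball(x, center, radius):
    # Euclidean projection of x onto the closed ball B(center, radius).
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= radius else center + radius * d / dist

c_A, c_B = np.array([1.5, 0.0]), np.array([0.0, 0.0])  # centers 1.5 apart
a = np.array([4.0, 3.0])                               # starting point a_0
for n in range(1, 101):
    r = 1.0 + 1.0 / n              # radius of A_n and B_n; r -> 1 (Attouch-Wets)
    b = project_ball(a, c_B, r)    # b_n = P_{B_n}(a_{n-1})
    a = project_ball(b, c_A, r)    # a_n = P_{A_n}(b_n)
print(a)  # converges toward a point of A ∩ B
```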


2021 ◽  
Author(s):  
Farhan Ali

Thinking creatively is a necessary condition of the design process for transforming ideas into novel solutions and breaking barriers to creativity. Although there are many techniques to stimulate designers' creative thinking, this research paper adopts SCAMPER, an acronym for Substitute, Combine, Adapt, Modify or Magnify, Put to another use, Eliminate, and Reverse or Rearrange, to integrate sustainability concepts within the architectural design process. Many creative artifacts have been designed, consciously or unconsciously, using SCAMPER strategies, such as rehabilitation and reuse projects that improve the functional performance or aesthetic sense of an existing building. SCAMPER is recognized as a divergent-thinking tool used during the initial ideation stage; it aims to depart from the usual way of thinking in order to generate a wide range of new ideas that lead to new insights, original ideas, and creative solutions to problems. The research focuses on applying this method in architectural design, which is rarely researched, by reviewing seven examples that were designed, consciously or unconsciously, using SCAMPER mnemonic techniques. The paper aims to establish a starting point for further research to deepen the method and study its potential for solving architectural design problems.

