Complexity continuum within Ising formulation of NP problems

2020 ◽  
Author(s):  
Kirill Kalinin ◽  
Natalia Berloff

Abstract A promising approach to achieve computational supremacy over the classical von Neumann architecture explores classical and quantum hardware as Ising machines. The minimisation of the Ising Hamiltonian is known to be an NP-hard problem for certain interaction matrix classes, yet not all problem instances are equivalently hard to optimise. We propose to identify computationally simple instances with an ‘optimisation simplicity criterion’. Such optimisation simplicity can be found for a wide range of models from spin glasses to k-regular maximum cut problems. Many optical, photonic, and electronic systems are neuromorphic architectures that can naturally operate to optimise problems satisfying this criterion and, therefore, such problems are often chosen to illustrate the computational advantages of new Ising machines. We further probe an intermediate complexity for sparse and dense models by analysing circulant coupling matrices that can be ‘rewired’ to introduce greater complexity. A compelling approach for distinguishing easy and hard instances within the same NP-hard class of problems can be a starting point in developing a standardised procedure for the performance evaluation of emerging physical simulators and physics-inspired algorithms.

2022 ◽  
Vol 5 (1) ◽  
Author(s):  
Kirill P. Kalinin ◽  
Natalia G. Berloff

Abstract A promising approach to achieve computational supremacy over the classical von Neumann architecture explores classical and quantum hardware as Ising machines. The minimisation of the Ising Hamiltonian is known to be an NP-hard problem, yet not all problem instances are equivalently hard to optimise. Given that the operational principles of Ising machines are suited to the structure of some problems but not others, we propose to identify computationally simple instances with an ‘optimisation simplicity criterion’. Neuromorphic architectures based on optical, photonic, and electronic systems can naturally operate to optimise instances satisfying this criterion, which are therefore often chosen to illustrate the computational advantages of new Ising machines. As an example, we show that the Ising model on the Möbius ladder graph is ‘easy’ for Ising machines. By rewiring the Möbius ladder graph to random 3-regular graphs, we probe an intermediate computational complexity between P and NP-hard classes with several numerical methods. Significant fractions of polynomially simple instances are further found for a wide range of small-size models from spin glasses to maximum cut problems. A compelling approach for distinguishing easy and hard instances within the same NP-hard class of problems can be a starting point in developing a standardised procedure for the performance evaluation of emerging physical simulators and physics-inspired algorithms.
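The energy these machines minimise is the Ising Hamiltonian $$H = -\tfrac{1}{2}\sum_{i,j} J_{ij} s_i s_j$$ over spins $$s_i \in \{-1, +1\}$$, and the Möbius ladder is a circulant graph in which each spin couples to its two cycle neighbours and to the spin directly ‘opposite’ it. The following Python sketch is an illustration rather than the authors' code: it builds such a coupling matrix and brute-forces the ground state of a small antiferromagnetic instance. The function names and the coupling value are assumptions.

```python
# Minimal sketch (not the authors' code): build the antiferromagnetic coupling
# matrix of a Moebius ladder graph, evaluate the Ising energy
#   H(s) = -1/2 * sum_ij J_ij s_i s_j,   s_i in {-1, +1},
# and brute-force the ground state for a small instance.
import itertools
import numpy as np

def moebius_ladder_couplings(n, j=-1.0):
    """Circulant coupling matrix of the Moebius ladder on 2n spins.

    Each spin i is coupled to its two cycle neighbours and to the
    'opposite' spin i + n (indices modulo 2n); j < 0 is the
    antiferromagnetic (max-cut-like) case discussed in the abstract.
    """
    m = 2 * n
    J = np.zeros((m, m))
    for i in range(m):
        for d in (1, n):          # cycle edge and cross ("rung") edge
            k = (i + d) % m
            J[i, k] = J[k, i] = j
    return J

def ising_energy(J, s):
    return -0.5 * s @ J @ s

def brute_force_ground_state(J):
    m = J.shape[0]
    best = None
    for bits in itertools.product((-1, 1), repeat=m):
        s = np.array(bits)
        e = ising_energy(J, s)
        if best is None or e < best[0]:
            best = (e, s)
    return best

if __name__ == "__main__":
    J = moebius_ladder_couplings(n=4)   # 8 spins, tractable by brute force
    energy, spins = brute_force_ground_state(J)
    print("ground-state energy:", energy)
    print("spin configuration: ", spins)
```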


2020 ◽  
Author(s):  
Eleonora Diamanti ◽  
Inda Setyawati ◽  
Spyridon Bousis ◽  
Leticia Mojas ◽  
Lotteke Swier ◽  
...  

Here, we report on the virtual screening, design, synthesis and structure–activity relationships (SARs) of the first class of selective, antibacterial agents against the energy-coupling factor (ECF) transporters. The ECF transporters are a family of transmembrane proteins involved in the uptake of vitamins in a wide range of bacteria. Inhibition of the activity of these proteins could reduce the viability of pathogens that depend on vitamin uptake. Because of their central role in the metabolism of bacteria and their absence in humans, ECF transporters are novel potential antimicrobial targets to tackle infection. The hit compound’s metabolic and plasma stability, the potency (20, MIC Streptococcus pneumoniae = 2 µg/mL), the absence of cytotoxicity and a lack of resistance development under the conditions tested here suggest that this scaffold may represent a promising starting point for the development of novel antimicrobial agents with an unprecedented mechanism of action.


Algorithms ◽  
2021 ◽  
Vol 14 (6) ◽  
pp. 187
Author(s):  
Aaron Barbosa ◽  
Elijah Pelofske ◽  
Georg Hahn ◽  
Hristo N. Djidjev

Quantum annealers, such as the device built by D-Wave Systems, Inc., offer a way to compute solutions of NP-hard problems that can be expressed in Ising or quadratic unconstrained binary optimization (QUBO) form. Although such solutions are typically of very high quality, problem instances are usually not solved to optimality due to imperfections of the current generation of quantum annealers. In this contribution, we aim to understand some of the factors contributing to the hardness of a problem instance, and to use machine learning models to predict the accuracy of the D-Wave 2000Q annealer for solving specific problems. We focus on the maximum clique problem, a classic NP-hard problem with important applications in network analysis, bioinformatics, and computational chemistry. By training a machine learning classification model on basic problem characteristics, such as the number of edges in the graph, or annealing parameters, such as the D-Wave’s chain strength, we are able to rank certain features in the order of their contribution to the solution hardness, and present a simple decision tree which allows one to predict whether a problem will be solvable to optimality with the D-Wave 2000Q. We extend these results by training a machine learning regression model that predicts the clique size found by D-Wave.
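For context, one standard way to hand the maximum clique problem to an annealer is to encode it as a QUBO on binary variables x_i: selected vertices are rewarded, and every selected pair not joined by an edge is penalised (a clique in G is an independent set of the complement graph). The sketch below is an illustration, not the paper's or D-Wave's code; it builds such a QUBO and checks it by brute force on a toy graph, and the penalty constants A and B are conventional illustrative choices.

```python
# Hedged sketch: a standard QUBO encoding of maximum clique, the input
# format accepted by quantum annealers such as the D-Wave 2000Q.
# Reward selected vertices; penalise selecting both endpoints of any
# non-edge of G.  Constants A, B and the brute-force solver are
# illustrative, not D-Wave's API.
import itertools

def max_clique_qubo(num_nodes, edges, A=1.0, B=2.0):
    """Return QUBO coefficients Q[(i, j)] for the maximum clique problem."""
    edge_set = {frozenset(e) for e in edges}
    Q = {}
    for i in range(num_nodes):
        Q[(i, i)] = -A                          # reward including vertex i
    for i, j in itertools.combinations(range(num_nodes), 2):
        if frozenset((i, j)) not in edge_set:   # non-edge: i, j cannot both be in a clique
            Q[(i, j)] = B
    return Q

def qubo_energy(Q, x):
    return sum(c * x[i] * x[j] for (i, j), c in Q.items())

def brute_force_min(Q, num_nodes):
    best = None
    for bits in itertools.product((0, 1), repeat=num_nodes):
        e = qubo_energy(Q, bits)
        if best is None or e < best[0]:
            best = (e, bits)
    return best

if __name__ == "__main__":
    # 5-node toy graph whose largest clique is {0, 1, 2}
    edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
    Q = max_clique_qubo(5, edges)
    energy, assignment = brute_force_min(Q, 5)
    print("clique indicator vector:", assignment)   # expected (1, 1, 1, 0, 0)
```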


2021 ◽  
Vol 13 (3) ◽  
pp. 1589
Author(s):  
Juan Sánchez-Fernández ◽  
Luis-Alberto Casado-Aranda ◽  
Ana-Belén Bastidas-Manzano

The limitations of self-report techniques (i.e., questionnaires or surveys) in measuring consumer response to advertising stimuli have necessitated more objective and accurate tools from the fields of neuroscience and psychology for the study of consumer behavior, resulting in the creation of consumer neuroscience. This recent marketing sub-field stems from a wide range of disciplines and applies multiple types of techniques to diverse advertising subdomains (e.g., advertising constructs, media elements, or prediction strategies). Due to its complex nature and continuous growth, this area of research calls for a clear understanding of its evolution, current scope, and potential domains in the field of advertising. Thus, the current research is among the first to apply a bibliometric approach to clarify the main research streams analyzing advertising persuasion using neuroimaging. In particular, this paper combines a comprehensive review with performance analysis tools, covering 203 papers published between 1986 and 2019 in outlets indexed by the ISI Web of Science database. Our findings describe the research tools, journals, and themes that are worth considering in future research. The current study also provides an agenda for future research and therefore constitutes a starting point for advertising academics and professionals intending to use neuroimaging techniques.


1990 ◽  
Vol 45 (2) ◽  
pp. 81-94
Author(s):  
Julian Ławrynowicz ◽  
Katarzyna Kędzia ◽  
Leszek Wojtczak

Abstract A complex analytical method of solving the generalised Dirac-Maxwell system has recently been proposed by two of us for a certain class of complex Riemannian metrics. The Dirac equation without the field potential in such a metric appeared to be equivalent to the Dirac-Maxwell system including the field potentials produced by the currents of the particle in question. The method proposed is connected with applying the Fourier transform with respect to the electric charge treated as a variable, with the consideration of the mass as an eigenvalue, and with solving suitable convolution equations. In the present research an explicit calculation based on linearization of the spinor connections is given. The conditions for the motion are interpreted as a starting point to seek selection rules for curved space-times corresponding to actually existing particles. Then the same method is applied to solids. Namely, by a suitable transformation of the configuration space in terms of elements of the interaction matrix corresponding to the Coulomb, exchange, and dipole integrals, the interaction term in the Hamiltonian becomes zero, thus leading to experimentally verifiable formulae for the autocorrelation time.


2019 ◽  
Vol 35 (8) ◽  
pp. 879-915 ◽  
Author(s):  
Bona Lu ◽  
Yan Niu ◽  
Feiguo Chen ◽  
Nouman Ahmad ◽  
Wei Wang ◽  
...  

Abstract Gas-solid fluidization is intrinsically dynamic and manifests mesoscale structures spanning a wide range of length and time scales. When reactions are involved, more complex phenomena emerge and thus pose bigger challenges for modeling. As the mesoscale is critical to understanding multiphase reactive flows, which the conventional two-fluid model without mesoscale modeling may be unable to resolve even on extremely fine grids, this review attempts to demonstrate that the energy-minimization multiscale (EMMS) model could be a starting point for developing such mesoscale modeling. EMMS-based mesoscale modeling is then discussed, with emphasis on the formulation of drag coefficients for different fluidization regimes, the modification of the mass transfer coefficient, and other extensions that attempt to resolve the emerging challenges. Its applications, exemplified by the development of novel fluid catalytic cracking and methanol-to-olefins processes, show that mesoscale modeling plays a remarkable role in improving the prediction of hydrodynamic behaviors and overall reaction rates. However, the product content depends primarily on the chemical kinetic model itself, suggesting the necessity of an effective coupling between chemical kinetics and flow characteristics. Mesoscale modeling is expected to accelerate the traditional experiment-based scale-up process at much lower cost in the future.
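To make the drag-modification idea concrete: EMMS-type closures typically scale a homogeneous drag correlation by a structure-dependent heterogeneity index Hd that accounts for particle clustering. The sketch below is only a schematic illustration of that scaling, assuming the standard Wen & Yu correlation and a placeholder constant for Hd; neither the numbers nor the helper functions come from the paper.

```python
# Hedged sketch of the general idea behind EMMS-based drag modification:
# a heterogeneity index Hd (a purely illustrative placeholder here,
# fitted elsewhere from the EMMS model) scales a homogeneous drag
# correlation such as Wen & Yu.  All values below are assumptions,
# not quantities from the paper.

def wen_yu_drag(voidage, rho_g, mu_g, d_p, slip_velocity):
    """Homogeneous Wen & Yu interphase drag coefficient beta [kg/(m^3 s)]."""
    re = voidage * rho_g * abs(slip_velocity) * d_p / mu_g
    if re < 1000.0:
        cd = 24.0 / re * (1.0 + 0.15 * re**0.687)
    else:
        cd = 0.44
    solid_fraction = 1.0 - voidage
    return (0.75 * cd * voidage * solid_fraction * rho_g
            * abs(slip_velocity) / d_p) * voidage**(-2.65)

def emms_corrected_drag(voidage, rho_g, mu_g, d_p, slip_velocity, heterogeneity_index):
    """Mesoscale-corrected drag: beta_EMMS = Hd * beta_WenYu (Hd < 1 for clustered flow)."""
    return heterogeneity_index * wen_yu_drag(voidage, rho_g, mu_g, d_p, slip_velocity)

if __name__ == "__main__":
    beta = emms_corrected_drag(
        voidage=0.9, rho_g=1.2, mu_g=1.8e-5, d_p=75e-6,
        slip_velocity=0.5, heterogeneity_index=0.3,   # Hd = 0.3 is a placeholder
    )
    print(f"corrected drag coefficient: {beta:.3e} kg/(m^3 s)")
```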


2002 ◽  
Vol 11 (3) ◽  
pp. 096369350201100
Author(s):  
E.M. Gravel ◽  
T.D. Papathanasiou

Dual porosity fibrous media are important in a number of applications, ranging from bioreactor design and transport in living systems to composites manufacturing. In the present study we are concerned with the development of predictive models for the hydraulic permeability (Kp) of various arrays of fibre bundles. For this we carry out extensive computations for viscous flow through arrays of fibre bundles using the Boundary Element Method (BEM) implemented on a multi-processor computer. Up to 350 individual filaments, arranged in square or hexagonal packing within bundles, which are also arranged in square or hexagonal packing, are included in each simulation. These are simple but not trivial models for fibrous preforms used in composites manufacturing – dual porosity systems characterised by different inter- and intra-tow porosities. The way these porosities affect the hydraulic permeability of such media is currently unknown and is elucidated through our simulations. Following numerical solution of the governing equations, Kp is calculated from the computed flowrate through Darcy's law and is expressed as a function of the inter- and intra-tow porosities (φi, φt) and of the filament radius (Rf). Numerical results are also compared to analytical models. The latter form the starting point in the development of a dimensionless correlation for the permeability of such dual porosity media. It is found that the numerically computed permeabilities follow that correlation for a wide range of φi, φt and Rf.
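As a concrete illustration of the Darcy's-law step: once a simulation yields the volumetric flowrate Q through a unit cell of cross-section A and length L under a pressure drop ΔP, the permeability follows from Kp = Q·μ·L/(A·ΔP). The short Python sketch below shows only this post-processing step, with illustrative numbers that are assumptions rather than values from the paper.

```python
# Hedged sketch of the post-processing step described in the abstract:
# once a viscous-flow (here BEM) computation yields the volumetric
# flowrate Q through a unit cell, Darcy's law
#     Q = (Kp * A / mu) * (dP / L)
# is inverted to obtain the hydraulic permeability Kp.  The numbers and
# the helper itself are illustrative assumptions, not the paper's code.
def permeability_from_darcy(flowrate, viscosity, length, area, pressure_drop):
    """Hydraulic permeability Kp = Q * mu * L / (A * dP)  [m^2]."""
    return flowrate * viscosity * length / (area * pressure_drop)

if __name__ == "__main__":
    Kp = permeability_from_darcy(
        flowrate=1.2e-9,       # m^3/s, from the flow simulation
        viscosity=0.1,         # Pa*s, e.g. a typical resin viscosity
        length=1.0e-3,         # m, unit-cell length in the flow direction
        area=1.0e-6,           # m^2, unit-cell cross-section
        pressure_drop=1.0e5,   # Pa, imposed across the cell
    )
    print(f"Kp = {Kp:.3e} m^2")
```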


Author(s):  
Carlo Alberto De Bernardi ◽  
Enrico Miglierina

Abstract The 2-sets convex feasibility problem aims at finding a point in the nonempty intersection of two closed convex sets A and B in a Hilbert space H. The method of alternating projections is the simplest iterative procedure for finding a solution and it goes back to von Neumann. In the present paper, we study some stability properties for this method in the following sense: we consider two sequences of closed convex sets $$\{A_n\}$$ and $$\{B_n\}$$, each of them converging, with respect to the Attouch-Wets variational convergence, respectively, to A and B. Given a starting point $$a_0$$, we consider the sequences of points obtained by projecting on the “perturbed” sets, i.e., the sequences $$\{a_n\}$$ and $$\{b_n\}$$ given by $$b_n=P_{B_n}(a_{n-1})$$ and $$a_n=P_{A_n}(b_n)$$. Under appropriate geometrical and topological assumptions on the intersection of the limit sets, we ensure that the sequences $$\{a_n\}$$ and $$\{b_n\}$$ converge in norm to a point in the intersection of A and B. In particular, we consider both when the intersection $$A\cap B$$ reduces to a singleton and when the interior of $$A \cap B$$ is nonempty. Finally we consider the case in which the limit sets A and B are subspaces.
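As an illustration of the perturbed iteration $$b_n=P_{B_n}(a_{n-1})$$, $$a_n=P_{A_n}(b_n)$$, the sketch below runs it in the plane with A the closed unit ball and B a half-space, each perturbed by terms of order 1/n. These concrete sets, the perturbations, and the fixed iteration count are assumptions chosen for illustration, not the paper's setting.

```python
# Hedged illustration of the perturbed alternating-projection scheme:
# b_n = P_{B_n}(a_{n-1}), a_n = P_{A_n}(b_n), with A_n and B_n converging
# to the limit sets A (unit ball) and B (half-space) in R^2.
import numpy as np

def project_ball(x, radius):
    """Projection onto the closed ball of given radius centred at the origin."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def project_halfspace(x, threshold):
    """Projection onto the half-space { (x1, x2) : x1 >= threshold }."""
    return np.array([max(x[0], threshold), x[1]])

def perturbed_alternating_projections(a0, iterations=50):
    a = np.asarray(a0, dtype=float)
    for n in range(1, iterations + 1):
        radius_n = 1.0 + 1.0 / n          # A_n -> unit ball A
        threshold_n = 0.5 - 1.0 / n       # B_n -> half-space { x1 >= 0.5 } = B
        b = project_halfspace(a, threshold_n)   # b_n = P_{B_n}(a_{n-1})
        a = project_ball(b, radius_n)           # a_n = P_{A_n}(b_n)
    return a

if __name__ == "__main__":
    point = perturbed_alternating_projections(a0=[-2.0, 3.0])
    print("approximate point in A ∩ B:", point)  # lies near the unit ball with x1 >= 0.5
```

Here the interior of $$A \cap B$$ is nonempty, matching one of the two cases treated in the paper.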


2021 ◽  
Author(s):  
Farhan Ali

Thinking creatively is a necessary condition of the design process to transform ideas into novel solutions and break barriers to creativity. Although there are many techniques and ways to stimulate creative thinking for designers, this research paper adopts SCAMPER, an acronym for Substitute, Combine, Adapt, Modify or Magnify, Put to another use, Eliminate, and Reverse or Rearrange, to integrate sustainability concepts within the architectural design process. Many creative artifacts have been designed consciously or unconsciously adopting SCAMPER strategies, such as rehabilitation and reuse projects that improve the functional performance or the aesthetic sense of an existing building. SCAMPER is recognized as a divergent thinking tool used during the initial ideation stage; it aims to leave the usual way of thinking in order to generate a wide range of new ideas that lead to new insights, original ideas, and creative solutions to problems. The research focuses on applying this method in architectural design, which is rarely researched, by reviewing seven examples that have been designed consciously or unconsciously adopting SCAMPER mnemonic techniques. The paper aims to establish a starting point for further research to deepen this approach and study its potential in solving architectural design problems.


Author(s):  
Carlo Cenciarelli

Walkman and iPod devices have often been discussed in quasi-cinematic terms. This typically implies an analogy between the personal stereo user and the transcendental subject of film theory, who is allowed to see and hear without being seen or heard. This chapter offers an alternative route. Taking as a starting point a cinematic moment in which iPod listening is turned into a first-person voiceover, it suggests that cinematic and personal stereo listening share not only an orientation towards privatization and individualization but also a fantasy of communication: one that blurs the lines between “self” and “other” and between listening and speaking. Analyzing a wide range of films and historical marketing campaigns by Sony and Apple, the chapter shows how mainstream cinema—through its representational tropes and modes of spectatorial address—feeds into a broader cultural construction of personal stereo listening as a highly individualized activity that is always imaginatively open-ended.

