problem size
Recently Published Documents

TOTAL DOCUMENTS: 306 (five years: 89)
H-INDEX: 30 (five years: 4)

2022
Author(s): D. Rhodri Davies, Stephan C. Kramer, Siavash Ghelichkhan, Angus Gibson

Abstract. Firedrake is an automated system for solving partial differential equations using the finite element method. By applying sophisticated performance optimisations through automatic code-generation techniques, it provides a means to create accurate, efficient, flexible, easily extensible, scalable, transparent and reproducible research software that is ideally suited to simulating a wide range of problems in geophysical fluid dynamics. Here, we demonstrate the applicability of Firedrake for geodynamical simulation, with a focus on mantle dynamics. The accuracy and efficiency of the approach are confirmed via comparisons against a suite of analytical and benchmark cases of systematically increasing complexity, whilst parallel scalability is demonstrated up to 12,288 compute cores, where the problem size and the number of processing cores are simultaneously increased. In addition, Firedrake's flexibility is highlighted via straightforward application to different physical (e.g. complex nonlinear rheologies, compressibility) and geometrical (2-D and 3-D Cartesian and spherical domains) scenarios. Finally, a representative simulation of global mantle convection is examined, which incorporates 230 Myr of plate motion history as a kinematic surface boundary condition, confirming the suitability of Firedrake for addressing research problems at the frontiers of global mantle dynamics.
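To give a flavour of the automated workflow described above, the sketch below solves a plain Poisson problem with Firedrake: the weak form is written symbolically and Firedrake generates and executes the assembly and solver code. This is a minimal illustrative example assuming a working Firedrake installation, not the Stokes/mantle-convection setup used in the paper.

```python
# A minimal sketch of Firedrake's automated finite element workflow, using a plain
# Poisson problem rather than the Stokes/mantle-convection systems of the paper.
# Assumes a working Firedrake installation.
from firedrake import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                       Function, DirichletBC, Constant, inner, grad, dx, solve)

mesh = UnitSquareMesh(32, 32)                  # structured 2-D mesh of the unit square
V = FunctionSpace(mesh, "CG", 1)               # continuous piecewise-linear elements

u, v = TrialFunction(V), TestFunction(V)
f = Constant(1.0)                              # simple constant source term

a = inner(grad(u), grad(v)) * dx               # weak form of -div(grad(u)) = f
L = f * v * dx
bc = DirichletBC(V, Constant(0.0), "on_boundary")

u_h = Function(V, name="solution")
solve(a == L, u_h, bcs=[bc])                   # code generation, assembly and solve
print("max of solution:", u_h.dat.data.max())
```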


Author(s): Jannik Burre, Dominik Bongartz, Alexander Mitsos

Abstract. Superstructure optimization is a powerful but computationally demanding task that can be used to select the optimal structure among many alternatives within a single optimization. In chemical engineering, such problems naturally arise in process design, where different process alternatives need to be considered simultaneously to minimize a specific objective function (e.g., production costs or global warming impact). Conventionally, superstructure optimization problems are formulated with either the Big-M or the Convex Hull reformulation approach. However, for problems containing nonconvex functions, it is not clear whether these yield the most computationally efficient formulations. We therefore compare the conventional problem formulations with less common ones (using equilibrium constraints, step functions, or multiplications of binary and continuous variables to model disjunctions) using three case studies. First, a minimalist superstructure optimization problem is used to derive conjectures about their computational performance. These conjectures are then further investigated using two more complex literature benchmarks. Our analysis shows that the less common approaches tend to result in a smaller problem size while keeping relaxations comparably tight, despite the introduction of additional nonconvexities. For the considered case studies, we demonstrate that all reformulation approaches can further benefit from eliminating optimization variables via a reduced-space formulation. For superstructure optimization problems containing nonconvex functions, we therefore encourage also considering problem formulations that introduce additional nonconvexities but reduce the number of optimization variables.
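As an illustration of the conventional Big-M approach mentioned above, the sketch below encodes a toy two-alternative disjunction in Pyomo. The technology names, capacities, costs, and the Big-M value are invented for illustration; the paper's superstructures are far larger and contain nonconvex process models.

```python
# Illustrative Big-M reformulation of a toy disjunction (choose technology A or B),
# sketched in Pyomo; variable names and cost data are made up for illustration.
import pyomo.environ as pyo

M = 100.0                                # Big-M constant (must bound the constraint slack)
m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(0, 50))            # throughput of the selected unit
m.yA = pyo.Var(domain=pyo.Binary)        # 1 if technology A is selected
m.yB = pyo.Var(domain=pyo.Binary)        # 1 if technology B is selected

m.choose_one = pyo.Constraint(expr=m.yA + m.yB == 1)
# Capacity limits are only enforced for the selected alternative:
m.capA = pyo.Constraint(expr=m.x <= 20 + M * (1 - m.yA))
m.capB = pyo.Constraint(expr=m.x <= 40 + M * (1 - m.yB))

# Minimise fixed charge of the chosen unit minus revenue from throughput.
m.obj = pyo.Objective(expr=5 * m.yA + 8 * m.yB - 1.0 * m.x, sense=pyo.minimize)

m.pprint()  # a MILP solver (e.g. pyo.SolverFactory("glpk")) would be needed to solve it
```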


Author(s): Vivek Saraswat, Udayan Ganguly

Abstract. Emerging non-volatile memories have been proposed for a wide range of applications, from easing the von Neumann bottleneck to neuromorphic applications. Specifically, scalable RRAMs based on Pr1-xCaxMnO3 (PCMO), which exhibit analog switching, have been demonstrated as an integrating neuron, an analog synapse, and a voltage-controlled oscillator. More recently, the inherent stochasticity of memristors has been proposed for efficient hardware implementations of Boltzmann Machines. However, as the problem size scales, the number of neurons increases, and tightly controlling the stochastic distribution over many iterations becomes necessary. This requires parametric control over stochasticity. Here, we characterize the stochastic Set in PCMO RRAMs. We identify that the Set time distribution depends on the internal state of the device (i.e., resistance) in addition to the external input (i.e., voltage pulse). This requires the confluence of contradictory properties, namely stochastic switching as well as deterministic state control, in the same device. Unlike ‘stochastic-everywhere’ filamentary memristors, in PCMO RRAMs we leverage (i) the stochastic Set in negative polarity and (ii) the deterministic analog Reset in positive polarity to demonstrate a 100× reduction in Set time distribution drift. The impact on Boltzmann Machines’ performance is analyzed: as opposed to “fixed external input stochasticity”, “state-monitored stochasticity” can solve problems 20× larger in size. State monitoring also tunes out the effect of device-to-device variability on the distributions, providing 10× better performance. In addition to the physical insights, this study establishes the reliable use of experimental stochasticity in PCMO RRAMs in stochastic recurrent neural networks over many iterations.
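For context, the sketch below shows the Gibbs-sampling update that a memristor-based Boltzmann Machine is meant to realise in hardware, where each Bernoulli draw would correspond to a stochastic Set event. The coupling matrix, temperature, and problem size are illustrative and not calibrated to PCMO device data.

```python
# Minimal NumPy sketch of the stochastic sampling step that memristor-based
# Boltzmann Machines are meant to implement in hardware; couplings and
# temperature are illustrative, not calibrated to PCMO devices.
import numpy as np

rng = np.random.default_rng(0)
n = 16
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
s = rng.choice([-1, 1], size=n)          # spin/neuron states
T = 1.0                                  # effective temperature

for sweep in range(100):
    for i in range(n):
        local_field = J[i] @ s
        p_up = 1.0 / (1.0 + np.exp(-2.0 * local_field / T))
        # In hardware this Bernoulli draw is realised by the stochastic Set event;
        # "state monitoring" would keep its probability distribution from drifting.
        s[i] = 1 if rng.random() < p_up else -1

print("final energy:", -0.5 * s @ J @ s)
```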


Author(s): Kaike Zhang, Xueping Li, Mingzhou Jin

This study generalizes the r-interdiction median (RIM) problem with fortification to simultaneously consider two types of risks: probabilistic exogenous disruptions and endogenous disruptions caused by intentional attacks. We develop a bilevel programming model that includes a lower-level interdiction problem and a higher-level fortification problem to hedge against such risks. We then prove that the interdiction problem is supermodular and subsequently adopt the cuts associated with supermodularity to develop an efficient cutting-plane algorithm that achieves exact solutions. For the fortification problem, we adopt the logic-based Benders decomposition (LBBD) framework to take advantage of the two-level structure and the property that a facility should not be fortified if it is not attacked at the lower level. Numerical experiments show that the cutting-plane algorithm is more efficient than benchmark methods in the literature, especially as the problem size grows. With regard to solution quality, LBBD outperforms the greedy algorithm in the literature with an improvement of up to 13.2% in total cost, and it is as good as or better than the tree-search implicit enumeration method. Summary of Contribution: This paper studies an r-interdiction median problem with fortification (RIMF) in a supply chain network that simultaneously considers two types of disruption risks: random disruptions that occur probabilistically and disruptions caused by intentional attacks. The problem is to determine the allocation of limited facility fortification resources in an existing network. It is modeled as a bilevel programming model combining a defender’s problem and an attacker’s problem, which generalizes the r-interdiction median problem with probabilistic fortification. This paper is suitable for IJOC in two main aspects: (1) The lower-level attacker’s interdiction problem is a challenging high-degree nonlinear model. In the literature, only a total enumeration method has been applied to solve a special case of this problem. By exploiting a special structural property of the problem, namely the supermodularity of the transportation cost function, we developed an exact cutting-plane method that solves the problem to optimality. Extensive numerical studies were conducted. Hence, this paper fits in the intersection of operations research and computing. (2) We developed an efficient logic-based Benders decomposition algorithm to solve the higher-level defender’s fortification problem. Overall, this study generalizes several important problems in the literature, such as RIM, RIMF, and RIMF with probabilistic fortification (RIMF-p).
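To make the lower-level problem concrete, the sketch below evaluates a tiny r-interdiction median instance by brute-force enumeration, i.e. the baseline approach referred to above rather than the supermodular cutting-plane or LBBD algorithms themselves. The distance matrix, the fortified set, and r are invented for illustration.

```python
# Toy evaluation of the lower-level r-interdiction median problem by brute-force
# enumeration; distances and facility sets are made up, and fortified facilities
# cannot be interdicted by the attacker.
from itertools import combinations

dist = [  # dist[customer][facility], illustrative values
    [4, 9, 2, 7],
    [8, 3, 6, 5],
    [1, 6, 9, 4],
    [7, 2, 5, 8],
]
facilities = range(4)
fortified = {2}          # facilities protected by the upper-level defender
r = 1                    # number of facilities the attacker may interdict

def assignment_cost(open_facilities):
    # each customer is served by its closest surviving facility
    return sum(min(row[j] for j in open_facilities) for row in dist)

worst_cost, worst_attack = max(
    (assignment_cost(set(facilities) - set(attack)), attack)
    for attack in combinations(set(facilities) - fortified, r)
)
print("worst-case attack:", worst_attack, "post-attack cost:", worst_cost)
```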


2021
Vol 4 (1)
Author(s): Timothée Leleu, Farad Khoyratee, Timothée Levi, Ryan Hamerly, Takashi Kohno, ...

Abstract. The development of physical simulators, called Ising machines, that sample from low-energy states of the Ising Hamiltonian has the potential to transform our ability to understand and control complex systems. However, most physical implementations of such machines have been based on a similar concept that is closely related to relaxational dynamics, as in simulated, mean-field, chaotic, and quantum annealing. Here we show that dynamics that includes a nonrelaxational component and is associated with a finite positive Gibbs entropy production rate can accelerate the sampling of low-energy states compared with conventional methods. By implementing such dynamics on a field-programmable gate array, we show that the nonrelaxational dynamics we propose, called chaotic amplitude control, exhibits scaling exponents with problem size, for both the time to find optimal solutions and its variance, that are smaller than those of relaxational schemes recently implemented on Ising machines.
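For reference, the sketch below implements the kind of relaxational baseline (simulated annealing on a random Ising instance) against which chaotic amplitude control is compared; it is not the authors' FPGA implementation, and the couplings and annealing schedule are illustrative only.

```python
# Simple simulated-annealing baseline on a random Ising instance, i.e. a
# relaxational scheme of the kind chaotic amplitude control is compared against.
import numpy as np

rng = np.random.default_rng(1)
n = 32
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1); J = J + J.T                # symmetric couplings, zero diagonal
s = rng.choice([-1, 1], size=n)

def energy(spins):
    return -0.5 * spins @ J @ spins

for beta in np.linspace(0.1, 3.0, 2000):      # inverse-temperature schedule
    i = rng.integers(n)
    dE = 2.0 * s[i] * (J[i] @ s)              # energy change of flipping spin i
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        s[i] = -s[i]

print("final Ising energy:", energy(s))
```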


2021
Vol 9
Author(s): Siddharth Jain

The traveling salesman problem is a well-known NP-hard problem in combinatorial optimization. This paper shows how to solve it on an Ising-Hamiltonian-based quantum annealer by casting it as a quadratic unconstrained binary optimization (QUBO) problem. Results of practical experiments are also presented using D-Wave’s 5,000-qubit Advantage 1.1 quantum annealer, and the performance is compared to a classical solver. It is found that the quantum annealer can only handle problems of 8 or fewer nodes, and that its performance is subpar compared to the classical solver in terms of both time and accuracy.
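The sketch below illustrates the standard one-hot QUBO encoding of the travelling salesman problem and verifies it by brute force on a three-city instance; it does not use D-Wave's API, and the distance matrix and penalty weight are illustrative.

```python
# One-hot QUBO encoding of a tiny TSP: x[i, t] = 1 iff city i is visited at step t.
# Verified by brute force over all 2^(n*n) binary assignments; the distances and
# penalty weight A are illustrative, and no D-Wave API is used.
import itertools
import numpy as np

D = np.array([[0.0, 3.0, 4.0],
              [3.0, 0.0, 6.0],
              [4.0, 6.0, 0.0]])                # illustrative symmetric distances
n = len(D)
A = 20.0                                       # penalty weight larger than any tour length

def qubo_energy(x):
    tour = sum(D[i, j] * x[i, t] * x[j, (t + 1) % n]
               for i in range(n) for j in range(n) for t in range(n))
    one_city_per_step = sum((1 - x[:, t].sum()) ** 2 for t in range(n))
    one_step_per_city = sum((1 - x[i, :].sum()) ** 2 for i in range(n))
    return tour + A * (one_city_per_step + one_step_per_city)

best_energy, best_bits = min(
    (qubo_energy(np.array(bits, dtype=float).reshape(n, n)), bits)
    for bits in itertools.product([0, 1], repeat=n * n)
)
print("minimum QUBO energy:", best_energy)           # equals the optimal tour length
print("assignment x[i, t]:\n", np.array(best_bits).reshape(n, n))
```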


2021
Vol 2090 (1), pp. 012137
Author(s): F Lucchini, N Marconato

Abstract. In this paper, a comparison between two current-based integral equation approaches for eddy current problems is presented. In particular, the well-known and widely adopted loop-current formulation (or electric vector potential formulation) is compared to the less common J-φ formulation. Pros and cons of the two formulations with respect to problem size are discussed, as well as the adoption of low-rank approximation techniques. Although rarely considered in the literature, it is shown that the J-φ formulation may offer some useful advantages when large problems are considered. Indeed, for large-scale problems, while the computational efforts required by the two formulations are comparable, the J-φ formulation does not require any particular attention when non-simply connected domains are considered.
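As a rough illustration of the low-rank approximation techniques mentioned above, the sketch below compresses a dense far-field interaction block with a truncated SVD. The 1/r kernel and random point clouds are stand-ins for the actual eddy-current integral operator and are chosen only to produce a similarly compressible block.

```python
# Low-rank compression of a dense far-field interaction block via truncated SVD;
# the 1/r kernel and point clouds stand in for the eddy-current integral operator.
import numpy as np

rng = np.random.default_rng(2)
src = rng.uniform(0.0, 1.0, size=(200, 3))          # source points
obs = rng.uniform(5.0, 6.0, size=(200, 3))          # well-separated observation points

r = np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=-1)
A = 1.0 / r                                         # dense interaction block

U, S, Vt = np.linalg.svd(A, full_matrices=False)
tol = 1e-8
k = int(np.sum(S > tol * S[0]))                     # numerical rank at tolerance tol
A_k = (U[:, :k] * S[:k]) @ Vt[:k]

print(f"rank {k} of {A.shape[0]} keeps relative error "
      f"{np.linalg.norm(A - A_k) / np.linalg.norm(A):.1e}")
```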


Author(s): Asma Elyounsi, Hatem Tlijani, Mohamed Salim Bouhlel

Traditional neural networks are very diverse and have been used over the last decades for data classification. Networks such as the multilayer perceptron (MLP), back-propagation neural networks (BPNN), and feed-forward networks have shown an inability to scale with problem size and suffer from slow convergence. To overcome these drawbacks, higher-order neural networks (HONNs) have been proposed: they add input units, along with a stronger functioning of the other neural units in the network, and easily transform these input units into hidden layers. In this paper, a new metaheuristic method, the Firefly Algorithm (FFA), is applied to calculate the optimal weights of a Functional Link Artificial Neural Network (FLANN), using the flashing behaviour of fireflies, in order to classify ISA-Radar targets. The average classification accuracy of FLANN-FFA, which reached 96%, shows the efficiency of the approach compared to the other tested methods.
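The sketch below illustrates the FLANN side of this approach: a trigonometric functional expansion of the inputs feeding a single trainable linear layer. The weights are fitted here by least squares purely as a stand-in for the Firefly-algorithm search used in the paper, and the data is synthetic rather than ISA-Radar returns.

```python
# Sketch of a Functional Link ANN: a trigonometric expansion of the inputs feeds a
# single linear output layer. Least squares stands in for the Firefly-algorithm
# weight search used in the paper; the data below is synthetic.
import numpy as np

rng = np.random.default_rng(3)

def flann_expand(X):
    # Functional expansion: x, sin(pi x), cos(pi x) per feature, plus a bias column.
    feats = [X, np.sin(np.pi * X), np.cos(np.pi * X)]
    return np.hstack([np.ones((len(X), 1))] + feats)

X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float)   # toy two-class target

Phi = flann_expand(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # FFA would search for w instead
accuracy = np.mean((Phi @ w > 0.5) == y)
print(f"training accuracy of the FLANN sketch: {accuracy:.2f}")
```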


2021
Author(s): Ilaria Berteletti, Sarah E. Kimbley, SaraBeth Sullivan, Lorna C. Quandt, Makoto Miyakoshi

In this study, we investigate the impact of experience with a signed language on the neurocognitive processes recruited by adults solving single-digit arithmetic problems. We use event-related potentials (ERPs) to identify the components that are modulated by problem size and operation type in Deaf native signers of American Sign Language (ASL) as well as in hearing English-speaking participants. Participants were presented with subtraction and multiplication problems in a delayed verification task. Problem size was manipulated across small and large conditions, with an additional extra-large subtraction condition to equate the overall magnitude with that of the large multiplication problems. Results show overall comparable behavioral performance across groups and similar ERP dissociations between operation types. First, an early operation-type effect is observed between 180 ms and 210 ms post problem onset, suggesting that both groups show a similar attentional differentiation when processing subtraction and multiplication problems. Second, on the posterior-occipital component between 240 ms and 300 ms, only subtraction problems show modulation with problem size in both groups, suggesting that only this operation recruits quantity-related processes. Control analyses rule out perceptual and magnitude-related explanations for this effect. These results are the first evidence that the two operations rely on distinct cognitive processes within the ASL native signing population, and that this distinction is equivalent to the one observed in the English-speaking population.


2021
Author(s): Christian Kroer, Alexander Peysakhovich, Eric Sodomka, Nicolas E. Stier-Moses

Computing market equilibria is an important practical problem for market design, for example, in fair division of items. However, computing equilibria requires large amounts of information, often the valuation of every buyer for every item, and computing power. In “Computing Large Market Equilibria Using Abstractions,” the authors study abstraction methods for ameliorating these issues. The basic abstraction idea is as follows. First, construct a coarsened abstraction of a given market, then solve for the equilibrium in the abstraction, and finally, lift the prices and allocations back to the original market. The authors show theoretical guarantees on the solution quality obtained via this approach. Then, two abstraction methods of interest for practitioners are introduced: (1) filling in unknown valuations using techniques from matrix completion and (2) reducing the problem size by aggregating groups of buyers/items into smaller numbers of representative buyers/items and solving for equilibrium in this coarsened market.
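A rough sketch of abstraction method (2) is shown below: buyers with similar valuation vectors are clustered and each cluster is replaced by a representative buyer with aggregated valuations and budget. The random valuation matrix, cluster count, and use of scikit-learn's KMeans are assumptions made for illustration; solving the abstracted market's equilibrium (e.g. via the Eisenberg–Gale convex program) is not shown.

```python
# Sketch of buyer aggregation: cluster buyers with similar valuation vectors and
# replace each cluster by one representative buyer with aggregated valuations and
# budget. The valuation matrix is random and purely illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n_buyers, n_items, k = 200, 10, 20
V = rng.random((n_buyers, n_items))              # V[b, i] = buyer b's value for item i
budgets = np.ones(n_buyers)

labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(V)

V_abs = np.vstack([V[labels == c].mean(axis=0) for c in range(k)])
budgets_abs = np.array([budgets[labels == c].sum() for c in range(k)])
print("abstracted market:", V_abs.shape, "representative buyers;",
      "first budgets:", budgets_abs[:3])
```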

