Towards Rapid Redesign: Decomposition Patterns for Large-Scale and Complex Redesign Problems

Author(s):  
Li Chen ◽  
Simon Li ◽  
Ashish Macwan

In an effort to develop a decomposition-based rapid redesign methodology, this paper introduces decomposition patterns as the basis of such a methodology for general redesign problems that are computation-intensive and simulation-complex. In particular, through pattern representation and quantification, this paper elaborates the role and utility of decomposition patterns in decomposition-based rapid redesign. In pattern representation, it shows how a decomposition pattern can capture and portray the intrinsic properties of a redesign problem. Through pattern synthesis, a collection of proper decomposition patterns thus allows one to represent concisely the complete body of redesign knowledge covering all redesign problem types. In pattern quantification, it shows how a decomposition pattern can extract and convey quantitative information about a redesign problem via the pattern characteristics. Through pattern analysis, the formulation of an index incorporating two redesign metrics thus allows one to predict, in a simple and efficient manner, the amount of potential redesign effort for a given redesign problem. This work represents a breakthrough in extending the decomposition-based solution approach to computational redesign problems.
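The abstract does not define the two redesign metrics, so the sketch below is purely illustrative: it combines two invented stand-in metrics (change reach and coupling density, neither taken from the paper) over a boolean design dependency matrix into a single scalar effort index.

```python
import numpy as np

def redesign_effort_index(ddm: np.ndarray, affected: list[int]) -> float:
    """Illustrative effort index for a redesign pattern.

    ddm      -- boolean design dependency matrix (ddm[i, j] = True when
                design task i depends on the output of task j)
    affected -- indices of the tasks invalidated by the design change

    Both metrics below are hypothetical stand-ins for the paper's two
    (unnamed) redesign metrics, not the authors' definitions.
    """
    n = ddm.shape[0]
    # Metric 1: reach -- fraction of tasks that must be re-computed,
    # found by propagating the change through the dependency graph.
    dirty, frontier = set(affected), list(affected)
    while frontier:
        j = frontier.pop()
        for i in range(n):
            if ddm[i, j] and i not in dirty:
                dirty.add(i)
                frontier.append(i)
    reach = len(dirty) / n
    # Metric 2: coupling density among the re-computed tasks, a proxy
    # for how strongly the affected portion is internally coupled.
    idx = sorted(dirty)
    sub = ddm[np.ix_(idx, idx)]
    density = sub.sum() / max(len(idx) ** 2, 1)
    # Combine the two metrics into a single scalar index in [0, 1].
    return reach * (1.0 + density) / 2.0
```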

2021 ◽  
Vol 13 (9) ◽  
pp. 5108
Author(s):  
Navin Ranjan ◽  
Sovit Bhandari ◽  
Pervez Khan ◽  
Youn-Sik Hong ◽  
Hoon Kim

The transportation system, especially the road network, is the backbone of any modern economy. However, with rapid urbanization, congestion levels have surged drastically, directly affecting the quality of urban life, the environment, and the economy. In this paper, we propose (i) an inexpensive and efficient Traffic Congestion Pattern Analysis algorithm, based on image processing, which identifies the group of roads in a network that suffers from recurring congestion; and (ii) a deep neural network architecture, built on a convolutional autoencoder, which learns both spatial and temporal relationships from a sequence of image data to predict the city-wide grid congestion index. Our experiments show that both algorithms are efficient: the pattern analysis relies only on basic arithmetic operations, while the prediction algorithm outperforms two other deep neural networks (a convolutional recurrent autoencoder and ConvLSTM) in large-scale traffic network prediction performance. A case study was conducted on a dataset from the city of Seoul.
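For a concrete picture of such an architecture, the sketch below is a minimal convolutional autoencoder in Keras that maps a short history of congestion-grid frames to the next frame; the 32x32 grid and 4-frame input window are assumptions made here, not values from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

GRID, HISTORY = 32, 4  # assumed grid size and input window

inputs = tf.keras.Input(shape=(GRID, GRID, HISTORY))  # past frames as channels
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)    # encoder
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
outputs = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)       # next frame

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
# model.fit(past_frames, next_frames, epochs=..., batch_size=...)
```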


2018 ◽  
Author(s):  
Bret Shandro ◽  
Pascal Haegeli

The snow and avalanche climate types maritime, continental, and transitional are well established and have been used extensively to characterize the general nature of avalanche hazard at a location, to study interseasonal and large-scale spatial variabilities, and to provide context for the design of avalanche safety operations. While researchers and practitioners have an experience-based understanding of the avalanche hazard associated with the three climate types, no studies have described the hazard character of an avalanche climate in detail. Since the 2009/10 winter, the consistent use of Statham et al.'s (2017) conceptual model of avalanche hazard in public avalanche bulletins in Canada has created a new quantitative record of avalanche hazard that offers novel opportunities for addressing this knowledge gap. We identified typical daily avalanche hazard situations using self-organizing maps (SOMs) and then calculated seasonal prevalence values for these situations. This approach produces a concise characterization measure that is conducive to statistical analyses, yet still provides a comprehensive picture that is informative for avalanche risk management through its link to avalanche problem types. Hazard situation prevalence values for individual seasons, elevation bands, and forecast regions provide unprecedented insight into the interseasonal and spatial variability of avalanche hazard in western Canada.
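As an illustration of this workflow, the sketch below clusters daily hazard vectors with a SOM (using the third-party MiniSom package) and computes prevalence as the fraction of days assigned to each map node; the feature encoding, map size, and training settings are assumptions, not the authors' configuration.

```python
import numpy as np
from minisom import MiniSom  # third-party package: pip install minisom

# Placeholder data: 200 forecast days x 8 hazard features (the real
# features would encode avalanche problem types, likelihood, size, etc.).
days = np.random.rand(200, 8)

# Fit a 4x5 map whose nodes act as "typical hazard situations".
som = MiniSom(4, 5, days.shape[1], sigma=1.0, learning_rate=0.5, random_seed=1)
som.train_random(days, 5000)

# Seasonal prevalence: fraction of days mapped to each SOM node.
counts = {}
for day in days:
    node = som.winner(day)            # best-matching unit for this day
    counts[node] = counts.get(node, 0) + 1
prevalence = {node: c / len(days) for node, c in counts.items()}
```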


2017 ◽  
Vol 5 (38) ◽  
pp. 20277-20288 ◽  
Author(s):  
Yong Li ◽  
Zhaozhu Zhang ◽  
Mengke Wang ◽  
Xuehu Men ◽  
Qunji Xue

Repairable and antifouling coatings were prepared via a self-assembly method, without destroying the intrinsic properties of the substrates, aiming to tackle the low transparency and poor durability of current coatings.


2015 ◽  
Vol 4 (3) ◽  
pp. 1 ◽  
Author(s):  
P. R. Wilson

Fault current limiters (FCLs) are used in a wide array of applications, from small-scale circuit protection at low power levels to large-scale, high-power applications that require superconductors and complex control circuitry. One advantage of passive FCLs is an automatic behavior that depends on the intrinsic properties of the circuit elements rather than on a complex feedback control scheme, making the approach attractive for low-cost applications and for those where reliability is critical. This paper describes the behavioral modeling of a passive magnetic FCL and its potential application in practical circuits.
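As a rough illustration of what "automatic behavior" means here, the sketch below models the FCL as a saturable series inductance whose value rises steeply once the line current leaves the bias-saturated region; all parameter values are invented for illustration and are not taken from the paper.

```python
import numpy as np

def fcl_inductance(i, L_sat=1e-4, L_unsat=1e-1, i_knee=100.0, k=0.1):
    """Smooth transition from saturated (low) to unsaturated (high) inductance."""
    return L_sat + (L_unsat - L_sat) / (1.0 + np.exp(-k * (abs(i) - i_knee)))

def simulate(v_src, R=5.0, dt=1e-6, steps=20000):
    """Explicit-Euler solve of v = R*i + L(i)*di/dt for a series R-FCL branch."""
    i, out = 0.0, []
    for n in range(steps):
        i += (v_src(n * dt) - R * i) / fcl_inductance(i) * dt
        out.append(i)
    return np.array(out)

# Fault at t = 5 ms: the source amplitude jumps tenfold, but the rising
# inductance automatically limits the surge without any control loop.
current = simulate(lambda t: (325.0 if t < 5e-3 else 3250.0) * np.sin(2 * np.pi * 50 * t))
```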


2020 ◽  
pp. 258-270
Author(s):  
Gershon Kurizki ◽  
Goren Gordon

Henry and Eve have finally tested their quantum computer (QC) with resounding success! It may enable much faster and better modelling of complex pharmaceutical designs, long-term weather forecasts, or brain-process simulations than classical computers. A 1,000-qubit QC can process in a single step 2^1000 possible superposition states: its speedup is exponential in the number of qubits. Yet this wondrous promise requires overcoming the enormous hurdle of decoherence, which is why progress towards a large-scale QC has been painstakingly slow. To their dismay, their QC is "expropriated for the quantum revolution" in order to share quantum information among all mankind and thus impose a collective entangled state of mind. They set out to foil this totalitarian plan and restore individuality by decohering the quantum information channel. The appendix to this chapter provides a flavor of QC capabilities through a quantum algorithm that can solve problems exponentially faster than classical computers.
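The scaling claim is easy to check numerically: an n-qubit register is described by 2^n complex amplitudes, so each added qubit doubles the state space manipulated in a single step.

```python
# Back-of-the-envelope check of the chapter's 2^1000 figure.
n_qubits = 1000
amplitudes = 2 ** n_qubits      # exact integer arithmetic in Python
print(len(str(amplitudes)))     # ~302 digits: far beyond classical storage
```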


2019 ◽  
Vol 2019 ◽  
pp. 1-15
Author(s):  
Binbin Xiang ◽  
Congsi Wang ◽  
Peiyuan Lian

In this paper, a method based on the Zernike distribution and optical aberration theory is proposed to investigate how the distribution characteristics of surface distortions of a reflector antenna affect its electromagnetic performance (EMP). For large-scale errors, an analytical model based on the orthogonal Zernike polynomials is introduced to describe arbitrary distortions. The effects of the error distributions described by the Zernike series on typical EMP are analyzed. The numerical results indicate that distortions with the distribution feature of defocus or spherical aberration have a greater impact on gain; the distribution features of tilt and coma mainly influence boresight offset; and the distribution features of defocus, astigmatism, and spherical aberration have a greater impact on sidelobe levels. The results also indicate that the beam contour patterns are related to the distribution forms of the distortions and are similar for the same aberration feature. On the basis of the Seidel aberrations, the relationships between typical EMP and the aberration coefficients are presented. From these, the error profile with the primary influence and the approximate EMP behavior can be determined, and the antenna performance can be predicted in a simple manner.
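To make the Zernike description concrete, the sketch below evaluates an orthonormal defocus term over the aperture and estimates the associated gain loss with the classical Ruze formula; Ruze is a well-known stand-in here, not the paper's more detailed EMP relations, and the distortion amplitude and wavelength are assumed values.

```python
import numpy as np

def zernike_defocus(rho):
    """Z_2^0 defocus term in orthonormal form: sqrt(3) * (2*rho^2 - 1)."""
    return np.sqrt(3.0) * (2.0 * rho**2 - 1.0)

# Sample the unit-disk aperture on a polar grid.
rho = np.linspace(0.0, 1.0, 200)
theta = np.linspace(0.0, 2.0 * np.pi, 200)
R, _ = np.meshgrid(rho, theta)

amplitude_mm = 0.5                              # assumed distortion amplitude
surface = amplitude_mm * zernike_defocus(R)     # surface error map [mm]
rms = np.sqrt(np.average(surface**2, weights=R))  # area-weighted RMS over the disk

wavelength_mm = 30.0                            # assumed 10 GHz operation
gain_ratio = np.exp(-(4.0 * np.pi * rms / wavelength_mm) ** 2)  # Ruze formula
print(f"RMS error {rms:.3f} mm -> relative gain {gain_ratio:.3f}")
```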


2020 ◽  
Vol 12 (2) ◽  
pp. 581 ◽  
Author(s):  
Yiyong Xiao ◽  
Pei Yang ◽  
Siyue Zhang ◽  
Shenghan Zhou ◽  
Wenbing Chang ◽  
...  

This paper studies the cyclic dynamic gaming case of the r-interdiction median problem with fortification (CDGC-RIMF), which is important for strengthening a facility's reliability and invulnerability under various possible attacks. We formulated the CDGC-RIMF as a bi-objective mixed-integer linear programming (MILP) model with two opposing goals: the designer (leader) minimizes the loss while the attacker (follower) maximizes it. The first goal was to identify the most cost-effective plan to build and fortify the facility with minimum loss, whereas the attacker followed the designer and sought the most destructive attack to cause maximum loss. We found that the two sides could not reach a static equilibrium with a single pair of confrontational plans in the ordinary case, but could reach a dynamically cyclic equilibrium when the plan involved multiple pairs. The proposed bi-objective model aimed to discover the optimal cyclic plans for both sides to reach this dynamic equilibrium. To solve the problem, we started from the designer's side with a design and fortification plan; the attacker then generated their worst-case attack against that design. The designer next revised their plan to minimize loss under that attack, and the attacker correspondingly modified theirs to achieve maximum loss. This game looped until a cyclic equilibrium was reached. The equilibrium was deemed optimal for both sides because whichever side deviated from it first would always incur more loss. The game constitutes a complete-information game whose solution is a subgame-perfect Nash equilibrium. The proposed bi-objective model was solved directly with the CPLEX solver, yielding optimal solutions for small problems and near-optimal feasible solutions for larger ones. Furthermore, for large-scale problems, we developed a heuristic algorithm implementing dynamic iterative partial optimization alongside MILP (DIPO-MILP), which showed better performance than the CPLEX solver on large-scale instances.
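The looping procedure can be summarized schematically as below; the two best-response calls stand in for the designer's and attacker's MILPs (solved with CPLEX in the paper) and are deliberately left abstract in this sketch.

```python
def find_cyclic_equilibrium(initial_design, solve_designer, solve_attacker,
                            max_rounds=1000):
    """Alternate best responses until a previously seen state recurs.

    solve_designer(attack) -> best design/fortification against that attack
    solve_attacker(design) -> worst-case attack against that design
    Designs and attacks are assumed hashable/comparable (e.g., tuples).
    """
    design = initial_design
    history = []                       # sequence of (design, attack) pairs
    for _ in range(max_rounds):
        attack = solve_attacker(design)
        if (design, attack) in history:
            start = history.index((design, attack))
            return history[start:]     # the repeating cycle = cyclic plans
        history.append((design, attack))
        design = solve_designer(attack)
    raise RuntimeError("no cyclic equilibrium found within max_rounds")
```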


Author(s):  
Li Chen ◽  
Ashish Macwan

This paper presents our continued research efforts towards developing a decomposition-based solution approach for rapid computational redesign to support agile manufacturing of evolutionary products. By analogy with the practices used for physical machines, the proposed approach involves two general steps: diagnosis and repair. This paper focuses on the diagnosis step, for which a two-phase decomposition method is developed. The first phase, called design dependency analysis, systematizes and reorganizes the intrinsic coupling structure of the existing design model by analyzing and reordering the design dependency matrix (DDM) that represents the functional dependencies and couplings inherent in the design model. The second phase, called redesign partitioning analysis, uses this result to generate alternative redesign pattern solutions through a three-stage procedure. Each pattern solution delimits the portions of the design model that need to be re-computed. An example problem concerning the redesign of an automobile powertrain illustrates the method. Our seed paper presented a method for selecting the optimal redesign pattern solution from the alternatives generated by redesign partitioning analysis, and a sequel paper will discuss how to generate the corresponding re-computation strategy and redesign plan (the redesign shortcut roadmap).
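As a rough illustration of the first phase, the sketch below reorders a toy boolean DDM so that coupled tasks cluster into blocks; SciPy's reverse Cuthill-McKee routine is used purely as a readily available stand-in for the paper's own dependency-analysis procedure.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import reverse_cuthill_mckee

# Toy DDM: entry (i, j) = 1 when design task i uses the output of task j.
ddm = np.array([[1, 0, 0, 1, 0],
                [0, 1, 1, 0, 0],
                [0, 1, 1, 0, 0],
                [1, 0, 0, 1, 0],
                [0, 0, 1, 0, 1]])

# Permute rows and columns so that coupled tasks end up adjacent,
# exposing the block structure that partitioning would then exploit.
perm = reverse_cuthill_mckee(csr_matrix(ddm), symmetric_mode=False)
reordered = ddm[np.ix_(perm, perm)]
print(reordered)
```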


Universe ◽  
2019 ◽  
Vol 5 (4) ◽  
pp. 92 ◽  
Author(s):  
Jérôme Martin

According to the theory of cosmic inflation, the large-scale structures observed in our Universe (galaxies, clusters of galaxies, cosmic microwave background (CMB) anisotropies, ...) are of quantum-mechanical origin. They are nothing but vacuum fluctuations, stretched to cosmological scales by the cosmic expansion and amplified by gravitational instability. At the end of inflation, these perturbations are placed in a two-mode squeezed state with the strongest squeezing ever produced in Nature (much larger than anything that can be made in the laboratory on Earth). This article studies whether astrophysical observations could unambiguously reveal this quantum origin by borrowing ideas from quantum information theory. It is argued that some of the tools needed to carry out this task were discussed long ago by J. Bell in a so far largely unrecognized contribution. A detailed study of his paper and of the criticisms that have been put forward against his work is presented. Although J. Bell could not have realized it when he wrote his letter, since the quantum state of cosmological perturbations had not yet been fully characterized at that time, it is also shown that cosmology and cosmic inflation represent the most interesting frameworks in which to apply the concepts he investigated. This confirms that cosmic inflation is not only a successful paradigm for understanding the early Universe. It is also the only situation in Physics where one crucially needs General Relativity and Quantum Mechanics to derive the predictions of a theory and where, at the same time, we have high-accuracy data to test those predictions, making inflation a playground of utmost importance for discussing foundational issues in Quantum Mechanics.

