Large-Scale Analog Circuit Evolutionary Design Using a Real-Coded Scheme

2012 ◽  
Vol 220-223 ◽  
pp. 2036-2039
Author(s):  
Su Min Jiao ◽  
Cai Hong Wang ◽  
Xue Mei Wang

Analog circuits are of great importance in electronic system design, yet recent evolutionary design results have mostly been limited to small-scale circuits. This paper proposes a real-coded mechanism and applies it to large-scale analog circuit evolutionary design. The proposed scheme maps the circuit topology and sizing into a unified continuous space, in which the circuit representation is closed and causal. Experimental results show that the scheme works successfully on many analog circuits with different kinds of characteristics. Compared with earlier evolutionary methods, the proposed scheme performs better on large-scale circuit synthesis problems, with higher search efficiency, lower computational complexity, and less computing time.
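The real-coded idea can be illustrated with a generic sketch: each candidate design lives in one continuous vector (in the paper's setting it would encode topology and component sizes), and evolution proceeds by blend crossover and Gaussian mutation. The encoding, operators, and parameters below are illustrative assumptions, not the authors' actual scheme.

```python
import numpy as np

def evolve(fitness, dim, pop_size=30, gens=100, seed=1):
    """Generic real-coded evolutionary loop: truncation selection,
    blend (BLX-alpha style) crossover, and Gaussian mutation on
    continuous vectors.  `fitness` is minimized."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, (pop_size, dim))
    for _ in range(gens):
        scores = np.array([fitness(x) for x in pop])
        elite = pop[np.argsort(scores)[: pop_size // 2]]
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        w = rng.uniform(-0.25, 1.25, (pop_size, dim))      # blend weights
        children = w * parents[:, 0] + (1 - w) * parents[:, 1]
        children += rng.normal(0.0, 0.05, children.shape)  # mutation
        children[0] = elite[0]                             # elitism
        pop = children
    scores = np.array([fitness(x) for x in pop])
    return pop[np.argmin(scores)]

# Toy "sizing" task: drive a 4-parameter vector towards 0.3 in each entry.
best = evolve(lambda x: float(np.sum((x - 0.3) ** 2)), dim=4)
```

In a circuit setting the fitness would come from simulating the decoded circuit; the continuous encoding is what keeps the representation closed under these operators.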

Author(s):  
Alexander Zemliak

In this chapter, the problem of reducing computer time for the optimization of large electronic systems is discussed. This is one of the essential problems in improving design quality, and it is addressed by means of a generalized methodology for analog network optimization based on a control-theory formulation. The central idea of this methodology is a special control vector that steers the optimization process and yields a solution of the optimization problem in minimal computing time. The construction of an optimal control vector is carried out on the basis of the direct Lyapunov method: a Lyapunov function of the optimization process is proposed to analyze the stability of optimization trajectories. This function makes it possible to analyze the stability of various strategies and serves as the basis for the search for an optimal design algorithm.
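A minimal illustration of the Lyapunov viewpoint, assuming a plain gradient-descent process rather than the chapter's generalized methodology: the optimization trajectory is treated as a discrete dynamical system, and the objective itself serves as a Lyapunov function whose monotone decrease certifies stability of the trajectory.

```python
import numpy as np

def descend(grad, f, x0, lr=0.1, steps=200):
    """Gradient descent viewed as a discrete dynamical system
    x_{k+1} = x_k - lr * grad(x_k).  The objective f acts as a
    Lyapunov function: if V_k = f(x_k) decreases monotonically along
    the trajectory, the trajectory is stable in the sense of
    Lyapunov's direct method."""
    x = np.asarray(x0, dtype=float)
    V = [f(x)]
    for _ in range(steps):
        x = x - lr * grad(x)
        V.append(f(x))
    return x, V

# Quadratic test problem: f(x) = ||x||^2, grad f = 2x.
x_star, V = descend(lambda x: 2 * x, lambda x: float(x @ x), [2.0, -1.0])
stable = all(v1 <= v0 for v0, v1 in zip(V, V[1:]))  # Lyapunov decrease
```

With an unstable choice (e.g. `lr` large enough that the map is expanding), the same monitor reports non-monotone `V`, which is exactly how the function discriminates between strategies.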


2018 ◽  
Vol 12 (3) ◽  
pp. 297-307 ◽  
Author(s):  
Takashi Tanizaki ◽  
Hideki Katagiri ◽  
António Oliveira Nzinga René

This paper proposes scheduling algorithms using metaheuristics for production processes in which cranes can interfere with each other. Many production processes in the manufacturing industry involve cranes, for example in the steel industry, so a general-purpose algorithm for this problem is of practical use. The scheduling problem for such processes is very complicated and difficult to solve, because the cranes must avoid interfering with each other and each machine has its own operational constraints. Although several algorithms have been proposed for specific or small-scale problems, general-purpose algorithms that can produce solutions in real time (about 30 minutes or less) within a company's production planning work have not been developed for large-scale problems. This paper develops several metaheuristic algorithms that obtain suboptimal solutions in a short time and confirms their effectiveness through computer experiments.
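The distinctive constraint here is crane interference. Below is a hedged sketch of such a feasibility check, with a hypothetical data layout (piecewise-linear crane trajectories on a shared rail); it is not the paper's model.

```python
def cranes_interfere(traj_a, traj_b, safety=1.0):
    """Interference check for two cranes on a shared rail.  Each
    trajectory is a list of (time, position) breakpoints with linear
    motion in between; crane A is assumed to stay on the lower-position
    side of crane B.  Because the gap between two piecewise-linear
    positions is itself piecewise linear, its minimum is attained at a
    breakpoint, so checking the merged breakpoints suffices."""
    times = sorted({t for t, _ in traj_a} | {t for t, _ in traj_b})

    def pos(traj, t):  # piecewise-linear interpolation
        for (t0, p0), (t1, p1) in zip(traj, traj[1:]):
            if t0 <= t <= t1:
                return p0 + (p1 - p0) * (t - t0) / (t1 - t0)
        return traj[-1][1]

    return any(pos(traj_b, t) - pos(traj_a, t) < safety for t in times)

a = [(0, 0.0), (10, 5.0)]
b = [(0, 8.0), (10, 5.5)]   # the cranes end only 0.5 apart
clash = cranes_interfere(a, b)
```

A scheduler built on top of such a check repairs or penalizes any candidate schedule whose implied trajectories interfere.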


Author(s):  
Rituraj Singh ◽  
Krishna M. Singh

In recent years, significant research effort has been invested in the development of mesh-free methods for different types of continuum problems. Prominent among these are the element-free Galerkin (EFG) method, the reproducing kernel particle method (RKPM), and the meshless local Petrov-Galerkin (MLPG) method. Most of these methods employ a set of nodes to discretize the problem domain and use a moving least squares (MLS) approximation to generate shape functions. Of these, the MLPG method is seen as a pure meshless method, since it does not require any background mesh. The accuracy and flexibility of the MLPG method are well established for a variety of continuum problems. However, most applications have been limited to small-scale problems solvable on serial machines. Very few attempts have been made to apply it to large-scale problems, which typically involve many millions (or even billions) of nodes and require parallel algorithms based on domain decomposition. Such parallel techniques are well established for mesh-based methods; extending them to the MLPG method requires considerable further research. The objective of this paper is to spell out the challenges that need urgent attention to enable the application of meshless methods to large-scale problems. We specifically address the solution of large-scale linear systems, which necessarily requires iterative solvers, and focus on the application of the BiCGSTAB method with an appropriate set of preconditioners to the solution of the MLPG system.
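As a concrete illustration of the solver combination named above, here is a pure-NumPy sketch of BiCGSTAB with a simple Jacobi (diagonal) preconditioner. A production MLPG code would use sparse storage and stronger preconditioners (e.g. incomplete factorizations), so treat this as a minimal sketch rather than the authors' implementation.

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, max_iter=500):
    """BiCGSTAB with a Jacobi (diagonal) preconditioner M = diag(A),
    applied as an elementwise scaling.  Dense, pure-NumPy sketch."""
    Minv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    r_hat = r.copy()                      # fixed shadow residual
    rho = alpha = omega = 1.0
    p = np.zeros_like(b)
    v = np.zeros_like(b)
    for _ in range(max_iter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        y = Minv * p                      # preconditioning step
        v = A @ y
        alpha = rho / (r_hat @ v)
        s = r - alpha * v
        z = Minv * s                      # preconditioning step
        t = A @ z
        omega = (t @ s) / (t @ t)
        x = x + alpha * y + omega * z
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            break
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50)) + 50.0 * np.eye(50)  # diagonally dominant
b = rng.standard_normal(50)
x = bicgstab(A, b)
```

For MLPG systems, which are generally nonsymmetric, BiCGSTAB is a natural choice precisely because it needs only matrix-vector products and a cheap preconditioner application per iteration.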


Geophysics ◽  
1984 ◽  
Vol 49 (10) ◽  
pp. 1675-1689 ◽  
Author(s):  
Derbew Messfin ◽  
Wooil Moon

This study investigates the feasibility of applying seismic techniques to the search for ore deposits, with particular emphasis on locating orebodies at great depths. The basic procedure was essentially a study of the forward problem, whereby the effects of subsurface structure in a typical mining district were thoroughly examined. The initial stage of the study was devoted to determining elastic parameters by laboratory measurement of the seismic velocities and densities of core samples obtained from the Sudbury basin, Canada. By virtue of its ability to handle lateral as well as vertical inhomogeneities, its fast computing time, and its flexibility, asymptotic ray theory was judged the most suitable approach for studying the effect of geologic structures typically found in the Sudbury basin. Both large-scale and small-scale models, representing actual geologic conditions in Sudbury, were constructed. The computed seismic response of the large-scale models shows that the micropegmatite/oxide-rich quartz gabbro and the mafic norite/granite gneiss contacts produce substantially strong reflections, indicating that these two interfaces can serve as marker horizons in future seismic surveys. In the small-scale models of mineralized structures, the sulfide body was outlined by distinctly high reflection amplitudes. Both the traveltime and the dynamic characteristics of these models have features that are indicative of the presence of mineralized structures.


1991 ◽  
Vol 01 (02) ◽  
pp. 149-176 ◽  
Author(s):  
KRZYSZTOF WAWRYN

This article deals with a new approach to intelligent analog circuit design. The iterative closed-loop design methodology adopts an expert-system approach for topological synthesis, the SPICE circuit simulator to evaluate circuit performance, and a new diagnostic expert system to provide advice on how to improve the design. Unlike previous design methods, this approach introduces a formal circuit representation for both the numerical and the heuristic knowledge of the design system. A predicate-logic circuit representation is proposed to introduce a new concept of a formal analog circuit description language. The language syntax and semantics provide a precise symbolic description of analog circuit functionality at different levels of hierarchy, together with connectivities and transistor sizes of CMOS circuits at the transistor level. Different levels of hierarchy, with circuit structures and performance parameters, are presented in detail. It is shown how sentence-conversion rules of the language grammar can be used to derive transistor-level circuits from input performance specifications through all intermediate levels of hierarchy. The implementation of the methodology and associated experimental results for CMOS operational amplifier designs are presented.


Geophysics ◽  
2014 ◽  
Vol 79 (4) ◽  
pp. A33-A38 ◽  
Author(s):  
Valeria Paoletti ◽  
Per Christian Hansen ◽  
Mads Friis Hansen ◽  
Maurizio Fedi

In potential-field inversion, careful management of the singular value decomposition components is crucial for obtaining information about the source distribution with respect to depth. In principle, the depth-resolution plot provides a convenient visual tool for this analysis, but its computational cost has hitherto prevented application to large-scale problems. To analyze depth resolution in such problems, we developed a variant, ApproxDRP, which is based on an iterative algorithm and is therefore suited for large-scale problems: it avoids matrix factorizations and the associated demands on memory and computing time. We used ApproxDRP to study the retrievable depth resolution in inversion of the gravity field of the Neapolitan Volcanic Area. Our main contribution is the combined use of the Lanczos bidiagonalization algorithm, established in the scientific computing community, and the depth-resolution plot, defined in the geoscience community.
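The factorization-free property that makes such an approach feasible comes from Lanczos (Golub-Kahan) bidiagonalization, which touches the system matrix only through matrix-vector products. A generic sketch of the recurrence (not the authors' code):

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Lanczos (Golub-Kahan) bidiagonalization:
    A @ Vk = U_{k+1} @ Bk, with Bk lower bidiagonal.  A enters only
    through products A @ v and A.T @ u, so no factorization of A and
    no storage beyond a few vectors are needed -- the property that
    makes an approximate depth-resolution analysis tractable at
    large scale."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A.T @ U[:, j]
        if j > 0:
            w -= B[j, j - 1] * V[:, j - 1]
        B[j, j] = np.linalg.norm(w)          # alpha_j (diagonal)
        V[:, j] = w / B[j, j]
        u = A @ V[:, j] - B[j, j] * U[:, j]
        B[j + 1, j] = np.linalg.norm(u)      # beta_{j+1} (subdiagonal)
        U[:, j + 1] = u / B[j + 1, j]
    return U, B, V

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 25))
b = rng.standard_normal(40)
U, B, V = golub_kahan(A, b, 10)
# Verify the three-term recurrence: A @ V == U @ B (to rounding).
ok = np.allclose(A @ V, U @ B)
```

The small bidiagonal matrix `B` then stands in for the large system when approximating SVD-based quantities such as a depth-resolution plot.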


2018 ◽  
Vol 2018 ◽  
pp. 1-22 ◽  
Author(s):  
Ziyan Feng ◽  
Chengxuan Cao ◽  
Yutong Liu ◽  
Yaling Zhou

This paper focuses on the train routing problem at a high-speed railway station, with the aim of improving station capacity and operational efficiency. We first describe a node-based railway network, defining turnout nodes and arrival-departure line nodes for the mathematical formulation. Considering both potential train collisions and the convenience of passenger transfers in the station, the train routing problem is formulated as a multiobjective mixed-integer nonlinear programming model that minimizes trains' departure-time deviations and the total occupation time of all tracks while keeping the utilization of arrival-departure lines as balanced as possible. Because the large-scale real-life train routing problem involves a massive number of decision variables, a fast heuristic algorithm based on tabu search is proposed to solve it. Two sets of numerical experiments demonstrate the rationality and effectiveness of the proposed method: a small-scale case confirms the accuracy of the algorithm, and the heuristic obtains solutions of excellent quality within 254 seconds of computing time on a standard personal computer for a large-scale station involving up to 17 arrival-departure lines and 46 trains.
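The abstract names tabu search as the engine of the heuristic. Below is a generic, hedged sketch of tabu search over permutations with a swap neighbourhood and an aspiration criterion; the paper's problem-specific cost, neighbourhood, and parameters are replaced by placeholders.

```python
import random

def tabu_search(cost, n, iters=300, tenure=7, seed=0):
    """Generic tabu search over permutations of range(n) with a swap
    neighbourhood.  A swap stays tabu for `tenure` iterations unless
    it beats the best solution found so far (aspiration criterion)."""
    rng = random.Random(seed)
    cur = list(range(n))
    rng.shuffle(cur)
    best, best_cost = cur[:], cost(cur)
    tabu = {}                               # swap -> iteration it expires
    for it in range(iters):
        candidates = []
        for _ in range(40):                 # sample swap moves
            i, j = sorted(rng.sample(range(n), 2))
            nb = cur[:]
            nb[i], nb[j] = nb[j], nb[i]
            c = cost(nb)
            if tabu.get((i, j), -1) <= it or c < best_cost:
                candidates.append((c, (i, j), nb))
        if not candidates:
            continue
        c, move, cur = min(candidates, key=lambda cand: cand[0])
        tabu[move] = it + tenure
        if c < best_cost:
            best, best_cost = cur[:], c
    return best, best_cost

# Toy stand-in cost: total displacement from the identity ordering.
best, best_cost = tabu_search(
    lambda p: sum(abs(v - i) for i, v in enumerate(p)), n=6)
```

In the train routing setting the permutation would order route assignments, and the cost would combine departure-time deviation, occupation time, and line-balance terms.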


2020 ◽  
Vol 495 (4) ◽  
pp. 4463-4474
Author(s):  
J G Sorce

Given a random realization of the cosmological model, observations of our cosmic neighbourhood now allow us to build simulations of the latter down to the non-linear threshold. The resulting local Universe models are thus accurate up to a given residual cosmic variance: some regions and scales are not constrained by the data and appear purely random. Drawing conclusions together with their uncertainties then requires statistics over many realizations, implying a considerable amount of computing time. By applying the constraining algorithm to paired fixed fields, this paper diverts the original technique from its first use in order to efficiently disentangle and estimate uncertainties on local Universe simulations obtained with random fields. Paired fixed fields differ from random realizations in that their Fourier mode amplitudes are fixed and they are exactly out of phase. Constrained paired fixed fields show that only 20 per cent of the power spectrum on large scales (greater than tens of megaparsecs) is purely random; 80 per cent of it is partly constrained by the large-scale/small-scale data correlations. Additionally, two realizations of our local environment obtained with the paired fixed fields of the same pair constitute an excellent unbiased average, or quasi-linear realization, of the latter, equivalent to hundreds of constrained simulations. The variance between these two realizations gives the uncertainty on the achievable local Universe simulations. These two simulations will allow faster progress in understanding our local cosmic web, thanks to a drastically reduced computational time for appreciating its modelling limits and uncertainties.
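Paired fixed fields are simple to construct: the Fourier amplitudes are pinned to the square root of the power spectrum (no Rayleigh scatter), and the pair's phases differ by pi, so the two members are exactly out of phase. A minimal 1-D sketch with a toy power-law spectrum follows; the paper of course works with constrained 3-D realizations.

```python
import numpy as np

def paired_fixed_fields(n, power, seed=0):
    """1-D pair of 'fixed' fields: Fourier amplitudes pinned to
    sqrt(P(k)) (no Rayleigh scatter), random phases, and the second
    member's phases shifted by pi so the pair is exactly out of phase.
    Zero and Nyquist modes are set to zero for a clean real transform."""
    rng = np.random.default_rng(seed)
    k = np.arange(1, n // 2)                 # nonzero, non-Nyquist modes
    amp = np.sqrt(power(k))
    theta = rng.uniform(0.0, 2.0 * np.pi, k.size)
    spec1 = np.concatenate(([0.0], amp * np.exp(1j * theta), [0.0]))
    spec2 = np.concatenate(([0.0], amp * np.exp(1j * (theta + np.pi)), [0.0]))
    return np.fft.irfft(spec1, n), np.fft.irfft(spec2, n)

# Toy power-law spectrum P(k) = k^-2 on a 256-point grid.
f1, f2 = paired_fixed_fields(256, lambda k: k ** -2.0)
```

Because a phase shift of pi flips the sign of every mode, the two members of a pair are exact negatives of each other, which is why averaging a pair cancels much of the realization noise.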


2015 ◽  
Vol 2015 ◽  
pp. 1-8
Author(s):  
Yueyue Liu ◽  
Rui Zhang ◽  
Miaomiao Wang ◽  
Xiaoxi Zhu

This paper studies a production scheduling problem with deteriorating jobs, which frequently arises in contemporary manufacturing environments. The objective is to find a sequence of jobs that minimizes the total weighted tardiness, an indicator of service quality. The problem is NP-hard: as the number of jobs increases, the computational time required by an exact optimization algorithm grows exponentially. To tackle large-scale instances efficiently, a two-stage method is presented in this paper. We partition the set of jobs into a few subsets by applying a neural network approach, thereby transforming the large-scale problem into a series of small-scale problems. We then employ an improved metaheuristic algorithm (called GTS), which combines a genetic algorithm with tabu search, to solve each subproblem. Finally, we integrate the sequences obtained for each subset of jobs and produce the final complete solution by enumeration. A fair comparison between the two-stage method and GTS without decomposition shows that the solution quality of the two-stage method is much better for large-scale problems.
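The two-stage structure can be sketched compactly. In this hedged toy version the neural-network partition is replaced by sorting on due dates and the GTS metaheuristic by exhaustive enumeration of each small subset; deterioration effects are omitted.

```python
from itertools import permutations

def weighted_tardiness(seq, t0=0):
    """Total weighted tardiness of a job sequence; each job is a
    (processing_time, due_date, weight) tuple, starting at time t0."""
    t, total = t0, 0
    for p, d, w in seq:
        t += p
        total += w * max(0, t - d)
    return total

def two_stage(jobs, chunk=3):
    """Stage 1: partition jobs into small subsets by due date (the
    paper uses a neural-network partition; sorting is a stand-in).
    Stage 2: solve each subset exactly by enumeration (a stand-in for
    the GA + tabu 'GTS' metaheuristic) and concatenate the results."""
    ordered = sorted(jobs, key=lambda j: j[1])
    sequence, t = [], 0
    for i in range(0, len(ordered), chunk):
        sub = ordered[i:i + chunk]
        best = min(permutations(sub), key=lambda s: weighted_tardiness(s, t))
        sequence.extend(best)
        t += sum(p for p, _, _ in best)
    return sequence

jobs = [(3, 4, 2), (2, 2, 1), (4, 12, 3), (1, 9, 1), (2, 6, 2), (3, 15, 1)]
schedule = two_stage(jobs)
```

Because each subset is sequenced optimally from its actual start time, the decomposed schedule can never be worse than the plain earliest-due-date ordering it starts from.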

