Using a Hybrid Evolutionary Algorithm for Solving Signal Transmission Station Location and Allocation Problem with Different Regional Communication Quality Restriction

2020 ◽  
Vol 10 (3) ◽  
pp. 165-178 ◽  
Author(s):  
Ta-Cheng Chen ◽  
Sheng-Chuan Wang ◽  
Wen-Cheng Tseng

This study investigates signal transmission station location-allocation problems under various regional constraints. Under each constraint, the types of signal transmission stations and their corresponding numbers and locations must be decided simultaneously. An inappropriate station set-up not only incurs unnecessary cost but also degrades service quality. In this study, we propose a hybrid evolutionary approach integrating the immune algorithm with particle swarm optimization (IAPSO) to solve this problem, in which each region has a different maximum failure-rate restriction. We compared the performance of the proposed method with the commercial optimization software LINGO®. According to the experimental results, the solutions obtained by our IAPSO are better than or equal to the best solutions obtained by LINGO®. It is expected that our research can provide telecommunication enterprises with optimal/near-optimal strategies for the set-up of signal transmission stations.
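The immune–PSO hybrid described above can be illustrated with a minimal sketch. Everything below is an assumption for illustration: a toy sphere objective (`station_cost`) stands in for the paper's station set-up cost model, and all parameter values (inertia weight, clone count, mutation scale) are arbitrary choices, not the authors' settings.

```python
import random

def station_cost(x):
    # Toy sphere objective standing in for the real set-up cost model
    return sum(v * v for v in x)

def iapso(dim=5, swarm=20, iters=100, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=station_cost)[:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # Standard PSO velocity/position update
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if station_cost(pos[i]) < station_cost(pbest[i]):
                pbest[i] = pos[i][:]
        # Immune-inspired step: clone the best solution, hypermutate the
        # clones, and let a good clone replace the worst personal best
        clones = [[g + rng.gauss(0, 0.1) for g in gbest] for _ in range(5)]
        best_clone = min(clones, key=station_cost)
        worst = max(range(swarm), key=lambda i: station_cost(pbest[i]))
        if station_cost(best_clone) < station_cost(pbest[worst]):
            pbest[worst] = best_clone
            pos[worst] = best_clone[:]
        gbest = min(pbest, key=station_cost)[:]
    return station_cost(gbest)
```

The immune step injects diversity around the incumbent best, which is the general mechanism such hybrids use to escape the premature convergence of plain PSO.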

2011 ◽  
Vol 130-134 ◽  
pp. 2504-2507 ◽  
Author(s):  
Qi Tang ◽  
Li Xin Tang

This paper considers the integration of batching and scheduling in batch production environments characterized by multi-product facilities, multiple stages, sequence-dependent set-up times, and limited storage. For this complicated problem, we present a conjunctive model (CM) and a heuristic particle swarm optimization (HPSO) algorithm, and we propose a local search (LS) strategy to improve the HPSO. Computational results show that both methods are efficient and that the HPSO outperforms the CM.
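The local-search idea, accepting pairwise job swaps whenever they improve the objective, can be sketched on a toy permutation flow shop. The `makespan` objective, the processing-time matrix, and all parameters below are hypothetical stand-ins for the paper's batch scheduling model, not its actual formulation.

```python
import random

def makespan(seq, proc):
    """Completion time of a permutation flow shop; a toy stand-in
    for the batch scheduling objective."""
    m = len(proc[0])
    finish = [0.0] * m
    for job in seq:
        for s in range(m):
            # A job starts on stage s when both the stage and the job's
            # previous stage are free
            start = max(finish[s], finish[s - 1] if s else 0.0)
            finish[s] = start + proc[job][s]
    return finish[-1]

def local_search(seq, proc, rng, tries=200):
    """Pairwise-swap LS step of the kind used to improve an HPSO particle:
    only improving swaps are accepted."""
    best = list(seq)
    best_val = makespan(best, proc)
    for _ in range(tries):
        i, j = rng.sample(range(len(best)), 2)
        cand = list(best)
        cand[i], cand[j] = cand[j], cand[i]
        val = makespan(cand, proc)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val
```

Because only improving swaps are accepted, the returned schedule is never worse than the particle it started from, which is why such a step can only strengthen the underlying swarm search.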


1989 ◽  
Vol 54 (7) ◽  
pp. 1785-1794 ◽  
Author(s):  
Vlastimil Kubáň ◽  
Josef Komárek ◽  
Zbyněk Zdráhal

A FIA-FAAS apparatus containing six-channel sorption equipment with five 3 × 26 mm microcolumns packed with Spheron Oxin 1 000, Ostsorb Oxin and Ostsorb DTTA was set up. Combined with sorption from 0.002M acetate buffer at pH 4.2 and desorption with 2M HCl, copper can be determined at concentrations up to 100, 150 and 200 μg l⁻¹, respectively. For sample and eluent flow rates of 5.0 and 4.0 ml min⁻¹, respectively, and a sample injection time of 5 min, the limit of copper determination is LQ = 0.3 μg l⁻¹, the repeatability sr is better than 2% and the recovery is R = 100 ± 2%. The enrichment factor is on the order of 10² and is a linear function of the time (volume) of sample injection up to 5 min and of the sample injection flow rate up to 11 ml min⁻¹ for Spheron Oxin 1 000 and Ostsorb DTTA. For sorption times of 60 and 300 s, the sampling frequency is 70 and 35 samples/h, respectively. The parameters of the FIA-FAAS determination (acetylene-air flame) are comparable to or better than those achieved by ETA-AAS. The method was applied to the determination of traces of copper in high-purity water.


Author(s):  
Guangyu Zhou ◽  
Aijia Ouyang ◽  
Yuming Xu

To overcome the shortcomings of the basic glowworm swarm optimization (GSO) algorithm, such as low accuracy, slow convergence speed and a tendency to fall into local minima, a chaos algorithm and a cloud model algorithm are introduced to optimize the evolution mechanism of GSO, and a chaos GSO algorithm based on the cloud model (CMCGSO) is proposed in this paper. Simulation results on global optimization benchmark functions show that the CMCGSO algorithm performs better than the cuckoo search (CS), invasive weed optimization (IWO), hybrid particle swarm optimization (HPSO), and chaos glowworm swarm optimization (CGSO) algorithms, and that CMCGSO has the advantages of high accuracy, fast convergence speed and strong robustness in finding the global optimum. Finally, the CMCGSO algorithm is applied to the problem of face recognition, and its results are better than those of methods reported in the literature.
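The luciferin-based neighbour-following mechanism of basic GSO, which CMCGSO builds on, can be sketched as follows. The objective, swarm size and all coefficients here are illustrative assumptions, and neither the chaos map nor the cloud-model operator of CMCGSO is included.

```python
import math
import random

def glow(x):
    # Toy objective to maximize; the peak sits at the origin
    return -sum(v * v for v in x)

def gso_step(pos, luciferin, rng, rho=0.4, gamma=0.6, step=0.05, radius=2.0):
    # Luciferin update: decay plus a deposit proportional to fitness
    luciferin = [(1 - rho) * l + gamma * glow(p)
                 for l, p in zip(luciferin, pos)]
    new_pos = []
    for i, p in enumerate(pos):
        # Candidate neighbours: strictly brighter glowworms within range
        brighter = [q for j, q in enumerate(pos)
                    if luciferin[j] > luciferin[i]
                    and 0.0 < math.dist(p, q) < radius]
        if brighter:
            target = rng.choice(brighter)
            d = math.dist(p, target)
            # Move a fixed step along the direction of the chosen neighbour
            new_pos.append([a + step * (b - a) / d
                            for a, b in zip(p, target)])
        else:
            new_pos.append(p)
    return new_pos, luciferin

rng = random.Random(0)
pos = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(15)]
luc = [5.0] * 15
for _ in range(50):
    pos, luc = gso_step(pos, luc, rng)
best = max(pos, key=glow)
```

The weaknesses the abstract names show up directly here: the fixed step size limits accuracy, and worms with no brighter neighbour in range simply stall, which is what the chaos and cloud-model mechanisms are introduced to counteract.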


1988 ◽  
Vol 10 (1) ◽  
pp. 37-42 ◽  
Author(s):  
M. J. Wheeler ◽  
Linzi Waiters

The Kemtek 1000 Sample Processor has been evaluated for precision, accuracy, speed and reliability. Precision was better than 1.0% at all volumes tested and accuracy was within ±5%. A 100-tube assay could be set up within 15 min when patient specimens plus two reagents were sampled using a two-probe system. Carry-over could be reduced to <0.01% by using a sufficient number of wash steps, the number being related to the assay requirements. Evidence was found for adsorption of protein to the probe tubing, but inaccuracies due to this could be reduced by introducing wash steps between samples. Problems over 12 months have been minor and quickly resolved. The authors were pleased with the way the processor performed, and their staff have confidence in leaving it to set up their assays.


1980 ◽  
Vol 85 (1) ◽  
pp. 21-31
Author(s):  
U. ACHENBACH ◽  
K. E. WOHLFARTH-BOTTERMANN

A new experimental investigation chamber was used to analyse the control of rhythmic contractile activity in Physarum. A strand was mounted in such a way that isometric tension measurements of contraction forces could be made on two regions independently, the two regions remaining connected. It was possible to disturb one region experimentally and to compare its behaviour with the other. A short time after being set up in the apparatus, the isometric contraction cycles in the two regions became synchronous. Stretching one region by 50% of its original length induced a phase delay relative to the other. A brief unilateral cold shock (Δt = 5–15 °C) had a similar phase-retarding effect. Synchrony was subsequently reattained, unless the connecting region was cut or, for example, treated with 30 mM benzamide. In approximately 25% of the investigated strands, a rapid change to a higher temperature (Δt = 2–5 °C) caused the warmed side to be phase-advanced. However, 75% of the strands did not show a phase shift, suggesting that rapid phase regulation is supported by increased temperature. The described experimental assay is suitable for analysing the pathway and the nature of signal transmission in plasmodial strands. Note: Partly presented at the International Titisee Conference on Cellular Oscillators, 22–24 March 1979 (see J. exp. Biol. (1979)).


2018 ◽  
Vol 22 (8) ◽  
pp. 4425-4447 ◽  
Author(s):  
Manuel Antonetti ◽  
Massimiliano Zappa

Abstract. Both modellers and experimentalists agree that using expert knowledge can improve the realism of conceptual hydrological models. However, their use of expert knowledge differs for each step in the modelling procedure, which involves hydrologically mapping the dominant runoff processes (DRPs) occurring on a given catchment, parameterising these processes within a model, and allocating its parameters. Modellers generally use very simplified mapping approaches, applying their knowledge to constrain the model by defining parameter and process relational rules. In contrast, experimentalists usually prefer to invest all their detailed and qualitative knowledge about processes in obtaining as realistic a spatial distribution of DRPs as possible, and in defining narrow value ranges for each model parameter. Runoff simulations are affected by equifinality and numerous other uncertainty sources, which challenge the assumption that the more expert knowledge is used, the better the results obtained will be. To test the extent to which expert knowledge can improve simulation results under uncertainty, we therefore applied a total of 60 modelling chain combinations forced by five rainfall datasets of increasing accuracy to four nested catchments in the Swiss Pre-Alps. These datasets include hourly precipitation data from automatic stations interpolated with Thiessen polygons and with the inverse distance weighting (IDW) method, as well as different spatial aggregations of Combiprecip, a combination of ground measurements and quantitative radar estimates of precipitation. To map the spatial distribution of the DRPs, three mapping approaches with different levels of involvement of expert knowledge were used to derive so-called process maps.
Finally, both a typical modellers' top-down set-up relying on parameter and process constraints and an experimentalists' set-up based on bottom-up thinking and on field expertise were implemented using a newly developed process-based runoff generation module (RGM-PRO). To quantify the uncertainty originating from forcing data, process maps, model parameterisation, and parameter allocation strategy, an analysis of variance (ANOVA) was performed. The simulation results showed that (i) the modelling chains based on the most complex process maps performed slightly better than those based on less expert knowledge; (ii) the bottom-up set-up performed better than the top-down one when simulating short-duration events, but similarly to the top-down set-up when simulating long-duration events; (iii) the differences in performance arising from the different forcing data were due to compensation effects; and (iv) the bottom-up set-up can help identify uncertainty sources, but is prone to overconfidence problems, whereas the top-down set-up seems to accommodate uncertainties in the input data best. Overall, modellers' and experimentalists' concepts of model realism differ. This means that the level of detail a model should have to accurately reproduce the expected DRPs must be agreed on in advance.


2009 ◽  
Vol 05 (02) ◽  
pp. 487-496 ◽  
Author(s):  
WEI FANG ◽  
JUN SUN ◽  
WENBO XU

The mutation operator is one of the mechanisms of evolutionary algorithms (EAs): it provides diversity in the search and helps to explore undiscovered regions of the search space. Quantum-behaved particle swarm optimization (QPSO), inspired by the fundamental theory of the PSO algorithm and by quantum mechanics, is a novel stochastic search technique, but it may encounter the local minima problem when solving multi-modal problems, just as PSO does. A novel mutation mechanism is proposed in this paper to enhance the global search ability of QPSO, and a set of different mutation operators is introduced and implemented on the QPSO. Experiments are conducted on several well-known benchmark functions. The experimental results show that QPSO with some of the mutation operators is statistically significantly better than the original QPSO.
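A minimal sketch of QPSO with one illustrative mutation operator (Gaussian perturbation of the global best) is given below. The objective, the contraction–expansion coefficient β = 0.75, and the mutation settings are assumptions chosen for illustration, not the operators studied in the paper.

```python
import math
import random

def sphere(x):
    # Toy uni-modal benchmark; the paper uses multi-modal functions too
    return sum(v * v for v in x)

def qpso_mut(dim=5, swarm=20, iters=100, beta=0.75, pm=0.2, seed=3):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sphere)[:]
    for _ in range(iters):
        # Mean best position over the whole swarm
        mbest = [sum(p[d] for p in pbest) / swarm for d in range(dim)]
        for i in range(swarm):
            for d in range(dim):
                # QPSO sampling: x = attractor ± beta * |mbest - x| * ln(1/u)
                phi = rng.random()
                attractor = phi * pbest[i][d] + (1 - phi) * gbest[d]
                u = rng.random() or 1e-12
                delta = beta * abs(mbest[d] - pos[i][d]) * math.log(1 / u)
                pos[i][d] = (attractor + delta if rng.random() < 0.5
                             else attractor - delta)
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
                if sphere(pbest[i]) < sphere(gbest):
                    gbest = pbest[i][:]
        # Illustrative mutation operator: Gaussian jitter of the global best
        if rng.random() < pm:
            cand = [g + rng.gauss(0, 0.05) for g in gbest]
            if sphere(cand) < sphere(gbest):
                gbest = cand
    return sphere(gbest)
```

The mutation line is the part the paper varies: swapping in Cauchy, chaotic, or other perturbations at that point yields the family of operators the abstract compares.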


2008 ◽  
Vol 2008 ◽  
pp. 1-9 ◽  
Author(s):  
Ali R. Guner ◽  
Mehmet Sevkli

A discrete version of particle swarm optimization (DPSO) is employed to solve the uncapacitated facility location (UFL) problem, one of the most widely studied problems in combinatorial optimization. In addition, a hybrid version with a local search is defined to obtain more efficient results. The results are compared with a continuous particle swarm optimization (CPSO) algorithm and with two other metaheuristic studies, namely a genetic algorithm (GA) and evolutionary simulated annealing (ESA). To make a reasonable comparison, we applied all algorithms to the same benchmark suites collected from the OR-Library. In conclusion, the results show that the DPSO algorithm is slightly better than the CPSO algorithm and competitive with GA and ESA.
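The binary encoding behind a discrete PSO for UFL can be sketched as follows. The tiny instance (three candidate facilities, four customers), the sigmoid velocity-to-probability mapping, and all coefficients are illustrative assumptions, not the paper's exact DPSO or the OR-Library data.

```python
import math
import random

# Tiny hypothetical UFL instance: facility opening costs and, per customer,
# the cost of being served by each facility
FIXED = [4.0, 3.0, 5.0]
SERVE = [[1.0, 4.0, 6.0],
         [5.0, 1.0, 4.0],
         [6.0, 5.0, 1.0],
         [2.0, 3.0, 5.0]]

def cost(open_mask):
    # Opening costs plus, for each customer, the cheapest open facility
    if not any(open_mask):
        return float("inf")
    total = sum(f for f, o in zip(FIXED, open_mask) if o)
    for row in SERVE:
        total += min(c for c, o in zip(row, open_mask) if o)
    return total

def dpso(swarm=10, iters=60, seed=7):
    rng = random.Random(seed)
    n = len(FIXED)
    vel = [[0.0] * n for _ in range(swarm)]
    pos = [[rng.random() < 0.5 for _ in range(n)] for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=cost)[:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(n):
                vel[i][d] = (0.7 * vel[i][d]
                             + rng.random() * (pbest[i][d] - pos[i][d])
                             + rng.random() * (gbest[d] - pos[i][d]))
                # Sigmoid turns the velocity into a probability of opening d
                pos[i][d] = rng.random() < 1.0 / (1.0 + math.exp(-vel[i][d]))
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=cost)[:]
    return gbest, cost(gbest)
```

The hybrid version in the paper would add a local-search pass (e.g. flipping single open/close bits while the total cost improves) after each particle update.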


Author(s):  
Ali Najim Abdullah ◽  
Ahmed Majeed Ghadhban ◽  
Hayder Salim Hameed ◽  
Husham Idan Hussein

<p><span>This paper proposes a steady-state set-up of the Static Var Compensator (SVC) &amp; Thyristor Controlled Series Capacitor (TCSC) for enhancing the overall damping performance and increasing the critical clearing time (CCT) of a power network. The critical clearing time is determined by increasing the fault duration until the system loses stability. Increasing the CCT can contribute to the reliability of the protection system and lower the protection system rating and cost. Maximum enhancement of system stability is obtained by optimizing the location, sizing and control modes of the SVC and TCSC. Models and a methodology for placing and designing the shunt FACTS device SVC (injected reactive power Q) and the series FACTS device TCSC (chosen capacitive region) are examined in a 6-bus system. Performance factors are defined to validate the SVC and TCSC under different conditions. It is shown that the SVC performs better than the TCSC. </span></p>


Author(s):  
Shilpa Deo*

The Government of India has been taking various steps towards identification of the poor (and the vulnerable, through the Socio Economic Caste Census) and measurement of poverty with the help of various Expert Groups, right from the Task Force that was set up in 1962 to the Task Force on Poverty Elimination of the NITI Aayog. Many researchers, too, have suggested ways in which the poor and vulnerable can be identified and poverty can be measured, besides the suggestions given by the Expert Groups. However, it may be considered a ‘national shame’ if we are unable to identify the needy even after 75 years of independence. Through a review of around 100 books, research papers and articles, an attempt has been made to understand the strengths and shortcomings of the suggested ways to identify the poor and vulnerable, and to propose a comprehensive methodology to identify the needy. Unless we are able to identify the poor and vulnerable sections of society correctly, planning and implementing poverty alleviation programmes for “ending poverty in all its forms everywhere”1 would be a futile exercise!

