Application of Supply-Demand-Based Optimization for Parameter Extraction of Solar Photovoltaic Models

Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-22 ◽  
Author(s):  
Guojiang Xiong ◽  
Jing Zhang ◽  
Dongyuan Shi ◽  
Xufeng Yuan

Modeling solar photovoltaic (PV) systems accurately depends on obtaining optimal values for the unknown model parameters of PV cells and modules. In recent years, the use of metaheuristics for parameter extraction of PV models has attracted growing attention thanks to their efficacy in solving highly nonlinear multimodal optimization problems. This work addresses a novel application of supply-demand-based optimization (SDO) to extract accurate and reliable parameters for PV models. SDO is a very young and efficient metaheuristic inspired by the supply and demand mechanism in economics. Its exploration and exploitation are well balanced by organically incorporating different dynamic modes of the cobweb model. To validate the feasibility and effectiveness of SDO, four PV models with diverse characteristics are employed: the RTC France silicon solar cell, the PVM 752 GaAs thin-film cell, the STM6-40/36 monocrystalline module, and the STP6-120/36 polycrystalline module. Experimental comparisons with ten state-of-the-art algorithms demonstrate that SDO performs better than, or highly competitively with, them in terms of accuracy, robustness, and convergence. In addition, the sensitivity of SDO to variation of the population size is empirically investigated. The results indicate that SDO with a relatively small population size can extract accurate and reliable parameters for PV models.
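
For context, the objective such parameter-extraction studies typically minimize is the root-mean-square error between measured currents and currents predicted by the single-diode equation. The sketch below is illustrative only, not the paper's code; the parameter values in the usage comment are merely plausible magnitudes for a small silicon cell.

```python
import numpy as np

# Minimal sketch (not the paper's code): single-diode model objective commonly used
# in PV parameter-extraction benchmarks. The measured current is substituted into
# the implicit diode equation and the RMSE over all I-V points is minimized.

def rmse_single_diode(params, V, I, Vt=0.0259):
    """params = (Iph, I0_uA, Rs, Rsh, a); I0 given in microamperes, as is customary."""
    Iph, I0_uA, Rs, Rsh, a = params
    I0 = I0_uA * 1e-6
    I_model = Iph - I0 * (np.exp((V + I * Rs) / (a * Vt)) - 1.0) - (V + I * Rs) / Rsh
    return np.sqrt(np.mean((I_model - I) ** 2))

# Example (illustrative magnitudes only, V_data/I_data are measured I-V arrays):
# rmse_single_diode((0.76, 0.32, 0.036, 53.7, 1.48), V_data, I_data)
```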

2019 ◽  
Vol 11 (23) ◽  
pp. 2795 ◽  
Author(s):  
Guojiang Xiong ◽  
Jing Zhang ◽  
Dongyuan Shi ◽  
Lin Zhu ◽  
Xufeng Yuan ◽  
...  

Extracting accurate values for the unknown parameters of solar photovoltaic (PV) models is very important for modeling PV systems. In recent years, the use of metaheuristic algorithms for this problem has become more popular and vibrant due to their efficacy in solving highly nonlinear multimodal optimization problems. The whale optimization algorithm (WOA) is a relatively new and competitive metaheuristic algorithm. In this paper, an improved variant of WOA, referred to as MCSWOA, is proposed for the parameter extraction of PV models. In MCSWOA, three improved components are integrated: (i) two modified search strategies named WOA/rand/1 and WOA/current-to-best/1, inspired by differential evolution, are designed to balance exploration and exploitation; (ii) a crossover operator based on these modified search strategies is introduced to meet the search-oriented requirements of different dimensions; and (iii) a selection operator, instead of the "generate-and-go" operator used in the original WOA, is employed to prevent the population quality from deteriorating and thus to guarantee a consistent evolutionary direction. The proposed MCSWOA is applied to five PV types, each modeled with both the single-diode and double-diode models. The good performance of MCSWOA is verified by comparison with various algorithms.
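
As a rough illustration of components (i)-(iii), the following sketch shows what DE-style rand/1 and current-to-best/1 mutation rules, binomial crossover, and greedy selection generally look like. It is not the authors' MCSWOA implementation; all names and control parameters are illustrative.

```python
import numpy as np

# Schematic sketch (illustrative, not the authors' MCSWOA code) of the three components
# described in the abstract: DE-inspired mutation rules, a crossover operator, and a
# greedy selection step that keeps the better of parent and trial.

def mutate_rand_1(pop, i, F=0.5):
    r1, r2, r3 = np.random.choice([k for k in range(len(pop)) if k != i], 3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])                       # "rand/1"-style rule

def mutate_current_to_best_1(pop, i, best, F=0.5):
    r1, r2 = np.random.choice([k for k in range(len(pop)) if k != i], 2, replace=False)
    return pop[i] + F * (best - pop[i]) + F * (pop[r1] - pop[r2])  # "current-to-best/1"-style rule

def crossover(target, mutant, CR=0.9):
    mask = np.random.rand(target.size) < CR
    mask[np.random.randint(target.size)] = True                    # at least one dimension changes
    return np.where(mask, mutant, target)

def select(target, trial, f):
    return trial if f(trial) <= f(target) else target              # greedy selection (minimization)
```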


Information ◽  
2019 ◽  
Vol 10 (12) ◽  
pp. 390 ◽  
Author(s):  
Ahmad Hassanat ◽  
Khalid Almohammadi ◽  
Esra’a Alkafaween ◽  
Eman Abunawas ◽  
Awni Hammouri ◽  
...  

The genetic algorithm (GA) is an artificial intelligence search method that mimics evolution and natural selection and falls under the umbrella of evolutionary computing. It is an efficient tool for solving optimization problems. Proper coordination of GA parameters is vital for a successful GA search; such parameters include the mutation and crossover rates as well as the population size. Each GA operator has a distinct influence, and the impact of these operators depends on their probabilities, so it is difficult to predefine suitable ratios for each parameter, particularly for the mutation and crossover operators. This paper reviews various methods for choosing mutation and crossover ratios in GAs. Next, we define new deterministic control approaches for crossover and mutation rates, namely Dynamic Decreasing of High Mutation ratio/Dynamic Increasing of Low Crossover ratio (DHM/ILC) and Dynamic Increasing of Low Mutation/Dynamic Decreasing of High Crossover (ILM/DHC). The dynamic nature of the proposed methods allows the ratios of both operators to change linearly during the search: DHM/ILC starts with a 100% mutation ratio and a 0% crossover ratio, which then decrease and increase, respectively, until the end of the search, when the ratios are 0% for mutation and 100% for crossover; ILM/DHC works the same way but in reverse (both schedules are sketched below). The proposed approaches were compared with two predefined parameter-tuning methods, namely fifty-fifty crossover/mutation ratios and the most common approach of static ratios, such as a 0.03 mutation rate and a 0.9 crossover rate. The experiments were conducted on ten Traveling Salesman Problem (TSP) instances and showed the effectiveness of the proposed DHM/ILC when dealing with small population sizes, while ILM/DHC was found to be more effective with large population sizes. In fact, both proposed dynamic methods outperformed the predefined methods in most of the cases tested.
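
The linear schedules described above reduce to two simple rate functions; the sketch below is an illustrative reconstruction, not the authors' code.

```python
# Minimal sketch of the linear rate schedules described above (illustrative only).
# g is the current generation, G the total number of generations.

def dhm_ilc(g, G):
    """Dynamic Decreasing of High Mutation / Increasing of Low Crossover:
    mutation starts at 100% and falls to 0%; crossover does the opposite."""
    pm = 1.0 - g / G      # mutation rate
    pc = g / G            # crossover rate
    return pm, pc

def ilm_dhc(g, G):
    """Increasing of Low Mutation / Decreasing of High Crossover: the mirror schedule."""
    pm = g / G
    pc = 1.0 - g / G
    return pm, pc

# Example: at the midpoint of a 500-generation run, dhm_ilc(250, 500) == (0.5, 0.5).
```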


2020 ◽  
Vol 4 (11) ◽  
pp. 5595-5608
Author(s):  
Guojiang Xiong ◽  
Jing Zhang ◽  
Dongyuan Shi ◽  
Lin Zhu ◽  
Xufeng Yuan

The parameter extraction problem of solar photovoltaic (PV) models is a highly nonlinear multimodal optimization problem. In this paper, quadratic interpolation learning differential evolution (QILDE) is proposed to solve it.
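
For readers unfamiliar with the quadratic interpolation ingredient, the standard three-point rule it builds on locates the vertex of the parabola through three candidate solutions. The sketch below illustrates only that ingredient, not the full QILDE algorithm.

```python
import numpy as np

# Sketch of the standard three-point quadratic interpolation rule that
# "quadratic interpolation learning" schemes typically build on (illustrative only;
# the full QILDE algorithm combines this with differential evolution).

def quadratic_interpolation_point(xa, xb, xc, fa, fb, fc, eps=1e-12):
    """Per-dimension vertex of the parabola through (xa, fa), (xb, fb), (xc, fc)."""
    num = (xb**2 - xc**2) * fa + (xc**2 - xa**2) * fb + (xa**2 - xb**2) * fc
    den = 2.0 * ((xb - xc) * fa + (xc - xa) * fb + (xa - xb) * fc)
    return num / (den + eps)
```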


2016 ◽  
Author(s):  
Arya Iranmehr ◽  
Ali Akbari ◽  
Christian Schlötterer ◽  
Vineet Bafna

The advent of next-generation sequencing technologies has made whole-genome and whole-population sampling possible, even for eukaryotes with large genomes. With this development, experimental evolution studies can be designed to observe molecular evolution "in action" via Evolve-and-Resequence (E&R) experiments. Among other applications, E&R studies can be used to locate the genes and variants responsible for genetic adaptation. Existing literature on time-series data analysis often assumes large population size, accurate allele frequency estimates, and wide time spans; these assumptions do not hold in many E&R studies. In this article, we propose a method, Composition of Likelihoods for Evolve-And-Resequence experiments (Clear), to identify signatures of selection in small-population E&R experiments. Clear takes whole-genome sequences of pools of individuals (pool-seq) as input and properly addresses the heterogeneous ascertainment bias resulting from uneven coverage. Clear also provides unbiased estimates of model parameters, including population size, selection strength, and dominance, while being computationally efficient. Extensive simulations show that Clear achieves higher power in detecting and localizing selection over a wide range of parameters and is robust to variation in coverage. We applied the Clear statistic to multiple E&R experiments, including data from a study of D. melanogaster adaptation to alternating temperatures and a study of outcrossing yeast populations, and identified multiple regions under selection with genome-wide significance.
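
Likelihood methods of this kind model allele-frequency change with the discrete Wright-Fisher process. The sketch below simulates such a trajectory for a small population; it is purely illustrative and is not part of the Clear software.

```python
import numpy as np

# Illustrative Wright-Fisher dynamics with selection: binomial resampling of 2N
# chromosomes each generation, genotype fitnesses 1, 1+sh, 1+s.

def wright_fisher_trajectory(p0, N, s, h, generations, rng=np.random.default_rng()):
    p, traj = p0, [p0]
    for _ in range(generations):
        w_bar = p**2 * (1 + s) + 2 * p * (1 - p) * (1 + s * h) + (1 - p) ** 2
        p_sel = (p**2 * (1 + s) + p * (1 - p) * (1 + s * h)) / w_bar   # deterministic selection
        p = rng.binomial(2 * N, p_sel) / (2 * N)                        # genetic drift
        traj.append(p)
    return traj
```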


Author(s):  
Richard Frankham ◽  
Jonathan D. Ballou ◽  
Katherine Ralls ◽  
Mark D. B. Eldridge ◽  
Michele R. Dudash ◽  
...  

Genetic management of fragmented populations involves the application of evolutionary genetic theory and knowledge to alleviate problems due to inbreeding and loss of genetic diversity in small population fragments. Populations evolve through the effects of mutation, natural selection, chance (genetic drift) and gene flow (migration). Large outbreeding, sexually reproducing populations typically contain substantial genetic diversity, while small populations typically contain reduced levels. Genetic impacts of small population size on inbreeding, loss of genetic diversity and population differentiation are determined by the genetically effective population size, which is usually much smaller than the number of individuals.
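
The link between effective population size and loss of diversity is captured by the standard relation H_t = H_0 (1 - 1/(2Ne))^t; the short sketch below simply evaluates it and is illustrative only.

```python
# Expected heterozygosity after t generations in a population of effective size Ne.

def expected_heterozygosity(H0, Ne, t):
    return H0 * (1.0 - 1.0 / (2.0 * Ne)) ** t

# Example: a fragment with Ne = 50 retains about
# expected_heterozygosity(1.0, 50, 100) ≈ 0.366 of its initial diversity after 100 generations.
```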


Author(s):  
Po Ting Lin ◽  
Wei-Hao Lu ◽  
Shu-Ping Lin

In the past few years, researchers have begun to investigate the existence of arbitrary uncertainties in design optimization problems. Most traditional reliability-based design optimization (RBDO) methods transform the design space to the standard normal space for reliability analysis, but they may not work well when the random variables are arbitrarily distributed, because the transformation to the standard normal space cannot be determined or the distribution type is unknown. The methods of Ensemble of Gaussian-based Reliability Analyses (EoGRA) and Ensemble of Gradient-based Transformed Reliability Analyses (EGTRA) have been developed to estimate the joint probability density function using an ensemble of kernel functions. EoGRA performs a series of Gaussian-based kernel reliability analyses and merges them to compute the reliability of the design point. EGTRA transforms the design space to a single-variate design space along the constraint gradient, where the kernel reliability analyses become much less costly. In this paper, a series of comprehensive investigations was performed to study the similarities and differences between EoGRA and EGTRA. The results showed that EGTRA performs accurate and effective reliability analyses for both linear and nonlinear problems; when the constraints are highly nonlinear, EGTRA may encounter some difficulty but can still be effective when started from deterministic optimal points. On the other hand, the sensitivity analyses of EoGRA may be ineffective when the random distribution is completely inside the feasible space or the infeasible space; however, EoGRA can find acceptable design points when starting from deterministic optimal points. Moreover, EoGRA is capable of delivering an estimated failure probability for each constraint during the optimization process, which may be convenient for some applications.
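
The kernel-ensemble idea underlying both methods can be illustrated with a simple Gaussian kernel density estimate of the joint distribution and a Monte Carlo integral over the failure region. The sketch below is a generic illustration under that assumption, not the EoGRA or EGTRA procedure itself.

```python
import numpy as np

# Generic illustration: approximate the joint density of arbitrarily distributed
# variables with a Gaussian kernel ensemble centred on observed samples, then
# estimate the failure probability of a constraint g(x) > 0 by Monte Carlo.

def kde_failure_probability(samples, constraint, bandwidth=0.2, n_mc=100_000,
                            rng=np.random.default_rng()):
    """samples: (n, d) observed data; constraint: g(x) > 0 means failure."""
    n, d = samples.shape
    centers = samples[rng.integers(0, n, n_mc)]                 # pick kernels uniformly
    x = centers + rng.normal(scale=bandwidth, size=(n_mc, d))   # sample from the KDE mixture
    return np.mean(constraint(x) > 0.0)

# Example (hypothetical constraint): kde_failure_probability(data, lambda x: x[:, 0] + x[:, 1] - 5.0)
```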


Author(s):  
Prachi Agrawal ◽  
Talari Ganesh ◽  
Ali Wagdy Mohamed

This article proposes a novel binary version of the recently developed gaining-sharing knowledge-based optimization algorithm (GSK) to solve binary optimization problems. The GSK algorithm is based on the concept of how humans acquire and share knowledge during their life span. The binary version, named the novel binary gaining-sharing knowledge-based optimization algorithm (NBGSK), depends mainly on two binary stages: a binary junior gaining-sharing stage and a binary senior gaining-sharing stage with knowledge factor 1. These two stages enable NBGSK to explore and exploit the search space efficiently and effectively in binary space. Moreover, to enhance the performance of NBGSK and prevent solutions from becoming trapped in local optima, NBGSK with population size reduction (PR-NBGSK) is introduced; it decreases the population size gradually with a linear function. The proposed NBGSK and PR-NBGSK are applied to a set of knapsack instances with small and large dimensions, and the results show that NBGSK and PR-NBGSK are more efficient and effective in terms of convergence, robustness, and accuracy.
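
The linear population-size reduction used by PR-NBGSK can be written as a one-line schedule; the sketch below is an illustrative reconstruction (the original may count generations rather than function evaluations).

```python
# Illustrative linear population-size reduction schedule.

def reduced_population_size(np_init, np_min, fes, max_fes):
    """Population size shrinks linearly as function evaluations are consumed."""
    return round(np_init - (np_init - np_min) * fes / max_fes)

# Example: with np_init=100 and np_min=10, the population falls to 55 at the midpoint of the run.
```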


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3625
Author(s):  
Mateusz Krzysztoń ◽  
Ewa Niewiadomska-Szynkiewicz

Intelligent wireless networks comprising self-organizing autonomous vehicles equipped with punctual sensors and radio modules support many hostile and harsh environment monitoring systems. This work shows the benefits of applying such networks to estimating the boundaries of clouds created by hazardous toxic substances heavier than air when they are accidentally released into the atmosphere. The paper addresses issues concerning sensing network design, focusing on a computing scheme for online motion trajectory calculation and data exchange. A three-stage approach is presented that incorporates three algorithms for calculating the displacement of sensing devices in a collaborative network according to the current task: exploration and gas cloud detection, boundary detection and estimation, and tracking of the evolving cloud. A network connectivity-maintaining virtual force mobility model is used to calculate subsequent sensor positions, and multi-hop communication is used for data exchange. The main focus is on the efficient tracking of the cloud boundary. The proposed sensing scheme is sensitive to crucial mobility model parameters, and the paper presents five procedures for calculating the optimal values of these parameters. In contrast to widely used techniques, the presented approach to gas cloud monitoring does not calculate sensor displacements based on exact values of gas concentration and concentration gradients; the sensor readings are reduced to two values, indicating whether the gas concentration is below or above the safe value. The utility and efficiency of the presented method were justified through extensive simulations, giving encouraging results. The test cases were carried out on several scenarios with regular and irregular cloud shapes generated using a widely used box model that describes heavy gas dispersion in the atmospheric air. The simulation results demonstrate that a gas cloud boundary can be detected and efficiently tracked using only a rough measurement indicating that the threshold concentration has been exceeded. This makes the sensing system less sensitive to the quality of the gas concentration measurement, so it can be easily used to detect real phenomena. Significant results include recommendations for selecting procedures to compute the mobility model parameters while tracking clouds of different shapes, and for determining optimal values of these parameters for convex and nonconvex cloud boundaries.
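
A virtual-force mobility model of the general kind mentioned above moves each sensor along the sum of pairwise attraction/repulsion forces toward a preferred spacing. The sketch below is a generic illustration with assumed parameter names, not the paper's specific force model.

```python
import numpy as np

# Generic virtual-force mobility step: neighbours repel below a preferred spacing d0
# and attract above it, which tends to preserve network connectivity while the
# swarm tracks the cloud boundary.

def virtual_force_step(positions, d0=10.0, k=0.1, step=1.0):
    new_positions = positions.copy()
    for i, p in enumerate(positions):
        force = np.zeros(2)
        for j, q in enumerate(positions):
            if i == j:
                continue
            diff = p - q
            dist = np.linalg.norm(diff) + 1e-9
            force += k * (dist - d0) * (-diff / dist)   # attract if too far, repel if too close
        new_positions[i] = p + step * force
    return new_positions
```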


Author(s):  
Madoka Muroishi ◽  
Akira Yakita

Using a small, open, two-region economy model populated by two-period-lived overlapping generations, we analyze long-term agglomeration economy and congestion diseconomy effects of young worker concentration on migration and the overall fertility rate. When the migration-stability condition is satisfied, the distribution of young workers between regions is obtainable in each period for a predetermined population size. Results show that migration stability does not guarantee dynamic stability of the economy. The stationary population size stability depends on the model parameters and the initial population size. On a stable trajectory converging to the stationary equilibrium, the overall fertility rate might change non-monotonically with the population size of the economy because of interregional migration. In each period, interregional migration mitigates regional population changes caused by fertility differences on the stable path. Results show that the inter-regional migration-stability condition does not guarantee stability of the population dynamics of the economy.


Genetics ◽  
2001 ◽  
Vol 157 (4) ◽  
pp. 1773-1787 ◽  
Author(s):  
Bruno Bost ◽  
Dominique de Vienne ◽  
Frédéric Hospital ◽  
Laurence Moreau ◽  
Christine Dillmann

The L-shaped distribution of estimated QTL effects (R2) has long been reported. We recently showed that a metabolic mechanism could account for this phenomenon, but other nonexclusive genetic or nongenetic causes may contribute to generate such a distribution. Using analysis and simulations of an additive genetic model, we show that linkage disequilibrium between QTL, low heritability, and small population size may also be involved, regardless of the gene effect distribution. In addition, a comparison of the additive and metabolic genetic models revealed that estimates of QTL effects for traits proportional to metabolic flux are far less robust than for additive traits. However, in both models the highest R2 values repeatedly correspond to the same set of QTL.
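
A minimal simulation of such an additive model, with equal QTL effects, low heritability, and a small sample, already produces an L-shaped spread of estimated R2 values; the sketch below is illustrative only, not the authors' simulation code.

```python
import numpy as np

# Illustrative additive-model simulation: unlinked biallelic QTL with equal effects,
# small sample, low heritability; per-QTL R2 estimates typically come out L-shaped.

def simulate_r2(n_qtl=50, n_ind=100, h2=0.3, rng=np.random.default_rng()):
    geno = rng.binomial(2, 0.5, size=(n_ind, n_qtl)).astype(float)   # genotype counts 0/1/2
    g = geno.sum(axis=1)                                             # equal additive effects
    e = rng.normal(scale=np.sqrt(g.var() * (1 - h2) / h2), size=n_ind)
    pheno = g + e
    r2 = [np.corrcoef(geno[:, j], pheno)[0, 1] ** 2 for j in range(n_qtl)]
    return np.sort(r2)[::-1]                                         # steeply decreasing in practice
```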

