Parameter Estimation in Modeling of Photovoltaic Panels Based on Datasheet Values

2013 ◽  
Vol 136 (2) ◽  
Author(s):  
Hayrettin Can ◽  
Damla Ickilli

The increasing demand for renewable energy sources in recent years has triggered technological advancements in photovoltaic (PV) panels. The widespread use of large-scale PV units has revealed the sensitivity needed in PV panel modeling to estimate the amount of energy produced under different environmental conditions. To form a PV panel model, parameters must be obtained by numerically solving characteristic equations of a transcendental nature. This study used the Newton–Raphson (NR) method owing to the suitability of the equation structure. It is crucial that the numerical solution start from proper initial values. This study proposes a new approach to identifying initial values in order to decrease calculation time and speed up numerical convergence. The proposed method was applied to parameter estimation for different panel models, and it was observed that, owing to this method, the system converged in fewer iterations and the problem of failing to solve the system because of inappropriate initial values was eliminated. Convergence was obtained, with fewer iterations, in all models.
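The characteristic equation of the common single-diode PV model is transcendental in the current, which is why an iterative scheme such as Newton–Raphson is needed. The sketch below solves it for the current at a given voltage; all parameter values (photocurrent, saturation current, resistances, modified ideality factor) are illustrative placeholders, not values from the paper, and the short-circuit current is used as a simple initial guess rather than the paper's proposed initialization.

```python
import math

def pv_current(V, Iph=8.0, I0=1e-9, Rs=0.2, Rsh=300.0, a=1.3 * 0.0257 * 60):
    """Solve the implicit single-diode equation
        I = Iph - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh
    for I at a given terminal voltage V, via Newton-Raphson.
    All default parameter values are illustrative, not from the paper."""
    I = Iph  # initial guess: the photocurrent (near the short-circuit current)
    for _ in range(50):
        e = math.exp((V + I * Rs) / a)
        f = Iph - I0 * (e - 1.0) - (V + I * Rs) / Rsh - I   # residual f(I)
        df = -I0 * Rs / a * e - Rs / Rsh - 1.0              # derivative df/dI
        step = f / df
        I -= step
        if abs(step) < 1e-12:  # converged
            break
    return I
```

With a well-chosen starting point the iteration typically converges in a handful of steps; a poor starting point can overflow the exponential, which is the failure mode the paper's initialization approach targets.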

Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4638
Author(s):  
Simon Pratschner ◽  
Pavel Skopec ◽  
Jan Hrdlicka ◽  
Franz Winter

There is no alternative to a revolution of the global energy industry if the climate crisis is to be solved. However, renewable energy sources typically show significant seasonal and daily fluctuations. This paper provides a system concept model of a decentralized power-to-green-methanol plant consisting of a biomass heating plant with a thermal input of 20 MWth (oxyfuel or air mode), a CO2 processing unit (DeOxo reactor or MEA absorption), an alkaline electrolyzer, a methanol synthesis unit, an air separation unit and a wind park. Applying oxyfuel combustion has the potential to directly utilize the O2 generated by the electrolyzer, which was analyzed by varying critical model parameters. A major objective was to determine whether applying oxyfuel combustion has a positive impact on the plant’s power-to-liquid (PtL) efficiency rate. For cases utilizing more than 70% of the CO2 generated by the combustion, the oxyfuel O2 demand is fully covered by the electrolyzer, making oxyfuel a viable option for large-scale applications. Conventional air combustion is recommended for small wind parks and scenarios using surplus electricity. Maximum PtL efficiencies of ηPtL,Oxy = 51.91% and ηPtL,Air = 54.21% can be realized. Additionally, a case study for one year of operation has been conducted, yielding an annual output of about 17,000 t/a methanol and 100 GWhth/a thermal energy for an input of 50,500 t/a woodchips and a wind park size of 36 MWp.
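A power-to-liquid efficiency of the kind quoted above is, in essence, the chemical energy of the methanol product divided by the total energy fed into the system. The one-liner below illustrates that ratio; the paper's exact system boundary (which inputs are counted, LHV vs. HHV basis) is not reproduced here, and the function name and arguments are assumptions for illustration only.

```python
def ptl_efficiency(p_methanol_lhv_mw, p_el_mw, p_biomass_mw):
    """Illustrative power-to-liquid efficiency: chemical energy in the
    methanol product (LHV basis) divided by total electric plus biomass
    energy input. A hedged sketch, not the paper's exact definition."""
    return p_methanol_lhv_mw / (p_el_mw + p_biomass_mw)
```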


2020 ◽  
Author(s):  
Ryma Aissat ◽  
Alexandre Pryet ◽  
Marc Saltel ◽  
Alain Dupuy

Large-scale, physically-based groundwater models have been used for many years for water resources management and decision support. Improving the accuracy and reliability of these models is a constant objective. The characterization of model parameters, in particular hydraulic properties, which are spatially heterogeneous, is a challenge. Parameter estimation algorithms can now manage numerous model runs in parallel, but the operation remains, in practice, largely constrained by the computational burden. A large-scale model of the sedimentary, multilayered aquifer system of North Aquitania (MONA), in South-West France, developed by the French Geological Survey (BRGM), is used here to illustrate the case. We focus on the estimation of distributed parameters and investigate the optimum parameterization given the level of spatial heterogeneity we aim to characterize, the available observations, the model run time, and the computational resources. Hydraulic properties are estimated with pilot points. Interpolation is conducted by kriging; the variogram range and pilot-point density are set according to the modeling purposes and a series of constraints. Popular gradient-based parameter estimation methods such as the Gauss–Marquardt–Levenberg algorithm (GLMA) are conditioned by the integrity of the Jacobian matrix. We investigate the trade-off between strict convergence criteria, which ensure a better integrity of the derivatives, and loose convergence criteria, which reduce computation time. The results obtained with the classical method (GLMA) are compared with the results of an emerging method, the Iterative Ensemble Smoother (IES). Some guidelines are eventually provided for parameter estimation in large-scale, multilayered groundwater models.
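The core of the gradient-based iteration the abstract refers to is the damped normal-equations update: each iteration solves (JᵀJ + λI)Δp = Jᵀr, where J is the (often finite-difference) Jacobian whose integrity depends on the forward model's convergence criteria. A minimal sketch of one such step, with illustrative names:

```python
import numpy as np

def glma_step(jac, resid, lam):
    """One Gauss-Marquardt-Levenberg update: solve (J^T J + lam*I) dp = J^T r.
    jac   : (n_obs, n_params) Jacobian of simulated values w.r.t. parameters
    resid : (n_obs,) residual vector (observed minus simulated)
    lam   : Marquardt damping factor (lam=0 recovers plain Gauss-Newton)."""
    JtJ = jac.T @ jac
    rhs = jac.T @ resid
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), rhs)
```

A noisy Jacobian (from loose forward-model convergence) corrupts JtJ and rhs alike, which is exactly the trade-off the study quantifies.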


2018 ◽  
Vol 861 ◽  
pp. 886-900 ◽  
Author(s):  
Kristy L. Schlueter-Kuck ◽  
John O. Dabiri

Lagrangian data assimilation is a complex problem in oceanic and atmospheric modelling. Tracking drifters in large-scale geophysical flows can involve uncertainty in drifter location, complex inertial effects and other factors which make comparing them to simulated Lagrangian trajectories from numerical models extremely challenging. Temporal and spatial discretisation, necessary in modelling large-scale flows, also contribute to the separation between real and simulated drifter trajectories. The chaotic advection inherent in these turbulent flows tends to separate even closely spaced tracer particles, making error metrics based solely on drifter displacements unsuitable for estimating model parameters. We propose to instead use error in the coherent structure colouring (CSC) field to assess model skill. The CSC field provides a spatial representation of the underlying coherent patterns in the flow, and we show that it is a more robust metric for assessing model accuracy. Through the use of two test cases, one considering spatial uncertainty in particle initialisation, and one examining the influence of stochastic error along a trajectory and temporal discretisation, we show that error in the coherent structure colouring field can be used to accurately determine single or multiple simultaneously unknown model parameters, whereas a conventional error metric based on error in drifter displacement fails. Because the CSC field enhances the difference in error between correct and incorrect model parameters, error minima in model parameter sweeps become more distinct. The effectiveness and robustness of this method for single and multi-parameter estimation in analytical flows suggest that Lagrangian data assimilation for real oceanic and atmospheric models would benefit from a similar approach.
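The parameter-sweep idea reduces to: for each candidate parameter value, simulate, compute the CSC field, and keep the value whose field is closest to the observed one. The sketch below shows that outer loop only; `simulate_field` is a hypothetical stand-in for the user's flow model plus CSC computation, which the abstract does not detail.

```python
def best_parameter(param_values, observed_field, simulate_field):
    """Sweep candidate parameter values and return the one whose simulated
    field is closest (sum of squared differences) to the observed field.
    simulate_field(p) is a placeholder for model run + CSC field extraction."""
    def err(p):
        sim = simulate_field(p)
        return sum((s - o) ** 2 for s, o in zip(sim, observed_field))
    return min(param_values, key=err)
```

The paper's point is that when the fields are CSC fields rather than raw drifter displacements, the minimum of `err` over the sweep is sharper and lands on the true parameters.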


2021 ◽  
Author(s):  
Kathrin Menberg ◽  
Asal Bidarmaghz ◽  
Alastair Gregory ◽  
Ruchi Choudhary ◽  
Mark Girolami

The increased use of the urban subsurface for multiple purposes, such as anthropogenic infrastructure and geothermal energy applications, leads to an urgent need for large-scale, sophisticated modelling approaches for coupled mass and heat transfer. However, such models are subject to large uncertainties in the model parameters, in the physical model itself, and in the available measured data, which are often scarce. Thus, the robustness and reliability of the computer model and its outcomes largely depend on successful parameter estimation and model calibration, which are often hampered by the computational burden of large-scale coupled models.

To tackle this problem, we present a novel Bayesian approach for parameter estimation that accounts for different sources of uncertainty, is capable of dealing with sparse field data, and makes optimal use of the output data from computationally expensive numerical model runs. This is achieved by combining output data from different models that represent the same physical problem but at different levels of fidelity, e.g. at different spatial resolutions, i.e. with different model discretizations. Our framework combines a few parametric model outputs from a physically accurate but expensive high-fidelity computer model with a larger number of evaluations from a less expensive and less accurate low-fidelity model. This enables us to include accurate information about the model output at sparse points in the parameter space, as well as dense samples across the entire parameter space, albeit with lower physical accuracy.

We first apply the multi-fidelity approach to a simple 1D analytical heat transfer model, and then to a semi-3D coupled mass and heat transport numerical model, and estimate the unknown model parameters. By using synthetic data generated with known parameter values, we are able to test the reliability of the new method, as well as its improved performance over a single-fidelity approach, under different framework settings. Overall, the results from the analytical and numerical models show that combining 50 runs of the low-resolution model with data from only 10 runs of a higher-resolution model significantly improves the posterior distributions, both in terms of agreement with the true parameter values and the confidence intervals around them. The next steps for further testing of the method are employing real data from field measurements and adding statistical formulations for model calibration and prediction based on the inferred posterior distributions of the estimated parameters.
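The many-cheap-runs-plus-few-expensive-runs idea can be illustrated with a control-variate style estimator: the low-fidelity ensemble supplies a cheap estimate, and a few paired high/low-fidelity runs estimate and remove its bias. This is a schematic of the multi-fidelity principle, not the paper's Bayesian framework, and all names are illustrative.

```python
def multifidelity_mean(lo_all, lo_paired, hi_paired):
    """Bias-corrected multi-fidelity estimate of a model output:
    lo_all    : many cheap low-fidelity evaluations
    lo_paired : low-fidelity outputs at the few points where the
                high-fidelity model was also run
    hi_paired : the matching high-fidelity outputs
    The paired runs estimate the low-fidelity bias, which corrects
    the mean of the large cheap ensemble."""
    bias = sum(h - l for h, l in zip(hi_paired, lo_paired)) / len(hi_paired)
    return sum(lo_all) / len(lo_all) + bias
```

The same structure (dense cheap coverage of parameter space, sparse accurate anchoring) underlies the 50-low/10-high split reported in the abstract.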


Author(s):  
V. Skibchyk ◽  
V. Dnes ◽  
R. Kudrynetskyi ◽  
O. Krypuch

Annotation. Purpose. To increase the efficiency of the grain-harvesting processes of large-scale agricultural producers through the rational use of the combine harvesters available on the farm. Methods. In the course of the research, the methods of system analysis and synthesis, induction and deduction, system-factor and system-event approaches, and the graphic method were used. Results. Characteristic events that occur during the harvesting of grain crops, both within a single production unit and across the entire agricultural enterprise, are identified. A method for predicting the time intervals of use and downtime of the combine harvesters of production units has been developed. A roadmap for substantiating a rational seasonal scenario for the use of the combine harvesters of large-scale agricultural producers is developed; it allows estimating the efficiency of each scenario of multivariate placement of combine harvesters on fields, taking into account the influence of natural-production and agrometeorological factors on harvesting efficiency. Conclusions. 1. Known scientific and methodological approaches to the optimization of machine use in agriculture take into account neither the risk of crop losses due to late harvesting nor the seasonal natural and agrometeorological conditions of each production unit of the farm, which calls for a new approach to the rational seasonal use of the combines of large agricultural producers. 2. The developed approach to substantiating a rational seasonal scenario for the use of the combine harvesters of large-scale agricultural producers takes into account both the cost of harvesting the grain and the cost of the crop lost through late harvesting under optimal variants of attracting additional free combine harvesters, and thus provides more profit. 3. The practical application of the developed roadmap will allow large-scale agricultural producers to use combine harvesters more efficiently and to reduce harvesting costs. Keywords: combine harvesters, use, production divisions, risk, seasonal scenario, large-scale agricultural producers.


Author(s):  
S. Pragati ◽  
S. Kuldeep ◽  
S. Ashok ◽  
M. Satheesh

A central challenge in the treatment of disease is the delivery of an efficacious concentration of medication to the site of action in a controlled and continual manner. Nanoparticles represent an important particulate carrier system developed accordingly. Nanoparticles are solid colloidal particles ranging in size from 1 to 1000 nm and composed of macromolecular material; they may be polymeric or lipidic (SLNs). Industry estimates suggest that approximately 40% of lipophilic drug candidates fail due to solubility and formulation stability issues, prompting significant research activity in advanced delivery technologies for lipophilic drugs. Solid lipid nanoparticle technology represents a promising new approach to lipophilic drug delivery, and solid lipid nanoparticles (SLNs) are an important advancement in this area. The bioacceptable and biodegradable nature of SLNs makes them less toxic than polymeric nanoparticles. Their small size, which prolongs circulation time in blood, the feasibility of scale-up to large-scale production, and the absence of a burst effect make them interesting candidates for study. In the present review, this approach is discussed in terms of preparation, advantages, characterization and special features.


Author(s):  
M. E. J. Newman ◽  
R. G. Palmer

Developed after a meeting at the Santa Fe Institute on extinction modeling, this book comments critically on the various modeling approaches. In the last decade or so, scientists have started to examine a new approach to the patterns of evolution and extinction in the fossil record. This approach may be called "statistical paleontology," since it looks at large-scale patterns in the record and attempts to understand and model their average statistical features, rather than their detailed structure. Examples of the patterns these studies examine are the distribution of the sizes of mass extinction events over time, the distribution of species lifetimes, or the apparent increase in the number of species alive over the last half a billion years. In attempting to model these patterns, researchers have drawn on ideas not only from paleontology, but from evolutionary biology, ecology, physics, and applied mathematics, including fitness landscapes, competitive exclusion, interaction matrices, and self-organized criticality. A self-contained review of work in this field.


2021 ◽  
Vol 11 (10) ◽  
pp. 4575
Author(s):  
Eduardo Fernández ◽  
Nelson Rangel-Valdez ◽  
Laura Cruz-Reyes ◽  
Claudia Gomez-Santillan

This paper addresses group multi-objective optimization from a new perspective. For each point in the feasible decision set, satisfaction or dissatisfaction of each group member is determined by a multi-criteria ordinal classification approach, based on comparing solutions with a limiting boundary between the classes “unsatisfactory” and “satisfactory”. The whole group's satisfaction can be maximized by finding solutions as close as possible to the ideal consensus. The group moderator is in charge of making the final decision, finding the best compromise between collective satisfaction and dissatisfaction. Imperfect information on the values of objective functions, required and available resources, and decision model parameters is handled by using interval numbers. Two different kinds of multi-criteria decision models are considered: (i) an interval outranking approach and (ii) an interval weighted-sum value function. The proposal is more general than other approaches to group multi-objective optimization since (a) some (even all) objective values may not be the same for different DMs; (b) each group member may consider their own set of objective functions and constraints; (c) objective values may be imprecise or uncertain; (d) imperfect information on resource availability and requirements may be handled; (e) each group member may have their own perception of the availability of resources and the requirement of resources per activity. An important application of the new approach is collective multi-objective project portfolio optimization. This is illustrated by solving a real-sized group many-objective project portfolio optimization problem using evolutionary computation tools.
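The interval weighted-sum value function mentioned in (ii) rests on elementary interval arithmetic: an imprecise objective value is a closed interval, and a nonnegative-weighted sum of intervals is again an interval. A minimal sketch of that aggregation, with illustrative class and function names (the paper's outranking machinery is not reproduced here):

```python
class Interval:
    """Closed interval [lo, hi] representing an imprecise objective value."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # interval addition: endpoints add
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def scale(self, w):
        # scaling by a nonnegative weight keeps endpoint order
        return Interval(w * self.lo, w * self.hi)

def interval_weighted_sum(weights, objectives):
    """Aggregate interval-valued objectives under nonnegative weights."""
    total = Interval(0.0, 0.0)
    for w, obj in zip(weights, objectives):
        total = total + obj.scale(w)
    return total
```

Comparing two such aggregated intervals (e.g. by a credibility index that one exceeds the other) is what turns this into an ordinal classification of solutions as satisfactory or unsatisfactory.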


Energies ◽  
2020 ◽  
Vol 14 (1) ◽  
pp. 176
Author(s):  
Iñigo Aramendia ◽  
Unai Fernandez-Gamiz ◽  
Adrian Martinez-San-Vicente ◽  
Ekaitz Zulueta ◽  
Jose Manuel Lopez-Guede

Large-scale energy storage systems (ESS) are nowadays growing in popularity due to the increase in energy production by renewable energy sources, which in general have a randomly intermittent nature. Currently, several redox flow batteries have been presented as alternatives to classical ESS; the scalability, design flexibility and long life cycle of the vanadium redox flow battery (VRFB) have made it stand out. In a VRFB cell, which consists of two electrodes and an ion exchange membrane, the electrolyte flows through the electrodes, where the electrochemical reactions take place. Computational Fluid Dynamics (CFD) simulations are a very powerful tool for developing feasible numerical models to enhance the performance and lifetime of VRFBs. This review aims to present and discuss the numerical models developed in this field and, particularly, to analyze the different types of flow fields and patterns that can be found in the literature. The numerical studies presented in this review are a helpful tool for evaluating several key parameters important for optimizing energy systems based on redox flow technologies.

