Effect of Turbulence Variation on Extreme Loads Prediction for Wind Turbines

2002 ◽  
Vol 124 (4) ◽  
pp. 387-395 ◽  
Author(s):  
Patrick J. Moriarty ◽  
William E. Holley ◽  
Sandy Butterfield

The effect of varying turbulence levels on long-term loads extrapolation techniques was examined using a joint probability density function of both mean wind speed and turbulence level for the loads calculations. The turbulence level has a dramatic effect on the statistics of moment maxima extracted from aeroelastic simulations: maxima from simulations at lower turbulence levels are largely deterministic, and the stochastic component dominates as the turbulence level increases. Short-term probability distributions were calculated using four different moment-based fitting methods, and several hundred of these distributions were used to calculate a long-term probability function, from which 1- and 50-year extreme loads were estimated. As an alternative, a normal conditional distribution of turbulence level produced a long-term load comparable to that of a log-normal conditional distribution and may be more straightforward to implement. A parametric model of the moments was also used to estimate the extreme loads; it required less data but predicted significantly lower loads than the empirical model. An input extrapolation technique was also examined: extrapolating the turbulence level before it is input to the aeroelastic code simplifies the loads extrapolation procedure but, in this case, produced loads lower than most of the empirical models and may be non-conservative in general.
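
As a rough illustration of the extrapolation procedure the abstract describes, the sketch below integrates a short-term distribution of moment maxima over a joint density of mean wind speed and turbulence level to estimate a 50-year load. The Gumbel short-term fit, the Rayleigh/log-normal wind model, and every coefficient are hypothetical stand-ins, not values from the paper.

```python
import numpy as np
from scipy import stats

# Grid over mean wind speed V [m/s] and turbulence level I, taken here as
# the standard deviation of wind speed [m/s].
v = np.linspace(3.0, 25.0, 60)
i = np.linspace(0.2, 4.0, 60)
V, I = np.meshgrid(v, i, indexing="ij")
dv, di = v[1] - v[0], i[1] - i[0]

# Joint pdf f(V, I) = f(V) f(I | V): Rayleigh mean wind speeds with a
# log-normal conditional turbulence distribution (one of the conditional
# forms compared in the abstract); all parameter values are illustrative.
f_v = stats.rayleigh.pdf(v, scale=8.0)
f_i_given_v = stats.lognorm.pdf(I, s=0.3, scale=0.1 * V + 0.5)
f_joint = f_v[:, None] * f_i_given_v
f_joint /= f_joint.sum() * dv * di  # renormalize over the truncated grid

# Short-term CDF of the 10-min moment maximum given (V, I): a Gumbel whose
# location and scale grow with wind speed and turbulence, standing in for
# the moment-based fits of simulated maxima (coefficients hypothetical).
def short_term_cdf(load):
    loc = 1000.0 + 150.0 * V + 400.0 * I
    scale = 50.0 + 80.0 * I
    return np.exp(-np.exp(-(load - loc) / scale))

# Long-term CDF: integrate the short-term CDF against the joint pdf.
def long_term_cdf(load):
    return np.sum(short_term_cdf(load) * f_joint) * dv * di

# 50-year load: per-period exceedance probability of 1/N, where N is the
# number of 10-minute periods in 50 years.
N = 50 * 365.25 * 24 * 6
loads = np.linspace(2000.0, 20000.0, 2000)
lt = np.array([long_term_cdf(x) for x in loads])
print("50-year extreme load ~", loads[np.searchsorted(lt, 1.0 - 1.0 / N)])
```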


1991 ◽  
Vol 24 (6) ◽  
pp. 25-33
Author(s):  
A. J. Jakeman ◽  
P. G. Whitehead ◽  
A. Robson ◽  
J. A. Taylor ◽  
J. Bai

The paper illustrates an analysis of the assumptions behind the statistical component of a hybrid modelling approach for predicting environmental extremes, showing how to assess the applicability of the approach to water quality problems. The analysis uses data on stream acidity from the Birkenes catchment in Norway. The modelling approach is hybrid in that it uses: (1) a deterministic or process-based description to simulate (non-stationary) long-term trend values of environmental variables, and (2) probability distributions superimposed on the trend values to characterise the frequency of shorter-term concentrations. This permits assessment of management strategies and of sensitivity to climate variables by adjusting the values of the major forcing variables in the trend model. Knowledge of the variability about the trend is provided by: (a) identification of an appropriate parametric form for the probability density function (pdf) of the environmental attribute (e.g. stream acidity variables) whose extremes are of interest, and (b) estimation of the pdf parameters using the output of the trend model.
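
A minimal sketch of the hybrid idea, assuming a gamma form for the variability about the trend (the identification step the abstract describes would select the appropriate parametric form) and a synthetic trend series in place of the process-based model output; all values are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# (1) Deterministic trend: a synthetic stand-in for the process-based
# model's long-term trend of a stream-acidity variable under a forcing
# scenario (arbitrary units).
t = np.arange(3650)                      # days
forcing = 1.0 + 1e-4 * t                 # slowly changing driver
trend = 20.0 + 5.0 * forcing             # trend-model output

# Synthetic observations scattering about the trend.
obs = trend + stats.gamma.rvs(a=2.0, scale=1.5, size=t.size,
                              random_state=rng) - 3.0

# (2) Estimate a parametric pdf for the variability about the trend; a
# gamma form is assumed here, shifted so its support covers the observed
# deviations.
resid = obs - trend
shift = resid.min() - 1e-6
a, loc, scale = stats.gamma.fit(resid - shift, floc=0.0)

# Frequency of short-term extremes: P(value > threshold) on any given day
# is read from the fitted pdf superimposed on that day's trend value.
threshold = 30.0
p_exceed = stats.gamma.sf(threshold - trend - shift, a, loc=loc, scale=scale)
print("exceedance probability, day 0 vs day 3649:", p_exceed[0], p_exceed[-1])
```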


Author(s):  
Neil Bates ◽  
David Lee ◽  
Clifford Maier

This paper describes case studies involving in-line crack-detection inspections and the fitness-for-service assessments that were performed on the inspection data. The assessments evaluated the immediate integrity of the pipeline based on the reported features, and the long-term integrity of the pipeline based on excavation data and probabilistic SCC and fatigue crack growth simulations. Two case studies are analyzed, illustrating how data from an ultrasonic crack tool inspection were used to assess threats such as low-frequency electrical resistance weld seam defects and stress corrosion cracking. Specific issues, such as probability of detection/identification and the length/depth accuracy of the tool, were evaluated to determine the suitability of the tool for accurately classifying and sizing different types of defects. The long-term assessment is based on the Monte Carlo method [1], in which the material properties, pipeline details, crack growth parameters, and feature dimensions are randomly sampled from specified probability distributions to determine the probability of failure versus time for the pipeline segment. The distributions of unreported crack-related features from the excavation program are used to distribute unreported features along the pipeline. Crack growth by fatigue, SCC, or a combination of the two is simulated until failure by either leak or rupture is predicted. The probability of failure is calculated by running a number of crack growth simulations for each of the reported and unreported features and tallying their respective remaining lives. The results of the probabilistic analysis were used to determine the most effective and economical means of remediation by identifying the areas or crack mechanisms that contribute most to the probability of failure.
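
A schematic Monte Carlo sketch of such a long-term assessment: inputs are sampled from assumed distributions, cracks grow by a Paris-law-type fatigue term, and remaining lives are tallied into a probability of failure versus time. All distributions, constants, and the 80%-of-wall leak criterion below are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, years = 20000, 50

# Randomly sampled inputs: illustrative stand-ins for the specified
# material-property, pipe, and feature distributions used in the paper.
depth = rng.lognormal(np.log(1.0), 0.4, n_sims)   # initial crack depth [mm]
wall = rng.normal(7.9, 0.2, n_sims)               # wall thickness [mm]
C = rng.lognormal(np.log(2e-4), 0.3, n_sims)      # growth coefficient
dK = rng.normal(8.0, 1.0, n_sims)                 # stress-intensity range proxy
m = 3.0                                           # growth-law exponent

crit = 0.8 * wall                 # assumed leak criterion: 80% of wall
fail_year = np.full(n_sims, np.inf)

a = depth.copy()
for year in range(1, years + 1):
    a += C * dK**m                # Paris-law-type growth per year
    newly = (a >= crit) & np.isinf(fail_year)
    fail_year[newly] = year

# Probability of failure versus time, tallied over all simulations.
pof = [(fail_year <= y).mean() for y in range(1, years + 1)]
print("POF at 10/25/50 years:", pof[9], pof[24], pof[49])
```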


2016 ◽  
Vol 156 (3) ◽  
pp. 577-585 ◽  
Author(s):  
B. Cabarrou ◽  
L. Belin ◽  
S. M. Somda ◽  
M. C. Falcou ◽  
J. Y. Pierga ◽  
...  

2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Kelin Lu ◽  
K. C. Chang ◽  
Rui Zhou

This paper addresses the problem of distributed fusion when the conditional independence assumptions on sensor measurements or local estimates are not met. A new data fusion algorithm called Copula fusion is presented. The proposed method is grounded in Copula statistical modeling and Bayesian analysis. The primary advantage of the Copula-based methodology is that it can capture unknown correlations, allowing one to build joint probability distributions with potentially arbitrary underlying marginals and a desired intermodal dependence. The proposed fusion algorithm requires no a priori knowledge of communication patterns or network connectivity. Simulation results show that Copula fusion yields a consistent estimate over a wide range of process noises.
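
A minimal sketch of the Copula idea, assuming a Gaussian copula (one common choice; the paper's specific construction may differ): two arbitrary marginals are tied together with a chosen correlation, giving a joint distribution that preserves each marginal exactly while injecting the desired dependence.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Two local estimates with different (arbitrary) error marginals:
# Student-t for sensor A, zero-mean shifted gamma for sensor B.
marg_a = stats.t(df=4, scale=1.2)
marg_b = stats.gamma(a=3.0, scale=0.8, loc=-2.4)

# A Gaussian copula with correlation rho models the dependence that a
# conditional-independence assumption would ignore.
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])

# Sampling the joint: correlated normals -> uniforms -> inverse marginals.
z = rng.multivariate_normal(np.zeros(2), cov, size=100_000)
u = stats.norm.cdf(z)
x_a, x_b = marg_a.ppf(u[:, 0]), marg_b.ppf(u[:, 1])
print("sample correlation:", np.corrcoef(x_a, x_b)[0, 1])

# Joint density = copula density times the marginal pdfs, so each marginal
# is preserved while the dependence enters through the copula factor.
def joint_pdf(xa, xb):
    za = stats.norm.ppf(marg_a.cdf(xa))
    zb = stats.norm.ppf(marg_b.cdf(xb))
    det = 1.0 - rho**2
    c = np.exp(-(rho**2 * (za**2 + zb**2) - 2 * rho * za * zb)
               / (2 * det)) / np.sqrt(det)
    return c * marg_a.pdf(xa) * marg_b.pdf(xb)

print("joint pdf at (0, 0):", joint_pdf(0.0, 0.0))
```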

