Operating Limits
Recently Published Documents


TOTAL DOCUMENTS: 225 (FIVE YEARS: 54)

H-INDEX: 20 (FIVE YEARS: 3)

Author(s):  
Mario A. Rios ◽  
Maria F. Perez

Planning of high voltage direct current (HVDC) grids requires the inclusion of reliability assessment of the alternatives under study. This paper proposes a methodology to evaluate the adequacy of voltage source converter (VSC) based HVDC networks. The methodology analyses the performance of the system under N-1 and N-2 contingencies in order to detect weaknesses in the DC network, and it evaluates two types of remedial actions to keep the entire system within acceptable operating limits. The remedial actions, applied whenever these limits are violated on the DC system, comprise topology changes in the network and adjustments of the power settings of the VSC converter stations. The CIGRE B4 DC grid test system is used to evaluate the reliability/adequacy performance by means of the proposed methodology. The proposed remedial actions are effective for all contingencies, and the numerical results match expectations. This work is useful for the planning and operation of grids based on VSC-HVDC technology.
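A minimal sketch (not the authors' implementation) of the contingency-screening loop described above: enumerate N-1 and N-2 outages of DC branches, run a placeholder DC power flow, and flag limit violations that would trigger the remedial actions. The grid data, the `dc_power_flow` solver, and the limit values are hypothetical.

```python
from itertools import combinations

# Hypothetical per-unit operating limits for the DC grid (illustrative values).
V_MIN, V_MAX = 0.95, 1.05   # DC bus voltage limits [p.u.]
FLOW_MAX = 1.0              # branch loading limit [p.u. of rating]

def dc_power_flow(branches_in_service, converters):
    """Placeholder: solve the DC network and return (bus voltages, branch loadings).
    A real study would call a dedicated DC load-flow tool here."""
    raise NotImplementedError

def screen_contingencies(branches, converters, order=2):
    """Enumerate N-1 and N-2 branch outages and report operating-limit violations."""
    violations = []
    for k in range(1, order + 1):
        for outage in combinations(branches, k):
            in_service = [b for b in branches if b not in outage]
            voltages, flows = dc_power_flow(in_service, converters)
            bad_v = {bus: v for bus, v in voltages.items() if not V_MIN <= v <= V_MAX}
            bad_f = {br: f for br, f in flows.items() if abs(f) > FLOW_MAX}
            if bad_v or bad_f:
                # Candidate for remedial action: topology change or adjustment
                # of VSC converter power set-points.
                violations.append((outage, bad_v, bad_f))
    return violations
```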


2021 ◽  
Author(s):  
Sean J. Dee ◽  
Russell A. Ogle ◽  
Matthew S. Walters

2021 ◽  
Vol 47 (11) ◽  
pp. 1119-1127
Author(s):  
S. Yu. Medvedev ◽  
A. A. Martynov ◽  
S. V. Konovalov ◽  
V. M. Leonov ◽  
V. E. Lukash ◽  
...  

Abstract Studying stationary regimes with high plasma confinement in the tokamak with reactor technologies (TRT) [1] involves calculating the plasma stability with account for the influence of the current density and pressure gradient profiles in the pedestal near the boundary. The operating limits are determined by the pedestal parameters, which in particular are set by the stability limit of the peeling–ballooning modes that trigger the peripheral relaxations known as edge localized modes (ELMs). Using simulations of the quasi-equilibrium plasma evolution with the ASTRA and DINA codes, together with a simulator of magnetohydrodynamic (MHD) modes localized at the boundary of the plasma torus based on the KINX code, stability calculations are performed for different TRT plasma scenarios with varying plasma density and temperature profiles and the corresponding bootstrap current density in the pedestal region; experimental scalings are used for the pedestal width. The obtained pressure values are below the limits for an ITER-like plasma because of the lower triangularity and higher aspect ratio of the TRT plasma. For the same reason, the reversal of the magnetic shear in the pedestal occurs at a lower current density, which causes instability of modes with low toroidal wave numbers and reduces the effect of diamagnetic stabilization.


2021 ◽  
Author(s):  
Alison Wells ◽  
Chad L. Pope

Traditional component pass/fail design analysis and testing protocols drive excessively conservative operating limits and setpoints as well as unnecessarily large margins of safety. Component performance testing coupled with failure probability model development can support the selection of more flexible operating limits and setpoints as well as a softening of defense-in-depth elements. This chapter discusses the process of Bayesian regression fragility model development using Markov chain Monte Carlo methods and model checking protocol using three types of Bayesian p-values. The chapter also discusses the application of the model development and testing techniques through component flooding performance experiments in which industrial steel doors are subjected to a rising-water scenario. These component tests yield the data necessary for fragility model development while providing insight into the development of testing protocols that will yield meaningful data for such models. Finally, the chapter discusses the development and selection of a fragility model for industrial steel door performance when subjected to a water-rising scenario.
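As an illustration of the workflow described here (not the chapter's code), the sketch below fits a logistic fragility curve, failure probability versus water depth, with a simple random-walk Metropolis sampler and computes one posterior-predictive (Bayesian) p-value; the pass/fail data, priors, and tuning constants are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pass/fail data: water depth (inches) and observed leakage failure (1/0).
depth = np.array([4., 6., 8., 10., 12., 14., 16., 18.])
failed = np.array([0, 0, 1, 0, 1, 1, 1, 1])

def log_posterior(theta):
    """Log-posterior for P(fail | d) = expit(a + b*d) with weak normal priors."""
    a, b = theta
    logit = a + b * depth
    loglik = np.sum(failed * logit - np.log1p(np.exp(logit)))
    logprior = -0.5 * (a**2 / 10.0**2 + b**2 / 2.0**2)
    return loglik + logprior

# Random-walk Metropolis sampler.
n_iter, step = 20000, np.array([0.5, 0.1])
chain = np.empty((n_iter, 2))
theta = np.array([0.0, 0.0])
lp = log_posterior(theta)
for i in range(n_iter):
    prop = theta + step * rng.standard_normal(2)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain[i] = theta
post = chain[5000:]  # discard burn-in

# Posterior-predictive p-value with total number of failures as the test statistic.
def simulate_failures(a, b):
    p = 1.0 / (1.0 + np.exp(-(a + b * depth)))
    return rng.binomial(1, p).sum()

t_obs = failed.sum()
t_rep = np.array([simulate_failures(a, b) for a, b in post[::20]])
p_value = np.mean(t_rep >= t_obs)
print(f"posterior mean (a, b): {post.mean(axis=0)}, Bayesian p-value: {p_value:.2f}")
```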


2021 ◽  
Author(s):  
David Hodapp ◽  
Stephan den Breejen ◽  
Tomasz Pniewski ◽  
Hai Ming Wang ◽  
Zhen Lin

Abstract A critical element in heavy transport design is the identification of design wave conditions. Since most transports are one-of-a-kind, statistically meaningful comparisons of observed versus design conditions are nonexistent. The present paper examines the experience from a recent oil and gas giga-project, encompassing 56 replicate voyages from Korea to the Suez Canal. In doing so, it provides an anchor point for assessing the real-world likelihood of exceeding design wave conditions during heavy transport. The voyage-maximum wave conditions from the 56 replicate voyages are found to closely follow a Weibull distribution, allowing ready evaluation of the observed 1-in-N voyage extremes. These observed wave conditions are compared with the corresponding design values on both a year-round and a seasonal (3-month) basis. Three important observations are drawn from these comparisons. First, operating limits established by heavy transport contractors to avoid waves above a predetermined threshold do not eliminate the need to design for higher wave conditions: over the 56 replicate voyages studied, observed wave conditions slightly exceeded the contractor's self-imposed operating limit (by approximately 10% or less) on five separate voyages, and on a sixth voyage this limit was exceeded by approximately 40%. Second, simplified tools for evaluating design wave conditions using Global Wave Statistics do not consistently estimate the 1-in-10 voyage extreme; while the simplified approach is shown to be conservative for the route studied, the associated design margin varies considerably throughout the year. Third, SafeTrans voyage simulations are observed to predict the 1-in-10 voyage extreme well for the route studied.
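The Weibull treatment of voyage-maximum wave conditions can be reproduced in outline as follows; the sample of voyage maxima here is synthetic, and the 1-in-N voyage extreme is read off the fitted distribution exactly as described above.

```python
import numpy as np
from scipy import stats

# Hypothetical voyage-maximum significant wave heights Hs [m] from replicate voyages
# (synthetic stand-in for the observed values).
rng = np.random.default_rng(1)
hs_max = rng.weibull(2.0, size=56) * 3.5 + 1.0

# Fit a three-parameter Weibull (shape c, location, scale) to the voyage maxima.
c, loc, scale = stats.weibull_min.fit(hs_max)

# 1-in-N voyage extreme: the Hs exceeded on average once every N voyages,
# i.e. the (1 - 1/N) quantile of the voyage-maximum distribution.
N = 10
hs_1_in_N = stats.weibull_min.ppf(1.0 - 1.0 / N, c, loc=loc, scale=scale)
print(f"fitted Weibull shape = {c:.2f}, 1-in-{N} voyage extreme Hs ≈ {hs_1_in_N:.2f} m")
```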


2021 ◽  
Author(s):  
Michael Swindeman ◽  
Erik J. Pavlina ◽  
Jorge Perdomo

Abstract Establishing safe operating limits for equipment in hydrogen service remains a concern for the petrochemical industry. A methodology to prioritize equipment, inform future inspections, and guide inspection-discovery path-forward decisions has been under development for the past several years. The approach is multi-tiered and considers the process conditions that lead to risk of damage, the potential extent of damage, and the effect of applied and residual stresses on the rate of damage growth. The key metric for Fe-C and Fe-C-0.5Mo steels is the damage index, which is calculated from a damage rate equation calibrated to selected laboratory test data and the reported HTHA incidents documented in API 941. The damage rate and the significance of damage are handled by treating damage as both diffuse (continuum assessment) and localized (crack/flaw assessment). This approach was originally intended as a means of prioritizing inspection; however, once margins are established to account for uncertainties in operating conditions and material variation, so-called time-dependent Nelson curves can be generated for use in design. This paper provides an overview of the HTHA damage modeling technique and the results of recent experimental work, including component-level testing used for validation of the model.
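The damage-index bookkeeping can be illustrated as below. The damage-rate function is a generic placeholder (the calibrated rate equation from the paper is not given in the abstract), and the constants and operating history are hypothetical.

```python
import numpy as np

def damage_rate(temperature_K, h2_partial_pressure_MPa):
    """Placeholder damage rate [1/h]. The actual HTHA rate equation is calibrated
    to laboratory data and API 941 incidents; an arbitrary Arrhenius-type form is
    used here purely for illustration."""
    A, Q_over_R, n = 1.0e7, 1.5e4, 2.0   # invented constants
    return A * np.exp(-Q_over_R / temperature_K) * h2_partial_pressure_MPa ** n

def damage_index(history):
    """Integrate the damage rate over a piecewise-constant operating history.
    `history` is a list of (duration_h, temperature_K, pH2_MPa) segments; a damage
    index approaching 1.0 would flag the component for inspection."""
    return sum(dt * damage_rate(T, p) for dt, T, p in history)

# Hypothetical operating history: (hours, temperature K, H2 partial pressure MPa).
history = [(80000, 520.0, 1.2), (20000, 540.0, 1.4)]
print(f"accumulated damage index: {damage_index(history):.3f}")
```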


Energies ◽  
2021 ◽  
Vol 14 (14) ◽  
pp. 4141
Author(s):  
Christine Mounaïm-Rousselle ◽  
Pierre Bréquigny ◽  
Clément Dumand ◽  
Sébastien Houillé

The objective of this paper is to provide new data about the feasibility of using ammonia as a carbon-free fuel in a spark-ignition engine. A current GDI PSA engine (compression ratio 10.5:1) was chosen in order to update the results available in the literature, which were mainly obtained with CFR engines. Particular attention was paid to determining the lowest possible load limit when the engine is fuelled with pure ammonia or with a small amount of H2 added, as a function of engine speed, in order to highlight the limitations under cold-start conditions. It can be concluded that this engine can run stably in most of these operating conditions with less than 10% H2 (of the total fuel volume) added to the NH3. Measurements of exhaust pollutants, in particular NOx, made it possible to evaluate the potential for diluting the intake gases and its limitation during combustion with pure H2 under slightly supercharged conditions. In conclusion, the 10% dilution limit allows a reduction of up to 40% in NOx while guaranteeing stable operation.


Author(s):  
Vladimir S. Derevschikov ◽  
Janna V. Veselovskaya ◽  
Anton S. Shalygin ◽  
Dmitry A.Yatsenko ◽  
Andrey Z. Sheshkovas ◽  
...  

2021 ◽  
Author(s):  
T. Dielenschneider ◽  
J. Ratz ◽  
S. Leichtfuß ◽  
H.-P. Schiffer ◽  
W. Eißler

Abstract The surge limit of compressors is a key parameter in the design process of modern turbocharger compressors for automotive applications. Since the compressor is operated close to the surge limit, its determination is of high importance. Unfortunately, determining the surge limit with high accuracy using any numerical method is still an unsolved challenge. The numerical surge limit is often defined as the operating point with the minimum converged mass flow rate, but, as this investigation clearly shows, this cannot be used as the surge limit of the investigated compressor configuration. This paper shows that a more differentiated approach is required when it comes to operating limits: two different operating limits can be distinguished, and a methodology for determining each of them is presented. One is based on the system approach defined by Greitzer, and the other on the analysis of the low-momentum fluid in the shroud region of the compressor wheel. Finally, experimental data are used as benchmark data for both limits. The experimental surge limit is determined from the analysis of transient experimental pressure signals, namely through a Fourier analysis of the unsteady compressor outlet pressure signal for transient surge runs.
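A minimal sketch of the signal-processing step mentioned last: a short-time Fourier analysis of the unsteady outlet pressure trace to spot the low-frequency surge oscillation during a transient run. The pressure signal, sampling rate, surge frequency band, and detection threshold are all assumed for illustration.

```python
import numpy as np
from scipy import signal

# Synthetic stand-in for a transient compressor-outlet pressure trace [kPa]:
# broadband noise plus a growing ~8 Hz oscillation mimicking surge onset.
fs = 2000.0                                   # sampling rate [Hz], assumed
t = np.arange(0, 10.0, 1.0 / fs)
envelope = np.clip(t - 6.0, 0.0, None)        # oscillation grows after t = 6 s
pressure = (250.0
            + 0.5 * np.random.default_rng(2).standard_normal(t.size)
            + 5.0 * envelope * np.sin(2 * np.pi * 8.0 * t))

# Short-time Fourier analysis of the outlet pressure signal.
f, tt, Sxx = signal.spectrogram(pressure, fs=fs, nperseg=4096, noverlap=2048)

# Energy in an assumed surge band (2-15 Hz) versus time; a sharp rise marks surge onset.
band = (f >= 2.0) & (f <= 15.0)
band_energy = Sxx[band].sum(axis=0)
onset = tt[np.argmax(band_energy > 10.0 * band_energy[:5].mean())]
print(f"estimated surge onset near t ≈ {onset:.1f} s")
```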

