Canyon Building Ventilation System Dynamic Model Optimization Study

Author(s):  
Franklin F. K. Chen ◽  
B. Ronald Moncrief

Abstract A canyon building houses special nuclear material processing facilities in two canyon-like structures, each with approximately a million cubic feet of air space and a hundred thousand hydraulic-equivalent feet of ductwork of various cross sections. The canyon ventilation system is a “once-through” design with separate supply and exhaust fans, utilizes two large sand filters to remove radionuclide particulate matter, and exhausts through a tall stack. The ventilation equipment is similar to most industrial ventilation systems. In a canyon building, however, nuclear contamination prohibits access to a large portion of the system and therefore limits the kind of plant data that can be collected. The facility investigated is 40 years old and is operating with original or replacement equipment of comparable antiquity. These two factors, restricted access and aged equipment, make it challenging to gauge the performance of canyon ventilation, particularly under uncommon operating conditions. The ability to assess canyon ventilation system performance became critical over time, as the system took on additional exhaust loads and aging equipment approached its design maximum. Many “what if?” questions, which must be answered to address modernization and safety issues, are difficult to resolve without a dynamic model. This paper describes the development, validation and utilization of a dynamic model to analyze the capacity of this ventilation system under many unusual but likely conditions. The development of a ventilation model with volume and hydraulics of this scale is unique. The resulting model resolution, better than 0.05″wg under normal plant conditions and approximately 0.2″wg under all plant conditions, achieved on a desktop computer, is a benchmark of the power of microcomputers.
The detailed planning and persistent execution of large-scale plant experiments under very restrictive conditions not only produced data to validate the model but lent credence to subsequent applications of the model to mission-oriented analysis. The modelling methodology adopted a two-parameter-space approach: rational parameters and irrational parameters. Rational parameters, such as fan age factors, idle parameters, infiltration areas and tunnel hydraulic parameters, are deduced from plant data based on certain hydraulic models. Due to limited accessibility, and therefore partial data availability, the identification of irrational model parameters, such as register positions and unidentifiable infiltrations, required unique treatment of the parameter space. These parameters were identified by a numerical search strategy that minimizes a set of performance indices. Given the large number of parameters, this further attests to the value of our strategy of exploiting the computing power of modern microcomputers: nine irrational parameters at five levels, against 12 sets of plant data, amounting to 540 runs, were completely searched over the span of a long weekend. Key results in assessing emergency operation and in evaluating modernization options are presented to illustrate the functions of the dynamic model.
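The search described here (nine parameters, five levels each, twelve data sets, 540 runs) works out to a one-parameter-at-a-time sweep that minimizes a performance index. A minimal sketch of that strategy follows; the quadratic index and synthetic plant data are hypothetical stand-ins for the real hydraulic model.

```python
# Stand-in setup: 9 "irrational" parameters, 5 candidate levels each,
# 12 plant data sets. All values and the index below are hypothetical.
N_PARAMS = 9
LEVELS = [0.0, 0.25, 0.5, 0.75, 1.0]
plant_data = [[(i + j) % 5 * 0.25 for i in range(N_PARAMS)] for j in range(12)]

def performance_index(params, data):
    # Sum of squared deviations from the "measured" values in one data set.
    return sum((p - d) ** 2 for p, d in zip(params, data))

def coordinate_search(data):
    """Vary one parameter at a time over its levels, keeping the best level."""
    params, runs = [LEVELS[0]] * N_PARAMS, 0
    for i in range(N_PARAMS):
        best = min(LEVELS, key=lambda v: performance_index(
            params[:i] + [v] + params[i + 1:], data))
        params[i] = best
        runs += len(LEVELS)
    return params, runs

total_runs = 0
for data in plant_data:
    best_params, runs = coordinate_search(data)
    total_runs += runs

print(total_runs)  # 9 parameters x 5 levels x 12 data sets = 540 runs
```

Because the sweep is one-at-a-time rather than a full factorial (which would need 5^9 evaluations per data set), 540 model runs suffice, which is what makes an exhaustive search feasible on a desktop machine.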

2021 ◽  
Vol 17 (12) ◽  
pp. e1009718
Author(s):  
Zhuo-Cheng Xiao ◽  
Kevin K. Lin ◽  
Lai-Sang Young

Constraining the many biological parameters that govern cortical dynamics is computationally and conceptually difficult because of the curse of dimensionality. This paper addresses these challenges by proposing (1) a novel data-informed mean-field (MF) approach to efficiently map the parameter space of network models; and (2) an organizing principle for studying parameter space that enables the extraction of biologically meaningful relations from this high-dimensional data. We illustrate these ideas using a large-scale network model of the Macaque primary visual cortex. Of the 10-20 model parameters, we identify 7 that are especially poorly constrained, and use the MF algorithm in (1) to discover the firing rate contours in this 7D parameter cube. Defining a “biologically plausible” region to consist of parameters that exhibit spontaneous Excitatory and Inhibitory firing rates compatible with experimental values, we find that this region is a slightly thickened codimension-1 submanifold. An implication of this finding is that while plausible regimes depend sensitively on parameters, they are also robust and flexible provided one compensates appropriately when parameters are varied. Our organizing principle for conceptualizing parameter dependence is to focus on certain 2D parameter planes that govern lateral inhibition: Intersecting these planes with the biologically plausible region leads to very simple geometric structures which, when suitably scaled, have a universal character independent of where the intersections are taken. In addition to elucidating the geometry of the plausible region, this invariance suggests useful approximate scaling relations. Our study offers, for the first time, a complete characterization of the set of all biologically plausible parameters for a detailed cortical model, which has been out of reach due to the high dimensionality of parameter space.
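The contour-mapping idea can be illustrated on a 2D slice of a parameter cube: evaluate a fast surrogate at each grid point and keep the points whose predicted rates fall inside experimentally plausible ranges. The rate formulas below are made-up stand-ins, not the paper's mean-field model; only the workflow (grid scan, plausibility mask, thin-band geometry) reflects the abstract.

```python
import numpy as np

# Hypothetical coupling strengths on a 2D slice of the parameter cube.
s_ei = np.linspace(0.5, 2.0, 60)   # stand-in E<-I coupling strength
s_ie = np.linspace(0.5, 2.0, 60)   # stand-in I<-E coupling strength
EI, IE = np.meshgrid(s_ei, s_ie)

# Made-up mean-field rate estimates (Hz), NOT the paper's equations.
rate_E = 10.0 * IE / EI
rate_I = 4.0 * rate_E

# Keep grid points whose E/I rates fall in illustrative plausible ranges.
plausible = (rate_E > 2) & (rate_E < 8) & (rate_I > 8) & (rate_I < 32)

# The plausible set traces a thin band in this slice, echoing the
# thickened codimension-1 geometry reported in the abstract.
print(float(plausible.mean()))
```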


2001 ◽  
Vol 124 (1) ◽  
pp. 62-66 ◽  
Author(s):  
Pei-Sun Zung ◽  
Ming-Hwei Perng

This paper presents a handy nonlinear dynamic model for the design of a two-stage pilot pressure relief servo-valve. Previous surveys indicate that the performance of existing control valves has been limited by the lack of an accurate dynamic model. However, most existing dynamic models of pressure relief valves are developed for the selection of a suitable valve for a hydraulic system, and assume model parameters that are not directly controllable during the manufacturing process. As a result, such models are less useful for a manufacturer eager to improve the performance of a pressure valve. In contrast, model parameters in the present approach have been limited to dimensions measurable from the blueprints of the valve, so that a specific design can be evaluated by simulation before the valve is actually manufactured. Moreover, the resultant model shows excellent agreement with experiments over a wide range of operating conditions.



Author(s):  
R. WHALLEY ◽  
A. ABDUL-AMEER

In this feasibility study, a large-scale ventilation system comprising spatially dispersed enclosed volumes, fans, ducting and airways is considered. Analytical procedures enabling the construction of simple, compact models including both the relatively pointwise and the significantly distributed system elements are proposed. Modeling accuracy is emphasized, with the incorporation of the entrance and exit impedances and the airway's continuous energy storage and dissipation effects. Output flow maximization under quiescent operating conditions is investigated, and the optimum relationships between the airway characteristic impedance and the entrance and exit resistances are established. The vibration and turbulence arising from the continuous compression/expansion effects caused by the input–output volume airflow difference are minimized, whilst the output volume airflow is simultaneously maximized. Variations in the parameter values are employed to confirm the effectiveness of operating under optimum conditions, for ventilation system airways with various dimensions and characteristics.
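The optimum entrance/exit resistance relationships established here echo the classical matched-impedance result from the electrical transmission-line analogy. A quick numerical illustration of that analogy follows; the values are arbitrary and this is not the study's airway model.

```python
import numpy as np

# Electrical analogy: for a unit voltage source with internal (entrance)
# resistance Rs, the power delivered to a load (exit) resistance R_load is
# P = R_load / (Rs + R_load)^2, which peaks when R_load equals Rs.
Rs = 2.0                               # illustrative source resistance
R_load = np.linspace(0.1, 10.0, 1000)  # swept exit resistance
power = R_load / (Rs + R_load) ** 2

best = R_load[np.argmax(power)]
print(round(float(best), 1))  # maximum transfer occurs near R_load = Rs
```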


Author(s):  
Gabriele Lucherini ◽  
Vittorio Michelassi ◽  
Stefano Minotti

Abstract A gas turbine is usually installed inside a package to reduce acoustic emissions and protect against adverse environmental conditions. An enclosure ventilation system keeps temperatures under acceptable limits and dilutes any potentially explosive accumulation of gas due to unexpected leakages. The functional and structural integrity, as well as the certification needs, of the instrumentation and auxiliary systems in the package require that temperatures do not exceed a given threshold. Moreover, accidental fuel gas leakages inside the package must be studied in detail for safety purposes, as required by ISO 21789. CFD is routinely used in BHGE (Baker Hughes, a GE Company) to assist in the design and verification of the complete enclosure and ventilation system. This may require multiple CFD runs of very complex domains and flow fields in several operating conditions, with a large computational effort. Modeling assumptions and simulation set-up, in terms of turbulence and thermal models and the steady or unsteady nature of the simulations, must be carefully assessed. In order to find a good compromise between accuracy and computational effort, the present work focuses on the analysis of three different approaches: RANS, URANS and Hybrid-LES. The different computational approaches are first applied to an isothermal scaled-down model for validation purposes, where it was possible to determine the impact of the large-scale flow unsteadiness and compare with measurements. Then, the analysis proceeds to a full-scale real aero-derivative gas turbine package, in which the aero and thermal fields were investigated by a set of URANS and Hybrid-LES simulations that include the heat released by the engine. The different approaches are compared by analyzing flow and temperature fields. Finally, an accidental gas leak and the subsequent gas diffusion and/or accumulation inside the package are studied and compared.
The outcome of this work highlights how the most suitable approach for industrial purposes depends on the goal of the CFD study and on the specific scenario, such as an NPI Program or an RQS Project.


2019 ◽  
Vol 12 (9) ◽  
pp. 5161-5181 ◽  
Author(s):  
Tongshu Zheng ◽  
Michael H. Bergin ◽  
Ronak Sutaria ◽  
Sachchida N. Tripathi ◽  
Robert Caldow ◽  
...  

Abstract. Wireless low-cost particulate matter sensor networks (WLPMSNs) are transforming air quality monitoring by providing particulate matter (PM) information at finer spatial and temporal resolutions. However, large-scale WLPMSN calibration and maintenance remain a challenge. The manual labor involved in initial calibration by collocation and routine recalibration is intensive. The transferability of the calibration models determined from initial collocation to new deployment sites is questionable, as calibration factors typically vary with the urban heterogeneity of operating conditions and aerosol optical properties. Furthermore, low-cost sensors can drift or degrade over time. This study presents a simultaneous Gaussian process regression (GPR) and simple linear regression pipeline to calibrate and monitor dense WLPMSNs on the fly by leveraging all available reference monitors across an area without resorting to pre-deployment collocation calibration. We evaluated our method for Delhi, where the PM2.5 measurements of all 22 regulatory reference and 10 low-cost nodes were available for 59 d from 1 January to 31 March 2018 (PM2.5 averaged 138±31 µg m−3 among 22 reference stations), using a leave-one-out cross-validation (CV) over the 22 reference nodes. We showed that our approach can achieve an overall 30 % prediction error (RMSE: 33 µg m−3) at a 24 h scale, and it is robust, as underscored by the small variability in the GPR model parameters and in the model-produced calibration factors for the low-cost nodes among the 22-fold CV. Of the 22 reference stations, high-quality predictions were observed for those stations whose PM2.5 means were close to the Delhi-wide mean (i.e., 138±31 µg m−3), and relatively poor predictions were observed for those nodes whose means differed substantially from the Delhi-wide mean (particularly on the lower end).
We also observed washed-out local variability in PM2.5 across the 10 low-cost sites after calibration using our approach, which stands in marked contrast to the true wide variability across the reference sites. These observations revealed that our proposed technique (and more generally the geostatistical technique) requires high spatial homogeneity in the pollutant concentrations to be fully effective. We further demonstrated that our algorithm performance is insensitive to training window size as the mean prediction error rate and the standard error of the mean (SEM) for the 22 reference stations remained consistent at ∼30 % and ∼3 %–4 %, respectively, when an increment of 2 d of data was included in the model training. The markedly low requirement of our algorithm for training data enables the models to always be nearly the most updated in the field, thus realizing the algorithm's full potential for dynamically surveilling large-scale WLPMSNs by detecting malfunctioning low-cost nodes and tracking the drift with little latency. Our algorithm presented similarly stable 26 %–34 % mean prediction errors and ∼3 %–7 % SEMs over the sampling period when pre-trained on the current week's data and predicting 1 week ahead, and therefore it is suitable for online calibration. Simulations conducted using our algorithm suggest that in addition to dynamic calibration, the algorithm can also be adapted for automated monitoring of large-scale WLPMSNs. In these simulations, the algorithm was able to differentiate malfunctioning low-cost nodes (due to either hardware failure or under the heavy influence of local sources) within a network by identifying aberrant model-generated calibration factors (i.e., slopes close to zero and intercepts close to the Delhi-wide mean of true PM2.5). The algorithm was also able to track the drift of low-cost nodes accurately within 4 % error for all the simulation scenarios. 
The simulation results showed that ∼20 reference stations are optimum for our solution in Delhi and confirmed that low-cost nodes can extend the spatial precision of a network by decreasing the extent of pure interpolation among only reference stations. Our solution has substantial implications in reducing the amount of manual labor for the calibration and surveillance of extensive WLPMSNs, improving the spatial comprehensiveness of PM evaluation, and enhancing the accuracy of WLPMSNs.
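The core of such a pipeline, GPR over station locations validated by leaving out one reference monitor at a time, can be sketched in a few lines. The snippet below uses synthetic station data and a plain NumPy implementation of GPR with an RBF kernel plus a noise term; it is a toy stand-in, not the authors' code.

```python
import numpy as np

def rbf(a, b, ell=2.0):
    """Squared-exponential kernel between two sets of 2D coordinates."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

# Synthetic stand-in for 22 reference stations in a 10 km x 10 km area.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(22, 2))
pm25 = 138 + 31 * np.sin(coords[:, 0]) + rng.normal(0, 5, size=22)

errors = []
for i in range(len(coords)):                  # leave-one-out CV
    train = np.delete(np.arange(len(coords)), i)
    X, y = coords[train], pm25[train]
    K = rbf(X, X) + 25.0 * np.eye(len(X))     # assumed noise variance 25
    alpha = np.linalg.solve(K, y - y.mean())
    pred = rbf(coords[i:i + 1], X) @ alpha + y.mean()
    errors.append(abs(pred[0] - pm25[i]))

rmse = float(np.sqrt(np.mean(np.square(errors))))
print(round(rmse, 1))  # field-level error over the held-out stations
```

The real pipeline additionally regresses each low-cost node against the GPR field to obtain per-node calibration slopes and intercepts; the leave-one-out loop above only illustrates the reference-network part.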


Author(s):  
William A. Lane ◽  
Curtis Storlie ◽  
Christopher Montgomery ◽  
Emily M. Ryan

As the effects of climate change continue to rise with increasing carbon dioxide emission rates, it is imperative that we develop efficient methods for carbon capture. This paper outlines the framework used to break down a large, complex carbon capture system into smaller unit problems for model validation and uncertainty quantification. We use this framework to investigate the uncertainty and sensitivity of the hydrodynamics of a bubbling fluidized bed. Using the open-source computational fluid dynamics code MFIX, we simulate a bubbling fluidized bed with an immersed horizontal tube bank. Mesh resolution and statistical steady-state studies are conducted to identify the optimal operating conditions. The preliminary results show good agreement with experimental data from the literature. Employing statistical sampling and analysis techniques, we designed a set of simulations to quantify the sensitivity of the model to parameters that are difficult to measure, including coefficients of restitution, friction angles, packed-bed void fraction, and drag models. Initial sensitivity analysis results indicate that no parameters may be omitted. Further uncertainty quantification analysis is underway to investigate and quantify the effects of model parameters on the simulation results.
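The statistical sampling step, covering the ranges of hard-to-measure parameters with a space-filling design, is commonly done with Latin hypercube sampling. A minimal sketch follows; the parameter ranges are illustrative, not MFIX defaults.

```python
import random

# Illustrative (NOT MFIX-default) ranges for hard-to-measure parameters.
PARAM_RANGES = {
    "restitution_coeff": (0.8, 1.0),
    "friction_angle_deg": (25.0, 45.0),
    "packed_void_fraction": (0.35, 0.45),
}

def latin_hypercube(ranges, n, seed=0):
    """Draw n samples: one stratified value per 1/n slice of each range."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in ranges.values():
        col = [lo + (hi - lo) * (k + rng.random()) / n for k in range(n)]
        rng.shuffle(col)  # decorrelate strata across parameters
        columns.append(col)
    return [dict(zip(ranges, row)) for row in zip(*columns)]

design = latin_hypercube(PARAM_RANGES, n=10)
print(len(design))  # 10 simulation cases, each range covered uniformly
```

Each parameter's range is split into n equal slices with exactly one sample per slice, so even a small design spans every range; each dict in `design` would parameterize one CFD run.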


Author(s):  
Aravind Seshadri ◽  
Prabhakar R. Pagilla

This paper presents an optimal web guiding strategy based on the dynamic analysis of the lateral web behavior and a new fiber optic lateral web position measurement sensor. First, a lateral dynamic model of a moving web is revisited with an emphasis on correct application of appropriate boundary conditions. Then the dynamic models of two common intermediate guides (remotely pivoted guide and offset-pivot guide) are investigated. The effect of various model parameters on lateral web behavior is analyzed and discussions on proper selection of the parameters are given. Based on the model analysis, we discuss the design of a linear quadratic optimal controller that is capable of accommodating structured parametric uncertainties in the lateral dynamic model. The optimal guide control system is evaluated by a series of experiments on a web platform with different web materials under various operating conditions. Implementation of the controller with a new fiber optic lateral sensor for different scenarios is discussed. Results show good guiding performance in the presence of disturbances and with uncertainties in the model parameters.
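A linear quadratic optimal controller of the kind described is computed by solving the continuous-time algebraic Riccati equation. The sketch below uses a double-integrator stand-in for the lateral web dynamics (state: lateral error and its rate); it is not the paper's web model, and the weights are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy double-integrator stand-in for lateral web position dynamics.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # penalize lateral position error most
R = np.array([[0.1]])      # penalty on guide actuation effort

# Solve the Riccati equation and form the optimal gain u = -K x.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

eigs = np.linalg.eigvals(A - B @ K)
print(eigs.real)  # closed-loop poles lie in the left half-plane
```

Robustness to the structured parametric uncertainties mentioned in the abstract would be assessed on top of this baseline design, e.g. by checking closed-loop poles over the uncertain parameter ranges.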


2019 ◽  
Author(s):  
Tongshu Zheng ◽  
Michael H. Bergin ◽  
Ronak Sutaria ◽  
Sachchida N. Tripathi ◽  
Robert Caldow ◽  
...  

Abstract. Wireless low-cost particulate matter sensor networks (WLPMSNs) are transforming air quality monitoring by providing PM information at finer spatial and temporal resolutions; however, large-scale WLPMSN calibration and maintenance remain a challenge for three reasons: the manual labor involved in initial calibration by collocation and routine recalibration is intensive; the transferability of the calibration models determined from initial collocation to new deployment sites is questionable, as calibration factors typically vary with the urban heterogeneity of operating conditions and aerosol optical properties; and low-cost sensors can drift or degrade over time. This study presents a simultaneous Gaussian process regression (GPR) and simple linear regression pipeline to calibrate and monitor dense WLPMSNs on the fly by leveraging all available reference monitors across an area without resorting to pre-deployment collocation calibration. We evaluated our method for Delhi, where the PM2.5 measurements of all 22 regulatory reference and 10 low-cost nodes were available over 59 valid days from 1 January 2018 to 31 March 2018 (PM2.5 averaged 138 ± 31 μg m−3 among 22 reference stations), using a leave-one-out cross-validation (CV) over the 22 reference nodes. We showed that our approach can achieve an overall 30 % prediction error (RMSE: 33 μg m−3) at a 24 h scale and is robust, as underscored by the small variability in the GPR model parameters and in the model-produced calibration factors for the low-cost nodes among the 22-fold CV. We revealed that the accuracy of our calibrations depends on the degree of homogeneity of PM concentrations, and decreases with increasing local source contributions.
As a by-product of dynamic calibration, our algorithm can be adapted for automated large-scale WLPMSN monitoring: simulations demonstrated its capability to differentiate malfunctioning or singular low-cost nodes within a network via model-generated calibration factors (aberrant nodes have slopes close to 0 and intercepts close to the global mean of true PM2.5) and to track the drift of low-cost nodes accurately, within 4 % error, for all the simulation scenarios. The simulation results showed that ~20 reference stations are optimum for our solution in Delhi and confirmed that low-cost nodes can extend the spatial precision of a network by decreasing the extent of pure interpolation among only reference stations. Our solution has substantial implications in reducing the amount of manual labor for the calibration and surveillance of extensive WLPMSNs, improving the spatial comprehensiveness of PM evaluation, and enhancing the accuracy of WLPMSNs.


2019 ◽  
Author(s):  
Eva-Maria Kapfer ◽  
Paul Stapor ◽  
Jan Hasenauer

Abstract Mathematical models based on ordinary differential equations have been employed with great success to study complex biological systems. With soaring data availability, more and more models of increasing size are being developed. When working with these large-scale models, several challenges arise, such as high computation times or poor identifiability of model parameters. In this work, we review and illustrate the most common challenges using a published model of cellular metabolism. We summarize currently available methods to deal with some of these challenges, focusing on the reproducibility and reusability of models, efficient and robust model simulation, and parameter estimation.
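The parameter-estimation workflow this review covers, simulating an ODE model and fitting its parameters to data, can be sketched with a one-state toy model; the decay model below is a stand-in for the cellular-metabolism model and is not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Synthetic "data" from a one-state decay model dy/dt = -k*y, y(0) = 1.
t_obs = np.linspace(0, 5, 20)
true_k = 0.7
y_obs = np.exp(-true_k * t_obs)

def residuals(theta):
    """Simulate the ODE for candidate k and return data misfit."""
    sol = solve_ivp(lambda t, y: -theta[0] * y, (0, 5), [1.0],
                    t_eval=t_obs, rtol=1e-8, atol=1e-10)
    return sol.y[0] - y_obs

# Least-squares parameter estimation from a deliberately poor start.
fit = least_squares(residuals, x0=[0.2])
print(round(float(fit.x[0]), 3))  # recovers k close to 0.7
```

Real metabolism models face the challenges the review lists, many states and parameters, stiff dynamics, and non-identifiability, but the simulate-then-minimize loop is structurally the same.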

