Analysis for a Generalized CHF Correlation in Uniformly Heated Vertical Round Tubes

2003 ◽  
Author(s):  
W. Jaewoo Shim ◽  
Joo-Yong Park ◽  
Ohyoung Kim

For empirical models based on the local condition hypothesis, a few key parameters yield significant correlations for the prediction of CHF (Critical Heat Flux). This work is a preliminary study toward a generalized CHF correlation for water in uniformly heated vertical round tubes. For this analysis, a total of 8,912 CHF data points from 12 different published sources were used. The database covered the following parameter ranges: 0.101 ≤ P (pressure) ≤ 20.679 MPa, 9.92 ≤ G (mass flux) ≤ 18,619.39 kg/m2s, 0.00102 ≤ D (diameter) ≤ 0.04468 m, 0.03 ≤ L (length) ≤ 4.97 m, 8.5 ≤ L/D ≤ 792.26, −609.33 ≤ inlet subcooling ≤ 1,655.34 kJ/kg, 0.11 ≤ qc (CHF) ≤ 21.41 MW/m2, and −0.85 ≤ Xe (exit quality) ≤ 1.58. Five representative CHF data sets at pressures of 0.101, 5.001, 10, 16, and 20 MPa were selected, analyzed, and compared to evaluate the effects of the parameters on CHF. The analysis revealed that the major variables influencing CHF, other than the system pressure (P), were the tube diameter (D), the mass flux of water (G), and the local true mass fraction of vapor (Xt). The square roots of GXt and of D were the significant parameters that showed strong parametric trends across the data sets. The results of this study reaffirm the feasibility of an advanced generalized CHF correlation for uniformly heated vertical round tubes.
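The parametric trends described above lend themselves to a simple regression check. The sketch below is illustrative only (the paper's actual correlation is not given in the abstract): it builds the regressor √(G·Xt) and fits a one-variable least-squares line, e.g. qc against √(G·Xt) within a fixed-pressure data set.

```python
import math

def sqrt_gxt(G, Xt):
    # sqrt of the true vapor mass flux G*Xt (kg/m^2 s); clamped at zero
    # for subcooled conditions where Xt would be negative
    return math.sqrt(max(G * Xt, 0.0))

def fit_linear(xs, ys):
    # ordinary least squares for y = a + b*x, one regressor at a time,
    # e.g. qc against sqrt(G*Xt) at a fixed system pressure
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b
```

A strong parametric trend of the kind reported would show up as a near-linear band when qc is plotted against √(G·Xt) for each pressure set.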

Volume 3 ◽  
2004 ◽  
Author(s):  
W. Jaewoo Shim ◽  
Ji-Su Lee

It is well known that models based on the local condition hypothesis give significant correlations for the prediction of CHF (Critical Heat Flux) using only a few local variables. In this work, a study was carried out to develop a generalized CHF correlation for vertical round tubes with uniform heat flux. The analysis drew on a CHF database of over 10,000 data points collected from 12 different sources. After the elimination of some questionable data, the 8,951 data points actually used in the development of the correlation covered the following parameter ranges: 0.101 ≤ P (pressure) ≤ 20.679 MPa, 9.92 ≤ G (mass flux) ≤ 18,619.39 kg/m2s, 0.00102 ≤ D (diameter) ≤ 0.04468 m, 0.03 ≤ L (length) ≤ 4.97 m, 0.11 ≤ qc (CHF) ≤ 21.42 MW/m2, and −0.87 ≤ Xe (exit quality) ≤ 1.58. The results showed that, regardless of the various flow patterns and regimes that exist over these wide flow conditions, CHF can be predicted accurately with a few major local variables: the system pressure (P), tube diameter (D), mass flux of water (G), and true mass flux of vapor (GXt). The new correlation was compared with five well-known CHF correlations from the literature. It predicts CHF with a root mean square error of 13.44% and an average error of −1.34% using the heat balance method.
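The quoted error figures follow the usual definitions of average and root-mean-square relative error over the data set; a minimal sketch (the heat balance evaluation of the measured CHF itself is not shown):

```python
import math

def error_stats(predicted, measured):
    # relative error e_i = (q_pred - q_meas) / q_meas for each data point;
    # returns (average error %, root mean square error %)
    errs = [(p - m) / m for p, m in zip(predicted, measured)]
    avg = sum(errs) / len(errs)
    rms = math.sqrt(sum(e * e for e in errs) / len(errs))
    return 100.0 * avg, 100.0 * rms
```

A correlation can have a near-zero average error (over- and under-predictions cancel) while still carrying a sizeable RMS error, which is why both figures are quoted.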


Author(s):  
W. Jaewoo Shim ◽  
Joo-Yong Park

In this study, a total of 2,870 high-pressure (70 bar ≤ P ≤ 206 bar) critical heat flux (CHF) data points for water in uniformly heated vertical round tubes were collected from 5 different published sources. The data covered the following parameter ranges: 28.07 ≤ G (mass flux) ≤ 10,565.03 kg/m2s, 1.91 ≤ D (diameter) ≤ 44.68 mm, 40 ≤ L (length) ≤ 4,966 mm, 0.14 ≤ qc (CHF) ≤ 9.94 MW/m2, and −0.85 ≤ X (exit quality) ≤ 1.22. With these data, a comparative analysis of the available correlations is made and a new correlation is presented. The new high-pressure CHF correlation, as in the low- and medium-pressure cases of earlier studies, comprises local variables, namely, the “true” mass quality, mass flux, and tube diameter, together with two parameters that are functions of pressure only. This study reaffirms our earlier finding that by incorporating the “true” mass quality into the local condition hypothesis, CHF under these conditions can be predicted quite accurately, overcoming the difficulties of flow instability and buoyancy effects that are inherent in the phenomena. The new correlation predicts the CHF data significantly better than the currently available correlations, with an average error of 0.12% and an RMS error of 13.52% by the heat balance method.


2005 ◽  
Author(s):  
W. Jaewoo Shim ◽  
Joo-Yong Park ◽  
Ji-Su Lee ◽  
Dong Kook Kim

In this study, a method to predict CHF (Critical Heat Flux) in vertical round tubes with a cosine heat flux distribution was examined. For this purpose, a uniform correlation based on the local condition hypothesis was developed from 9,366 CHF data points for uniform heat flux heaters. The data, collected from 13 different sources, covered the following parameter ranges: 1.01 ≤ P (pressure) ≤ 206.79 bar, 9.92 ≤ G (mass flux) ≤ 18,619.39 kg/m2s, 0.00102 ≤ D (diameter) ≤ 0.04468 m, 0.0254 ≤ L (length) ≤ 4.966 m, 0.11 ≤ qc (CHF) ≤ 21.42 MW/m2, and −0.87 ≤ X (exit quality) ≤ 1.58. The results showed that the uniform CHF correlation can predict CHF accurately for a non-uniform heat flux heater over wide flow conditions. Furthermore, the location where CHF occurs under a non-uniform heat flux distribution can also be determined accurately from the local variables: the system pressure (P), tube diameter (D), mass flux of water (G), and true mass flux of vapor (GXt). The new correlation predicted CHF for cosine heat flux (297 data points from 5 different published sources) with a root mean square error of 12.42% and an average error of 1.06% using the heat balance method.
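Locating the CHF point under a non-uniform axial profile amounts to marching along the tube, tracking the local enthalpy by heat balance, and flagging the first position where the applied flux reaches the locally predicted CHF. The sketch below is a generic illustration of that procedure, not the paper's method: the correlation enters as a stand-in function `qc_model`, and plain equilibrium quality is used where the paper uses the true quality Xt.

```python
import math

def find_chf_location(q_of_z, qc_model, L, G, D, h_in, h_f, h_fg, n=1000):
    # March along a tube of length L (m), track the bulk enthalpy by a
    # heat balance, and return the first axial position where the applied
    # flux reaches the locally predicted CHF, or None if it never does.
    # q_of_z: axial heat flux profile (W/m^2); qc_model: CHF vs local quality.
    area = math.pi * D * D / 4.0       # flow area (m^2)
    perim = math.pi * D                # heated perimeter (m)
    dz = L / n
    h = h_in
    for i in range(1, n + 1):
        z = i * dz
        q = q_of_z(z)
        h += q * perim * dz / (G * area)   # energy balance over the step
        X = (h - h_f) / h_fg               # local equilibrium quality
        if q >= qc_model(X):
            return z
    return None
```

For a symmetric cosine profile one would pass, e.g., `q_of_z = lambda z: q0 * math.cos(math.pi * (z / L - 0.5))`.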


2012 ◽  
Vol 38 (2) ◽  
pp. 57-69 ◽  
Author(s):  
Abdulghani Hasan ◽  
Petter Pilesjö ◽  
Andreas Persson

Global change and GHG emission modelling depend on accurate wetness estimations for predictions of, e.g., methane emissions. This study aims to quantify how the slope, drainage area and topographic wetness index (TWI) vary with the resolution of DEMs for a flat peatland area. Six DEMs with spatial resolutions from 0.5 to 90 m were interpolated with four different search radii. The relationship between DEM accuracy and slope was tested. The LiDAR elevation data were divided into two data sets; the number of data points made it possible to build an evaluation dataset whose points lay no more than 10 mm from the cell centre points of the interpolation dataset. The DEM was evaluated using a quantile-quantile test and the normalized median absolute deviation, which showed that accuracy was independent of resolution when the same search radius was used. The accuracy of the estimated elevation for different slopes was tested using the 0.5 m DEM and showed a higher deviation from the evaluation data in steep areas. Slope estimates differed between resolutions by values exceeding 50%. Drainage areas were tested for three resolutions, with coinciding evaluation points. The model's ability to generate drainage area at each resolution was tested by pairwise comparison of three data subsets and showed differences of more than 50% in 25% of the evaluated points. The results show that considering DEM resolution is a necessity when using slope, drainage area and TWI data in large-scale modelling.
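The TWI referred to above is the standard index ln(a / tan β), where a is the specific catchment area and β the local slope; a minimal per-cell sketch (the catchment-area and slope grids themselves come from the DEM processing):

```python
import math

def twi(specific_catchment_area, slope_rad):
    # Topographic Wetness Index ln(a / tan(beta)), with a small floor on
    # tan(beta) so that flat cells (slope -> 0) do not blow up
    tan_b = max(math.tan(slope_rad), 1e-6)
    return math.log(specific_catchment_area / tan_b)
```

Because both a and tan β depend on cell size, the resolution sensitivity reported above propagates directly into the TWI.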


2014 ◽  
Vol 21 (11) ◽  
pp. 1581-1588 ◽  
Author(s):  
Piotr Kardas ◽  
Mohammadreza Sadeghi ◽  
Fabian H. Weissbach ◽  
Tingting Chen ◽  
Lea Hedman ◽  
...  

Abstract JC polyomavirus (JCPyV) can cause progressive multifocal leukoencephalopathy (PML), a debilitating, often fatal brain disease in immunocompromised patients. JCPyV-seropositive multiple sclerosis (MS) patients treated with natalizumab have a 2- to 10-fold increased risk of developing PML. Therefore, JCPyV serology has been recommended for PML risk stratification. However, different antibody tests may not be equivalent. To study intra- and interlaboratory variability, sera from 398 healthy blood donors were compared in 4 independent enzyme-linked immunoassay (ELISA) measurements generating >1,592 data points. Three data sets (Basel1, Basel2, and Basel3) used the same basic protocol but different JCPyV virus-like particle (VLP) preparations and introduced normalization to a reference serum. The data sets were also compared with an independent method using biotinylated VLPs (Helsinki1). VLP preadsorption reducing ≥35% activity was used to identify seropositive sera. The results indicated that Basel1, Basel2, Basel3, and Helsinki1 were similar regarding overall data distribution (P = 0.79) and seroprevalence (58.0, 54.5, 54.8, and 53.5%, respectively; P = 0.95). However, intra-assay intralaboratory comparison yielded 3.7% to 12% discordant results, most of which were close to the cutoff (0.080 < optical density [OD] < 0.250) according to Bland-Altman analysis. Introduction of normalization improved overall performance and reduced discordance. The interlaboratory interassay comparison between Basel3 and Helsinki1 revealed only 15 discordant results, 14 (93%) of which were close to the cutoff. Preadsorption identified specificities of 99.44% and 97.78% and sensitivities of 99.54% and 95.87% for Basel3 and Helsinki1, respectively. Thus, normalization to a preferably WHO-approved reference serum, duplicate testing, and preadsorption for samples around the cutoff may be necessary for reliable JCPyV serology and PML risk stratification.
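The decision logic implied above, confirming borderline sera by preadsorption, can be sketched as follows. The ≥35% reduction criterion and the borderline window 0.080 < OD < 0.250 are from the text; the base cutoff value of 0.100 is a hypothetical placeholder, not the assays' actual threshold.

```python
def classify_serum(od, od_preadsorbed, cutoff=0.100, borderline=(0.080, 0.250)):
    # Sera whose OD falls in the borderline window are confirmed by VLP
    # preadsorption: seropositive only if preadsorption removes >= 35%
    # of the activity. Outside the window, the plain cutoff decides.
    lo, hi = borderline
    if lo < od < hi:
        reduction = (od - od_preadsorbed) / od
        return reduction >= 0.35
    return od >= cutoff
```

Duplicate testing of samples near the window, as recommended above, would wrap this per-measurement decision.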


Author(s):  
Weilun Zhou ◽  
Qinghua Deng ◽  
Wei He ◽  
Zhenping Feng

Laminated cooling, also known as impingement-effusion cooling, is believed to be a promising gas turbine blade cooling technique. In this paper, conjugate heat transfer analysis was employed to investigate the overall cooling effectiveness and total pressure loss of the laminated cooling configuration. Pitch to film hole diameter ratios P/Df of 3, 4, 5, and 6, combined with pitch to impingement hole diameter ratios P/Di of 4, 6, 8, and 10, were studied at coolant mass fluxes G of 0.5, 1.0, 1.5, and 2.0 kg/(s·m2·bar), respectively. The results show that the overall cooling effectiveness of the laminated cooling configuration generally increases as P/Df decreases and the coolant mass flux increases. However, P/Df smaller than 3 may lead to serious blockage in the first few film holes at low coolant mass flux, and a large P/Di that raises the impingement-flow Mach number above 0.16 may cause unacceptable pressure loss. The increment of overall cooling effectiveness depends on the difference between the deterioration of external cooling and the enhancement of internal cooling. Pressure loss increases exponentially with P/Di and G, and more slowly with P/Df than with P/Di and G. The mixing loss accounts for most of the pressure loss at low coolant mass flux. As the total pressure loss increases, the proportion of throttling loss and laminated loss grows and eventually dominates. When the sum of throttling and laminated losses exceeds the mixing loss, the increase in system pressure ratio is no longer justified by the gain in overall cooling effectiveness.
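The overall cooling effectiveness compared above is conventionally defined from the hot gas, wall, and coolant temperatures as φ = (Tg − Tw)/(Tg − Tc); a one-line sketch of that standard definition:

```python
def overall_cooling_effectiveness(t_gas, t_wall, t_coolant):
    # phi = (Tg - Tw) / (Tg - Tc): phi = 1 means the wall sits at the
    # coolant temperature, phi = 0 means it sits at the hot gas temperature
    return (t_gas - t_wall) / (t_gas - t_coolant)
```

Conjugate heat transfer analysis supplies the wall temperature field Tw from which this ratio is evaluated.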


2018 ◽  
Vol 11 (2) ◽  
pp. 53-67
Author(s):  
Ajay Kumar ◽  
Shishir Kumar

Several initial center selection algorithms have been proposed in the literature for numerical data, but the values of categorical data are unordered, so these methods are not applicable to a categorical data set. This article investigates the initial center selection process for categorical data and then presents a new support-based initial center selection algorithm. The proposed algorithm measures the weight of the unique data points of an attribute with the help of support and then integrates these weights along the rows to get the support of every row. The data object having the largest support is chosen as the initial center, followed by finding the other centers at the greatest distance from the initially selected center. The quality of the proposed algorithm is compared with the random initial center selection method, Cao's method, Wu's method, and the method introduced by Khan and Ahmad. Experimental analysis on real data sets shows the effectiveness of the proposed algorithm.
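The described seeding procedure can be sketched roughly as follows. This is an illustration of the idea only; the choice of Hamming distance and the max-min rule for subsequent centers are assumptions, and the paper's algorithm may differ in detail.

```python
from collections import Counter

def support_based_centers(data, k):
    # data: list of equal-length tuples of categorical values
    n = len(data)
    # support (relative frequency) of each unique value within its attribute
    counts = [Counter(col) for col in zip(*data)]

    def row_support(row):
        # integrate the per-attribute supports along the row
        return sum(counts[j][v] / n for j, v in enumerate(row))

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    # first center: the object with the largest support; remaining centers:
    # objects farthest (max of min distance) from the centers chosen so far
    centers = [max(data, key=row_support)]
    while len(centers) < k:
        centers.append(max(data, key=lambda r: min(hamming(r, c) for c in centers)))
    return centers
```

The farthest-point step spreads the seeds apart, the usual motivation for distance-based refinement after the first, support-maximal pick.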


2018 ◽  
Vol 8 (2) ◽  
pp. 377-406
Author(s):  
Almog Lahav ◽  
Ronen Talmon ◽  
Yuval Kluger

Abstract A fundamental question in data analysis, machine learning and signal processing is how to compare data points. The choice of distance metric is particularly challenging for high-dimensional data sets, where the problem of meaningfulness is more prominent (e.g. the Euclidean distance between images). In this paper, we propose to exploit a property of high-dimensional data that is usually ignored: the structure stemming from the relationships between the coordinates. Specifically, we show that organizing similar coordinates in clusters can be exploited for the construction of the Mahalanobis distance between samples. When the observable samples are generated by a nonlinear transformation of hidden variables, the Mahalanobis distance allows the recovery of the Euclidean distances in the hidden space. We illustrate the advantage of our approach on a synthetic example where the discovery of clusters of correlated coordinates improves the estimation of the principal directions of the samples. Our method was applied to real gene expression data for lung adenocarcinomas (lung cancer). Using the proposed metric, we found a partition of subjects into risk groups with good separation between their Kaplan–Meier survival plots.
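The paper's contribution lies in how the covariance is estimated from clusters of correlated coordinates; the generic Mahalanobis computation that such an estimate plugs into is simply:

```python
import numpy as np

def mahalanobis(x, y, cov):
    # d(x, y) = sqrt((x - y)^T C^{-1} (x - y)); with C estimated from
    # clusters of correlated coordinates, this is the distance the paper
    # uses to recover hidden-space Euclidean distances
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))
```

With C equal to the identity this reduces to the ordinary Euclidean distance; a richer C rescales and decorrelates the coordinates before comparing samples.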


2019 ◽  
Author(s):  
Benedikt Ley ◽  
Komal Raj Rijal ◽  
Jutta Marfurt ◽  
Nabaraj Adhikari ◽  
Megha Banjara ◽  
...  

Abstract Objective: Electronic data collection (EDC) has become a suitable alternative to paper-based data collection (PBDC) in biomedical research, even in resource-poor settings. During a survey in Nepal, data were collected using both systems and data entry errors were compared between the two methods. Collected data were checked for completeness, values outside realistic ranges, internal logic, and reasonable time frames for date variables. Variables were grouped into 5 categories and the number of discordant entries was compared between the two systems, overall and per variable category. Results: Data from 52 variables collected from 358 participants were available. Discrepancies between the two data sets were found in 12.6% of all entries (2,352/18,616). Differences between data points were identified in 18.0% (643/3,580) of continuous variables, 15.8% (113/716) of time variables, 13.0% (140/1,074) of date variables, 12.0% (86/716) of text variables, and 10.9% (1,370/12,530) of categorical variables. Overall, 64% (1,499/2,352) of all discrepancies were due to data omissions, and 76.6% (1,148/1,499) of the missing entries were among categorical data. Omissions in PBDC (n = 1,002) were twice as frequent as in EDC (n = 497, p < 0.001). Data omissions, specifically among categorical variables, were identified as the greatest source of error. If designed accordingly, EDC can address this shortfall effectively.
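The per-record consistency check behind these counts can be sketched as a comparison of parallel entries, with omissions tallied separately (the variable names below are illustrative, not from the survey):

```python
def compare_entries(paper, electronic):
    # paper, electronic: dicts mapping variable name -> recorded value,
    # with None or a missing key meaning the entry was omitted
    discordant, omissions = 0, 0
    for var in set(paper) | set(electronic):
        p, e = paper.get(var), electronic.get(var)
        if p != e:
            discordant += 1
            if p is None or e is None:
                omissions += 1
    return discordant, omissions
```

Summing these counts per variable category over all participants yields discordance rates like those reported above.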


Author(s):  
B. Piltz ◽  
S. Bayer ◽  
A. M. Poznanska

In this paper we propose a new algorithm for digital terrain model (DTM) reconstruction from very high spatial resolution digital surface models (DSMs). It combines multi-directional filtering with a new metric, which we call normalized volume above ground, to create an above-ground mask containing buildings and elevated vegetation. This mask can be used to interpolate a ground-only DTM. The presented algorithm works fully automatically, requiring only the processing parameters minimum height and maximum width in metric units. Since slope and breaklines are not decisive criteria, low, smooth, and even very extensive flat objects are recognized and masked. The algorithm was developed with the goal of generating the normalized DSM for automatic 3D building reconstruction and works reliably even in environments with distinct hillsides or terrace-shaped terrain where conventional methods would fail. A quantitative comparison with the ISPRS data sets Potsdam and Vaihingen shows that 98-99% of all building data points are identified and can be removed, while enough ground data points (~66%) are kept to reconstruct the ground surface. Additionally, we discuss the concept of size-dependent height thresholds and present an efficient scheme for pyramidal processing of data sets, reducing time complexity to linear in the number of pixels, O(WH).
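The role of the two processing parameters can be illustrated with a toy 1-D analogue of an above-ground mask. This is not the paper's method (which uses multi-directional filtering and the normalized-volume-above-ground metric on 2-D DSMs); it only shows how minimum height and maximum width bound what gets masked.

```python
def above_ground_mask(dsm, min_height, max_width):
    # Toy 1-D analogue: a cell is flagged as above ground when it stands
    # more than min_height above the lowest surface point within max_width
    # cells. Objects wider than ~2*max_width cannot be fully flagged, which
    # is the intuition behind the 'maximum width' parameter.
    n = len(dsm)
    mask = []
    for i in range(n):
        ground = min(dsm[max(0, i - max_width): i + max_width + 1])
        mask.append(dsm[i] - ground > min_height)
    return mask
```

Cells left unmasked serve as ground samples from which the DTM is interpolated.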

