Conservative Interpretation of Nonconservative Discrete Ordinates Radiative Intensity Distribution

Author(s):  
Kihwan Kim ◽  
Seok Hun Kang ◽  
Tae-Ho Song

The discrete ordinates interpolation method (DOIM) and the finite element discrete ordinates method (FEDOM) show good accuracy and versatility for calculating radiative intensity. However, these methods are nonconservative, since the intensity is computed only at grid points without considering control volumes. When they are to be used together with a finite volume-based code for fluid flow and transport analysis, the intensity at the control volume center or surface, whichever is missing, needs to be calculated, and the control volume photon balance should be evaluated. For this reason, the question of how to satisfy the control volume photon balance without sacrificing accuracy is critically discussed first. Based on this rationale, the supplementary DOIM (SDOIM) is proposed to calculate the missing intensity. In addition, the integration method of the RTE (IMRTE), used in the discrete ordinates method (DOM) or the finite volume method (FVM) to satisfy the control volume photon balance, and the linear interpolation method (LIM) are examined for comparison with the SDOIM. The accuracy, physical reliability, and smoothness of the intensity obtained with the three methods are carefully analyzed. Application of the SDOIM yields reliable results that are accurate and free from physically unrealistic intensity distributions.
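As a minimal illustration of the control-volume photon balance that a nonconservative point method must be supplemented with, the sketch below integrates a one-dimensional gray RTE over a single control volume and recovers a missing cell-centre intensity from the face intensities, alongside a simple linear-interpolation (LIM-style) estimate. All symbols, values, and the upwind closure are assumptions for illustration, not the SDOIM itself.

```python
import numpy as np

def center_intensity_from_balance(I_w, I_e, mu, kappa, I_b, dx):
    """Recover a missing cell-centre intensity from the control-volume photon
    balance of a 1D gray RTE, mu*dI/dx = kappa*(I_b - I).

    Integrating over a control volume of width dx gives
        mu*(I_e - I_w) = kappa*(I_b - I_P)*dx,
    which can be solved for the centre value I_P (an IMRTE-style closure).
    """
    return I_b - mu * (I_e - I_w) / (kappa * dx)

def center_intensity_linear(I_w, I_e):
    """Linear-interpolation (LIM-style) estimate of the centre intensity."""
    return 0.5 * (I_w + I_e)

# Toy check: in an absorbing-emitting medium the two closures differ, and only
# the balance-based value satisfies the control-volume budget exactly.
I_w, mu, kappa, I_b, dx = 1.0, 0.5, 2.0, 0.8, 0.1
I_e = I_w + kappa * dx / mu * (I_b - I_w)   # one explicit upwind step (illustrative)
print(center_intensity_from_balance(I_w, I_e, mu, kappa, I_b, dx))
print(center_intensity_linear(I_w, I_e))
```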

Author(s):  
Junsang Yoo ◽  
Taeyong Lee ◽  
Pyungsik Go ◽  
Yongseok Cho ◽  
Kwangsoon Choi ◽  
...  

On the American continent, the most widely used alternative fuel is ethanol. In Brazil in particular, various gasoline–ethanol blends are widespread, and a vehicle that runs on such blended fuels is called a flexible fuel vehicle. Because several blending ratios are offered at gas stations, the fuel properties may change after refueling depending on the driver's choice, and the combustion characteristics of the flexible fuel vehicle engine may change with them. To respond to the flexible fuel vehicle market in Brazil, a study on blended fuels was performed. The main purpose of this study is to enhance the performance of a flexible fuel vehicle engine targeting the Brazilian market. We therefore investigated the combustion characteristics and optimal spark timings of blended fuels with various blending ratios to improve the performance of the flexible fuel vehicle engine. An empirical equation was proposed as a tool for predicting the optimal spark timing of the 1.6 L flexible fuel vehicle engine. The validity of the equation was examined by comparing the predicted optimal spark timings with the stock spark timings in engine tests. When the stock spark timings of E0 and E100 were optimal, the empirical equation predicted the actual optimal spark timings of the blended fuels with good accuracy. Under all conditions, optimizing the spark timing control improved performance; in particular, torque improved by 5.4% for E30 and 1.8% for E50 without affecting combustion stability. From these results, it was concluded that the linear interpolation method is not suitable for flexible fuel vehicle engine control. Instead, an optimal spark timing that reflects the specific octane numbers of the gasoline–ethanol blends should be applied to maximize the performance of the flexible fuel vehicle engine. The results of this study are expected to reduce the engine calibration effort when developing new flexible fuel vehicle engines and to serve as a basic strategy for improving the performance of other flexible fuel vehicle engines.
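To make the comparison concrete, the sketch below contrasts blend-ratio linear interpolation of spark advance between the E0 and E100 calibrations (the approach the study finds unsuitable) with a lookup keyed to the blend's octane number. The timing values, octane numbers, and the RON-to-advance map are illustrative assumptions, not the paper's empirical equation.

```python
import numpy as np

# Hypothetical MBT spark timings (deg BTDC) for neat gasoline (E0) and ethanol (E100)
SPARK_E0, SPARK_E100 = 20.0, 32.0
# Assumed research octane numbers of the blends (non-linear in ethanol fraction)
BLEND_FRACTIONS = np.array([0.0, 0.3, 0.5, 1.0])
BLEND_RON = np.array([92.0, 99.0, 102.0, 109.0])

def spark_linear(ethanol_fraction):
    """Blend-ratio linear interpolation between the E0 and E100 calibrations."""
    return (1 - ethanol_fraction) * SPARK_E0 + ethanol_fraction * SPARK_E100

def spark_octane_based(ethanol_fraction):
    """Illustrative alternative: look up the blend's octane number, then map it to a
    spark advance anchored at the E0 and E100 points (assumed linear RON-to-advance map)."""
    ron = np.interp(ethanol_fraction, BLEND_FRACTIONS, BLEND_RON)
    return SPARK_E0 + (SPARK_E100 - SPARK_E0) * (ron - BLEND_RON[0]) / (BLEND_RON[-1] - BLEND_RON[0])

for e in (0.0, 0.3, 0.5, 1.0):
    print(f"E{int(e * 100):>3}: linear {spark_linear(e):5.1f} deg, octane-based {spark_octane_based(e):5.1f} deg")
```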


2002 ◽  
Vol 82 (1) ◽  
pp. 64-78 ◽  
Author(s):  
Sari Metsämäki ◽  
Jenni Vepsäläinen ◽  
Jouni Pulliainen ◽  
Yrjö Sucksdorff

2012 ◽  
Vol 588-589 ◽  
pp. 1312-1315
Author(s):  
Yi Kun Zhang ◽  
Ming Hui Zhang ◽  
Xin Hong Hei ◽  
Deng Xin Hua ◽  
Hao Chen

Aiming at building a lidar data interpolation model, this paper designs and implements a GA-BP interpolation method. The proposed method uses a genetic algorithm (GA) to optimize the back-propagation (BP) neural network, which greatly improves the calculation accuracy and convergence rate of the BP neural network. Experimental results show that the proposed method achieves higher interpolation accuracy than both the plain BP neural network and the linear interpolation method.
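A minimal sketch of the GA-BP idea, assuming the genetic algorithm is used to find good initial weights for a small back-propagation network that is then fine-tuned by gradient descent; the network size, GA settings, and the synthetic "lidar-like" profile are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "lidar-like" profile: signal sampled on a coarse range grid (assumption)
x_train = np.linspace(0.0, 1.0, 40)
y_train = np.exp(-3 * x_train) * (1 + 0.3 * np.sin(12 * x_train))

N_HIDDEN = 8
N_W = N_HIDDEN * 2 + N_HIDDEN + 1          # weights + biases of a 1-8-1 tanh network

def unpack(w):
    w1 = w[:N_HIDDEN].reshape(1, N_HIDDEN)
    b1 = w[N_HIDDEN:2 * N_HIDDEN]
    w2 = w[2 * N_HIDDEN:3 * N_HIDDEN].reshape(N_HIDDEN, 1)
    b2 = w[-1]
    return w1, b1, w2, b2

def forward(w, x):
    w1, b1, w2, b2 = unpack(w)
    h = np.tanh(x[:, None] @ w1 + b1)
    return (h @ w2).ravel() + b2

def mse(w):
    return np.mean((forward(w, x_train) - y_train) ** 2)

# --- GA stage: evolve candidate initial weight vectors ---
pop = rng.normal(0, 1, size=(60, N_W))
for gen in range(100):
    fitness = np.array([mse(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:20]]            # truncation selection
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(20, size=2)]
        mask = rng.random(N_W) < 0.5                   # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0, 0.1, N_W))  # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([mse(ind) for ind in pop])]

# --- BP stage: fine-tune the GA's best weights (finite-difference gradient, sketch only;
# a real BP stage would use analytic gradients) ---
w = best.copy()
for step in range(500):
    grad = np.zeros_like(w)
    for i in range(N_W):
        e = np.zeros(N_W); e[i] = 1e-5
        grad[i] = (mse(w + e) - mse(w - e)) / 2e-5
    w -= 0.02 * grad

print("GA best MSE:", mse(best), " after BP fine-tuning:", mse(w))
```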


2019 ◽  
Vol 99 (1) ◽  
pp. 12-24 ◽  
Author(s):  
Rezvan Taki ◽  
Claudia Wagner-Riddle ◽  
Gary Parkin ◽  
Rob Gordon ◽  
Andrew VanderZaag

Micrometeorological methods are ideally suited for continuous measurements of N2O fluxes, but gaps in the time series occur due to low-turbulence conditions, power failures, and adverse weather. Two gap-filling methods, linear interpolation and artificial neural networks (ANN), were used to reconstruct missing N2O flux data from a corn–soybean–wheat rotation and to evaluate the impact on annual N2O emissions from 2001 to 2006 at the Elora Research Station, ON, Canada. The single-year ANN method is recommended because it captured flux variability better than the linear interpolation method (average R2 of 0.41 vs. 0.34). Annual N2O emissions and annual bias resulting from linear and single-year ANN gap filling were comparable when gaps were few and short (i.e., percentage of missing values <30%). However, with longer gaps (>20 d), the bias error in annual fluxes varied between 0.082 and 0.344 kg N2O-N ha−1 for linear interpolation and between 0.069 and 0.109 kg N2O-N ha−1 for the single-year ANN. Hence, the single-year ANN, with its lower annual bias and stable behaviour across years, is recommended, provided the driving inputs needed for the ANN model (i.e., soil temperature, soil water content, precipitation, mineral N content, and snow depth) are available.
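As a concrete illustration of the two gap-filling approaches, the sketch below fills an artificial gap in a synthetic half-hourly flux series with linear interpolation and with a small neural network driven by auxiliary variables. The drivers, network settings, and data are assumptions, not the study's configuration (scikit-learn's MLPRegressor stands in for the ANN used in the paper).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic half-hourly N2O flux driven by soil temperature and water content (assumed drivers)
n = 48 * 60                                         # ~60 days of half-hourly data
soil_temp = 10 + 8 * np.sin(2 * np.pi * np.arange(n) / 48) + rng.normal(0, 1, n)
soil_water = 0.3 + 0.05 * np.cos(2 * np.pi * np.arange(n) / (48 * 10))
flux = 0.02 * soil_temp + 5 * soil_water + rng.normal(0, 0.05, n)

# Knock out one multi-day gap (illustrative; the paper considers gaps up to >20 d)
gap = np.zeros(n, dtype=bool)
gap[2000:2400] = True
t = np.arange(n)

# 1) Linear interpolation across the gap
flux_lin = flux.copy()
flux_lin[gap] = np.interp(t[gap], t[~gap], flux[~gap])

# 2) ANN gap filling: train on observed periods using the driving variables as inputs
X = np.column_stack([soil_temp, soil_water])
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
ann.fit(X[~gap], flux[~gap])
flux_ann = flux.copy()
flux_ann[gap] = ann.predict(X[gap])

for name, filled in [("linear", flux_lin), ("ANN", flux_ann)]:
    rmse = np.sqrt(np.mean((filled[gap] - flux[gap]) ** 2))
    print(f"{name:>6} gap-filling RMSE: {rmse:.3f}")
```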


1997 ◽  
Vol 40 (1) ◽  
Author(s):  
E. Le Meur ◽  
J. Virieux ◽  
P. Podvin

At a local scale, travel-time tomography requires a simultaneous inversion of earthquake positions and velocity structure. We applied a joint iterative inversion scheme in which medium parameters and hypocenter parameters were inverted simultaneously. At each step of the inversion, rays between hypocenters and stations were traced, new partial derivatives of travel-time were estimated, and scaling between parameters was performed. The large sparse linear system modified by the scaling was solved by the LSQR method at each iteration. We compared the performance of two different forward techniques. Our first approach was a fast ray tracing based on a paraxial method to solve the two-point boundary value problem; the rays connect sources and stations in a velocity structure described by a 3D B-spline interpolation over a regular grid. The second approach is the finite-difference solution of the eikonal equation with a 3D linear interpolation over a regular grid. The partial derivatives are estimated differently depending on the interpolation method, and synthetic examples show that the reconstructed images are sensitive to the spatial variation of the partial derivatives. We also found that the scaling between velocity and hypocenter parameters in the linear system to be solved is important in recovering accurate amplitudes of anomalies; this scaling was estimated to be five through synthetic examples with the real configuration of stations and sources. We also found it necessary to scale P and S velocities in order to better recover the amplitude of the S velocity anomaly. The crustal velocity structure of a 50 km × 50 km × 20 km domain near Patras in the Gulf of Corinth (Greece) was recovered using microearthquake data recorded during a 1991 field experiment in which a dense network of 60 digital stations was deployed. These microearthquakes were widely distributed under the Gulf of Corinth and enabled us to perform a reliable tomography of first-arrival P and S travel-times. The obtained images of this seismically active zone show a south/north asymmetry in agreement with the tectonic context. The transition to high velocity lies between 6 and 9 km, indicating a very thin crust related to the active extension regime.
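The role of the column scaling in the LSQR solution can be illustrated with a toy sparse system; the Jacobian, noise level, and dimensions below are invented, and the factor of five is reused from the abstract purely as an example.

```python
import numpy as np
from scipy.sparse import random as sparse_random, hstack, diags
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)

n_obs, n_vel, n_hypo = 500, 200, 40      # travel-time residuals, slowness cells, 10 events x 4 params
G_vel = sparse_random(n_obs, n_vel, density=0.05, random_state=2)
G_hypo = sparse_random(n_obs, n_hypo, density=0.2, random_state=3)
G = hstack([G_vel, G_hypo]).tocsr()

m_true = rng.normal(0, 1, n_vel + n_hypo)
d = G @ m_true + rng.normal(0, 0.01, n_obs)

# Column scaling: hypocenter columns weighted 5x relative to velocity columns
# (the value the abstract reports from synthetic tests, reused here for illustration).
scale = np.concatenate([np.ones(n_vel), 5.0 * np.ones(n_hypo)])
G_scaled = G @ diags(scale)

m_scaled = lsqr(G_scaled, d, iter_lim=200)[0]
m_est = m_scaled * scale                  # undo the scaling to recover physical parameters

print("velocity-part RMS error:", np.sqrt(np.mean((m_est[:n_vel] - m_true[:n_vel]) ** 2)))
```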


Author(s):  
H. L. Zhang ◽  
H. Zhao ◽  
Y. P. Liu ◽  
X. K. Wang ◽  
C. Shu

Abstract. The optical properties of atmospheric aerosols have long been of wide concern in atmospheric and environmental research. Many scholars use the Klett method to invert the Mie-scattering lidar return signal. However, lidar detection data often contain negative values that have no physical meaning; these are jump points (also called wild or abnormal points), i.e., points that differ markedly from the surrounding detection points and are inconsistent with the actual situation. As a result, when a far-end point is selected as the boundary value, the inversion error becomes too large to retrieve the extinction coefficient profile, so the jump points must be removed during the inversion. To solve this problem, this paper proposes a method for processing jump points in lidar detection data and for inverting the aerosol extinction coefficient: when there are few jump points, linear interpolation is used; when a long run of consecutive jump points occurs, function fitting is used instead. The feasibility and reliability of the method are verified with actual lidar data. The results show that the extinction coefficient profile can be successfully inverted for different far-end boundary values, that the inverted profile is more continuous, smooth, and realistic, and that the effective detection range of the lidar is greatly increased. The inverted profiles are therefore better suited to further analysis of atmospheric aerosol properties, and the method has strong practical value.
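A compact sketch of the two-branch treatment described above: short runs of flagged points are bridged by linear interpolation, while longer consecutive runs are replaced by a local polynomial (function) fit. The flagging rule, run-length threshold, window size, and fit order are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def clean_jump_points(signal, max_linear_run=3, fit_window=20, fit_order=2):
    """Replace jump points (here flagged simply as non-positive samples) in a lidar
    return signal.  Short runs are linearly interpolated; long runs are replaced by
    a polynomial fitted to the surrounding valid samples."""
    signal = np.asarray(signal, dtype=float)
    bad = signal <= 0                              # assumed flagging rule
    cleaned = signal.copy()
    idx = np.arange(len(signal))

    # find runs of consecutive flagged points
    runs, start = [], None
    for i, b in enumerate(bad):
        if b and start is None:
            start = i
        elif not b and start is not None:
            runs.append((start, i)); start = None
    if start is not None:
        runs.append((start, len(signal)))

    for lo, hi in runs:
        if hi - lo <= max_linear_run:              # few jump points: linear interpolation
            cleaned[lo:hi] = np.interp(idx[lo:hi], idx[~bad], signal[~bad])
        else:                                      # long run: local polynomial (function) fit
            sel = (~bad) & (idx >= lo - fit_window) & (idx < hi + fit_window)
            coeffs = np.polyfit(idx[sel], signal[sel], fit_order)
            cleaned[lo:hi] = np.polyval(coeffs, idx[lo:hi])
    return cleaned

# Toy profile with isolated and consecutive negative (jump) points
rng = np.random.default_rng(3)
profile = np.exp(-np.linspace(0, 4, 200)) + rng.normal(0, 0.01, 200)
profile[[30, 90]] = -0.05                          # isolated jump points
profile[140:150] = -0.02                           # long run of jump points
print(clean_jump_points(profile)[:5])
```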


2020 ◽  
Author(s):  
Alessandro Fassò ◽  
Michael Sommer ◽  
Christoph von Rohden

Abstract. This paper is motivated by the fact that, although temperature readings made by Vaisala RS41 radiosondes at GRUAN sites (http://www.gruan.org) are given at 1 s resolution, for various reasons, missing data are spread along the atmospheric profile. Such a problem is quite common in radiosonde data and other profile data. Hence, (linear) interpolation is often used to fill the gaps in published data products. In this perspective, the present paper considers interpolation uncertainty. To do this, a statistical approach is introduced giving some understanding of the consequences of substituting missing data by interpolated ones. In particular, a general frame for the computation of interpolation uncertainty based on a Gaussian process (GP) set-up is developed. Using the GP characteristics, a simple formula for computing the linear interpolation standard error is given. Moreover, the GP interpolation is proposed as it provides an alternative interpolation method with its standard error. For the Vaisala RS41, the two approaches are shown to give similar interpolation performances using an extensive cross-validation approach based on the block-bootstrap technique. Statistical results about interpolation uncertainties at various GRUAN sites and for various missing gap lengths are provided. Since both provide an underestimation of the cross-validation interpolation uncertainty, a bootstrap-based correction formula is proposed. Using the root mean square error, it is found that, for short gaps, with an average length of 5 s, the average uncertainty is smaller than 0.10 K. For larger gaps, it increases up to 0.35 K for an average gap length of 30 s, and up to 0.58 K for a gap of 60 s.
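The kind of closed-form result mentioned above can be sketched as follows: for a zero-mean stationary Gaussian process with covariance function k, the error of linearly interpolating between two readings has a variance that is a simple combination of covariance values. The squared-exponential kernel and its parameters below are illustrative assumptions, not the formula or parameters estimated from GRUAN data.

```python
import numpy as np

def sq_exp_cov(h, sigma2=0.04, length=15.0):
    """Assumed stationary covariance of the temperature fluctuation process:
    squared-exponential with variance sigma2 (K^2) and a length scale in seconds."""
    return sigma2 * np.exp(-0.5 * (h / length) ** 2)

def linear_interp_stderr(t, t1, t2, cov=sq_exp_cov):
    """Standard error of linear interpolation at time t between readings at t1 < t2,
    for a zero-mean stationary Gaussian process with covariance function cov:

        e(t) = x(t) - [(1-w) x(t1) + w x(t2)],  w = (t - t1) / (t2 - t1)
        Var[e] = (1 + (1-w)^2 + w^2) k(0) + 2 w (1-w) k(t2-t1)
                 - 2 (1-w) k(t-t1) - 2 w k(t2-t)
    """
    w = (t - t1) / (t2 - t1)
    var = ((1 + (1 - w) ** 2 + w ** 2) * cov(0.0)
           + 2 * w * (1 - w) * cov(t2 - t1)
           - 2 * (1 - w) * cov(t - t1)
           - 2 * w * cov(t2 - t))
    return np.sqrt(np.maximum(var, 0.0))

# Uncertainty at the middle of a 30 s and a 60 s gap (illustrative parameters)
for gap in (30.0, 60.0):
    print(f"{gap:.0f} s gap, mid-point std. error: {linear_interp_stderr(gap / 2, 0.0, gap):.3f} K")
```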


Author(s):  
Shunbo Lei ◽  
Johanna Mathieu ◽  
Rishee Jain

Abstract Commercial buildings generally have large thermal inertia, and thus can provide services to power grids (e.g., demand response (DR)) by modulating their Heating, Ventilation, and Air Conditioning (HVAC) systems. Shifting consumption on timescales of minutes to an hour can be accomplished through temperature setpoint adjustments that affect HVAC fan consumption. Estimating the counterfactual baseline power consumption of HVAC fans is challenging but is critical for assessing the capacity and participation of DR from HVAC fans in grid-interactive efficient buildings (GEBs). DR baseline methods have been developed for whole-building power profiles. This work evaluates those methods on total HVAC fan power profiles, which have different characteristics than whole-building power profiles. Specifically, we assess averaging methods (e.g., Y-day average, HighXofY, and MidXofY, with and without additive adjustments), which are the most commonly used in practice, and a least squares-based linear interpolation method recently developed for baselining HVAC fan power. We use empirical submetering data from HVAC fans in three University of Michigan buildings in our assessment. We find that the linear interpolation method has a low bias and by far the highest accuracy, indicating that it is potentially the most effective existing baseline method for quantifying the effects of short-term load shifting of HVAC fans. Overall, our results provide new insights on the applicability of existing DR baseline methods to baselining fan power and enable more widespread contribution of GEBs to DR and other grid services.
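Two of the baseline families compared above can be sketched on a synthetic fan-power profile: an averaging baseline (mean of the X highest-consumption of the last Y non-event days, with an additive pre-event adjustment) and a simplified linear-interpolation baseline that bridges the event window. The data and parameters are illustrative, and the interpolation shown is plain two-point interpolation rather than the paper's least-squares formulation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 15-min HVAC fan power for 11 days (96 intervals/day); day 10 is the DR event day
days, intervals = 11, 96
base_shape = 20 + 15 * np.sin(np.linspace(0, np.pi, intervals))          # kW, assumed profile
power = base_shape + rng.normal(0, 1.0, size=(days, intervals))
event_day, event_slots = 10, slice(48, 56)                                # 12:00-14:00 event

def high_x_of_y_baseline(power, event_day, x=5, y=10, adj_slots=slice(40, 48)):
    """HighXofY averaging baseline with an additive adjustment from the pre-event window."""
    history = power[event_day - y:event_day]
    top = history[np.argsort(history.sum(axis=1))[-x:]]   # X highest-consumption of last Y days
    baseline = top.mean(axis=0)
    adjustment = power[event_day, adj_slots].mean() - baseline[adj_slots].mean()
    return baseline + adjustment

def linear_interp_baseline(power, event_day, event_slots):
    """Bridge the event window by interpolating between the last pre-event and first
    post-event fan-power readings (simplified two-point version)."""
    day = power[event_day].copy()
    lo, hi = event_slots.start - 1, event_slots.stop
    day[event_slots] = np.interp(np.arange(event_slots.start, event_slots.stop),
                                 [lo, hi], [day[lo], day[hi]])
    return day

b_avg = high_x_of_y_baseline(power, event_day)
b_lin = linear_interp_baseline(power, event_day, event_slots)
actual = power[event_day]
print("avg-baseline estimate of shed (kW):", np.mean(b_avg[event_slots] - actual[event_slots]))
print("lin-baseline estimate of shed (kW):", np.mean(b_lin[event_slots] - actual[event_slots]))
```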


Circulation ◽  
2021 ◽  
Vol 144 (Suppl_2) ◽  
Author(s):  
Kevin M Wheelock ◽  
Lian Chen ◽  
Saket Girotra ◽  
Paul S Chan ◽  
Rohan Khera

Introduction: For comatose survivors of out-of-hospital cardiac arrest (OHCA), targeted temperature management (TTM) is strongly recommended with a goal temperature of 32-36C for a period of at least 24 hours. However, adherence to this target in clinical practice remains unknown. We developed time-in-therapeutic range (TTR) as a treatment metric for patients receiving TTM and evaluated patient- and site-level variation in TTR. Methods: We used data from the Resuscitation Outcomes Consortium-CCC trial which included patients with OHCA across 10 North American sites during 2011-2015. We identified patients who underwent TTM for >12 hours. Serial temperature measures were evaluated between hypothermia start and end times with temperatures between consecutive measures imputed using a linear interpolation method. TTR was defined as percent of time between 32C and 36C during TTM (Fig A). Site was defined based on trial clusters, which represented hospitals served by the same EMS agency. Site-level variation in TTR<90% was evaluated in hierarchical logistic regression using median odds ratio (OR), after adjustment for patient-level factors. Results: A total of 2,695 patients across 49 clusters were included with a median of 45 (IQR: 34 - 52) patients per cluster. The median duration of hypothermia was 23 (IQR: 21 - 24) hours with a median time outside therapeutic range of 0.9 (IQR: 0.0 - 4.2) hours. The median TTR was 96.1% but 1,654 (61%) patients had at least one temperature outside the therapeutic range and 991 (37%) patients had a TTR <90%. There was large variation across sites in the proportion of patients with TTR<90%, ranging from 10% to 68%, with a median OR of 1.74 (Fig B). Conclusions: Within a large randomized controlled trial, more than 1 in 3 OHCA patients treated with TTM had a TTR <90%, with large variation in TTR across sites. These findings highlight an urgent need to focus on improving quality of TTM in clinical practice.
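The TTR metric can be computed from serial temperature measurements as sketched below: temperatures between consecutive readings are filled by linear interpolation on a fine time grid, and TTR is the fraction of the TTM period spent between 32 °C and 36 °C. The measurement times and values are invented for illustration.

```python
import numpy as np

def time_in_therapeutic_range(times_h, temps_c, low=32.0, high=36.0, step_h=1/60):
    """Percent of the TTM period with core temperature in [low, high] degrees C.
    Temperatures between consecutive measurements are linearly interpolated."""
    grid = np.arange(times_h[0], times_h[-1] + step_h, step_h)
    temp_grid = np.interp(grid, times_h, temps_c)
    return 100.0 * np.mean((temp_grid >= low) & (temp_grid <= high))

# Hypothetical serial measurements over a 24-hour TTM course
times = np.array([0, 2, 4, 8, 12, 16, 20, 24], dtype=float)     # hours from hypothermia start
temps = np.array([36.8, 34.0, 33.2, 33.0, 33.1, 36.4, 33.4, 33.2])

print(f"TTR = {time_in_therapeutic_range(times, temps):.1f}%")
```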

