Evaluation of Total Emittance of an Isothermal Nongray Absorbing, Scattering Gas-Particle Mixture Based on the Concept of Absorption Mean Beam Length

1992 ◽  
Vol 114 (3) ◽  
pp. 653-658 ◽  
Author(s):  
W. W. Yuen ◽  
A. Ma

A general methodology for evaluating the total emittance of an isothermal, nongray, isotropically scattering particle-gas mixture is presented. Based on the concept of absorption mean beam length (AMBL), the methodology is demonstrated to be computationally efficient and accurate. As an illustration, the total emittance of a slab containing carbon particles and CO2 is evaluated. The nongray extinction coefficient and scattering albedo of the carbon particles are calculated from Mie theory and the available index of refraction data. The narrow-band fixed-line-spacing model (Edwards et al., 1967) is used to characterize the nongray spectral absorption coefficient of CO2. Numerical data show that the combined nongray and scattering effects are quite significant. For particles of moderate to large radius (say, ≥1 μm), ignoring scattering can lead to errors of more than 20 percent in the predicted total emittance. The no-scattering results also yield incorrect qualitative behavior of the total emittance with respect to its dependence on mixture temperature and particle concentration. The accuracy of many existing predictions of the total emittance of gas-particle mixtures that ignore scattering is thus highly uncertain.
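To make the spectral-integration step concrete, the following sketch computes the total emittance of an isothermal slab by weighting gray band emissivities with blackbody fractions. It is only a minimal illustration of the band-summation idea, not the authors' AMBL formulation, and it neglects scattering; the band limits, absorption coefficients, temperature, and path length are placeholder values.

```python
import numpy as np

# Minimal sketch of total emittance via band summation for an isothermal slab.
# This is only an illustration of the spectral-integration step, not the
# authors' AMBL formulation; band limits, absorption coefficients, temperature
# and path length below are placeholder values, and scattering is neglected.

def planck(lam_um, T):
    """Spectral blackbody emissive power (W m^-2 um^-1); wavelength in um, T in K."""
    C1 = 3.7418e8   # W um^4 m^-2
    C2 = 1.4388e4   # um K
    return C1 / (lam_um**5 * (np.exp(C2 / (lam_um * T)) - 1.0))

def total_emittance(bands, T, L):
    """Sum gray band emissivities weighted by the blackbody fraction of each band.

    bands: list of (lam_lo_um, lam_hi_um, kappa_per_m), with the absorption
    coefficient assumed constant over each band.
    """
    lam = np.linspace(0.5, 100.0, 20000)            # um, uniform integration grid
    dlam = lam[1] - lam[0]
    e_b = planck(lam, T)
    total = e_b.sum() * dlam                        # ~ sigma * T**4
    eps = 0.0
    for lo, hi, kappa in bands:
        mask = (lam >= lo) & (lam <= hi)
        frac = e_b[mask].sum() * dlam / total       # blackbody fraction in band
        eps += frac * (1.0 - np.exp(-kappa * L))    # gray band emissivity
    return eps

# Hypothetical CO2-like bands near 2.7, 4.3 and 15 um:
bands = [(2.6, 2.9, 2.0), (4.1, 4.5, 8.0), (13.0, 17.0, 1.5)]
print(total_emittance(bands, T=1500.0, L=0.5))
```

Broadly, the AMBL concept replaces the geometric path length with an effective absorption beam length so that scattering can be accounted for; the sketch above omits exactly that correction, which is what the abstract argues cannot be neglected.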

Author(s):  
Mario Leoni ◽  
Lee Frederickson ◽  
Fletcher Miller

A new experimental set-up has been introduced at San Diego State University’s Combustion and Solar Energy Lab to study the thermal oxidation characteristics of in-situ generated carbon particles in air at high pressure. The study is part of a project developing a Small Particle Heat Exchange Receiver (SPHER) that utilizes concentrated solar power to run a Brayton cycle. The oxidation data obtained will be used in existing and planned computer models to accurately predict reactor temperatures and flow behavior in the SPHER. The carbon black particles were produced by thermal decomposition of natural gas at 1250 °C and a pressure of 5.65 bar (82 psi). Particles were analyzed using a Diesel Particle Scatterometer (DPS) and scanning electron microscopy (SEM) and found to have an average diameter of 310 nm. The size distribution and the complex index of refraction were measured, and the data were used to calculate the specific extinction cross section γ of the spherical particles. The oxidation rate was determined using two extinction tubes and a tube furnace, and the values were compared with those in the literature. The activation energy of the carbon particles was determined to be 295.02 kJ/mol, which is higher than in comparable studies. However, the oxidation of carbon particles larger than 100 nm has rarely been studied, and almost no previous data are available under these conditions.
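As a quick, hedged illustration of what the reported activation energy implies, the sketch below evaluates the Arrhenius temperature dependence of the oxidation rate. The pre-exponential factor and the two temperatures are placeholders, not values from the study, so only the ratio between the rates is meaningful.

```python
import numpy as np

# Sketch: Arrhenius temperature dependence implied by the reported activation
# energy of 295.02 kJ/mol. The pre-exponential factor A is a placeholder, so
# only the ratio of rates between two temperatures is meaningful here.

R = 8.314          # J mol^-1 K^-1
EA = 295.02e3      # J mol^-1, from the abstract

def arrhenius(T, A=1.0):
    """Arrhenius rate constant k = A * exp(-EA / (R * T))."""
    return A * np.exp(-EA / (R * T))

T1, T2 = 1100.0, 1300.0  # K, illustrative furnace temperatures
print(f"rate(T2)/rate(T1) = {arrhenius(T2) / arrhenius(T1):.1f}")
```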


2019 ◽  
Author(s):  
James Mathews ◽  
Saad Nadeem ◽  
Maryam Pouryahya ◽  
Zehor Belkhatir ◽  
Joseph O. Deasy ◽  
...  

Abstract We present a framework based on optimal mass transport to construct, for a given network, a reduction hierarchy which can be used for interactive data exploration and community detection. Given a network and a set of numerical data samples for each node, we calculate a new, computationally efficient comparison metric between Gaussian Mixture Models, the Gaussian Mixture Transport distance, to determine a series of merge simplifications of the network. If only a network is given, numerical samples are synthesized from the network topology. The method has its basis in the local connection structure of the network, as well as the joint distribution of the data associated with neighboring nodes. The analysis is benchmarked on networks with known community structures. We also analyze gene regulatory networks, including the PANTHER curated database and networks inferred from the GTEx lung and breast tissue RNA profiles. Gene Ontology annotations from the EBI GOA database are ranked and superimposed to explain the salient gene modules. We find that several gene modules related to highly specific biological processes are well coordinated in such tissues. We also find that 18 of the 50 genes of the PAM50 breast-tumor prognostic signature appear among the highly coordinated genes in a single gene module, in both the breast and lung samples. Moreover, these 18 are precisely the subset of the PAM50 recently identified as the basal-like markers.
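The mixture-level comparison ultimately builds on transport distances between individual Gaussian components. As a point of reference only, the sketch below evaluates the closed-form 2-Wasserstein distance between two univariate Gaussians; this is the elementary building block such constructions typically start from, not the paper's Gaussian Mixture Transport distance itself, and the parameter values are arbitrary.

```python
import numpy as np

# Illustrative building block only: the closed-form 2-Wasserstein distance
# between two univariate Gaussians. The paper's Gaussian Mixture Transport
# distance compares full mixtures; this shows the Gaussian-to-Gaussian ground
# distance such constructions typically start from.

def w2_gaussian(mu1, sigma1, mu2, sigma2):
    """W2 distance between N(mu1, sigma1^2) and N(mu2, sigma2^2)."""
    return np.sqrt((mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2)

print(w2_gaussian(0.0, 1.0, 2.0, 1.5))  # -> ~2.06
```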


2021 ◽  
Vol 11 (1) ◽  
pp. 439
Author(s):  
Simon Ingelsten ◽  
Andreas Mark ◽  
Roland Kádár ◽  
Fredrik Edelvik

A new Lagrangian–Eulerian method for the simulation of viscoelastic free surface flow is proposed. The approach is developed from a method in which the constitutive equation for the viscoelastic stress is solved at Lagrangian nodes convected by the flow, with the stress then interpolated to the Eulerian grid using radial basis functions. In the new method, a backwards-tracking methodology is employed, allowing fixed locations for the Lagrangian nodes to be chosen a priori. The proposed method is also extended to the simulation of viscoelastic free surface flow with the volume of fluid method. No unstructured interpolation or node redistribution is required with the new approach. Furthermore, the total number of Lagrangian nodes is significantly reduced compared to the original Lagrangian–Eulerian method. Consequently, the method is more computationally efficient and robust. No additional stabilization technique, such as both-sides diffusion or reformulation of the constitutive equation, is necessary. A validation is performed against the analytic solution for transient and steady planar Poiseuille flow, with excellent results. Furthermore, the proposed method agrees well with numerical data from the literature for the viscoelastic die-swell flow of an Oldroyd-B model. The capability to simulate viscoelastic free surface flow is also demonstrated through the simulation of a jet-buckling case.
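To illustrate the node-level constitutive update that such Lagrangian approaches rely on, the sketch below integrates the upper-convected Maxwell (Oldroyd-B polymer) stress at a single node in a prescribed simple-shear velocity gradient. It is not the paper's backwards-tracking scheme; the material parameters, shear rate, and time step are placeholders, and the steady-state values are quoted only as a check.

```python
import numpy as np

# Minimal sketch: explicit time integration of the upper-convected Maxwell
# (Oldroyd-B polymer) stress at a single Lagrangian node in a prescribed,
# homogeneous velocity gradient. This illustrates the node-level constitutive
# update only; it is not the paper's backwards-tracking scheme, and all
# parameters are placeholders.

lam, eta_p = 0.1, 1.0                  # relaxation time (s), polymer viscosity (Pa s)
L = np.array([[0.0, 10.0],             # velocity gradient for simple shear,
              [0.0,  0.0]])            # shear rate 10 1/s
tau = np.zeros((2, 2))                 # polymer stress, starts unloaded
dt, nsteps = 1e-4, 20000

for _ in range(nsteps):
    # d(tau)/dt = L.tau + tau.L^T + (eta_p*(L + L^T) - tau)/lam
    rate = L @ tau + tau @ L.T + (eta_p * (L + L.T) - tau) / lam
    tau = tau + dt * rate

# Steady simple-shear solution: tau_xy = eta_p*gdot = 10, tau_xx = 2*lam*eta_p*gdot^2 = 20
print(tau)
```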


2015 ◽  
Vol 96 (12) ◽  
pp. 2045-2057 ◽  
Author(s):  
Hongli Jiang ◽  
Steve Albers ◽  
Yuanfu Xie ◽  
Zoltan Toth ◽  
Isidora Jankov ◽  
...  

Abstract The accurate and timely depiction of the state of the atmosphere on multiple scales is critical to enhance forecaster situational awareness and to initialize very short-range numerical forecasts in support of nowcasting activities. The Local Analysis and Prediction System (LAPS) of the Earth System Research Laboratory (ESRL)/Global Systems Division (GSD) is a numerical data assimilation and forecast system designed to serve such very fine-scale applications. LAPS is used operationally by more than 20 national and international agencies, including the NWS, where it has been operational in the Advanced Weather Interactive Processing System (AWIPS) since 1995. Using computationally efficient and scientifically advanced methods, such as a multigrid technique that adds observational information on progressively finer scales in successive iterations, GSD recently introduced a new, variational version of LAPS (vLAPS). Surface and 3D analyses generated by vLAPS were tested in the Hazardous Weather Testbed (HWT) to gauge their utility in both situational awareness and nowcasting applications. On a number of occasions, forecasters found that the vLAPS analyses and ensuing very short-range forecasts provided useful guidance for the development of severe weather events, including tornadic storms, while in other cases the guidance was less useful.
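As a purely conceptual toy of the coarse-to-fine idea, the sketch below blends a few point observations into a background field with a successive-correction pass whose smoothing length scale shrinks on each iteration. It is not the vLAPS variational analysis; the grid size, observation values, and length scales are all invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Conceptual sketch only: a successive-correction analysis that blends sparse
# observations into a background field on progressively finer scales, loosely
# mimicking the coarse-to-fine idea described above. It is not the vLAPS code;
# grid size, length scales and observation values are placeholders.

nx = ny = 64
background = np.zeros((ny, nx))                          # first-guess field
obs = {(10, 12): 2.0, (40, 45): -1.5, (55, 20): 1.0}     # (j, i) -> observed value

analysis = background.copy()
for sigma in (16, 8, 4, 2):                              # coarse -> fine length scales
    incr = np.zeros_like(analysis)
    weight = np.zeros_like(analysis)
    for (j, i), value in obs.items():
        incr[j, i] = value - analysis[j, i]              # innovation at obs point
        weight[j, i] = 1.0
    incr = gaussian_filter(incr, sigma)                  # spread the innovations
    weight = gaussian_filter(weight, sigma)              # and the normalizing weights
    analysis += np.where(weight > 1e-12, incr / weight, 0.0)

print(analysis[10, 12], analysis[40, 45])                # approach the observed values
```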


Author(s):  
F. Hasselbach ◽  
A. Schäfer

In 1980, Möllenstedt and Wohland proposed two methods for measuring the coherence lengths of electron wave packets interferometrically, by observing the interference fringe contrast as a function of the longitudinal shift of the wave packets. In both cases an electron beam is split by an electron-optical biprism into two coherent wave packets, and subsequently both packets travel part of their way to the interference plane in regions of different electric potential, either in a Faraday cage (Fig. 1a) or in a Wien filter (crossed electric and magnetic fields, Fig. 1b). In the Faraday cage the phase and group velocities of the upper beam (Fig. 1a) are retarded or accelerated according to the cage potential. In the Wien filter the group velocity of both beams varies with the filter excitation, while the phase velocity remains unchanged. The phase of the electron wave is not affected at all in the compensated state of the Wien filter, since the electron-optical index of refraction in this state equals unity both inside and outside the filter.
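For orientation only, the standard textbook relations below (not taken from the cited work) connect the observed fringe contrast to the longitudinal coherence of the packets: the contrast stays high only while the relative longitudinal displacement remains small compared with the coherence length set by the wavelength spread of the beam.

```latex
% Standard relations, quoted for orientation only (not from the cited paper).
% \lambda is the de Broglie wavelength, \Delta\lambda the wavelength spread,
% and \Delta z the relative longitudinal shift of the two wave packets.
\[
  l_c \approx \frac{\lambda^{2}}{\Delta\lambda},
  \qquad
  \text{high fringe contrast requires } |\Delta z| \lesssim l_c .
\]
```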


Author(s):  
W.M. Stobbs

I do not have access to the abstracts of the first meeting of EMSA, but at this, the 50th Anniversary meeting of the Electron Microscopy Society of America, I have an excuse to consider the historical origins of the approaches we take to the use of electron microscopy for the characterisation of materials. I have myself been actively involved in the use of TEM for the characterisation of heterogeneities for little more than half of that period. My own view is that it was between the 3rd International Meeting at London and the 1956 Stockholm meeting, the first of the European series, that the foundations of the approaches we now take to the characterisation of a material using the TEM were laid down. (This was 10 years before I took dynamical theory to be etched in stone.) It was at the 1956 meeting that Menter showed lattice resolution images of sodium faujasite and Hirsch, Horne and Whelan showed images of dislocations in the XIVth session on “metallography and other industrial applications”. I have always, incidentally, been delighted by the way the latter authors misinterpreted astonishingly clear thickness fringes in a beaten foil of Al as being contrast due to “large strains”, an error which they corrected with admirable rapidity as the theory developed. At the London meeting the research described covered a broad range of approaches, including many that are only now being rediscovered as worth further effort; however, such is the power of “the image” to persuade that the above two papers set trends which influence, perhaps too strongly, the approaches we take now. Menter was clear that the way the planes in his image tended to be curved was associated with the imaging conditions rather than with lattice strains, and yet it now seems to be common practice to assume that the dots in an “atomic resolution image” can faithfully represent the variations in atomic spacing at a localised defect. Even when the more reasonable approach is taken of matching the image details with a computed simulation for an assumed model, the non-uniqueness of the interpreted fit seems to be rather rarely appreciated. Hirsch et al., on the other hand, made a point of using their images to get numerical data on characteristics of the specimen they examined, such as its dislocation density, which would not be expected to be influenced by uncertainties in the contrast. Nonetheless the trends were set, with microscope manufacturers producing higher- and higher-resolution microscopes, while the blind faith of users in the image produced as a near directly interpretable representation of reality seems to have increased rather than been generally questioned. But if we want to test structural models we need numbers, and it is the analogue-to-digital conversion of the information in the image which is required.


Author(s):  
W. E. Lee

An optical waveguide consists of a several-micron-wide channel with a slightly different index of refraction than the host substrate; light can be trapped in the channel by total internal reflection. Optical waveguides can be formed from single-crystal LiNbO3 using the proton exchange technique. In this technique, polished specimens are masked with polycrystalline chromium in such a way as to leave 3-13 μm wide channels. These are held in benzoic acid at 249°C for 5 minutes, allowing protons to exchange for lithium ions within the channels, causing an increase in the refractive index of the channel and creating the waveguide. Unfortunately, optical measurements often reveal a loss in waveguiding ability up to several weeks after exchange.
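As a rough sketch of the guiding condition created by such an index step, the snippet below computes the critical angle for total internal reflection and the corresponding numerical aperture of a step-index channel. The substrate index and the index increase are illustrative placeholders, not measured values from this work.

```python
import numpy as np

# Sketch of the guiding condition for a step-index channel: total internal
# reflection requires incidence beyond the critical angle set by the index
# step. The index values below are illustrative placeholders, not measured
# values from this work.

n_substrate = 2.20          # host LiNbO3 (illustrative)
delta_n = 0.01              # index increase in the exchanged channel (illustrative)
n_channel = n_substrate + delta_n

theta_c = np.degrees(np.arcsin(n_substrate / n_channel))   # critical angle inside the channel
na = np.sqrt(n_channel**2 - n_substrate**2)                # numerical aperture of the guide

print(f"critical angle ~ {theta_c:.1f} deg, NA ~ {na:.2f}")
```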


Author(s):  
B. Lencova ◽  
G. Wisselink

Recent progress in computer technology enables the calculation of lens fields and focal properties on commonly available computers such as IBM ATs. If we add to this the use of graphics, we greatly increase the applicability of design programs for electron lenses. Most programs for field computation are based on the finite element method (FEM). They are written in Fortran 77, so that they are easily transferred from PCs to larger machines. The design process has recently been made significantly more user friendly by adding input programs written in Turbo Pascal, which allows a flexible implementation of computer graphics. The input programs provide not only menu-driven input and modification of numerical data, but also graphical editing of the data. The input programs create files which are subsequently read by the Fortran programs. From the main menu of our magnetic lens design program, further options are chosen by using function keys or numbers. Some options (lens initialization and setting, fine mesh, current densities, etc.) open other menus where computation parameters can be set or numerical data can be entered with the help of a simple line editor. The "draw lens" option enables graphical editing of the mesh; see Fig. 1. The geometry of the electron lens is specified in terms of coordinates and indices of a coarse quadrilateral mesh. In this mesh, the fine mesh with smoothly changing step size is calculated by an automeshing procedure. The options shown in Fig. 1 allow modification of the number of coarse mesh lines, change of coordinates of mesh points or lines, and specification of lens parts. Interactive and graphical modification of the fine mesh can be called from the fine mesh menu. Finally, the lens computation can be called. Our FEM program allows up to 8000 mesh points on an AT computer. Another menu allows the display of computed results stored in output files and graphical display of axial flux density, flux density in magnetic parts, and the flux lines in magnetic lenses; see Fig. 2. A series of several lens excitations with user-specified or default magnetization curves can be calculated and displayed in one session.
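The automeshing step described above can be pictured with a small sketch: each coarse-mesh interval is subdivided into a fine mesh whose step size changes smoothly, here by simple geometric grading along one coordinate. This is only an illustration of the concept, not the meshing procedure of the program itself.

```python
import numpy as np

# Sketch of the automeshing idea: subdivide each coarse-mesh interval into a
# fine mesh whose step size changes smoothly (geometric grading). This is only
# an illustration of the concept, not the meshing procedure of the program
# described above; coordinates, counts and ratios are placeholders.

def grade_interval(x0, x1, n, ratio):
    """Return n+1 points from x0 to x1 with successive steps growing by 'ratio'."""
    steps = ratio ** np.arange(n)
    steps = steps / steps.sum() * (x1 - x0)       # normalize to span the interval
    return x0 + np.concatenate(([0.0], np.cumsum(steps)))

coarse = [0.0, 2.0, 5.0, 10.0]                    # coarse-mesh coordinates along one cut
fine_pts = []
for a, b in zip(coarse[:-1], coarse[1:]):
    pts = grade_interval(a, b, n=8, ratio=1.2)
    fine_pts.append(pts[:-1])                     # drop right endpoint to avoid duplicates
fine_pts.append([coarse[-1]])                     # close the mesh at the last coarse node

print(np.round(np.concatenate(fine_pts), 3))
```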


2001 ◽  
Vol 7 (S2) ◽  
pp. 148-149
Author(s):  
C.D. Poweleit ◽  
J Menéndez

Oil immersion lenses have been used in optical microscopy for a long time. The light’s wavelength is decreased by the oil’s index of refraction n, and this reduces the minimum spot size. Additionally, the oil medium allows a larger collection angle, thereby increasing the numerical aperture. The solid immersion lens (SIL) is based on the same principle, but offers more flexibility because the higher-index material is solid. In particular, SILs can be deployed in cryogenic environments. Using a hemispherical glass lens, the spatial resolution is improved by a factor of n with respect to the resolution obtained with the microscope’s objective lens alone. The improvement factor is equal to n² for truncated spheres. As shown in Fig. 1, the hemisphere SIL is in contact with the sample and does not affect the position of the focal plane. The focused rays from the objective strike the lens at normal incidence, so that no refraction takes place.
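As a back-of-the-envelope illustration of the quoted gains, the snippet below compares a diffraction-limited spot size (taken roughly as λ/2NA) with and without the SIL, applying the factor-n improvement for a hemisphere and the factor-n² improvement for a truncated sphere. The wavelength, numerical aperture, and glass index are illustrative values only.

```python
# Sketch of the resolution gain quoted above: the diffraction-limited spot
# scales roughly as lambda / (2*NA), and a solid immersion lens reduces it by
# a factor n (hemisphere) or n^2 (truncated sphere). Wavelength, NA and index
# below are illustrative values, not measurements from this work.

lam_nm = 532.0        # illumination wavelength
na = 0.7              # numerical aperture of the objective alone
n_sil = 1.8           # refractive index of the SIL glass

spot = lam_nm / (2.0 * na)
print(f"objective alone     : {spot:.0f} nm")
print(f"hemispherical SIL   : {spot / n_sil:.0f} nm")
print(f"truncated-sphere SIL: {spot / n_sil**2:.0f} nm")
```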

