On the compressibility effects in mixing layers

2016 ◽  
Vol 20 (5) ◽  
pp. 1473-1484
Author(s):  
Hechmi Khlifi ◽  
Taieb Lili

Previous studies of compressible flows carried out in the past few years have shown that the pressure-strain correlation is the main indicator of the structural compressibility effects. Undoubtedly, this term plays a key role in strongly changing the magnitude of the turbulent Reynolds stress anisotropy. On the other hand, incompressible models of the pressure-strain correlation have not correctly predicted compressible turbulence in high-speed shear flows. Consequently, a correction of these models is needed for precise prediction of compressibility effects. In the present work, a compressibility correction of the widely used incompressible Launder, Reece and Rodi (LRR) model is proposed, making its standard coefficients dependent on the turbulent and convective Mach numbers. The ability of the model to predict developed mixing layers in different cases from the experiments of Goebel and Dutton is examined. The results predicted with the proposed model are compared with DNS and experimental data and with those obtained by the compressible model of Adumitroaie et al. and the original LRR model. The results show that the essential compressibility effects on mixing layers are well captured by the proposed model.
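The abstract does not give the functional form of the Mach-number-dependent coefficients, but the idea can be sketched as follows. This is an illustrative sketch only: the exponential damping in the turbulent and convective Mach numbers is an assumption (in the spirit of Sarkar-style compressibility corrections), not the authors' actual correction, and the `alpha`/`beta` constants are placeholders.

```python
import math

# Commonly quoted incompressible LRR pressure-strain coefficients
# (Launder, Reece & Rodi, 1975).
C1_INC = 1.5   # slow (return-to-isotropy) term coefficient
C2_INC = 0.4   # rapid (mean-strain) term coefficient

def corrected_coefficients(Mt, Mc, alpha=1.0, beta=1.0):
    """Hypothetical compressible correction: damp the incompressible
    coefficients with the turbulent (Mt) and convective (Mc) Mach numbers.
    The functional form is assumed for illustration only."""
    f = math.exp(-alpha * Mt**2 - beta * Mc**2)
    return C1_INC * f, C2_INC * f

# In the incompressible limit (Mt = Mc = 0) the original LRR values
# are recovered; increasing compressibility reduces the coefficients.
assert corrected_coefficients(0.0, 0.0) == (C1_INC, C2_INC)
```

Any such correction must reduce to the original LRR model as the Mach numbers vanish, which is the one property the sketch is built around.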

Fluids ◽  
2022 ◽  
Vol 7 (1) ◽  
pp. 34
Author(s):  
Hechmi Khlifi ◽  
Adnen Bourehla

This work focuses on the performance and validation of compressible turbulence models for the pressure-strain correlation. Starting from the incompressible Launder, Reece and Rodi (LRR) model for the pressure-strain correlation, Adumitroaie et al., Huang et al., and Marzougui et al. used different modeling approaches to develop turbulence models that take compressibility effects into account for this term. In these models, two numerical coefficients depend on the turbulent Mach number, and all of the remaining coefficients retain the same values as in the original LRR model. These models do not correctly predict compressible turbulence in high-speed shear flows, so their revision is the major aim of this study. In the present work, the compressible model for the pressure-strain correlation developed by Khlifi and Lili, involving the turbulent, gradient, and convective Mach numbers, is used to modify the linear mean-shear and slow terms of the previous models. The models are tested in two compressible turbulent flows: homogeneous shear flow and plane mixing layers. The results predicted by the proposed modifications of the Adumitroaie et al., Huang et al., and Marzougui et al. models and by their universal versions are compared with direct numerical simulation (DNS) and experimental data. The results show that the important parameters of compressibility in homogeneous shear flow and in mixing layers are well predicted by the proposed models.


1956 ◽  
Vol 60 (547) ◽  
pp. 459-475 ◽  
Author(s):  
E. G. Broadbent

Summary: A review is given of developments in the field of aeroelasticity during the past ten years. The effect of steadily increasing Mach number has been two-fold: on the one hand the aerodynamic derivatives have changed, and in some cases brought new problems; on the other hand the design for higher Mach numbers has led to thinner aerofoils and more slender fuselages for which the required stiffness is more difficult to provide. Both these aspects are discussed, and various methods of attack on the problems are considered. The relative merits of stiffness, damping and mass balance for the prevention of control-surface flutter are discussed. A brief mention is made of the recent problems of damage from jet efflux and of the possible aeroelastic effects of kinetic heating.


AIAA Journal ◽  
1994 ◽  
Vol 32 (7) ◽  
pp. 1531-1533 ◽  
Author(s):  
Rodney D. W. Bowersox ◽  
Joseph A. Schetz

2005 ◽  
Vol 128 (2) ◽  
pp. 284-296 ◽  
Author(s):  
Michael Dean Neaves ◽  
Jack R. Edwards

An algorithm based on the combination of time-derivative preconditioning strategies with low-diffusion upwinding methods is developed and applied to multiphase, compressible flows characteristic of underwater projectile motion. Multiphase compressible flows are assumed to be in kinematic and thermodynamic equilibrium and are modeled using a homogeneous mixture formulation. Compressibility effects in liquid-phase water are modeled using a temperature-adjusted Tait equation, and the gaseous phases (water vapor and air) are treated as ideal gases. The algorithm is applied to subsonic and supersonic projectiles in water, general multiphase shock tubes, and a high-speed water entry problem. Low-speed solutions are presented and compared to experimental results for validation. Solutions for high-subsonic and transonic projectile flows are compared to experimental imaging results and theoretical results. Results are also presented for several multiphase shock tube calculations. Finally, calculations are presented for a high-speed axisymmetric supercavitating projectile during the important water entry phase of flight.
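The Tait equation mentioned above relates pressure and density in the liquid phase. A minimal sketch of the standard (isothermal) form, p = p_ref + B[(ρ/ρ_ref)^n − 1], is given below with the commonly quoted constants for water; the paper's temperature adjustment is not reproduced here.

```python
# Commonly quoted Tait constants for liquid water (isothermal form).
B = 3.31e8        # Pa, Tait stiffness constant
N = 7.15          # Tait exponent
RHO_REF = 1000.0  # kg/m^3, reference density
P_REF = 1.0e5     # Pa, reference (atmospheric) pressure

def tait_pressure(rho):
    """Pressure (Pa) of liquid water at density rho (kg/m^3)."""
    return P_REF + B * ((rho / RHO_REF) ** N - 1.0)

def tait_density(p):
    """Invert the Tait relation: density (kg/m^3) at pressure p (Pa)."""
    return RHO_REF * ((p - P_REF) / B + 1.0) ** (1.0 / N)

# Water is nearly incompressible: raising the pressure a hundredfold
# changes the density by well under one percent.
rho_compressed = tait_density(100 * P_REF)
```

The large stiffness constant B is what makes the liquid phase nearly incompressible while still admitting the finite sound speed that a compressible multiphase solver needs.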


Author(s):  
James A. Anderson

Hand axes, language, and computers are tools that increase our ability to deal with the world. Computing is a cognitive tool and comes in several kinds: digital, analog, and brain-like. An analog telephone connects two telephones with a wire. Talking causes a current to flow on the wire. In a digital telephone the voltage is converted into groups of ones and zeros and sent at high speed from one telephone to the other. An analog telephone requires one simple step. A digital telephone requires several million discrete steps per second. Digital telephones work because the hardware has gotten much faster. Yet brains constructed of slow devices and using a few watts of power are competitive for many cognitive tasks. The important question is not why machines are becoming so smart but why humans are still so good. Artificial intelligence is missing something important, probably based on hardware differences.


1997 ◽  
Vol 14 (1) ◽  
pp. 5-13 ◽  
Author(s):  
Cheryl L. Sellers ◽  
Suresh Chandra

2019 ◽  
Vol 621 ◽  
pp. A31 ◽  
Author(s):  
Z. N. Khangale ◽  
S. B. Potter ◽  
E. J. Kotze ◽  
P. A. Woudt ◽  
H. Breytenbach

We present 33 new mid-eclipse times of the eclipsing polar UZ Fornacis, spanning approximately eight years. We have used our new observations to test the two-planet model previously proposed to explain the variations in its eclipse times measured over the past ~35 yr. We find that the proposed model does indeed follow the general trend of the new eclipse times; however, there are significant departures. In order to accommodate the new eclipse times, the two-planet model requires that one or both of the planets have highly eccentric orbits, that is, e ≥ 0.4. Such multiple-planet orbits are considered to be unstable. Whilst our new observations are consistent with two cyclic variations as previously predicted, significant residuals remain. We conclude that either additional cyclic terms, possibly associated with more planets, or other mechanisms, such as the Applegate mechanism, are contributing to the eclipse time variations. Further long-term monitoring is required.
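The two-cycle model described above can be sketched as a sum of light-travel-time sinusoids added to the observed minus calculated (O−C) eclipse times, one per hypothetical planet. The amplitudes, periods, and phases below are placeholder values for illustration, not the fitted parameters from the paper.

```python
import math

def o_minus_c(t_years, terms):
    """O-C eclipse-timing residual (seconds) as a sum of cyclic terms.

    Each term is (amplitude_s, period_yr, phase_rad), the simple
    sinusoidal form of a light-travel-time variation."""
    return sum(A * math.sin(2.0 * math.pi * t_years / P + phi)
               for A, P, phi in terms)

# Placeholder two-planet configuration (illustrative values only).
two_planet = [(20.0, 16.0, 0.0),   # hypothetical inner-cycle term
              (45.0, 30.0, 1.2)]   # hypothetical outer-cycle term

# Predicted O-C curve sampled across a ~35 yr baseline.
predicted = [o_minus_c(t, two_planet) for t in range(0, 36, 5)]
```

Fitting such a model to measured mid-eclipse times, and inspecting what remains, is exactly the residual test the abstract describes: structure left over after subtracting both sinusoids points to additional cyclic terms or other mechanisms.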


Author(s):  
K. T. Tokuyasu

During the past investigations of immunoferritin localization of intracellular antigens in ultrathin frozen sections, we found that the degree of negative staining required to delineate ultrastructural details was often too dense for the recognition of ferritin particles. The quality of positive staining of ultrathin frozen sections, on the other hand, has generally been far inferior to that attainable in conventional plastic embedded sections, particularly in the definition of membranes. As we discussed before, a main cause of this difficulty seemed to be the vulnerability of frozen sections to the damaging effects of air-water surface tension at the time of drying of the sections. Indeed, we found that the quality of positive staining is greatly improved when positively stained frozen sections are protected against the effects of surface tension by embedding them in thin layers of mechanically stable materials at the time of drying (unpublished).


Author(s):  
Prakash Rao

Image shifts in out-of-focus dark field images have been used in the past to determine, for example, epitaxial relationships in thin films. A recent extension of the use of dark field image shifts has been to out-of-focus images in conjunction with stereoviewing to produce an artificial stereo image effect. The technique, called through-focus dark field electron microscopy or 2-1/2D microscopy, basically involves obtaining two beam-tilted dark field images such that one is slightly over-focus and the other slightly under-focus, followed by examination of the two images through a conventional stereoviewer. The elevation differences so produced are usually unrelated to object positions in the thin foil and no specimen tilting is required. In order to produce this artificial stereo effect for the purpose of phase separation and identification, it is first necessary to select a region of the diffraction pattern containing more than just one discrete spot, with the objective aperture.


Author(s):  
William Krakow

In the past few years on-line digital television frame store devices coupled to computers have been employed to attempt to measure the microscope parameters of defocus and astigmatism. The ultimate goal of such tasks is to fully adjust the operating parameters of the microscope and obtain an optimum image for viewing in terms of its information content. The initial approach to this problem, for high resolution TEM imaging, was to obtain the power spectrum from the Fourier transform of an image, find the contrast transfer function oscillation maxima, and subsequently correct the image. This technique requires a fast computer, a direct memory access device and even an array processor to accomplish these tasks on limited size arrays in a few seconds per image. It is not clear that the power spectrum could be used for more than defocus correction since the correction of astigmatism is a formidable problem of pattern recognition.
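The defocus-correction procedure described above (Fourier transform, power spectrum, then locating the contrast transfer function oscillation maxima) can be sketched compactly with modern array tools. The synthetic ring-patterned image below stands in for a digitized micrograph; function names and the peak-finding rule are illustrative, not from the original work.

```python
import numpy as np

def radial_power_spectrum(image):
    """Radially averaged power spectrum of a square 2-D image."""
    ps = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    n = image.shape[0]
    y, x = np.indices(ps.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    # Mean power in each integer-radius bin.
    counts = np.bincount(r.ravel())
    return np.bincount(r.ravel(), weights=ps.ravel()) / counts

def find_maxima(profile):
    """Indices of local maxima in a 1-D profile (simple neighbor test)."""
    return [i for i in range(1, len(profile) - 1)
            if profile[i] > profile[i - 1] and profile[i] > profile[i + 1]]

# Synthetic image whose spectrum oscillates radially, standing in for
# the CTF rings of a defocused micrograph.
n = 128
y, x = np.indices((n, n))
img = np.sin(0.5 * np.hypot(x - n / 2, y - n / 2))
ring_radii = find_maxima(radial_power_spectrum(img))
```

On the limited hardware the abstract describes, the FFT and binning steps were the expensive part; today the whole pipeline runs in milliseconds, though the closing caveat still stands: the radially averaged spectrum discards the angular information needed to measure astigmatism.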

