Local measurements and cosmological background

1986 ◽  
Vol 114 ◽  
pp. 189-189
Author(s):  
P. Teyssandier ◽  
Ph. Tourrenc

The relations between the local measurements achieved in the solar system for testing the metric theories of gravity and the cosmological background are far from clear. In most cases, some heuristic assumptions are made in order to take into account cosmological boundary conditions. In particular, the light rays from distant stars or extragalactic objects are often believed, on the basis of the so-called Mach principle, to determine “fixed” directions defining inertial frames. However, it has already been shown in theoretical cosmology that in any anisotropic, inhomogeneous cosmological model, the apparent direction of a distant object varies with respect to locally non-rotating inertial frames. Restricting our attention in a first step to some exact anisotropic models obeying the Einstein equations, we study the order of magnitude of this effect in the context of observational devices such as the HIPPARCOS satellite, the Space Telescope and the gyroscope experiment planned at Stanford University. Then we try to interpret this effect, and more generally the various influences of cosmological terms, within the framework of the parametrized post-Newtonian formalism.
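For context on the parametrized post-Newtonian (PPN) framework invoked at the end of the abstract (a textbook relation, not a result of the paper), light deflection by a mass M at impact parameter b scales with the PPN parameter γ as δθ = ((1 + γ)/2) · 4GM/(c²b). A minimal sketch for a ray grazing the Sun:

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.957e8    # solar radius, m

def ppn_light_deflection_arcsec(gamma, mass_kg, impact_parameter_m):
    """Deflection angle (arcsec) of a grazing light ray in the PPN formalism."""
    deflection_rad = 0.5 * (1.0 + gamma) * 4.0 * G * mass_kg / (C**2 * impact_parameter_m)
    return np.degrees(deflection_rad) * 3600.0

# General relativity corresponds to gamma = 1
print(ppn_light_deflection_arcsec(1.0, M_SUN, R_SUN))  # ~1.75 arcsec
```

HIPPARCOS-class astrometry works at the milliarcsecond level, so frame effects several orders of magnitude below this solar deflection are the relevant comparison scale.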

2021 ◽  
Author(s):  
Qing-Yun Di ◽  
Olalekan Fayemi ◽  
Qi-Hui Zhen ◽  
Tian Fei

Abstract: An axisymmetric finite difference method is employed for simulations of electromagnetic telemetry in homogeneous and layered underground formations. In this method, we define the anisotropy using an extended 2D conductivity tensor and solve the problem in the transverse magnetic mode. A significant simplification arises from the decoupling of the anisotropy parameters. The developed method is cost-efficient, more straightforward for modeling anisotropic media, and easy to implement. In addition, we evaluate the integral involved in estimating the measured surface voltage using a Gaussian quadrature technique. We performed a series of numerical simulations of EM telemetry (EMT) signals in both isotropic and anisotropic models. Experiments with 2D tilted transverse isotropic media, characterized by the tilt axis and anisotropy parameters, show an increase in the EMT signal with increasing tilt of the principal axis for a moderate coefficient of anisotropy. We show that the effect of the tilt of the subsurface medium can be observed with sufficient accuracy and that it amounts to roughly a factor of five over a tilt range of 90 degrees. Lastly, results consistent with existing field data were obtained by employing the Gaussian quadrature rule for the computation of the surface-measured signal.
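The abstract does not give the surface-voltage integrand, but the Gaussian quadrature step it mentions is standard. A minimal sketch of Gauss-Legendre quadrature over a finite interval (the integrand below is a placeholder, not the paper's kernel):

```python
import numpy as np

def gauss_legendre_integral(f, a, b, n=16):
    """Approximate the integral of f over [a, b] with n-point Gauss-Legendre quadrature."""
    # Nodes and weights on the reference interval [-1, 1]
    nodes, weights = np.polynomial.legendre.leggauss(n)
    # Map nodes to [a, b] and rescale the weights accordingly
    x = 0.5 * (b - a) * nodes + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(weights * f(x))

# Placeholder integrand standing in for the surface-voltage kernel
result = gauss_legendre_integral(lambda x: np.exp(-x**2), 0.0, 5.0)
print(result)  # ~0.8862, i.e. sqrt(pi)/2
```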


1992 ◽  
Vol 128 ◽  
pp. 225-227
Author(s):  
R. C. Kapoor

Abstract: An estimate of the effect of light bending and redshift on pulsar beam characteristics has been made using a weak Kerr metric for the case of a 1.4 M⊙ neutron star with a radius in the range 6-10 km and rotation periods of 1.56 ms and 33 ms. Assuming that the pulsar emission has the form of a narrow conical beam directed away from the surface and is located within two stellar radii, the beam is found to be widened by a factor of ≤ 2 and to suffer a reduction in intensity (flattening of the profile) by an order of magnitude or less. The effect is largest for the most rapidly rotating neutron stars. For an emission region located beyond 20 km, the flattening is generally insignificant. The pulse profile is slightly asymmetrical due to dragging of the inertial frames. For millisecond periods, aberration tends to reverse the flattening effect of space-time curvature by narrowing the pulse and can completely overcome it for emission from a location beyond ≃ 30 km. Although the pulse must brighten slightly, a large redshift factor overcomes this effect to keep the pulse flattened for all neutron star radii considered here.
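The dominant general-relativistic corrections in such estimates scale with the stellar compactness 2GM/(rc²). As a rough companion calculation (plain Schwarzschild redshift only, not the paper's weak-Kerr treatment) for a 1.4 M⊙ star at the radii quoted:

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M = 1.4 * 1.989e30   # kg, 1.4 solar masses

for r_km in (6.0, 10.0, 20.0):
    r = r_km * 1e3
    compactness = 2.0 * G * M / (r * C**2)                # Schwarzschild radius over r
    redshift_factor = 1.0 / np.sqrt(1.0 - compactness)    # (1 + z) for a static surface
    print(f"r = {r_km:4.1f} km: 2GM/(rc^2) = {compactness:.3f}, 1+z = {redshift_factor:.3f}")
```

The redshift factor drops from roughly 1.8 at 6 km to about 1.1 at 20 km, consistent with the abstract's statement that the flattening becomes insignificant for emission regions beyond 20 km.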


Geophysics ◽  
2008 ◽  
Vol 73 (5) ◽  
pp. VE5-VE11 ◽  
Author(s):  
Marta Jo Woodward ◽  
Dave Nichols ◽  
Olga Zdraveva ◽  
Phil Whitfield ◽  
Tony Johns

Over the past 10 years, ray-based postmigration grid tomography has become the standard model-building tool for seismic depth imaging. While the basics of the method have remained unchanged since the late 1990s, the problems it solves have changed dramatically. This evolution has been driven by exploration demands and enabled by computer power. There are three main areas of change. First, standard model resolution has increased from a few thousand meters to a few hundred meters. This order-of-magnitude improvement may be attributed both to high-quality, complex residual-moveout data picked as densely as [Formula: see text] to [Formula: see text] vertically and horizontally, and to a strategy of working down from long-wavelength to short-wavelength solutions. Second, more and more seismic data sets are being acquired along multiple azimuths, for improved illumination and multiple suppression. High-resolution velocity tomography must solve for all azimuths simultaneously, to prevent short-wavelength velocity heterogeneity from being mistaken for azimuthal anisotropy. Third, there has been a shift from predominantly isotropic to predominantly anisotropic models, both VTI and TTI. With four-component data, anisotropic grid tomography can be used to build models that tie PZ and PS images in depth.


Author(s):  
Michael A. Persinger ◽  
Stanley A. Koren

The aggregate of m⁷·s⁻¹ from the product of the four geometric terms for increasing dimensions of a closed path (a circle), when set equal to the optimal combinations of the gravitational constant G and the universe’s mass, length and time, results in a diffusivity term of 10²³ m·s⁻¹. Conversion of the total energy of the universe to volts per meter and Tesla results in a velocity of the same order of magnitude. The required f⁶ multiplication to balance the terms solves optimally for a frequency that, when divided by the modified Planck’s value, is the equivalent upper limit of the rest mass of a photon. Several experimental times associated with orbital distances for inertial frames are consistent with this velocity. Calculations indicate that during the final epoch the velocity from the energy derived from universal potential difference over length and magnetic fields will require only a unit frequency adjustment that corresponds to the energy equivalent of one orbit of a Bohr electron. We suggest that one intrinsic process by which large scale structures (Gigaparsec) are organized could involve this “entanglement velocity”. It would be correlated with the transformation of “virtual” or subthreshold values of the upper rest mass of photons to their energetic manifestation as the universe emerges from dark energy or matter that is yet to appear.


2021 ◽  
Author(s):  
Solomon Vimal ◽  
Vijay P. Singh

Abstract. Evaporation from open water is among the most rigorously studied problems in hydrology. Robert E. Horton, unbeknownst to most investigators on the subject, studied it in great detail by conducting experiments and heuristically relating his observations to physical laws. His work furthered known theories of lake evaporation, but it appears to have been dismissed as simply empirical. This is unfortunate, because Horton’s century-old insights on the topic, which we summarize here, seem relevant for contemporary climate-change-era problems. In re-discovering his overlooked lake evaporation works, in this paper we: 1) examine his several publications in the period 1915–1944 and identify his theory sources for evaporation physics among scientists of the late 1800s; 2) illustrate his lake evaporation formulae, which require several equations, tables, thresholds, and conditions based on physical factors and assumptions; and 3) assess his evaporation results over the continental U.S., and analyse the performance of his formula in a subarctic Canadian catchment by comparing it with five other calibrated (aerodynamic and mass transfer) evaporation formulae of varying complexity. We find that Horton’s method, due to its unique variable vapor pressure deficit (VVPD) term, outperforms all other methods by ~3–15 % in R² consistently across timescales (days to months), and by roughly an order of magnitude at sub-daily scales (we assessed resolutions down to 30 min). Surprisingly, when his method uses input vapor pressure disaggregated from reanalysis data, it still outperforms other methods which use local measurements. This indicates that the vapor pressure deficit (VPD) term currently used in all other evaporation methods is not as good an independent control for lake evaporation as Horton's VVPD. Therefore, Horton's evaporation formula is held to be a major improvement in lake evaporation theory which, in part, may: A) supplant or improve existing evaporation formulae, including the aerodynamic part of the combination (Penman) method; B) point to new directions in lake evaporation physics, as it leads to a "constant" and a non-dimensional ratio – the former due to him, John Dalton (1802), and Gustav Schübler (1831), and the latter to him and Josef Stefan (1881); C) offer better insight into the physics of the evaporation paradox (i.e. globally, decreasing trends in pan evaporation are unanimously observed, while the opposite is expected due to global warming). Curiously, his rare observations of convective vapor plumes from lakes may also help explain the mythical origins of the Greek deity Venus and the dancing Nereids.
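Horton's VVPD involves his own tables, thresholds, and conditions, which the abstract does not reproduce. As a baseline, the ordinary VPD used by the competing methods can be sketched with a standard Magnus-type saturation curve (the coefficients below are the common Magnus approximation, chosen for illustration, not Horton's values):

```python
import numpy as np

def saturation_vapor_pressure_kpa(temp_c):
    """Saturation vapor pressure (kPa) via the Magnus approximation."""
    return 0.6108 * np.exp(17.27 * temp_c / (temp_c + 237.3))

def vapor_pressure_deficit_kpa(air_temp_c, relative_humidity):
    """Ordinary VPD: saturation pressure at air temperature minus actual vapor pressure."""
    e_sat = saturation_vapor_pressure_kpa(air_temp_c)
    e_act = relative_humidity * e_sat
    return e_sat - e_act

# Example: 20 C air at 60 % relative humidity
print(vapor_pressure_deficit_kpa(20.0, 0.60))  # ~0.94 kPa
```

Mass-transfer evaporation formulae typically multiply a deficit of this kind by a wind function; the paper's argument is that Horton's variable form of the deficit is a better driver than this fixed one.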


1997 ◽  
Vol 161 ◽  
pp. 611-621
Author(s):  
Guillermo A. Lemarchand ◽  
Fernando R. Colomb ◽  
E. Eduardo Hurrell ◽  
Juan Carlos Olalde

Abstract: Project META II, a full-sky survey for artificial narrow-band signals, has been conducted from one of the two 30-m radiotelescopes of the Instituto Argentino de Radioastronomía (IAR). The search was performed near the 1420 MHz line of neutral hydrogen, using an 8.4-million-channel Fourier spectrometer of 0.05 Hz resolution and 400 kHz instantaneous bandwidth. The observing frequency was corrected both for motions with respect to three astronomical inertial frames and for the effect of Earth's rotation, which provides a characteristic changing signature for narrow-band signals of extraterrestrial origin. Among the 2 × 10¹³ spectral channels analyzed, 29 extra-statistical narrow-band events were found, exceeding the average threshold of 1.7 × 10⁻²³ W m⁻². The strongest signals that survive culling for terrestrial interference lie in or near the galactic plane. A description of the Project META II observing scheme and results is given, as well as a possible interpretation of the results using the Cordes-Lazio-Sagan model based on interstellar scattering theory.
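The "characteristic changing signature" mentioned above comes from the Doppler drift that Earth's rotation imposes on a sky-fixed narrow-band carrier. A rough order-of-magnitude sketch (the site latitude below is a round placeholder, not the exact IAR coordinates):

```python
import numpy as np

C = 2.998e8          # speed of light, m/s
OMEGA = 7.292e-5     # Earth's sidereal rotation rate, rad/s
R_EARTH = 6.371e6    # mean Earth radius, m

def rotational_doppler_drift(freq_hz, latitude_deg):
    """Approximate drift rate (Hz/s) of a sky-fixed carrier seen from an antenna
    rotating with Earth, driven by the centripetal acceleration at that latitude."""
    accel = OMEGA**2 * R_EARTH * np.cos(np.radians(latitude_deg))
    return freq_hz * accel / C

# Placeholder latitude of roughly 35 degrees south
drift = rotational_doppler_drift(1.420e9, -35.0)
print(f"Drift near 1420 MHz: {drift:.3f} Hz/s")  # ~0.13 Hz/s
```

Compared against 0.05 Hz channels, a drift of this size separates carriers fixed in an astronomical frame from interference locked to the observatory frame, which is the discriminant the survey exploits.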


Author(s):  
W. J. Abramson ◽  
H. W. Estry ◽  
L. F. Allard

LaB6 emitters are becoming increasingly popular as direct replacements for tungsten filaments in the electron guns of modern electron-beam instruments. These emitters offer order-of-magnitude increases in beam brightness and, with appropriate care in operation, a corresponding increase in source lifetime. They are, however, an order of magnitude more expensive, and may be easily damaged (by improper vacuum conditions and thermal shock) during saturation/desaturation operations. These operations typically require several minutes of an operator's attention, which becomes tedious and subject to error, particularly since the emitter must be cooled during sample exchanges to minimize damage from random vacuum excursions. We have designed a control system for LaB6 emitters which relieves the operator of the need to manually control the emitter power, minimizes the danger of accidental improper operation, and makes the use of these emitters routine on multi-user instruments. Figure 1 is a block schematic of the main components of the control system, and Figure 2 shows the control box.
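The abstract leaves the controller details to its figures; purely as an illustration of the kind of interlocked ramp logic such a system automates, here is a hypothetical sketch (the power-supply interface, ramp rate, and vacuum threshold are invented for the example, not taken from the paper):

```python
import time

# Hypothetical thresholds and ramp rate, for illustration only
VACUUM_INTERLOCK_PA = 1e-4   # abort if chamber pressure rises above this
RAMP_STEP_W = 0.05           # heater power increment per step
STEP_PERIOD_S = 2.0          # pause between steps to limit thermal shock

def ramp_emitter(read_pressure, set_heater_power, target_power_w):
    """Slowly ramp the emitter heater toward target power, aborting on vacuum excursions."""
    power = 0.0
    while power < target_power_w:
        if read_pressure() > VACUUM_INTERLOCK_PA:
            set_heater_power(0.0)          # cool the emitter immediately
            raise RuntimeError("Vacuum excursion: emitter ramp aborted")
        power = min(power + RAMP_STEP_W, target_power_w)
        set_heater_power(power)
        time.sleep(STEP_PERIOD_S)
    return power
```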


Author(s):  
Takao Suzuki ◽  
Hossein Nuri

For future high-density magneto-optical recording materials, a Bi-substituted garnet film ((BiDy)₃(FeGa)₅O₁₂) is an attractive candidate since it has a strong magneto-optic effect at short wavelengths below 600 nm. The read-back signal at 500 nm using a garnet film can be an order of magnitude higher than that of a current rare-earth-transition-metal amorphous film. However, the granularity and surface roughness of such crystalline garnet films are the key parameters to control for minimizing media noise. We have demonstrated a new technique to fabricate a garnet film which has a much smaller grain size and smoother surface than those annealed in a conventional oven. This method employs high-ramp-rate annealing (Γ = 50–100 °C/s) in a nitrogen atmosphere. Fig. 1 shows a typical microstructure of a Bi-substituted garnet film deposited by r.f. sputtering and then crystallized by a rapid thermal annealing technique at Γ = 50 °C/s at 650 °C for 2 min. The structure is a single phase of garnet, and the grain size is about 300 Å.


Author(s):  
William Krakow

In recent years electron microscopy has been used to image surfaces in both the transmission and reflection modes by many research groups. Some of this work has been performed under ultra-high-vacuum (UHV) conditions, and apparent surface reconstructions have been observed. The level of resolution has generally been at least an order of magnitude worse than is necessary to visualize atoms directly, and therefore the detailed atomic rearrangements of the surface are not known. The present author has achieved atomic-level resolution under normal vacuum conditions on various Au surfaces. Unfortunately these samples were exposed to atmosphere and could not be cleaned in a standard high-resolution electron microscope. The result was surfaces which were impurity-stabilized and revealed the bulk-lattice (1×1)-type surface structures also encountered by other surface physics techniques under impure or overlayer-contaminant conditions. It was therefore decided to study a system where exposure to air is unimportant by using an oxygen-saturated structure, Ag2O, and to seek surface reconstructions, which will now be described.

