Bimodal Plasma Sheet Flow

Author(s):  
Charles F. Kennel

How does the plasma sheet respond to the complex pattern of waves coming over the poles from bursty magnetopause reconnection events, or to the vortices and other irregular perturbations coming around the flanks of the magnetosphere in the low-latitude boundary layer? It is probably too much to expect that the complex input from the dayside will sort itself out into a steady flow on the nightside, but there has been a seductive hope that, on a statistical basis, the observations of the plasma sheet could be rationalized using steady convection thinking. This hope depends on the belief that the average magnetic field configuration in the plasma sheet actually is compatible with steady convection. The first doubts on this score were raised by Erickson and Wolf (1980), and were subsequently elaborated by Tsyganenko (1982), Birn and Schindler (1983), and Liu and Hill (1985); the “plasma sheet pressure paradox” they posed is the subject of Section 9.2. Theoretical arguments are one thing, measurements are another; the truly important issue is whether the real plasma sheet manifests steady flow. Several groups have searched large data sets to see whether the statistically averaged flow in the central plasma sheet resembles the flow predicted by the steady convection model. This effort has led to a growing but still incomplete understanding of the statistical properties of plasma sheet transport. Results obtained using ensembles of data acquired by ISEE 1 and AMPTE/IRM will be reviewed in Section 9.3. The unusual distribution of bulk flow velocities suggests that the plasma sheet flow is bimodal, alternating between a predominant irregular low-speed state and an infrequently occurring state of high-speed earthward flow. In search of steady plasma sheet flow, one could also look into substorm-free periods of stable solar wind properties. One of the best such studies, in which great care was taken to find periods of exceptionally stable solar wind and geomagnetic conditions, is reviewed in Section 9.4. Even this study found highly irregular and bursty flow.
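As a rough illustration of the kind of statistical analysis reviewed in Section 9.3, the sketch below (Python, with synthetic velocities; the threshold and the population parameters are illustrative assumptions, not values from the ISEE 1 or AMPTE/IRM studies) separates a set of bulk flow speed samples into a dominant slow, irregular population and rare fast earthward bursts, and reports the occurrence fraction of each mode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic bulk flow speeds (km/s): a dominant slow, irregular population
# plus a small admixture of fast earthward bursts.  Purely illustrative.
slow = np.abs(rng.normal(0.0, 50.0, size=9500))     # low-speed state
fast = np.abs(rng.normal(400.0, 100.0, size=500))   # high-speed bursts
v = np.concatenate([slow, fast])

# Simple two-mode classification with an assumed 100 km/s threshold.
threshold = 100.0                                    # km/s, illustrative
fast_fraction = np.mean(v > threshold)

print(f"fraction of samples above {threshold:.0f} km/s: {fast_fraction:.3f}")
print(f"mean speed, slow mode: {v[v <= threshold].mean():.1f} km/s")
print(f"mean speed, fast mode: {v[v > threshold].mean():.1f} km/s")
```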

2006
Vol 22 (8)
pp. 1004-1010
Author(s):  
Andrei Hutanu ◽  
Gabrielle Allen ◽  
Stephen D. Beck ◽  
Petr Holub ◽  
Hartmut Kaiser ◽  
...  

1999
Vol 17 (12)
pp. 1602-1610
Author(s):  
R. Nakamura ◽  
G. Haerendel ◽  
W. Baumjohann ◽  
A. Vaivads ◽  
H. Kucharek ◽  
...  

Abstract. Data from Equator-S and Geotail are used to study the dynamics of the plasma sheet observed during a substorm with multiple intensifications on 25 April 1998, when both spacecraft were located in the early morning sector (03–04 MLT) at a radial distance of 10–11 RE. In association with the onset of a poleward expansion of the aurora and the westward electrojet in the premidnight and midnight sector, both satellites in the morning sector observed plasma sheet thinning and changes toward a more tail-like field configuration. During the subsequent poleward expansion in a wider local time sector (20–04 MLT), on the other hand, the magnetic field configuration at both satellites changed into a more dipolar configuration and both satellites again encountered the hot plasma sheet. High-speed plasma flows with velocities of up to 600 km/s, lasting 2–5 min, were observed in the plasma sheet and near its boundary during this plasma sheet expansion. These high-speed flows included significant dawn-dusk flows and had a shear structure. They may have been produced by an induced electric field at the local dipolarization region and/or by an enhanced pressure gradient associated with the injection in the midnight plasma sheet.

Key words. Magnetospheric physics (magnetospheric configuration and dynamics; plasma sheet; storms and substorms)


2020
Author(s):  
Sina Sadeghzadeh ◽  
Jian Yang

Understanding the transport of hot plasma from the tail toward the inner magnetosphere is of great importance for improving our knowledge of the near-Earth space environment. According to recent observations, the contribution of bursty bulk flows (BBFs)/bubbles to the inner plasma sheet, and especially to storm-time ring current formation, is non-negligible. These high-speed plasma flows with depleted flux-tube entropy are likely formed in the midtail by magnetic reconnection and injected earthward as a result of the interchange instability. In this presentation, we investigate the effect of these meso-scale structures on the average magnetic field and plasma distribution in various regions of the plasma sheet, using the Inertialized Rice Convection Model (RCM-I). We will discuss the comparison of our simulation results with observational statistics and data-based empirical models.
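For readers unfamiliar with the "depleted flux-tube entropy" language, the sketch below (Python; the background profile and the 50% depletion criterion are illustrative assumptions, not RCM-I inputs or results) shows how a bubble can be flagged through the flux-tube entropy parameter S = P V^(5/3), where V is the volume of a unit-flux tube.

```python
import numpy as np

def entropy_parameter(p_npa, flux_tube_volume):
    """Flux-tube entropy parameter S = P * V**(5/3)."""
    return p_npa * flux_tube_volume ** (5.0 / 3.0)

# Illustrative background profile versus downtail distance (values assumed).
x_re = np.array([-10.0, -15.0, -20.0, -25.0, -30.0])   # GSM x, Earth radii
p_bg = np.array([0.50, 0.30, 0.20, 0.15, 0.10])        # plasma pressure, nPa
v_bg = np.array([0.5, 1.0, 2.0, 3.5, 5.0])             # flux-tube volume, R_E/nT

s_bg = entropy_parameter(p_bg, v_bg)

# A "bubble" is a flux tube whose entropy parameter is depleted relative to
# its surroundings; here anything below 50% of the local background is flagged.
s_obs = s_bg.copy()
s_obs[2] *= 0.3                                         # inject a depleted tube
is_bubble = s_obs < 0.5 * s_bg

for x, s, flag in zip(x_re, s_obs, is_bubble):
    print(f"x = {x:6.1f} R_E   S = {s:6.3f}   bubble: {flag}")
```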


2018
Vol 186
pp. 02001
Author(s):  
M. Buga ◽  
P. Fernique ◽  
C. Bot ◽  
M. G. Allen ◽  
F. Bonnarel ◽  
...  

High-speed Internet and increasingly cost-effective data storage have changed the way data are managed today. Large amounts of heterogeneous data can now be visualized easily and rapidly using interactive applications such as “Google Maps”. In this respect, the Hierarchical Progressive Survey (HiPS) method has been developed by the Centre de Données astronomiques de Strasbourg (CDS) since 2009. HiPS uses the hierarchical sky tessellation called HEALPix to describe and organize images, data cubes or source catalogs. These HiPS can be accessed and visualized using applications such as Aladin. We show that structuring the data using HiPS enables easy and quick access to large and complex sets of astronomical data. As with bibliographic and catalog data, full documentation and comprehensive metadata are absolutely required for pertinent usage of these data. Hence, the role of documentalists in the process of producing HiPS is essential. We present the interaction between documentalists and the other specialists of the CDS team who together support this process. More precisely, we describe the tools used by the documentalists to generate HiPS or to update the Virtual Observatory standardized descriptive information (the “metadata”). We also present the challenges faced by the documentalists in processing such heterogeneous data on scales from megabytes up to petabytes. On the one hand, documentalists at CDS manage small textual or numerical data sets for one or a few astronomical objects. On the other hand, they process large data sets such as big catalogs containing heterogeneous data like spectra, images or data cubes for millions of astronomical objects. Finally, by participating in the development of interactive visualization of images and three-dimensional data cubes using the HiPS method, documentalists contribute to the long-term management of complex, large astronomical data.
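As a concrete illustration of the hierarchical tessellation underlying HiPS, the sketch below (Python with the healpy library; the sky coordinates are arbitrary) indexes one sky position at several HEALPix orders and shows how a tile at order k nests inside its parent tile at order k-1 in the NESTED scheme, which is what allows progressive, zoom-level access to images and catalogs.

```python
import healpy as hp

# An arbitrary sky position (longitude/latitude in degrees), illustrative only.
lon, lat = 83.63, 22.01

# HiPS organizes tiles on the HEALPix NESTED grid, one level per "order".
for order in range(3, 9):
    nside = hp.order2nside(order)                        # nside = 2**order
    pix = hp.ang2pix(nside, lon, lat, nest=True, lonlat=True)
    parent = pix // 4                                     # parent tile, NESTED index
    print(f"order {order}: tile {pix} (parent at order {order - 1}: {parent})")
```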


2005
Vol 11 (1)
pp. 9-17
Author(s):  
H. Narfi Stefansson ◽  
Kevin W. Eliceiri ◽  
Charles F. Thomas ◽  
Amos Ron ◽  
Ron DeVore ◽  
...  

The use of multifocal-plane, time-lapse recordings of living specimens has allowed investigators to visualize dynamic events both within ensembles of cells and within individual cells. Recordings of such four-dimensional (4D) data from digital optical-sectioning microscopy produce very large data sets. We describe a wavelet-based data compression algorithm that capitalizes on the inherent redundancies within multidimensional data to achieve higher compression levels than can be obtained from single images. The algorithm will permit remote users to roam at high speed through large 4D data sets using communication channels of modest bandwidth. This will allow animation to be used as a powerful aid to visualizing dynamic changes in three-dimensional structures.
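A minimal sketch of the idea behind such wavelet compression (Python with NumPy and PyWavelets on a synthetic 3D volume; the wavelet, decomposition level, and threshold are illustrative choices, not the authors' algorithm): decompose the volume, discard the small coefficients, and check how many coefficients survive and how large the reconstruction error is.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)

# Synthetic optical-section stack: a smooth structure plus noise.
z, y, x = np.mgrid[0:32, 0:64, 0:64]
volume = np.exp(-((x - 32) ** 2 + (y - 32) ** 2 + (z - 16) ** 2) / 200.0)
volume += 0.05 * rng.standard_normal(volume.shape)

# Multilevel n-dimensional wavelet decomposition.
coeffs = pywt.wavedecn(volume, wavelet="db2", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)

# Keep only the largest coefficients (hard threshold); this is where the
# redundancy across neighbouring planes and time points pays off.
thresh = np.percentile(np.abs(arr), 95)
arr_c = pywt.threshold(arr, thresh, mode="hard")

recon = pywt.waverecn(
    pywt.array_to_coeffs(arr_c, slices, output_format="wavedecn"), wavelet="db2"
)[: volume.shape[0], : volume.shape[1], : volume.shape[2]]

kept = np.count_nonzero(arr_c) / arr_c.size
err = np.abs(recon - volume).max()
print(f"nonzero coefficients kept: {kept:.1%}, max reconstruction error: {err:.3f}")
```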


Author(s):  
Huabing Zhu ◽  
Lizhe Wang ◽  
Tony K.Y. Chan

Visualization is the process of mapping numerical values into perceptual dimensions, conveying insight through visible phenomena. From these visible phenomena, the human visual system can recognize and interpret complex patterns and detect meaning and anomalies in scientific data sets. Another role of visualization is to display new data in order to uncover new knowledge. Hence, visualization has emerged as an important tool widely used in science, medicine, and engineering. As a consequence of our increased ability to model and measure a wide variety of phenomena, the data generated for visualization are far beyond the capability of desktop systems. In the near future, we anticipate collecting data at the rate of terabytes per day from numerous classes of applications. These applications process huge volumes of data produced by increasingly sensitive and accurate instruments, for example, telescopes, microscopes, particle accelerators, and satellites (Foster, Insley, Laszewski, Kesselman, & Thiebaux, 1999). Furthermore, the rate at which data are generated is still increasing. Therefore, visualizing large data sets imposes growing demands on a variety of resources. For most users, it becomes difficult to satisfy all of these requirements on a single computing platform, or for that matter, in a single location. In a distributed computing environment, various resources are available, for example, large-volume data storage, supercomputers, video equipment, and so on. At the same time, high-speed networks and the advent of multi-disciplinary science mean that the use of remote resources becomes both necessary and feasible (Foster et al., 1999).
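As a minimal example of "mapping numerical values into perceptual dimensions" (Python with NumPy and Matplotlib; the scalar field and colour map are arbitrary choices), a data array can be normalized and passed through a colour map to produce an image:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import Normalize

# Synthetic scalar field standing in for simulation or instrument output.
y, x = np.mgrid[-3:3:256j, -3:3:256j]
field = np.exp(-(x ** 2 + y ** 2)) * np.cos(4 * x)

# Map data values (numbers) to a perceptual dimension (colour).
norm = Normalize(vmin=field.min(), vmax=field.max())   # data range -> [0, 1]
rgba = plt.get_cmap("viridis")(norm(field))            # [0, 1] -> RGBA colours

plt.imshow(rgba, origin="lower", extent=(-3, 3, -3, 3))
plt.title("Scalar field mapped to colour")
plt.savefig("field.png", dpi=150)
```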


2021
Author(s):  
Lynn M. Kistler ◽  
Christopher G. Mouikis ◽  
Kazushi Asamura ◽  
Satoshi Kasahara ◽  
Yoshizumi Miyoshi ◽  
...  

The ionospheric and solar wind contributions to the magnetosphere can be distinguished by their composition. While both sources contain significant H+, the heavy ion species from the ionospheric source are generally singly ionized, while the solar wind consists of highly ionized ions. Both the solar wind and the ionosphere contribute to the plasma sheet. It has been shown that with both enhanced geomagnetic activity and enhanced solar EUV, the ionospheric contribution, and particularly the ionospheric heavy ion contribution, increases. However, the details of this transition from a solar wind dominated to a more ionospheric dominated plasma sheet are not well understood. An initial study using AMPTE/CHEM data, a data set that includes the full charge-state distributions of the major species, shows that the transition can occur quite sharply during storms, with the ionospheric contribution becoming dominant during the storm main phase. However, during the AMPTE time period there were no continuous measurements of the upstream solar wind, so neither the simultaneous solar wind composition nor the driving solar wind and IMF parameters were known. The HPCA instrument on MMS and both the LEPi and MEPi instruments on Arase are able to measure He++. With these data sets, the He++/H+ ratio in the plasma sheet can be compared to the simultaneous He++/H+ ratio in the solar wind to more definitively identify the solar wind contribution to the plasma sheet. This allows the ionospheric contribution to the H+ population to be determined, so that the full ionospheric population is known. We find that when the IMF turns southward during the storm main phase, the dominant source of the hot plasma sheet becomes ionospheric. This composition change explains why the storm-time ring current also has a high ionospheric contribution.
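A back-of-the-envelope sketch of the partitioning argument (Python; all densities and the solar-wind alpha-to-proton ratio below are invented illustrative numbers, not MMS/HPCA or Arase measurements): if plasma sheet He++ is assumed to be entirely of solar wind origin, the simultaneous solar-wind He++/H+ ratio fixes the solar-wind share of the plasma sheet H+, and the remainder is attributed to the ionosphere.

```python
# Illustrative plasma-sheet densities (cm^-3); values are assumed.
n_h_total = 0.60      # total plasma-sheet H+ density
n_hepp    = 0.012     # plasma-sheet He++ density (taken as solar-wind origin)

# Simultaneous He++/H+ ratio measured upstream in the solar wind (assumed).
r_sw = 0.04

# Solar-wind H+ inferred from He++, and the ionospheric remainder.
n_h_sw = n_hepp / r_sw
n_h_iono = max(n_h_total - n_h_sw, 0.0)

print(f"solar-wind H+ : {n_h_sw:.2f} cm^-3 ({n_h_sw / n_h_total:.0%})")
print(f"ionospheric H+: {n_h_iono:.2f} cm^-3 ({n_h_iono / n_h_total:.0%})")
```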


2021
Author(s):  
Timo Kersten ◽  
Viktor Leis ◽  
Thomas Neumann

Abstract. Although compiling queries to efficient machine code has become a common approach for query execution, a number of newly created database system projects still refrain from using compilation. It is sometimes claimed that the intricacies of code generation make compilation-based engines too complex. Also, a major barrier for adoption, especially for interactive ad hoc queries, is long compilation time. In this paper, we examine all stages of compiling query execution engines and show how to reduce compilation overhead. We incorporate the lessons learned from a decade of generating code in HyPer into a design that manages complexity and yields high speed. First, we introduce a code generation framework that establishes abstractions to manage complexity, yet generates code in a single fast pass. Second, we present a program representation whose data structures are tuned to support fast code generation and compilation. Third, we introduce a new compiler backend that is optimized for minimal compile time, and simultaneously, yields superior execution performance to competing approaches, e.g., Volcano-style or bytecode interpretation. We implemented these optimizations in our database system Umbra to show that it is possible to unite fast compilation and fast execution. Indeed, Umbra achieves unprecedentedly low query latencies. On small data sets, it is even faster than interpreter engines like DuckDB and PostgreSQL. At the same time, on large data sets, its throughput is on par with the state-of-the-art compiling system HyPer.
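To make the compiled-versus-interpreted distinction concrete, here is a toy sketch in Python (not Umbra's or HyPer's actual machinery; the table, predicate, and generated source are illustrative): the same filter is either interpreted row by row from a small expression tree or emitted once as generated source that is compiled and then run as a tight loop.

```python
import time

rows = [(i, i % 97) for i in range(1_000_000)]      # (id, value)

# Interpreted: walk a tiny expression tree for every row (Volcano-ish style).
expr = ("and", (">", 1, 10), ("<", 1, 90))           # value > 10 AND value < 90

def eval_expr(node, row):
    op = node[0]
    if op == "and":
        return eval_expr(node[1], row) and eval_expr(node[2], row)
    if op == ">":
        return row[node[1]] > node[2]
    if op == "<":
        return row[node[1]] < node[2]
    raise ValueError(op)

# Compiled: generate source for the predicate once, compile it, then reuse it.
src = "def pred(row):\n    return row[1] > 10 and row[1] < 90\n"
namespace = {}
exec(compile(src, "<generated>", "exec"), namespace)
pred = namespace["pred"]

t0 = time.perf_counter()
n_interp = sum(eval_expr(expr, r) for r in rows)
t1 = time.perf_counter()
n_comp = sum(pred(r) for r in rows)
t2 = time.perf_counter()
print(f"interpreted: {n_interp} rows in {t1 - t0:.2f}s")
print(f"compiled   : {n_comp} rows in {t2 - t1:.2f}s")
```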

