The Current State and Future Directions of Modeling Thermosphere Density Enhancements During Extreme Magnetic Storms

Author(s):  
Denny M. Oliveira, Eftyhia Zesta, Piyush M. Mehta, Richard J. Licata, Marcin D. Pilinski, ...

Satellites, crewed spacecraft, and stations in low-Earth orbit (LEO) are very sensitive to atmospheric drag. Predictions of a satellite's lifetime and orbital position become increasingly inaccurate and uncertain during magnetic storms. Given the planned growth of government and private satellite presence in LEO, the need for accurate density predictions for collision avoidance and lifetime optimization, particularly during extreme events, has become urgent and requires comprehensive international collaboration. Additionally, long-term solar activity models and historical data suggest that solar activity will increase significantly in the coming years and decades. In this article, we briefly summarize the main achievements in research on the thermosphere response to extreme magnetic storms, particularly since the launch of many satellites carrying state-of-the-art accelerometers, from which high-accuracy densities can be determined. We find that an empirical model performs better with data assimilation than without it during all extreme storm phases. We discuss how forecasting models can be improved by looking in two directions: first, to the past, by adapting historical extreme-storm data sets for density predictions, and second, to the future, by facilitating the assimilation of the large-scale thermosphere data sets that will be collected during future events. This topic is therefore relevant to the scientific community, government agencies that operate satellites, and private-sector operators with assets in LEO.

2019, Vol 15, pp. 117693431984907
Author(s):
Tomáš Farkaš, Jozef Sitarčík, Broňa Brejová, Mária Lucká

Computing similarity between 2 nucleotide sequences is one of the fundamental problems in bioinformatics. Current methods are based mainly on 2 major approaches: (1) sequence alignment, which is computationally expensive, and (2) faster, but less accurate, alignment-free methods based on various statistical summaries, for example, short word counts. We propose a new distance measure based on mathematical transforms from the domain of signal processing. To tolerate large-scale rearrangements in the sequences, the transform is computed across sliding windows. We compare our method on several data sets with current state-of-the-art alignment-free methods. Our method compares favorably in terms of accuracy and outperforms other methods in running time and memory requirements. In addition, it is massively scalable up to dozens of processing units without loss of performance due to communication overhead. Source files and sample data are available at https://bitbucket.org/fiitstubioinfo/swspm/src
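The idea of a transform-based, sliding-window distance can be sketched as follows. This is a minimal illustration, not the authors' implementation: the nucleotide encoding, window size, stride, and the choice of an FFT magnitude spectrum are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch: map nucleotides to a numeric signal, take the
# magnitude spectrum of each sliding window, and compare two sequences
# by matching each window of one against the closest window of the
# other. Sliding windows give tolerance to large-scale rearrangements.
ENCODING = {"A": 1.0, "C": 2.0, "G": 3.0, "T": 4.0}  # illustrative encoding

def window_spectra(seq, window=64, stride=32):
    signal = np.array([ENCODING[c] for c in seq])
    spectra = []
    for start in range(0, len(signal) - window + 1, stride):
        chunk = signal[start:start + window]
        # Center each window before the transform to suppress the DC term.
        spectra.append(np.abs(np.fft.rfft(chunk - chunk.mean())))
    return np.array(spectra)

def transform_distance(seq_a, seq_b, window=64, stride=32):
    sa = window_spectra(seq_a, window, stride)
    sb = window_spectra(seq_b, window, stride)
    # Best-matching window in B for each window of A, averaged.
    dists = [np.min(np.linalg.norm(sb - row, axis=1)) for row in sa]
    return float(np.mean(dists))
```

Identical sequences yield distance zero, while sequences with different local composition yield a positive distance, regardless of where the matching windows occur.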


2009, Vol 39 (3), pp. 131-140
Author(s):
Philip R. O. Payne, Peter J. Embi, Chandan K. Sen

A common thread throughout the clinical and translational research domains is the need to collect, manage, integrate, analyze, and disseminate large-scale, heterogeneous biomedical data sets. However, well-established and broadly adopted theoretical and practical frameworks and models intended to address such needs are conspicuously absent from the published literature and other reputable knowledge sources. Instead, the development and execution of multidisciplinary clinical or translational studies are significantly limited by the propagation of "silos" of both data and expertise. Motivated by this fundamental challenge, we report on the current state and evolution of biomedical informatics as it pertains to the conduct of high-throughput clinical and translational research, and we present both a conceptual and a practical framework for the design and execution of informatics-enabled studies. The objective of presenting such findings and constructs is to provide the clinical and translational research community with a common frame of reference for discussing and expanding upon such models and methodologies.


Author(s):  
Lior Shamir

Abstract: Several recent observations using large data sets of galaxies showed a non-random distribution of the spin directions of spiral galaxies, even when the galaxies are too far from each other to interact gravitationally. Here, a data set of $\sim8.7\cdot10^3$ spiral galaxies imaged by the Hubble Space Telescope (HST) is used to test and profile a possible asymmetry between galaxy spin directions. The asymmetry between galaxies with opposite spin directions is compared to the asymmetry of galaxies from the Sloan Digital Sky Survey (SDSS). The two data sets contain different galaxies at different redshift ranges, and each data set was annotated using a different annotation method. Both data sets show a similar asymmetry in the COSMOS field, which is covered by both telescopes. Fitting the asymmetry of the galaxies to a cosine dependence shows a dipole axis with significance of $\sim2.8\sigma$ and $\sim7.38\sigma$ in HST and SDSS, respectively. The most likely dipole axis identified in the HST galaxies is at $(\alpha=78^{\circ},\delta=47^{\circ})$ and is well within the $1\sigma$ error range of the most likely dipole axis in the SDSS galaxies with $z>0.15$, identified at $(\alpha=71^{\circ},\delta=61^{\circ})$.
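A dipole-axis fit of this kind can be sketched with a grid search: for each candidate axis on the sky, fit the spin values to a cosine of the angular distance from that axis and keep the axis with the largest fitted amplitude. The grid resolution and the least-squares form below are assumptions for illustration, not the paper's exact procedure; note that a dipole axis is only determined up to sign.

```python
import numpy as np

def to_unit_vector(ra_deg, dec_deg):
    """Convert equatorial coordinates (degrees) to 3-D unit vectors."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    return np.stack([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)], axis=-1)

def best_dipole_axis(ra_deg, dec_deg, spin):
    # Coarse 5-degree grid of candidate axes (an illustrative choice).
    galaxies = to_unit_vector(ra_deg, dec_deg)      # (N, 3) unit vectors
    best_score, best_axis, best_amp = -1.0, None, 0.0
    for axis_ra in range(0, 360, 5):
        for axis_dec in range(-85, 90, 5):
            axis = to_unit_vector(axis_ra, axis_dec)
            cos_angle = galaxies @ axis             # cos of angle to axis
            # Least-squares amplitude d for the model spin ~ d * cos(angle).
            amp = np.dot(spin, cos_angle) / np.dot(cos_angle, cos_angle)
            if abs(amp) > best_score:
                best_score = abs(amp)
                best_axis, best_amp = (axis_ra, axis_dec), amp
    return best_axis, best_amp
```

With synthetic spins generated from a known axis, the grid search recovers an axis close to it (or to its antipode, which is physically equivalent).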


Algorithms, 2021, Vol 14 (5), pp. 154
Author(s):
Marcus Walldén, Masao Okita, Fumihiko Ino, Dimitris Drikakis, Ioannis Kokkinakis

Increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing simulation data sets on the fly. We present a method that evaluates the importance of different regions of simulation data, and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions, adaptively compressing data on the fly and using load balancing to counteract memory imbalances. We demonstrate the method's efficiency through a fluid-mechanics application, a Richtmyer–Meshkov instability simulation. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario, and data decompression was sped up by 2× compared to using a single compression method uniformly.
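The per-region idea can be illustrated with a toy sketch: score each block of a field with an importance metric, keep important blocks lossless, and quantize the rest before compression. The metric (local variance), block size, threshold, and quantization step here are all assumptions for illustration; the actual system uses its own metrics and compressors.

```python
import zlib
import numpy as np

def compress_blocks(field, block=32, threshold=0.01, step=0.05):
    """Compress a 1-D float32 field block by block, adapting to importance."""
    out = []
    values = field.ravel()
    for start in range(0, values.size, block):
        chunk = values[start:start + block].astype(np.float32)
        importance = float(chunk.var())             # toy importance metric
        if importance >= threshold:                 # region of interest
            out.append(("lossless", zlib.compress(chunk.tobytes())))
        else:                                       # smooth region: quantize
            q = np.round(chunk / step).astype(np.int16)
            out.append(("lossy", zlib.compress(q.tobytes())))
    return out

def decompress_blocks(blocks, step=0.05):
    parts = []
    for kind, payload in blocks:
        raw = zlib.decompress(payload)
        if kind == "lossless":
            parts.append(np.frombuffer(raw, dtype=np.float32))
        else:
            parts.append(np.frombuffer(raw, dtype=np.int16) * np.float32(step))
    return np.concatenate(parts)
```

Smooth regions shrink to small integer payloads at a bounded error of half a quantization step, while high-variance regions round-trip exactly.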


Electronics, 2021, Vol 10 (5), pp. 621
Author(s):
Giuseppe Psaila, Paolo Fosci

Internet and mobile technologies have enabled the production and diffusion of massive data sets concerning almost every aspect of day-to-day life. Remarkable examples are social media and apps for volunteered information production, as well as Open Data portals on which public administrations publish authoritative and (often) geo-referenced data sets. In this context, JSON has become the most popular standard for representing and exchanging possibly geo-referenced data sets over the Internet. Analysts wishing to manage, integrate, and cross-analyze such data sets need a framework that allows them to access possibly remote storage systems for JSON data sets, to retrieve and query data sets by means of a unique query language (independent of the specific storage technology), and to exploit possibly remote computational resources (such as cloud servers), while working comfortably on the PC in their office, more or less unaware of the real location of the resources. In this paper, we present the current state of the J-CO Framework, a platform-independent and analyst-oriented software framework to manipulate and cross-analyze possibly geo-tagged JSON data sets. The paper presents the general approach behind the J-CO Framework, illustrating the query language by means of a simple, yet non-trivial, example of geographical cross-analysis. It also presents the novel features introduced by the re-engineered version of the execution engine and the most recent components, i.e., the storage service for large single JSON documents and the user interface that allows analysts to share data sets and computational resources with other analysts possibly working in different parts of the world. Finally, the paper reports the results of an experimental campaign, which show that the execution engine performs more than satisfactorily, proving that our framework can actually be used by analysts to process JSON data sets.
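The kind of geographical cross-analysis targeted here, joining two geo-referenced JSON data sets by spatial proximity, can be sketched in a few lines. The record field names (`name`, `lat`, `lon`) and the distance threshold are invented for illustration; J-CO itself expresses such operations in its own query language, and the records would typically be loaded with `json.load` from remote storage.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two points."""
    r = 6371.0                                   # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def cross_join(dataset_a, dataset_b, max_km=1.0):
    """Pair every record of A with the records of B within max_km."""
    pairs = []
    for a in dataset_a:
        for b in dataset_b:
            d = haversine_km(a["lat"], a["lon"], b["lat"], b["lon"])
            if d <= max_km:
                pairs.append({"a": a["name"], "b": b["name"], "km": round(d, 3)})
    return pairs
```

A real engine would replace the nested loop with a spatial index, but the join semantics are the same.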


Solar Physics, 2021, Vol 296 (1)
Author(s):
V. Courtillot, F. Lopes, J. L. Le Mouël

Abstract: This article deals with the prediction of the upcoming solar activity cycle, Solar Cycle 25. We propose that astronomical ephemerides, specifically taken from the catalogs of aphelia of the four Jovian planets, could be drivers of variations in solar activity, represented by the series of sunspot numbers (SSN) from 1749 to 2020. We use singular spectrum analysis (SSA) to associate components with similar periods in the ephemerides and the SSN, and we determine the transfer function between the two data sets. We improve the match in successive steps: first with Jupiter only, then with the four Jovian planets, and finally including commensurable periods of pairs and pairs of pairs of the Jovian planets (following Mörth and Schlamminger in Planetary Motion, Sunspots and Climate, Solar-Terrestrial Influences on Weather and Climate, 193, 1979). The transfer function can be applied to the ephemerides to predict future cycles. We test this successfully using the "hindcast prediction" of Solar Cycles 21 to 24, using only data preceding those cycles, and by analyzing separately two halves of the original series, 130 and 140 years long. We conclude with a prediction of Solar Cycle 25 that can be compared to a dozen predictions by other authors: the maximum would occur in 2026.2 (±1 yr) and reach an amplitude of 97.6 (±7.8), similar to that of Solar Cycle 24, thereby sketching a new "Modern Minimum" following the Dalton and Gleissberg minima.


Cells, 2021, Vol 10 (5), pp. 1030
Author(s):
Julie Lake, Catherine S. Storm, Mary B. Makarious, Sara Bandres-Ciga

Neurodegenerative diseases are etiologically and clinically heterogeneous conditions, often reflecting a spectrum of disease rather than well-defined disorders. The underlying molecular complexity of these diseases has made the discovery and validation of useful biomarkers challenging. The search for characteristic genetic and transcriptomic indicators for preclinical disease diagnosis, prognosis, or subtyping is an area of ongoing effort and interest. The next generation of biomarker studies holds promise by implementing meaningful longitudinal and multi-modal approaches in large-scale biobank and healthcare-system data sets. This work will only be possible in an open-science framework. This review summarizes the current state of genetic and transcriptomic biomarkers in Parkinson's disease, Alzheimer's disease, and amyotrophic lateral sclerosis, providing a comprehensive landscape of the recent literature and future directions.


GigaScience, 2020, Vol 9 (1)
Author(s):
T Cameron Waller, Jordan A Berg, Alexander Lex, Brian E Chapman, Jared Rutter

Abstract
Background: Metabolic networks represent all chemical reactions that occur between molecular metabolites in an organism's cells. They offer biological context in which to integrate, analyze, and interpret omic measurements, but their large scale and extensive connectivity present unique challenges. While it is practical to simplify these networks by placing constraints on compartments and hubs, it is unclear how these simplifications alter the structure of metabolic networks and the interpretation of metabolomic experiments.
Results: We curated and adapted the latest systemic model of human metabolism and developed customizable tools to define metabolic networks with and without compartmentalization in subcellular organelles and with or without inclusion of prolific metabolite hubs. Compartmentalization made networks larger, less dense, and more modular, whereas hubs made networks larger, more dense, and less modular. When present, these hubs also dominated shortest paths in the network, yet their exclusion exposed the subtler prominence of other metabolites that are typically more relevant to metabolomic experiments. We applied the non-compartmental network without metabolite hubs in a retrospective, exploratory analysis of metabolomic measurements from 5 studies on human tissues. Network clusters identified individual reactions that might experience differential regulation between experimental conditions, several of which were not apparent in the original publications.
Conclusions: Exclusion of specific metabolite hubs exposes modularity in both compartmental and non-compartmental metabolic networks, improving detection of relevant clusters in omic measurements. Better computational detection of metabolic network clusters in large data sets has potential to identify differential regulation of individual genes, transcripts, and proteins.
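The effect of hub exclusion on shortest paths can be shown with a toy metabolite graph. The graph below is invented for illustration (the "hub" node stands in for a prolific cofactor such as ATP): with the hub included, every shortest path runs through it; excluding it exposes the biologically specific route.

```python
from collections import deque

# Toy metabolite graph: a linear glycolysis-like chain plus one hub
# connected to every metabolite. Entirely illustrative data.
GRAPH = {
    "glucose": {"g6p", "hub"},
    "g6p": {"glucose", "f6p", "hub"},
    "f6p": {"g6p", "fbp", "hub"},
    "fbp": {"f6p", "pyruvate", "hub"},
    "pyruvate": {"fbp", "hub"},
    "hub": {"glucose", "g6p", "f6p", "fbp", "pyruvate"},
}

def shortest_path(graph, src, dst, excluded=()):
    """Breadth-first search for a shortest path, skipping excluded nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in sorted(graph[path[-1]]):   # sorted for determinism
            if nxt not in seen and nxt not in excluded:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

With the hub present, the path from glucose to pyruvate is the uninformative two-hop route through the hub; with it excluded, the pathway-specific chain emerges.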


2013, Vol 12 (6), pp. 2858-2868
Author(s):
Nadin Neuhauser, Nagarjuna Nagaraj, Peter McHardy, Sara Zanivan, Richard Scheltema, ...
