Space Habitat Data Centers—For Future Computing

Symmetry ◽  
2020 ◽  
Vol 12 (9) ◽  
pp. 1487
Author(s):  
Ayodele Periola ◽  
Akintunde Alonge ◽  
Kingsley Ogudo

Data from sensor-bearing satellites requires processing aboard terrestrial data centers that use water for cooling at the expense of high data-transfer latency. The reliance of terrestrial data centers on water increases their water footprint and limits the availability of water for other applications. Therefore, data centers with low data-transfer latency and reduced reliance on Earth’s water resources are required. This paper proposes space habitat data centers (SHDCs) that offer low-latency data transfer and use asteroid water to address these challenges. The paper investigates the feasibility of accessing asteroid water and the reduction in computing platform access latency. Results show that the mean asteroid water access period is 319.39 days. The use of SHDCs instead of non-space computing platforms reduces access latency by 11.9–33.6% and increases accessible computing resources by 46.7–77% on average.
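
As a rough, purely illustrative sense of why moving the computation off Earth can cut data-transfer latency, the sketch below compares one-way propagation delays; the distances, the fibre speed factor, and the neglect of processing and queueing delays are assumptions for illustration, not figures from the paper.

# Back-of-the-envelope sketch with hypothetical numbers (not from the paper):
# one-way propagation delay from a LEO sensing satellite to a nearby space
# habitat data centre versus to a terrestrial data centre via a ground station.

C_VACUUM_KM_S = 299_792.458          # speed of light in vacuum
FIBRE_FACTOR = 0.66                  # light travels at roughly 2/3 c in optical fibre

def delay_ms(distance_km, speed_km_s=C_VACUUM_KM_S):
    return distance_km / speed_km_s * 1e3

leo_to_shdc_km = 2_000               # hypothetical inter-satellite/habitat link
leo_to_ground_km = 1_500             # hypothetical slant range to a ground station
backhaul_fibre_km = 6_000            # hypothetical fibre run to the data centre

print("satellite -> SHDC:        %.1f ms" % delay_ms(leo_to_shdc_km))
print("satellite -> terrestrial: %.1f ms" % (delay_ms(leo_to_ground_km)
      + delay_ms(backhaul_fibre_km, FIBRE_FACTOR * C_VACUUM_KM_S)))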


2001 ◽  
Vol 674 ◽  
Author(s):  
Ralf Detemple ◽  
Inés Friedrich ◽  
Walter Njoroge ◽  
Ingo Thomas ◽  
Volker Weidenhof ◽  
...  

Vital requirements for the future success of phase change media are high data transfer rates, i.e., fast processes to read, write, and erase bits of information. Understanding and optimizing fast transformations is a considerable challenge, since the processes occur only on a submicrometer length scale in actual bits. Hence, both high temporal and high spatial resolution are needed to unravel the essential details of the phase transformation. We employ a combination of fast optical measurements with microscopic analyses using atomic force microscopy (AFM) and transmission electron microscopy (TEM). The AFM measurements exploit the fact that the phase transformation from amorphous to crystalline is accompanied by a 6% volume reduction. This enables a measurement of the vertical and lateral speed of the phase transformation. Several examples will be presented showing the information gained by this combination of techniques.
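
A minimal illustrative sketch of how such a volume change can be turned into transformation speeds (the numbers, function names, and the assumption that the shrinkage appears entirely as height are mine, not taken from the study):

# Illustrative sketch (assumptions, not from the paper): converting an AFM step
# height into a crystallized depth via the 6% volume reduction, and a mark's
# radius change between two exposures into a lateral growth speed.

def crystallized_depth_nm(step_height_nm, shrinkage=0.06):
    # If crystallization of a laterally constrained layer shrinks its volume by
    # ~6%, the loss shows up as height, so depth = step height / shrinkage.
    return step_height_nm / shrinkage

def lateral_growth_speed_nm_per_ns(radius_before_nm, radius_after_nm, interval_ns):
    # Lateral crystal growth speed from the change in mark radius between pulses.
    return (radius_after_nm - radius_before_nm) / interval_ns

# Hypothetical example values
print(crystallized_depth_nm(1.2))                          # ~20 nm crystallized depth
print(lateral_growth_speed_nm_per_ns(150.0, 210.0, 50.0))  # ~1.2 nm/ns lateral speed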


2002 ◽  
Vol 41 (Part 1, No. 3B) ◽  
pp. 1804-1807 ◽  
Author(s):  
Gakuji Hashimoto ◽  
Hiroki Shima ◽  
Kenji Yamamoto ◽  
Tsutomu Maruyama ◽  
Takashi Nakao ◽  
...  

2014 ◽  
Vol 2014 ◽  
pp. 1-14
Author(s):  
Gwo-Jiun Horng ◽  
Chi-Hsuan Wang ◽  
Chih-Lun Chou

This paper proposes a tree-based adaptive broadcasting (TAB) algorithm for data dissemination to improve data access efficiency. The proposed TAB algorithm first constructs a broadcast tree to determine the broadcast frequency of each data item and then splits the broadcast tree into broadcast woods to generate the broadcast program. In addition, this paper develops an analytical model to derive the mean access latency of the generated broadcast program. In light of the derived results, the bandwidths of the index channel and the data channel can be allocated optimally to maximize bandwidth utilization. This paper presents experiments to evaluate the effectiveness of the proposed strategy. The experimental results show that the proposed mechanism is feasible in practice.
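
The abstract does not spell out the scheduling details, so the following is only a generic illustration of frequency-weighted broadcast scheduling and of estimating mean access latency, using the classical square-root rule rather than the paper's TAB construction; item names, probabilities, and the cycle length are hypothetical.

import heapq, math, random

# Generic illustration only (not the paper's TAB algorithm): items are broadcast
# with a frequency roughly proportional to the square root of their access
# probability, spread evenly over one cycle, and the mean access latency of the
# resulting program is estimated by simulation.

def build_program(access_prob, cycle_len=100):
    weights = {k: math.sqrt(p) for k, p in access_prob.items()}
    total = sum(weights.values())
    freq = {k: max(1, round(cycle_len * w / total)) for k, w in weights.items()}
    heap = [(0.0, k) for k in freq]               # next "due" slot per item
    heapq.heapify(heap)
    program = []
    while len(program) < cycle_len:
        due, k = heapq.heappop(heap)
        program.append(k)
        heapq.heappush(heap, (due + cycle_len / freq[k], k))
    return program

def mean_access_latency(program, access_prob, trials=20000):
    n, total_wait = len(program), 0.0
    items, probs = list(access_prob), list(access_prob.values())
    for _ in range(trials):
        item = random.choices(items, weights=probs)[0]
        t = random.randrange(n)                   # request arrives at a random slot
        total_wait += next(d for d in range(1, n + 1) if program[(t + d) % n] == item)
    return total_wait / trials                    # in broadcast slots

access_prob = {"A": 0.5, "B": 0.3, "C": 0.15, "D": 0.05}   # hypothetical access pattern
program = build_program(access_prob)
print(mean_access_latency(program, access_prob))

With these hypothetical numbers, hot items recur more often in the cycle, so the simulated mean wait stays well below half the cycle length; in the paper, the analytical model plays the role of this simulation and additionally guides how bandwidth is split between the index and data channels.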


Author(s):  
Luiz Angelo Steffenel ◽  
Manuele Kirsch Pinheiro ◽  
Lucas Vaz Peres ◽  
Damaris Kirsch Pinheiro

The exponential dissemination of proximity computing devices (smartphones, tablets, nanocomputers, etc.) raises important questions on how to transmit, store, and analyze data in networks integrating those devices. New approaches such as edge computing aim at delegating part of the work to devices at the “edge” of the network. In this article, the focus is on the use of pervasive grids to implement edge computing and address these challenges, especially the strategies to ensure data proximity and context awareness, two factors that impact the performance of big data analyses in distributed systems. This article discusses the limitations of traditional big data computing platforms and introduces the principles and challenges of implementing edge computing over pervasive grids. Finally, using CloudFIT, a distributed computing platform, the authors illustrate the deployment of a real geophysical application on a pervasive network.


1989 ◽  
Vol 66 (12) ◽  
pp. 6138-6143 ◽  
Author(s):  
Harukazu Miyamoto ◽  
Toshio Niihara ◽  
Hirofumi Sukeda ◽  
Masahiko Takahashi ◽  
Takeshi Nakao ◽  
...  

Author(s):  
Marco Antonio Cruz-Chávez ◽  
Abelardo Rodríguez-León ◽  
Rafael Rivera-López ◽  
Fredy Juárez-Pérez ◽  
Carmen Peralta-Abarca ◽  
...  

New and more powerful computing platforms have recently been created around the world that can be used to tackle computer science problems. Some of the problems addressed are real industrial problems, and most are classified by complexity theory as hard problems. One such problem is the vehicle routing problem with time windows (VRPTW). The computational Grid is a platform that has recently been applied to hard problems in the search for their best solutions. This chapter presents a genetic algorithm for the vehicle routing problem with time windows. The algorithm iteratively applies a mutation operator, first of the intelligent type and then of the restricting type. The algorithm takes advantage of Grid computing to increase the exploration and exploitation of the solution space of the problem. Grid performance is analyzed for the genetic algorithm, and the latencies that affect the algorithm are measured. The suitability of applying this new computing platform to the execution of algorithms specially designed for Grid computing is discussed.
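
The chapter's Grid implementation and mutation operators are not detailed in the abstract, so the sketch below is only a toy, single-machine illustration of a genetic algorithm with two alternating mutation operators for a simplified VRPTW; the instance data, penalty weight, and operator designs are all hypothetical stand-ins.

import math, random

# Toy sketch (assumptions throughout, not the chapter's Grid implementation):
# a solution is a customer permutation split greedily into capacity-limited
# routes; two mutation operators are applied in turn, loosely echoing the
# "intelligent" and "restricting" operators named in the abstract.

random.seed(1)
N_CUSTOMERS, CAPACITY = 12, 4
coords = {i: (random.uniform(0, 100), random.uniform(0, 100)) for i in range(N_CUSTOMERS + 1)}
windows = {i: (random.uniform(0, 200), random.uniform(250, 500)) for i in range(1, N_CUSTOMERS + 1)}

def dist(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return math.hypot(x1 - x2, y1 - y2)

def cost(perm):
    # Total travel distance plus a penalty for arriving after a time window closes.
    total, t, load, prev, penalty = 0.0, 0.0, 0, 0, 0.0
    for c in perm:
        if load == CAPACITY:                      # vehicle full: return to depot 0
            total += dist(prev, 0); prev, t, load = 0, 0.0, 0
        total += dist(prev, c); t += dist(prev, c)
        early, late = windows[c]
        t = max(t, early)                         # wait if the vehicle arrives early
        if t > late: penalty += t - late
        prev, load = c, load + 1
    return total + dist(prev, 0) + 10.0 * penalty

def mutate_guided(perm):
    # "Intelligent"-style move (stand-in): relocate a customer next to its nearest neighbour.
    p = perm[:]; c = random.choice(p); p.remove(c)
    nearest = min(p, key=lambda o: dist(c, o))
    p.insert(p.index(nearest) + 1, c)
    return p

def mutate_restricting(perm):
    # "Restricting"-style move (stand-in): swap two customers that are close in the sequence.
    p = perm[:]
    i = random.randrange(len(p) - 1)
    j = min(len(p) - 1, i + random.randint(1, 3))
    p[i], p[j] = p[j], p[i]
    return p

population = [random.sample(range(1, N_CUSTOMERS + 1), N_CUSTOMERS) for _ in range(30)]
for generation in range(200):
    population.sort(key=cost)
    parents = population[:10]                     # elitist selection
    children = [mutate_guided(random.choice(parents)) for _ in range(10)]
    children += [mutate_restricting(random.choice(parents)) for _ in range(10)]
    population = parents + children
print("best penalized cost:", round(cost(min(population, key=cost)), 1))

On a Grid, as the chapter describes, the expensive work of evaluating and evolving many candidate routes would be distributed across nodes, at the price of the communication latencies the chapter measures.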


Photonics ◽  
2019 ◽  
Vol 6 (2) ◽  
pp. 70
Author(s):  
Hirayama ◽  
Fujimura ◽  
Umegaki ◽  
Tanaka ◽  
Shimura

Holographic memory is currently attracting attention as a data storage system capable of achieving a data transfer rate of about 10^5 to 10^6 times that of an optical disc such as a Blu-ray disc. In conventional holographic memory, data are generally recorded by optical writing using volume holograms. However, volume holograms require high mechanical accuracy of the system and a low coefficient of thermal expansion of the recording medium, because their reconstruction tolerance is extremely low; in addition, duplication is time-inefficient because the whole data set cannot be recorded at once. In this paper, we propose a surface holographic memory that achieves a high data transfer rate, stable readout performance, and collective duplication by expressing holograms as fine surface asperity. Furthermore, the theoretical formulas of the recording and reconstruction processes in the proposed system are derived, and the reconstruction characteristics of the hologram are evaluated by numerical simulation. As a result, the proposed method generates a reconstructed image readout with sufficient signal for single-page recording. However, the reconstructed image contains noise that is particular to surface holographic memory.
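
The paper's theoretical formulas are not reproduced in the abstract, so the following is only a generic numerical sketch of Fourier-hologram recording and readout of a binary data page, with the recorded interference pattern standing in for the surface relief; the geometry, carrier frequency, and threshold detector are assumptions, not the paper's model.

import numpy as np

# Generic illustrative sketch (assumptions, not the paper's formulation): record
# a binary data page as an off-axis Fourier hologram, treat the interference
# pattern as what the surface relief would encode, then read the page back by
# illuminating with the reference wave and inverse transforming.

N, P = 128, 32
rng = np.random.default_rng(0)
field = np.zeros((N, N))
page = rng.integers(0, 2, size=(P, P)).astype(float)
field[:P, :P] = page                                  # data page in the object plane

obj = np.fft.fft2(field, norm="ortho")                # object wave at the hologram plane
obj /= np.abs(obj).max()                              # keep it weaker than the reference

fy, fx = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
ref = np.exp(1j * np.pi * (fx + fy))                  # plane-wave reference with a spatial carrier

hologram = np.abs(obj + ref) ** 2                     # recorded interference pattern

readout = np.fft.ifft2(hologram * ref, norm="ortho")  # illuminate and propagate back
recovered = np.abs(readout[:P, :P])                   # the reconstructed page sits at its original position
bits = recovered > recovered.mean()                   # crude threshold detector

print("bit error rate:", np.mean(bits != page.astype(bool)))
# With this carrier the zero-order and conjugate terms land elsewhere in the
# image plane; in general they contribute readout noise, consistent with the
# noise the abstract reports for surface holographic memory.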


2018 ◽  
Vol 210 ◽  
pp. 03015
Author(s):  
Manash Pratim Sarma ◽  
Kandarpa Kumar Sarma ◽  
Nikos E. Mastorakis

There has been a continuous emphasis on energy-efficient communication system design. With the advent of 5G communication technologies, along with faster and more reliable data transfer mechanisms, energy management and conservation is gaining more attention and is becoming a major and indispensable part of communication research. This paper highlights the contemporary technological developments in the field of RF energy harvesting in a cognitive, high-data-rate network. It has been observed that an efficient RF energy harvesting technology in a cognitive platform leads towards a greener communication paradigm.


2020 ◽  
Vol 94 (10) ◽  
Author(s):  
Qing Liu ◽  
Michael Schmidt ◽  
Laura Sánchez ◽  
Martin Willberg

This study presents a solution of the ‘1 cm Geoid Experiment’ (Colorado Experiment) using spherical radial basis functions (SRBFs). As the only group using SRBFs among the fourteen participating institutions from all over the world, we highlight the methodology of SRBFs in this paper. Detailed explanations are given regarding the settings of the four most important factors that influence the performance of SRBFs in gravity field modeling, namely (1) the choice of the bandwidth, (2) the locations of the SRBFs, (3) the type of the SRBFs, and (4) the extension of the data zone for reducing the edge effect. Two types of basis functions covering the same spectral range are used for the terrestrial and the airborne measurements, respectively. The non-smoothing Shannon function is applied to the terrestrial data to avoid the loss of spectral information. The cubic polynomial (CuP) function, which has smoothing features, is applied to the airborne data as a low-pass filter for the high-frequency noise. Although the idea of combining different SRBFs for different observations has been shown in theory to be possible, it is applied to real data for the first time in this study. The RMS error of our height anomaly result along the GSVS17 benchmarks w.r.t. the validation data (the mean of the other contributions in the ‘Colorado Experiment’) drops by 5% when combining the Shannon function for the terrestrial data and the CuP function for the airborne data, compared with using the Shannon function for both data sets. This improvement indicates the validity and benefits of using different SRBFs for different observation types. The global gravity model (GGM), the topographic model, the terrestrial gravity data, and the airborne gravity data are combined, and the contribution of each data set to the final solution is discussed. By adding the terrestrial data to the GGM and the topographic model, the RMS error of the height anomaly result w.r.t. the validation data drops from 4 to 1.8 cm, and it is further reduced to 1 cm by including the airborne data. Comparisons with the mean results of all the contributions show that our height anomaly and geoid height solutions at the GSVS17 benchmarks have RMS errors of 1.0 cm and 1.3 cm, respectively, and our height anomaly results give an RMS value of 1.6 cm over the whole study area, all of which are the smallest among the participants.
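
For readers unfamiliar with SRBFs, a compact statement of the standard expansion used in this kind of regional gravity field modeling may help; the notation below follows the general SRBF literature rather than the paper itself, and the exact coefficient settings of the study are not reproduced.

T(\mathbf{x}) \approx \sum_{k=1}^{K} d_k \, B(\mathbf{x}, \mathbf{x}_k),
\qquad
B(\mathbf{x}, \mathbf{x}_k) = \sum_{n=n_{\min}}^{n_{\max}} \frac{2n+1}{4\pi} \left(\frac{R}{r}\right)^{n+1} b_n \, P_n(\cos\psi_k)

Here the d_k are the unknown coefficients estimated from the observations, P_n are the Legendre polynomials, \psi_k is the spherical distance between the evaluation point \mathbf{x} and the SRBF node \mathbf{x}_k, r and R are the geocentric and reference radii, and the b_n are the Legendre (shape) coefficients that define the SRBF type: the Shannon function keeps b_n = 1 over the whole spectral band (no smoothing), whereas the cubic polynomial function lets b_n decay smoothly towards the band limit, which is why it acts as a low-pass filter for the noisier airborne data.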

