NASA and Blue Origin’s Flight Assessment of Precision Landing Algorithms Computing Performance

2022 ◽  
Author(s):  
David Rutishauser ◽  
Gavin Mendeck ◽  
Ray Ramadorai ◽  
John Prothro ◽  
Thadeus Fleming ◽  
...  
Author(s):  
Mykhajlo Klymash ◽  
Olena Hordiichuk-Bublivska ◽  
Ihor Tchaikovskyi ◽  
Oksana Urikova

This article investigates the features of processing large data arrays in distributed systems. A singular value decomposition method is used to reduce the amount of data processed by eliminating redundancy. Dependencies of computational efficiency for distributed systems were obtained using the MPI message-passing protocol and the MapReduce model of node interaction. The efficiency of each technology was analyzed for different data sizes: non-distributed systems are inefficient for large volumes of information because of their low computing performance. It is proposed to use distributed systems together with the singular value decomposition method, which reduces the amount of information processed. The study of systems using the MPI protocol and the MapReduce model yielded the dependence of computation time on the number of processes, which confirms the expediency of distributed computing for large data sets. It was also found that distributed systems using the MapReduce model work much more efficiently than MPI, especially with large amounts of data, whereas MPI performs calculations more efficiently for small amounts of information. As data sets grow, it is advisable to use the MapReduce model.
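A minimal sketch of the data-reduction idea described above, assuming a Python/NumPy setting: a rank-k truncated SVD compresses a redundant (nearly low-rank) matrix before it is handed off to distributed workers. The matrix shape, rank, and noise level are illustrative assumptions, not values from the study.

```python
# Sketch only (not the authors' implementation): rank-k truncated SVD used to
# shrink a redundant data block before distributing it for further processing.
# Matrix size, rank k, and the synthetic data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
# Build a matrix with redundant (low-rank) structure plus a little noise
B = rng.standard_normal((2000, 40)) @ rng.standard_normal((40, 500))
A = B + 0.01 * rng.standard_normal((2000, 500))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 40                                        # assumed truncation rank
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]   # keep the k dominant singular triplets
A_k = U_k @ np.diag(s_k) @ Vt_k               # rank-k approximation sent to workers

stored_full = A.size
stored_trunc = U_k.size + s_k.size + Vt_k.size
rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(f"storage: {stored_full} -> {stored_trunc} values "
      f"({stored_trunc / stored_full:.1%}), relative error {rel_err:.4f}")
```

For data with genuine low-rank structure, as above, the truncated factors carry nearly all of the information at roughly a tenth of the storage, which is the redundancy elimination the abstract refers to.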


2017 ◽  
Vol 2 (4) ◽  
pp. 25
Author(s):  
L. A. Montoya ◽  
E. E. Rodríguez ◽  
H. J. Zúñiga ◽  
I. Mejía

Rotating system components such as rotors have dynamic characteristics that are important to understand because they may cause failure of turbomachinery. It is therefore necessary to study a dynamic model that predicts vibration characteristics, in this case the natural frequencies and mode shapes (both of free vibration) of a centrifugal compressor shaft. The peculiarity of the proposed dynamic model is that, using frequency and displacement values obtained experimentally, it is possible to calculate the mass and stiffness distributions of the shaft and then use these values to estimate the theoretical modal parameters. The natural frequencies and mode shapes of the shaft were obtained by experimental modal analysis using the impact test. The results predicted by the model are in good agreement with the experimental test. The model also adapts to other geometries and offers good runtime and computing performance, which can be compared against commercial software in future work.
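As a small illustration of the modal computation the abstract describes, the sketch below (a lumped-parameter discretization in Python/SciPy with made-up mass and stiffness values, not the authors' shaft model) solves the generalized eigenproblem K φ = ω² M φ for natural frequencies and mode shapes.

```python
# Minimal sketch (assumed discretization, not the authors' model): natural
# frequencies and mode shapes of a shaft represented by a lumped mass matrix M
# and a banded stiffness matrix K, from the eigenproblem K @ phi = w^2 * M @ phi.
import numpy as np
from scipy.linalg import eigh

# Illustrative 4-station lumped-parameter shaft (all values are assumptions)
m = np.array([2.0, 3.5, 3.5, 2.0])            # kg, lumped masses along the shaft
k = np.array([8e5, 6e5, 6e5, 8e5, 8e5])       # N/m, springs between stations/supports

M = np.diag(m)
K = np.zeros((4, 4))
for i in range(4):
    K[i, i] = k[i] + k[i + 1]                 # springs on either side of station i
for i in range(3):
    K[i, i + 1] = K[i + 1, i] = -k[i + 1]     # coupling between adjacent stations

w2, phi = eigh(K, M)                          # generalized symmetric eigenproblem
freqs_hz = np.sqrt(w2) / (2 * np.pi)          # natural frequencies
print("natural frequencies [Hz]:", np.round(freqs_hz, 1))
print("first mode shape:", np.round(phi[:, 0], 3))
```

In the identification scheme described above, the entries of M and K would come from the experimentally measured frequencies and displacements rather than being assumed as here.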


2009 ◽  
Author(s):  
J. Gramss ◽  
R. Galler ◽  
V. Neick ◽  
A. Stoeckel ◽  
U. Weidenmueller ◽  
...  

2012 ◽  
Vol 625 ◽  
pp. 100-103
Author(s):  
Biao Zhao ◽  
Nai Gang Cui ◽  
Ji Feng Guo ◽  
Ping Wang

For a lunar return mission, key entry-guidance requirements are applicability across the full flight envelope and control of landing accuracy. A concise numeric predictor-corrector (NPC) entry guidance (NPCEG) algorithm is developed to meet this requirement. It plans the trajectory online in real time by modulating a linearly parameterized bank profile. To meet the path constraints, we propose an integrated guidance strategy that combines the NPC method with an analytical constant-drag-acceleration method. Monte Carlo analysis shows that the algorithm is sufficiently robust to allow precision landing with a delivery error of less than 2.0 km over the entire range span between 2,500 km and 10,000 km.
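The sketch below illustrates the predictor-corrector idea in Python: a planar point-mass entry model is integrated with a bank profile that is linear in velocity, and a secant corrector adjusts the initial bank angle to null the downrange error. The vehicle, atmosphere, entry state, and target range are illustrative assumptions; this is not the paper's NPCEG algorithm or data, and whether the chosen target is reachable depends on the assumed numbers.

```python
# Minimal numeric predictor-corrector (NPC) sketch, NOT the authors' flight
# algorithm: planar point-mass Earth-entry dynamics, bank angle linear in
# velocity, secant correction of the initial bank angle to match a target
# downrange. All vehicle/entry constants below are illustrative assumptions.
import numpy as np

RE, MU = 6378e3, 3.986e14                  # Earth radius [m], GM [m^3/s^2]
RHO0, HS = 1.225, 7200.0                   # exponential atmosphere [kg/m^3], [m]
M, S, CD, CL = 8000.0, 12.0, 1.3, 0.4      # assumed capsule mass, area, aero

def predict_downrange(sigma0_deg, sigmaf_deg=70.0, dt=0.5):
    """Integrate entry dynamics; return downrange [m] at 10 km altitude."""
    h, s, v, gam = 120e3, 0.0, 11.0e3, np.radians(-6.0)   # assumed entry state
    v0, vf, t = v, 1000.0, 0.0
    while h > 10e3 and t < 3000.0:
        r = RE + h
        g = MU / r**2
        rho = RHO0 * np.exp(-h / HS)
        q = 0.5 * rho * v**2
        # bank angle linear in velocity between entry and a terminal value
        frac = np.clip((v0 - v) / (v0 - vf), 0.0, 1.0)
        sigma = np.radians(sigma0_deg + (sigmaf_deg - sigma0_deg) * frac)
        drag, lift = q * S * CD / M, q * S * CL / M        # accelerations
        h += dt * v * np.sin(gam)
        s += dt * RE * v * np.cos(gam) / r                 # surface downrange
        v += dt * (-drag - g * np.sin(gam))
        gam += dt * (lift * np.cos(sigma) / v + (v / r - g / v) * np.cos(gam))
        t += dt
    return s

def correct_bank(target_range_m, s0=30.0, s1=60.0, tol=1e3, iters=10):
    """Secant iteration on the initial bank angle to null the range error."""
    f0 = predict_downrange(s0) - target_range_m
    for _ in range(iters):
        f1 = predict_downrange(s1) - target_range_m
        if abs(f1) < tol or f1 == f0:
            break
        s0, s1, f0 = s1, np.clip(s1 - f1 * (s1 - s0) / (f1 - f0), 0.0, 90.0), f1
    return s1

bank = correct_bank(3000e3)                # 3,000 km target, inside the envelope
print(f"commanded initial bank: {bank:.1f} deg, "
      f"predicted downrange: {predict_downrange(bank) / 1e3:.0f} km")
```

In the paper's integrated strategy the corrected bank command would additionally be overridden by an analytical constant-drag law when path constraints become active; that layer is omitted from this sketch.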


Author(s):  
Timothy P. Setterfield ◽  
Robert A. Hewitt ◽  
Po-Ting Chen ◽  
Antonio Teran Espinoza ◽  
Nikolas Trawny ◽  
...  

2012 ◽  
Vol 4 (4) ◽  
pp. 68-88
Author(s):  
Chao-Tung Yang ◽  
Wen-Feng Hsieh

This paper's objective is to implement and evaluate a high-performance computing environment by clustering idle PCs (personal computers) with diskless slave nodes on campus, so as to make the most of otherwise unused computing capacity. Two cluster platforms, BCCD and DRBL, are compared in terms of computing performance, and the experiments show that DRBL outperforms BCCD. DRBL was originally created as a free-software teaching platform. To achieve this purpose, DRBL is deployed in a computer classroom with 32 PCs, enabling the machines to be switched manually or automatically among different operating systems. The bioinformatics program mpiBLAST also runs smoothly on the cluster architecture. From a management perspective, the state of each computation node in the cluster is monitored with Ganglia, an existing open-source tool; the authors gather CPU, memory, and network-load information for each computation node in every network segment. By comparing performance aspects, including swap performance and different network environments, they attempt to find the best cluster configuration for a computer classroom at the school. Finally, HPL from the HPCC suite is used to demonstrate cluster performance.
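As a rough illustration of the kind of MPI workload such a classroom cluster runs (not the paper's mpiBLAST or HPL experiments), the mpi4py sketch below scatters matrix chunks to the computation nodes, times a stand-in kernel on each, and gathers the timings at rank 0; the chunk sizes and the kernel are assumptions.

```python
# Sketch of a simple MPI job for a diskless classroom cluster (illustrative
# only, not the paper's benchmarks): rank 0 scatters work, every rank times a
# stand-in compute kernel, and the timings are gathered back at rank 0.
# Run with, e.g.:  mpirun -np 8 python cluster_probe.py
from mpi4py import MPI
import numpy as np
import time

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# rank 0 prepares the work and scatters one chunk per process
if rank == 0:
    data = np.random.default_rng(0).standard_normal((size, 500, 500))
    chunks = [data[i] for i in range(size)]
else:
    chunks = None
local = comm.scatter(chunks, root=0)

t0 = time.perf_counter()
result = np.linalg.norm(local @ local.T)       # stand-in compute kernel
elapsed = time.perf_counter() - t0

timings = comm.gather((rank, elapsed, float(result)), root=0)
if rank == 0:
    for r, dt, _ in sorted(timings):
        print(f"rank {r}: {dt * 1000:.1f} ms")
```

Per-rank timings of this kind, alongside the Ganglia CPU, memory, and network-load graphs, are what make it possible to compare the BCCD and DRBL configurations under different network environments.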

