Re-entry prediction of objects with low-eccentricity orbits based on mean ballistic coefficients

2020 ◽  
Vol 29 (1) ◽  
pp. 210-219
Author(s):  
Zhang Wei ◽  
Cui Wen ◽  
Wang Xiuhong ◽  
Wei Dong ◽  
Liu Xing

Abstract: During re-entry, objects with low-eccentricity orbits traverse a large portion of the dense atmospheric region on almost every orbital revolution. Their perigee decays slowly, but the apogee decays rapidly. Because ballistic coefficients change with altitude, re-entry predictions for objects in low-eccentricity orbits are more difficult than for objects in nearly circular orbits. Problems in orbit determination, such as large residuals and non-convergence, arise for this class of objects, especially in the case of sparse observations. In addition, it can be difficult to select a suitable initial ballistic coefficient for re-entry prediction. We present a new re-entry prediction method based on mean ballistic coefficients for objects with low-eccentricity orbits. The mean ballistic coefficient reflects the average effect of atmospheric drag during one orbital revolution, and the coefficient is estimated using a semi-numerical method with a step size of one period. The method is tested on Iridium-52, using sparse observations as the data source, and on ten other objects with low-eccentricity orbits, using TLEs as the data source. We also discuss the performance of the mean ballistic coefficient when used to model the evolution of drag characteristics and for orbit propagation. The results show that the mean ballistic coefficient is well suited to re-entry prediction and orbit propagation for objects with low-eccentricity orbits.
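For a nearly circular revolution, the averaged effect of drag can be illustrated with a crude back-of-the-envelope estimate: drag shrinks the semi-major axis by roughly Δa ≈ -2πρBa² per revolution, so an observed per-revolution decay and a revolution-averaged density yield a mean ballistic coefficient B. The Python sketch below illustrates only this idea; it is not the paper's semi-numerical estimator, and the altitude, decay rate, and density are made-up illustrative numbers.

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter [m^3/s^2]

def mean_ballistic_coefficient(a_m, delta_a_per_rev_m, rho_mean):
    """Rough per-revolution mean ballistic coefficient B = Cd*A/m [m^2/kg].

    Assumes a near-circular orbit, for which drag reduces the semi-major
    axis by roughly delta_a ~= -2*pi*rho*B*a^2 per revolution, with rho_mean
    the atmospheric density averaged over that revolution.
    """
    return -delta_a_per_rev_m / (2.0 * math.pi * rho_mean * a_m**2)

# Illustrative numbers only (not from the paper): a ~350 km altitude orbit
# losing ~120 m of semi-major axis per revolution in a mean density of 1e-11 kg/m^3.
a = 6371e3 + 350e3
B = mean_ballistic_coefficient(a, -120.0, 1.0e-11)
period_min = 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0
print(f"mean B ~ {B:.4f} m^2/kg over one {period_min:.1f}-minute revolution")
```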

2021 ◽  
Author(s):  
Nianyue Wu ◽  
Siru Liu ◽  
Haotian Zhang ◽  
Xiaomin Hou ◽  
Ping Zhang ◽  
...  

BACKGROUND The intensive care unit (ICU) length of stay is an important measure for evaluating the effect of cardiac surgical treatment in inpatients. OBJECTIVE This research aims to accurately predict the ICU length of stay in patients undergoing cardiac surgery. METHODS We used machine learning methods to construct the model, with the Medical Information Mart for Intensive Care (MIMIC-IV) database as the data source. A total of 7,567 patients were enrolled, and the mean length of stay in the ICU was 3.12 days. A total of 126 predictors were included, and 44 important predictors were screened by least absolute shrinkage and selection operator (Lasso) regression. RESULTS The mean accuracies are 0.603 (95% confidence interval (CI): 0.602-0.604), 0.687 (95% CI: 0.687-0.688), and 0.688 (95% CI: 0.687-0.689) for the logistic regression (LR) with all variables, the gradient boosted decision tree (GBDT) with important variables, and the GBDT with all variables, respectively. CONCLUSIONS The GBDT model with important predictors partly overestimated the length of stay of patients who stayed less than 3 days and underestimated that of patients who stayed longer than 3 days, but the better prediction performance of the GBDT facilitates early intervention for ICU patients facing a long period of hospitalization.
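A minimal sketch of the pipeline described above (Lasso screening followed by gradient-boosted trees), assuming scikit-learn and synthetic data in place of MIMIC-IV; the 126-feature count matches the abstract, but the binary target (stay longer than 3 days), the hyperparameters, and the selection threshold are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the cohort: 126 candidate predictors,
# binary target = "ICU stay longer than 3 days".
X, y = make_classification(n_samples=2000, n_features=126,
                           n_informative=30, random_state=0)

# Step 1: screen predictors with Lasso and keep those with non-zero coefficients.
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print(f"Lasso kept {selected.size} of {X.shape[1]} predictors")

# Step 2: compare a logistic-regression baseline with GBDT models.
models = {
    "LR, all variables": (LogisticRegression(max_iter=2000), X),
    "GBDT, selected variables": (GradientBoostingClassifier(random_state=0), X[:, selected]),
    "GBDT, all variables": (GradientBoostingClassifier(random_state=0), X),
}
for name, (model, features) in models.items():
    acc = cross_val_score(model, features, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {acc.mean():.3f}")
```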


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Wuwei Liu ◽  
Jingdong Yan

In recent years, interest in time series modeling and its application to prediction has grown. This paper mainly discusses a financial time series algorithm based on wavelet analysis and data fusion. In this research, we conducted an in-depth study of the scale-decomposition sequences and wavelet-transform sequences in the different scale domains of the wavelet transform, according to the scale-change rule of the transform. We use wavelet neural networks with different numbers of input and hidden neurons to predict each sequence separately. Finally, the per-scale prediction results are integrated into a final prediction of the original time series using wavelet reconstruction. Using the RBF neural network algorithm in SPSS Clementine, the wavelet-transform sequences on five scales are modeled. Each network model has three layers: one input layer, one hidden layer, and one output layer, and each output layer has only one output element. To compare against the prediction performance of the model proposed in this study, an ordinary RBF network is used to model and predict the log returns themselves. With 5 input samples, the minimum mean square error is obtained with 6 hidden-layer neurons, and the mean square error is 1.6349. The mean square error in the training phase is 0.0209, and the validation error is 1.6141. The results show that the prediction results of the wavelet prediction method combined with the RBF network are better than those of wavelet prediction or RBF network prediction alone.
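A minimal sketch of the decompose-predict-recombine idea, assuming PyWavelets for the transform and an RBF-kernel ridge regressor as a stand-in for the RBF network (scikit-learn has no RBF network proper); the wavelet, number of scales, lag length, and synthetic return series are illustrative choices rather than the paper's SPSS Clementine setup.

```python
import numpy as np
import pywt
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, 512)      # stand-in for a log-return series

# Split the series into additive per-scale components: zero out all but one
# coefficient array and inverse-transform. By linearity, the components sum
# back to the original series.
wavelet, level = "db4", 5
coeffs = pywt.wavedec(returns, wavelet, level=level)
components = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(kept, wavelet)[: len(returns)])

def one_step_forecast(series, n_lags=5):
    """Fit an RBF-kernel regressor on lagged values and predict the next point."""
    X = np.column_stack([series[i: len(series) - n_lags + i] for i in range(n_lags)])
    y = series[n_lags:]
    model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0).fit(X, y)
    return model.predict(series[-n_lags:].reshape(1, -1))[0]

# Forecast each scale separately, then recombine by summing the per-scale
# forecasts (a crude stand-in for the paper's wavelet-reconstruction step).
forecast = sum(one_step_forecast(c) for c in components)
print(f"one-step-ahead forecast of the combined series: {forecast:.6f}")
```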


2017 ◽  
Vol 1 (2) ◽  
pp. 15-23
Author(s):  
Ismail Ismail

The extended writing project allowed the incorporation of process into the assessment of writing skills and encouraged increased student autonomy. The research was conducted through classroom action research (CAR) involving 18 students. The students' writing scores increased across cycle 1 and cycle 2. The mean score rose from 61.67 in the baseline data to 63.61 in cycle 1 and, after revision, to 74.72 in cycle 2 under the extended writing project assessment. The minimum passing criterion (KKM) was 70. It can be seen that the application of the Extended Writing Project Assessment increased scores significantly. The most obvious consequence of this is that the ideas presented by the students are better, and the time needed to complete an essay is much shorter.


2019 ◽  
Vol 79 (10) ◽  
Author(s):  
Lorenzo Iorio

Abstract: The distinction between the mean anomaly $\mathcal{M}(t)$ and the mean anomaly at epoch $\eta$, and between the mean longitude $l(t)$ and the mean longitude at epoch $\epsilon$, is clarified in the context of their possible use in post-Keplerian tests of gravity, both Newtonian and post-Newtonian. In particular, the perturbations induced on $\mathcal{M}(t),\,\eta,\,l(t),\,\epsilon$ by the post-Newtonian Schwarzschild and Lense–Thirring fields, and by the classical accelerations due to atmospheric drag and the oblateness $J_2$ of the central body, are calculated for an arbitrary orbital configuration of the test particle and a generic orientation of the primary's spin axis $\hat{\boldsymbol{S}}$. They provide us with further observables which could be fruitfully used, e.g., in better characterizing astrophysical binary systems and in more accurate satellite-based tests around major bodies of the Solar System. Some erroneous claims by Ciufolini and Pavlis that appeared in the literature are confuted. In particular, it is shown that there are no net perturbations of the Lense–Thirring acceleration on either the semimajor axis $a$ or the mean motion $n_{\mathrm{b}}$. Furthermore, the quadratic signatures on $\mathcal{M}(t)$ and $l(t)$ due to certain disturbing non-gravitational accelerations, like atmospheric drag, can be effectively disentangled from the post-Newtonian linear trends of interest, provided that a sufficiently long temporal interval for the data analysis is assumed. A possible use of $\eta$ along with the longitudes of the ascending nodes $\Omega$ in tests of general relativity with the existing LAGEOS and LAGEOS II satellites is suggested.
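For orientation, the standard Keplerian relations connecting these four angles (to unperturbed order, with $n_{\mathrm{b}}$ the mean motion, $\Omega$ the longitude of the ascending node, $\omega$ the argument of pericentre, and $t_0$ the reference epoch) are the textbook definitions below, not results specific to this paper:

```latex
\begin{align}
  \mathcal{M}(t) &= \eta + n_{\mathrm{b}}\,(t - t_0), &
  l(t) &= \epsilon + n_{\mathrm{b}}\,(t - t_0), \\
  l(t) &= \Omega + \omega + \mathcal{M}(t), &
  \epsilon &= \Omega + \omega + \eta .
\end{align}
```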


2016 ◽  
Vol 24 (1) ◽  
pp. 25-57 ◽  
Author(s):  
Hans-Georg Beyer ◽  
Michael Hellwig

The behavior of the [Formula: see text]-Evolution Strategy (ES) with cumulative step size adaptation (CSA) on the ellipsoid model is investigated using dynamic systems analysis. At first a nonlinear system of difference equations is derived that describes the mean value evolution of the ES. This system is successively simplified to finally allow for deriving closed-form solutions of the steady state behavior in the asymptotic limit case of large search space dimensions. It is shown that the system exhibits linear convergence order. The steady state mutation strength is calculated, and it is shown that compared to standard settings in [Formula: see text] self-adaptive ESs, the CSA control rule allows for an approximately [Formula: see text]-fold larger mutation strength. This explains the superior performance of the CSA in non-noisy environments. The results are used to derive a formula for the expected running time. Conclusions regarding the choice of the cumulation parameter c and the damping constant D are drawn.
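A minimal sketch of a (mu/mu_I, lambda)-ES with cumulative step size adaptation on an ellipsoid model, written in Python and assuming one common variant of the CSA rule (an accumulated evolution path plus an exponential update of the mutation strength with cumulation parameter c and damping D); normalizations differ between formulations, so this is not a reproduction of the paper's analysis.

```python
import numpy as np

def csa_es(n=20, mu=3, lam=12, sigma=1.0, c=None, D=None, iters=2000, seed=0):
    """(mu/mu_I, lambda)-ES with cumulative step-size adaptation (one common variant)."""
    rng = np.random.default_rng(seed)
    a = np.arange(1, n + 1, dtype=float)          # ellipsoid coefficients
    f = lambda x: np.sum(a * x**2)                # ellipsoid model
    c = c if c is not None else 1.0 / np.sqrt(n)  # cumulation parameter
    D = D if D is not None else np.sqrt(n)        # damping constant
    chi_n = np.sqrt(n) * (1.0 - 1.0 / (4.0 * n))  # approx. E||N(0, I)||
    m, s = rng.normal(size=n), np.zeros(n)
    for _ in range(iters):
        z = rng.normal(size=(lam, n))             # offspring mutation vectors
        x = m + sigma * z
        best = np.argsort([f(xi) for xi in x])[:mu]
        z_avg = z[best].mean(axis=0)              # intermediate recombination
        m = m + sigma * z_avg
        # Accumulate the evolution path and apply the log-linear sigma update.
        s = (1.0 - c) * s + np.sqrt(mu * c * (2.0 - c)) * z_avg
        sigma *= np.exp((c / D) * (np.linalg.norm(s) / chi_n - 1.0))
    return f(m), sigma

f_final, sigma_final = csa_es()
print(f"final ellipsoid value {f_final:.3e}, final mutation strength {sigma_final:.3e}")
```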


Author(s):  
Engin Cemal Mengüç

This study introduces an adaptive Fourier linear combiner (FLC) based on a modified least mean kurtosis (LMK) algorithm in order to effectively process sinusoidal signals, which we call the FLC-LMK algorithm. In the design procedure of the proposed FLC-LMK algorithm, the classical kurtosis-based cost function is first modified for sinusoidal signal distributions rather than Gaussian ones. Then, the FLC-LMK algorithm is derived from the minimization of this cost function and thus updates the weight coefficients of the FLC structure so as to directly process sinusoidal signals. Moreover, in this study, the convergence in the mean of the proposed FLC-LMK algorithm is analysed in order to determine the lower and upper bounds of its step size parameter. The most important contributions of using the proposed algorithm in the FLC structure are that it increases the convergence rate, decreases the steady-state error level, and behaves robustly with respect to sinusoidal signal distributions owing to its modified cost function. The performance of the proposed FLC-LMK algorithm is evaluated on synthetic and real-world pathological hand tremor data by comparison with that of the FLC based on the classical least mean square (LMS) algorithm (FLC-LMS). The simulation results support the mentioned properties of the proposed FLC-LMK algorithm.
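A minimal sketch of the classical Fourier linear combiner with LMS weight adaptation, i.e. the FLC-LMS baseline that the proposed FLC-LMK algorithm is compared against (the LMK weight update itself is not reproduced here); the harmonic count, step size, and synthetic tremor signal are illustrative assumptions.

```python
import numpy as np

def flc_lms(signal, f0, fs, n_harmonics=3, mu=0.01):
    """Track a quasi-sinusoidal signal with a Fourier linear combiner + LMS.

    The reference inputs are sines and cosines at the first n_harmonics
    multiples of f0; the weights are adapted so the combiner output follows
    the input signal.
    """
    n = np.arange(len(signal))
    k = np.arange(1, n_harmonics + 1)
    phase = 2.0 * np.pi * f0 / fs * np.outer(n, k)
    X = np.hstack([np.sin(phase), np.cos(phase)])  # reference matrix
    w = np.zeros(X.shape[1])
    y = np.zeros(len(signal))
    for t in range(len(signal)):
        y[t] = X[t] @ w                  # combiner output
        e = signal[t] - y[t]             # estimation error
        w = w + 2.0 * mu * e * X[t]      # LMS weight update
    return y

# Illustrative use: a synthetic 5 Hz "tremor" with drifting amplitude plus noise.
fs, f0 = 100.0, 5.0
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(1)
tremor = (1.0 + 0.3 * np.sin(0.2 * np.pi * t)) * np.sin(2.0 * np.pi * f0 * t)
estimate = flc_lms(tremor + 0.1 * rng.normal(size=t.size), f0, fs)
rms = np.sqrt(np.mean((estimate[200:] - tremor[200:]) ** 2))
print(f"steady-state RMS tracking error: {rms:.3f}")
```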


1975 ◽  
Vol 42 (1) ◽  
pp. 51-54 ◽  
Author(s):  
N. W. Wilson ◽  
R. S. Azad

A single set of equations is developed to predict the mean flow characteristics in long circular pipes operating at laminar, transitional, and turbulent Reynolds numbers. Generally good agreement is obtained with available data in the Reynolds number range 100 < Re < 500,000.


2017 ◽  
Vol 20 (10) ◽  
pp. 1586-1598 ◽  
Author(s):  
Y Han ◽  
Z Q Chen ◽  
X G Hua ◽  
Z Q Feng ◽  
GJ Xu

This article presents a procedure for analyzing wind effects on rigid-frame bridges with twin-legged high piers during erection stages, taking into account all wind loading components on both the beam and the piers. These wind loading components include the mean wind load and the loads induced by the three turbulence components and by wake excitation. The buffeting forces induced by turbulent wind are formulated considering the modification due to aerodynamic admittance functions. The buffeting responses are analyzed based on the coherence of the buffeting forces, using the finite element method in conjunction with random vibration theory in the frequency domain. The peak dynamic response is obtained by combining the various response components through the gust response factor approach. The procedure is applied to the Xiaoguan Bridge under different erection stages using analytic aerodynamic parameters fitted from computational fluid dynamics. The numerical results indicate that the obtained peak structural responses are more conservative and accurate when the effect of each loading component on the beam and on the piers is considered, and that the roles of the different loading components vary with bridge configuration. Aerodynamic admittance functions are an important source of error in analytical predictions of bridge buffeting responses, and buffeting responses based on wind velocity coherence alone will underestimate the results.
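In the gust response factor approach mentioned above, the peak of each response component is typically obtained by adding a peak-factor multiple of its RMS fluctuation to the mean response; a generic textbook form (not necessarily the paper's exact formulation) is

```latex
R_{\max} = \bar{R} + g\,\sigma_R,
\qquad
g \approx \sqrt{2\ln(\nu T)} + \frac{0.5772}{\sqrt{2\ln(\nu T)}},
```

where $\bar{R}$ is the mean response, $\sigma_R$ the RMS of the fluctuating buffeting response, $g$ the peak factor, $\nu$ the effective zero-crossing rate of the response, and $T$ the averaging duration.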


2013 ◽  
Vol 750 ◽  
pp. 302-305
Author(s):  
Ting Ting Yao ◽  
Hong Lin Liu ◽  
Wan Yu Ding ◽  
Dong Ying Ju ◽  
Wei Ping Chai

N-doped TiO2 films were prepared by using N ion beams to bombard the surface of TiO2 films. By controlling the metal ultrahigh-vacuum gate valve, only the N ion beam working pressure was varied, from 0.1 to 0.9 Pa in steps of 0.2 Pa. The composition, chemical bond structure, and optical properties of the N-doped TiO2 films were investigated. The results indicate that, with increasing ion source working pressure, more N ions were generated and bombarded the TiO2 film surface, so more N ions were doped into the films. Accordingly, as the ion source working pressure increased from 0.1 to 0.9 Pa, the N/Ti atom ratio increased monotonically from 0.37 to 0.49, while the O/Ti ratio decreased monotonically from 1.49 to 0.61. Meanwhile, because more N was doped into the films, the mean absorbance of the N-doped TiO2 films in the visible range also increased monotonically from 4.8% to 45.8%.


2020 ◽  
Author(s):  
Armin Corbin ◽  
Kristin Vielberg ◽  
Michael Schmidt ◽  
Jürgen Kusche

The neutral density in the thermosphere is directly related to the atmospheric drag acceleration acting on satellites. In fact, the atmospheric drag acceleration is the largest non-gravitational perturbation for satellites below 1000 km that has to be considered for precise orbit determination. There are several global empirical and physical models providing the neutral density in the thermosphere. However, there are significant differences between the modeled neutral densities and densities observed via accelerometers. More precise thermospheric density models are required for improving drag modeling as well as orbit determination. We study the coupling between the ionosphere and thermosphere based on observations and model outputs of the Thermosphere Ionosphere Electrodynamics General Circulation Model (TIE-GCM). At first, we analyse the model's representation of the coupling using electron and neutral densities. In comparison, we study the coupling based on observations, i.e., accelerometer-derived neutral densities and electron densities from a 4D electron density model based on GNSS and satellite altimetry data as well as radio occultation measurements. We expect that increased electron densities can be related to increased neutral densities. This is indicated, for example, by a correlation of approximately 55% between the neutral densities and the electron densities computed by the TIE-GCM. Finally, we investigate whether neutral density simulations fit better to in-situ densities from accelerometry when electron densities are assimilated.
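The link between the thermospheric neutral density and the drag perturbation referred to above is the standard drag acceleration model (given here for context, with symbols following common usage rather than the authors' notation):

```latex
\vec{a}_{\mathrm{drag}}
  = -\frac{1}{2}\,\frac{C_D A}{m}\,\rho\,
    \lVert \vec{v}_{\mathrm{rel}} \rVert\,\vec{v}_{\mathrm{rel}},
```

where $C_D$ is the drag coefficient, $A$ the cross-sectional area, $m$ the satellite mass, $\rho$ the neutral density, and $\vec{v}_{\mathrm{rel}}$ the satellite velocity relative to the co-rotating atmosphere; accelerometer-derived densities are obtained by inverting this relation using the measured non-gravitational acceleration.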

