Empirical Modeling of a Rail Tank Car for Struck Coupler Force Estimation

Author(s):  
Brad M. Hopkins ◽  
Dan Maraini ◽  
Andrew Seidel ◽  
Parham Shahidi

Freight rail cars may experience high input forces during a coupling event, which can damage the car body and/or lading. The AAR recommended practice states that cars should not be coupled at speeds greater than 4 mph. However, this recommendation is not always followed, and cars are often coupled at much higher speeds. As a result, accelerometers on the car body are sometimes used to monitor impact events, with threshold levels set to determine whether an over-speed or high-force impact has occurred. However, a single acceleration value can be difficult to interpret because its relationship to impact force depends on many factors, including car type, end-of-car device type, lading type, and loading condition. Dynamic modeling and parametric studies may be used to determine these relationships, which can then be applied in practice. This paper presents a study of the relationship between struck coupler force and car body acceleration for a series of impacts on a tank car in both loaded and unloaded states. For the loaded condition, the tank was filled with water. The simplest change from an unloaded to a loaded tank is the decrease in acceleration for a given force due to the added mass. However, the sloshing liquid inside the tank adds further complexity to the system. When modeling this dynamic system, the nonlinearity of the low-frequency car body oscillations adds uncertainty to the struck coupler force estimate. Several example data sets are presented in the time and frequency domains to illustrate this point. The data are then used to generate an empirical model using system identification techniques. The results show that the proposed model characterizes the system better than conventional techniques by accounting for the uncertainties introduced by the sloshing liquid in the tank. The proposed technique is computationally efficient and can potentially be implemented in real time. The model is used to estimate struck coupler force and is validated with real data.
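
The abstract does not give the model structure, but low-order input-output models of this kind are routinely identified by least squares. A minimal sketch, assuming a discrete-time ARX structure relating measured coupler force u to car body acceleration y (the signal names and model orders here are hypothetical, not the authors' choices):

import numpy as np

def fit_arx(u, y, na=2, nb=2):
    # Least-squares fit of the ARX model
    #   y[k] = a1*y[k-1] + ... + a_na*y[k-na]
    #        + b1*u[k-1] + ... + b_nb*u[k-nb],
    # a standard system-identification baseline.
    u, y = np.asarray(u, float), np.asarray(y, float)
    n = max(na, nb)
    rows = [np.r_[y[k - na:k][::-1], u[k - nb:k][::-1]]
            for k in range(n, len(y))]
    theta, *_ = np.linalg.lstsq(np.asarray(rows), y[n:], rcond=None)
    return theta[:na], theta[na:]  # AR coefficients, input coefficients

Inverting such a model, or identifying it with the roles of the two signals swapped, then yields a force estimate from measured acceleration.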

Geophysics ◽  
2009 ◽  
Vol 74 (5) ◽  
pp. R59-R67 ◽  
Author(s):  
Igor B. Morozov ◽  
Jinfeng Ma

The seismic-impedance inversion problem is inherently underconstrained and does not allow rigorous joint inversion. In the absence of a true inverse, a reliable solution free from subjective parameters can be obtained by defining a set of physical constraints that the resulting images should satisfy. A method for constructing synthetic logs is proposed that explicitly and accurately satisfies (1) the convolutional equation, (2) the time-depth constraints of the seismic data, (3) a background low-frequency model from logs or seismic/geologic interpretation, and (4) spectral amplitudes and geostatistical information from spatially interpolated well logs. The resulting synthetic log sections or volumes are interpretable in standard ways. Unlike broadly used joint-inversion algorithms, the method contains no subjectively selected user parameters, utilizes the log data more completely, and allows intermediate results to be assessed. The procedure is simple and tolerant of noise, and it leads to higher-resolution images. Separating the seismic and subseismic frequency bands also simplifies data processing for acoustic-impedance (AI) inversion. For example, zero-phase deconvolution and true-amplitude processing of seismic data are not required and are included automatically in this method. The approach is applicable to 2D and 3D data sets and to multiple pre- and poststack seismic attributes. It has been tested on inversions for AI and true-amplitude reflectivity using 2D synthetic and real-data examples.
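
For reference, constraint (1) and the standard link between acoustic impedance and reflectivity take their textbook forms (not specific to this paper):

    s(t) = w(t) * r(t) + n(t),        r_i = (Z_{i+1} - Z_i) / (Z_{i+1} + Z_i),

where s is the seismic trace, w the wavelet, * denotes convolution, n is noise, and Z_i is the acoustic impedance of the i-th layer.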


2009 ◽  
Vol 2009 ◽  
pp. 1-11 ◽  
Author(s):  
Wade T. Rogers ◽  
Herbert A. Holyst

A new software package called flowFP for the analysis of flow cytometry data is introduced. The package, which is tightly integrated with other Bioconductor software for flow cytometry analysis, provides tools to transform raw flow cytometry data into a form suitable for direct input into conventional statistical analysis and empirical modeling software. The approach of flowFP is to generate a description of the multivariate probability distribution function of flow cytometry data in the form of a “fingerprint.” As such, it does not presume a functional form for the distribution, in contrast with model-based methods such as Gaussian mixture modeling. FlowFP is computationally efficient and able to handle extremely large flow cytometry data sets of arbitrary dimensionality. The algorithms and software implementation of the package are described. Use of the software is exemplified with applications to data quality control and to the automated classification of acute myeloid leukemia.
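
FlowFP's binning follows the probability-binning idea: recursively split the event cloud at coordinate medians so that every final bin holds roughly the same number of events, and use the per-bin event counts as the fingerprint. A minimal single-sample sketch of that idea in Python (not the flowFP API, which is R/Bioconductor and separates model building from fingerprinting):

import numpy as np

def fingerprint(events, n_levels=3):
    # events: (n_events, n_dims) array. At each level, split every bin at
    # the median of one coordinate, cycling through the dimensions; the
    # fingerprint is the event count per final bin (2**n_levels bins).
    events = np.asarray(events, float)
    bins = [events]
    for level in range(n_levels):
        dim = level % events.shape[1]
        split = []
        for b in bins:
            med = np.median(b[:, dim])   # assumes bins stay non-empty
            split.append(b[b[:, dim] <= med])
            split.append(b[b[:, dim] > med])
        bins = split
    return np.array([len(b) for b in bins])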


2019 ◽  
Author(s):  
Aurora Torrente

Background: The concept of depth induces an ordering from the centre outwards in multivariate data. Most depth definitions are unfeasible for dimensions larger than three or four, but the Modified Band Depth (MBD) is a notable exception that has proven to be a valuable tool in the analysis of gene expression data. However, given a notion of depth, there is no straightforward method to derive a depth-based similarity or dissimilarity measure between observations for use in standard tasks such as clustering or classification.

Results: We propose a methodology to assess a data-driven (dis)similarity between two observations, taking advantage of the bands used in the computation of the MBD. To that end, we build a binary vector for each observation recording, for every coordinate, whether it lies between the limits of the intervals defined by all possible bands in the set. These vectors and their Boolean products are used to derive contingency tables from which standard similarity indices can be calculated. Our approach is computationally efficient and can be applied to bands formed by any number of observations from the data set.

Conclusions: We have evaluated the performance of several similarity indices against that of the Euclidean distance, used as a benchmark, in standard clustering and classification techniques on a variety of simulated and real data sets. Our experiments show that the technique for deriving such measures is very promising, with some of the selected indices outperforming the Euclidean distance. The method is not restricted to these indices; the extension to other similarity coefficients is straightforward.
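
One plausible reading of the construction, using two-observation bands and the Jaccard index as an example of a standard coefficient computed from the contingency counts (the paper allows bands of any size and several indices):

import numpy as np
from itertools import combinations

def band_indicator(x, data):
    # Binary vector: for every two-observation band (pair of rows of
    # `data`) and every coordinate, 1 if x lies inside the band there.
    bits = []
    for i, j in combinations(range(len(data)), 2):
        lo, hi = np.minimum(data[i], data[j]), np.maximum(data[i], data[j])
        bits.append((x >= lo) & (x <= hi))
    return np.concatenate(bits).astype(int)

def jaccard(u, v):
    # Contingency counts of two indicator vectors (assumes a+b+c > 0).
    a = np.sum((u == 1) & (v == 1))
    b = np.sum((u == 1) & (v == 0))
    c = np.sum((u == 0) & (v == 1))
    return a / (a + b + c)

The (dis)similarity between observations k and l of a data matrix X is then, e.g., jaccard(band_indicator(X[k], X), band_indicator(X[l], X)).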


2021 ◽  
Author(s):  
Jakob Raymaekers ◽  
Peter J. Rousseeuw

Many real data sets contain numerical features (variables) whose distribution is far from normal (Gaussian); instead, it is often skewed. To handle such data it is customary to preprocess the variables to make them more normal. The Box–Cox and Yeo–Johnson transformations are well-known tools for this. However, the standard maximum likelihood estimator of their transformation parameter is highly sensitive to outliers and will often try to move outliers inward at the expense of the normality of the central part of the data. We propose a modification of these transformations, together with an estimator of the transformation parameter that is robust to outliers, so that the transformed data are approximately normal in the center while a few outliers may deviate from it. The proposal compares favorably with existing techniques in an extensive simulation study and on real data.
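
For reference, the classical Yeo–Johnson transform that the paper modifies (the robust proposal changes how the parameter lam is estimated and how the tails are treated, not this core mapping):

import numpy as np

def yeo_johnson(y, lam):
    # Classical Yeo-Johnson transform of an array y for parameter lam.
    y = np.asarray(y, float)
    out = np.empty_like(y)
    pos = y >= 0
    if lam != 0:
        out[pos] = ((1 + y[pos]) ** lam - 1) / lam
    else:
        out[pos] = np.log1p(y[pos])
    if lam != 2:
        out[~pos] = -(((1 - y[~pos]) ** (2 - lam) - 1) / (2 - lam))
    else:
        out[~pos] = -np.log1p(-y[~pos])
    return out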


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 62
Author(s):  
Zhengwei Liu ◽  
Fukang Zhu

Thinning operators play an important role in the analysis of integer-valued autoregressive models, with binomial thinning the most widely used. Inspired by the theory of extended Pascal triangles, we introduce a new thinning operator, extended binomial thinning, which generalizes binomial thinning. Compared to the binomial thinning operator, the extended binomial thinning operator has two parameters and is therefore more flexible in modeling. Based on the proposed operator, a new integer-valued autoregressive model is introduced that can accurately and flexibly capture the dispersion features of count time series. Two-step conditional least squares (CLS) estimation is investigated for the innovation-free case, and conditional maximum likelihood estimation is also discussed. We also establish the asymptotic properties of the two-step CLS estimator. Finally, three overdispersed or underdispersed real data sets are analyzed to illustrate the superior performance of the proposed model.
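
The abstract does not spell out the extended binomial operator, but the baseline it generalizes, the binomial-thinning INAR(1) model with a plain CLS fit, can be sketched for orientation:

import numpy as np
rng = np.random.default_rng(0)

def simulate_inar1(n, alpha, lam):
    # X_t = alpha ∘ X_{t-1} + eps_t, where alpha ∘ X ~ Binomial(X, alpha)
    # and eps_t ~ Poisson(lam). The paper's operator replaces this thinning.
    x = np.zeros(n, dtype=int)
    for t in range(1, n):
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

def cls_estimates(x):
    # Conditional least squares: minimize sum (X_t - a*X_{t-1} - l)^2,
    # i.e., ordinary regression of X_t on X_{t-1}.
    past, cur = x[:-1], x[1:]
    a_hat = np.cov(past, cur, bias=True)[0, 1] / np.var(past)
    l_hat = cur.mean() - a_hat * past.mean()
    return a_hat, l_hat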


Econometrics ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 10
Author(s):  
Šárka Hudecová ◽  
Marie Hušková ◽  
Simos G. Meintanis

This article considers goodness-of-fit tests for bivariate INAR and bivariate Poisson autoregression models. The test statistics are based on an L2-type distance between two estimators of the probability generating function of the observations: one entirely nonparametric, and the other semiparametric, computed under the corresponding null hypothesis. The asymptotic distribution of the proposed test statistics is derived under both the null hypotheses and alternatives, and consistency is proved. The case of testing bivariate generalized Poisson autoregression and the extension of the methods to dimensions higher than two are also discussed. The finite-sample performance of a parametric bootstrap version of the tests is illustrated via a series of Monte Carlo experiments. The article concludes with applications to real data sets and a discussion.
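
The nonparametric ingredient is the empirical probability generating function of the bivariate counts; a sketch of the L2-type statistic on a grid over [0, 1]^2 (the weight function and the semiparametric estimator under the null are paper-specific and left abstract here as model_pgf):

import numpy as np

def empirical_pgf(x, y, u, v):
    # Nonparametric estimate of g(u, v) = E[u^X * v^Y] for count series x, y.
    x, y = np.asarray(x), np.asarray(y)
    return np.mean((u ** x) * (v ** y))

def l2_statistic(x, y, model_pgf, m=21):
    # Grid approximation of n * integral of (g_hat - g_model)^2 over [0,1]^2.
    grid = np.linspace(0.0, 1.0, m)
    diff2 = [(empirical_pgf(x, y, u, v) - model_pgf(u, v)) ** 2
             for u in grid for v in grid]
    return len(x) * np.mean(diff2)

In a parametric bootstrap, the same statistic is recomputed on series simulated from the fitted null model to obtain critical values.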


Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 459
Author(s):  
Anastasios A. Tsonis ◽  
Geli Wang ◽  
Wenxu Lu ◽  
Sergey Kravtsov ◽  
Christopher Essex ◽  
...  

Proxy temperature records, including local time series, regional averages from areas around the globe, and global averages, are analyzed using the Slow Feature Analysis (SFA) method. As explained in the paper, SFA is much more effective than traditional Fourier analysis at identifying slowly varying (low-frequency) signals in data sets of limited length. We find a striking gap from ~1,000 to ~20,000 years that separates intrinsic climatic oscillations, with periods ranging from ~60 to ~1,000 years, from the longer-time-scale periodicities (20,000+ yr) involving external forcing associated with Milankovitch cycles. The absence of natural oscillations with periods within the gap is consistent with cumulative evidence from past data analyses, as well as with earlier theoretical and modeling studies.
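
For orientation, linear SFA reduces to two eigen-problems: whiten the data, then take the unit-variance projections whose time derivatives have minimal variance. A minimal sketch (for a scalar proxy series, the multivariate input X would first be built by time-delay embedding, a step omitted here):

import numpy as np

def linear_sfa(X):
    # X: (n_times, n_features) signal. Returns projections ordered from
    # slowest to fastest.
    X = X - X.mean(axis=0)
    d, E = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = X @ (E / np.sqrt(d))          # whitened signal (unit covariance)
    dZ = np.diff(Z, axis=0)           # discrete time derivative
    _, P = np.linalg.eigh(np.cov(dZ, rowvar=False))
    return Z @ P                      # slowest feature in column 0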


Information ◽  
2021 ◽  
Vol 12 (5) ◽  
pp. 202
Author(s):  
Louai Alarabi ◽  
Saleh Basalamah ◽  
Abdeltawab Hendawi ◽  
Mohammed Abdalla

The rapid spread of infectious diseases is a major public health problem. Recent developments in fighting these diseases have heightened the need for a contact tracing process. Contact tracing can be considered an ideal method for controlling the transmission of infectious diseases. The outcome of contact tracing is diagnostic testing, treatment or self-isolation for suspected cases, and treatment for infected persons, which ultimately limits the spread of disease. This paper proposes a technique named TraceAll that traces all contacts exposed to an infected patient and produces a list of these contacts as potentially infected patients. Initially, it considers the infected patient as the querying user and starts to fetch the contacts exposed to him or her. Secondly, it obtains all the trajectories of objects that moved near the querying user. Next, it investigates these trajectories, considering social distance and exposure period, to determine whether these objects have become infected. Experimental evaluation of the proposed technique on real data sets illustrates the effectiveness of the solution. Comparative experiments confirm that TraceAll outperforms baseline methods by 40% in the efficiency of answering contact-tracing queries.
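
The abstract does not detail the matching step; a toy sketch of the distance-and-exposure test on time-aligned trajectories (the sampling interval, thresholds, and contiguous-window policy are all assumptions for illustration, not TraceAll's actual design):

from math import hypot

def is_exposed(query, other, max_dist=2.0, min_exposure=900, dt=60):
    # query/other: lists of (x, y) positions sampled every dt seconds at
    # aligned timestamps. Flags a contact when the other object stays
    # within max_dist meters for min_exposure consecutive seconds.
    exposure = 0
    for (x1, y1), (x2, y2) in zip(query, other):
        if hypot(x1 - x2, y1 - y2) <= max_dist:
            exposure += dt
            if exposure >= min_exposure:
                return True
        else:
            exposure = 0
    return False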


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 474
Author(s):  
Abdulhakim A. Al-Babtain ◽  
Ibrahim Elbatal ◽  
Hazem Al-Mofleh ◽  
Ahmed M. Gemeay ◽  
Ahmed Z. Afify ◽  
...  

In this paper, we introduce a new flexible generator of continuous distributions called the transmuted Burr X-G (TBX-G) family, which extends and increases the flexibility of the Burr X generator. The general statistical properties of the TBX-G family are derived. One special sub-model, the TBX-exponential distribution, is studied in detail. We discuss eight approaches to estimating the TBX-exponential parameters, and numerical simulations are conducted to compare the suggested approaches based on partial and overall ranks. Based on our study, the Anderson–Darling estimators are recommended for estimating the TBX-exponential parameters. Using two skewed real data sets from the engineering sciences, we illustrate the importance and flexibility of the TBX-exponential model compared with existing competing distributions.
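
The transmuted construction itself is the standard quadratic rank transmutation map; applied to a Burr X-G baseline CDF H(x), the family's CDF takes the form (a sketch of the standard map, not necessarily the paper's exact parameterization):

    F(x) = (1 + λ) H(x) - λ H(x)^2,   |λ| <= 1,

which reduces to the Burr X-G baseline at λ = 0.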


Stats ◽  
2021 ◽  
Vol 4 (1) ◽  
pp. 28-45
Author(s):  
Vasili B.V. Nagarjuna ◽  
R. Vishnu Vardhan ◽  
Christophe Chesneau

In this paper, a new five-parameter distribution is proposed that combines the functionalities of the Kumaraswamy generalized family of distributions with the features of the power Lomax distribution. It is named the Kumaraswamy generalized power Lomax distribution. In a first approach, we derive its main probability and reliability functions and visualize its modeling behavior by considering different parameter combinations. As a prime quality, the corresponding hazard rate function is very flexible: it possesses decreasing, increasing and inverted (upside-down) bathtub shapes, and decreasing-increasing-decreasing shapes are also observed. Some important characteristics of the Kumaraswamy generalized power Lomax distribution are derived, including moments, entropy measures and order statistics. The second approach is statistical. The maximum likelihood estimates of the parameters are described, and a brief simulation study shows their effectiveness. Two real data sets are used to show how the proposed distribution can be applied concretely; parameter estimates are obtained and fitting comparisons are performed with other well-established Lomax-based distributions. The Kumaraswamy generalized power Lomax distribution turns out to be the best, capturing fine details in the structure of the data considered.
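
For orientation, the Kumaraswamy-G construction applied to a baseline CDF G(x) is standard, and with the power Lomax baseline in its common parameterization the five parameters are a, b, α, β and λ (a sketch assuming the usual forms, not necessarily the paper's notation):

    F(x) = 1 - [1 - G(x)^a]^b,    G(x) = 1 - (1 + x^β / λ)^(-α),   x > 0.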

