A Two-Filter Approach for State Estimation Utilizing Quantized Output Data

Sensors, 2021, Vol. 21 (22), pp. 7675
Author(s):  
Angel L. Cedeño ◽  
Ricardo Albornoz ◽  
Rodrigo Carvajal ◽  
Boris I. Godoy ◽  
Juan C. Agüero

Filtering and smoothing algorithms are key tools for developing decision-making strategies and parameter identification techniques in different areas of research, such as economics, financial data analysis, communications, and control systems. These algorithms are used to estimate the system state from the sequentially available noisy measurements of the system output. In a real-world system, the noisy measurements can suffer a significant loss of information due to, among other causes: (i) the reduced resolution of the cost-effective sensors typically used in practice, or (ii) a digitalization process for storing or transmitting the measurements through a communication channel with a minimum amount of resources. Obtaining suitable state estimates in this context is therefore essential. In this paper, Gaussian sum filtering and smoothing algorithms are developed to deal with noisy measurements that are also subject to quantization. In this approach, the probability mass function of the quantized output given the state is characterized by an integral equation. This integral is approximated using Gauss–Legendre quadrature, yielding a model with a Gaussian mixture structure from which the filtering and smoothing algorithms are developed. The benefits of this proposal, in terms of estimation accuracy and computational cost, are illustrated via numerical simulations.
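As a rough illustration of the quadrature step described above, the sketch below approximates the probability of one quantized output value given the state for a scalar linear-Gaussian measurement y = Cx + v with v ~ N(0, R) and a quantization interval [a_i, b_i). The measurement model, variable names, and node count are illustrative assumptions rather than the paper's exact formulation; under this reading, each quadrature node contributes one Gaussian term, which is consistent with the Gaussian mixture structure mentioned in the abstract.

```python
import numpy as np
from scipy.stats import norm

def quantized_output_pmf(x, C, R, a_i, b_i, n_nodes=10):
    """Approximate P(y_q = q_i | x) = integral over [a_i, b_i] of N(z; C x, R) dz
    with Gauss-Legendre quadrature (illustrative sketch, scalar case)."""
    # Nodes/weights on [-1, 1], then an affine map onto [a_i, b_i].
    t, w = np.polynomial.legendre.leggauss(n_nodes)
    z = 0.5 * (b_i - a_i) * t + 0.5 * (b_i + a_i)
    jacobian = 0.5 * (b_i - a_i)
    density = norm.pdf(z, loc=C * x, scale=np.sqrt(R))
    return jacobian * np.sum(w * density)

# Example: state x = 1.2, output y = x + v with v ~ N(0, 0.5^2),
# quantization cell [1.0, 2.0).
p = quantized_output_pmf(x=1.2, C=1.0, R=0.25, a_i=1.0, b_i=2.0)
print(p)  # should be close to Phi((2 - 1.2)/0.5) - Phi((1 - 1.2)/0.5)
```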

Author(s):  
Zixi Han ◽  
Zixian Jiang ◽  
Sophie Ehrt ◽  
Mian Li

Abstract The design of a gas turbine compressor vane carrier (CVC) must meet mechanical integrity requirements, including low-cycle fatigue (LCF). The number of cycles to LCF failure results from the cyclic mechanical and thermal strains imposed on the component by its operating conditions. Conventional LCF assessment is usually based on the assumption of standard operating cycles, supplemented by the consideration of predefined extreme operations and safety factors to compensate for a potential underestimate of the LCF damage caused by, among other things, non-standard operating cycles. However, real operating cycles can differ significantly from the standard ones considered in conventional methods, and the conventional prediction of LCF life can therefore be very different from real cases because of the included safety margins. This work presents a probabilistic method to estimate the distribution of the LCF life under varying operating conditions using operational fleet data. Finite element analysis (FEA) results indicate that the first ramp-up loading in each cycle and the turning time before hot-restart cycles are the two predominant contributors to LCF damage. A surrogate model of LCF damage is built with respect to these two features to reduce the computational cost of FEA. Miner’s rule is applied to calculate the accumulated LCF damage on the component and then obtain the LCF life. The proposed LCF assessment approach has two distinctive features. First, a new data processing technique inspired by the cumulative sum (CUSUM) control chart is proposed to identify the first ramp-up period of each cycle from noisy operational data. Second, the probability mass function of the LCF life for a CVC is estimated by sequential convolution of the single-cycle damage distribution obtained from operational data. The results show that the mean LCF life at a critical location of the CVC is significantly larger than the value obtained with the deterministic assessment, and that the LCF lives of different gas turbines of the same class also differ considerably. Finally, to avoid the high computational cost of sequential convolution, a quick approximation of the probability mass function of the LCF life is given. With its capability to handle varying operating conditions and noise in the operational data, the enhanced LCF assessment approach proposed in this work provides a probabilistic reference both for reliability analysis in CVC design and for predictive maintenance in after-sales service.
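The sketch below illustrates the sequential-convolution idea mentioned in the abstract: starting from a discretized single-cycle damage distribution, the probability mass function of the LCF life is obtained by repeatedly convolving the per-cycle damage and declaring failure when the accumulated Miner damage first reaches 1. The damage grid, the per-cycle distribution, and the cycle limit are illustrative assumptions, not values from the paper.

```python
import numpy as np

def lcf_life_pmf(damage_pmf, grid_step, max_cycles=2000):
    """P(life = n) via repeated convolution of the per-cycle damage pmf.

    damage_pmf[k] is the probability of a single-cycle Miner damage of
    k * grid_step; failure occurs when accumulated damage first reaches 1."""
    fail_bin = int(np.ceil(1.0 / grid_step))       # first bin with damage >= 1
    surv = np.array([1.0])                         # accumulated-damage pmf, survivors only
    prev_surv_prob = 1.0
    life_pmf = {}
    for n in range(1, max_cycles + 1):
        surv = np.convolve(surv, damage_pmf)       # add one more cycle of damage
        surv_prob = surv[:fail_bin].sum()          # P(accumulated damage < 1 after n cycles)
        life_pmf[n] = prev_surv_prob - surv_prob   # failure occurs exactly at cycle n
        surv = surv[:fail_bin]                     # keep only the surviving mass
        prev_surv_prob = surv_prob
        if surv_prob < 1e-12:
            break
    return life_pmf

# Illustrative per-cycle damage on a grid of 1e-3 damage units.
step = 1e-3
dmg = np.zeros(8)
dmg[[3, 4, 5]] = [0.3, 0.5, 0.2]             # 3e-3, 4e-3 or 5e-3 damage per cycle
life = lcf_life_pmf(dmg, step)
print(sum(n * p for n, p in life.items()))   # mean life, roughly 1 / 3.9e-3 cycles
```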


Author(s):  
Yuki Takashima ◽  
Toru Nakashika ◽  
Tetsuya Takiguchi ◽  
Yasuo Ariki

Abstract Voice conversion (VC) is a technique for converting only the speaker-specific information in the source speech while preserving the associated phonemic information. Non-negative matrix factorization (NMF)-based VC has been widely researched because it achieves a more natural-sounding voice than conventional Gaussian mixture model-based VC. In conventional NMF-VC, models are trained on parallel data, which requires elaborate pre-processing of the speech data to generate the parallel exemplars. NMF-VC also tends to produce a large model, since the dictionary matrix holds many parallel exemplars, leading to a high computational cost. In this study, an innovative dictionary-learning method using non-negative Tucker decomposition (NTD) is proposed. The proposed method applies tensor decomposition to break an input observation into a set of mode matrices and one core tensor. The resulting NTD-based dictionary-learning method estimates the dictionary matrix for NMF-VC without using parallel data. Experimental results show that the proposed method outperforms other methods in both parallel and non-parallel settings.
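As a minimal, hedged illustration of the decomposition step only (not the paper's full NMF-VC pipeline), the sketch below factors a non-negative feature tensor into one core tensor and a set of mode matrices. It assumes the tensorly library's non_negative_tucker routine; the tensor shape and ranks are placeholders.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker

# Placeholder non-negative observation tensor, e.g. stacked spectral features:
# (frequency bins, time frames, utterances).
rng = np.random.default_rng(0)
X = tl.tensor(rng.random((64, 40, 8)))

# Non-negative Tucker decomposition: one core tensor plus one mode matrix per mode.
core, factors = non_negative_tucker(X, rank=[16, 10, 4], n_iter_max=200)

# factors[0] (64 x 16) can play the role of a spectral dictionary, while the
# remaining modes capture temporal and utterance structure.
X_hat = tl.tucker_to_tensor((core, factors))
print(tl.norm(X - X_hat) / tl.norm(X))   # relative reconstruction error
```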


2020, Vol. 176 (2), pp. 183-203
Author(s):  
Santosh Chapaneri ◽  
Deepak Jayaswal

Modeling music mood has wide applications in music categorization, retrieval, and recommendation systems; however, computationally modeling the affective content of music is challenging due to its subjective nature. In this work, a structured regression framework is proposed to model the valence and arousal mood dimensions of music with a single regression model at a linear computational cost. To tackle the subjectivity phenomenon, a confidence-interval-based estimated consensus is computed by modeling the behavior of various annotators (e.g., biased, adversarial) and is shown to perform better than using the average annotation values. For a compact feature representation of music clips, variational Bayesian inference is used to learn a Gaussian mixture model representation of the acoustic features, and chord-related features are used to improve valence estimation by probing the chord progressions between chroma frames. The dimensionality of the features is further reduced using an adaptive version of kernel PCA. Using an efficient implementation of the twin Gaussian process for structured regression, the proposed work achieves a significant improvement in R² for the arousal and valence dimensions relative to state-of-the-art techniques on two benchmark datasets for music mood estimation.
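The sketch below illustrates only the variational Bayesian Gaussian mixture step, using scikit-learn's BayesianGaussianMixture as a stand-in for variational inference over frame-level acoustic features; the feature dimensions, component count, and clip-level pooling are illustrative assumptions, and the consensus modeling, adaptive kernel PCA, and twin Gaussian process stages are omitted.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)

# Placeholder frame-level acoustic features for one clip: 500 frames x 20 dims.
frames = rng.normal(size=(500, 20))

# Variational Bayesian inference for a GMM; superfluous components are shrunk
# away through the Dirichlet-process weight prior.
vbgmm = BayesianGaussianMixture(
    n_components=8,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="diag",
    max_iter=500,
    random_state=0,
).fit(frames)

# One possible compact clip-level representation: concatenate the learned
# mixture weights with the component means (an illustrative choice).
clip_vector = np.concatenate([vbgmm.weights_, vbgmm.means_.ravel()])
print(clip_vector.shape)
```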


2017, Vol. 2017, pp. 1-11
Author(s):  
Mario Muñoz-Organero ◽  
Ramona Ruiz-Blázquez

The automatic detection of road-related information from sensor data collected while driving has many potential applications, such as traffic congestion detection or automatic routable map generation. This paper focuses on the automatic detection of road elements based on GPS data from on-vehicle systems. A new algorithm is developed that uses the total variation distance instead of statistical moments to improve the classification accuracy. The algorithm is validated for detecting traffic lights, roundabouts, and street-crossings in a real scenario, and the obtained accuracy (0.75) improves on the best results of previous approaches based on statistical-moment features (0.71). Each road element to be detected is characterized as a vector of speeds measured as a driver goes through it. We first eliminate the speed samples recorded in congested traffic conditions, which are not comparable with clear-traffic conditions and would contaminate the dataset. Then, we calculate the probability mass function of the speed (in 1 m/s intervals) at each point. The total variation distance is then used to measure the similarity between different points of interest (which may contain the same type of road element or a different one). Finally, a k-NN approach is used to assign a class to each unlabelled element.
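A minimal sketch of the core steps described above: build a speed probability mass function in 1 m/s bins for each point of interest, compare points with the total variation distance, and classify with k-NN. The speed traces, bin range, and value of k below are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def speed_pmf(speeds_mps, max_speed=40):
    """Empirical pmf of speeds in 1 m/s bins for one point of interest."""
    hist, _ = np.histogram(speeds_mps, bins=np.arange(0, max_speed + 2))
    return hist / hist.sum()

def tv_distance(p, q):
    """Total variation distance between two pmfs on the same bins."""
    return 0.5 * np.abs(p - q).sum()

def knn_classify(query_pmf, labelled_pmfs, labels, k=3):
    """Majority label of the k labelled points closest in total variation distance."""
    d = [tv_distance(query_pmf, q) for q in labelled_pmfs]
    nearest = np.argsort(d)[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Illustrative speed traces (m/s) recorded while crossing three known elements.
rng = np.random.default_rng(1)
labelled = [speed_pmf(rng.normal(3, 1.5, 200).clip(0)),    # traffic light
            speed_pmf(rng.normal(8, 2.0, 200).clip(0)),    # roundabout
            speed_pmf(rng.normal(13, 2.5, 200).clip(0))]   # street-crossing
labels = ["traffic_light", "roundabout", "street_crossing"]
query = speed_pmf(rng.normal(3.2, 1.4, 200).clip(0))
print(knn_classify(query, labelled, labels, k=1))          # expected: traffic_light
```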


1996, Vol. 26 (2), pp. 213-224
Author(s):  
Karl-Heinz Waldmann

Abstract Recursions are derived for a class of compound distributions having a claim frequency distribution of the well-known (a,b)-type. The probability mass function on which the recursions are usually based is replaced by the distribution function in order to obtain increasing iterates. A monotone transformation is suggested to avoid an underflow in the initial stages of the iteration. The faster increase of the transformed iterates is diminished by use of a scaling function. Further, an adaptive weighting depending on the initial value and the increase of the iterates is derived; it enables us to manage an arbitrarily large portfolio. Some numerical results are displayed to demonstrate the efficiency of the different methods. The computation of stop-loss premiums using these methods is indicated. Finally, related iteration schemes based on the cumulative distribution function are outlined.
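For context, the sketch below implements the classical pmf-based Panjer recursion for an (a,b)-type claim frequency distribution, i.e. the starting point that the paper's distribution-function iterates, monotone transformation, and scaling modify. The Poisson claim count and the discrete severity are illustrative.

```python
import numpy as np

def panjer_compound_pmf(a, b, p0, severity, s_max):
    """Classical pmf-based Panjer recursion for an (a, b)-type claim count.

    severity[j] = P(single claim = j) for j = 0, 1, ..., with severity[0] = 0.
    Returns f[0..s_max], the pmf of the aggregate claim S."""
    f = np.zeros(s_max + 1)
    f[0] = p0                                   # P(S = 0) when severity has no mass at 0
    for s in range(1, s_max + 1):
        j = np.arange(1, min(s, len(severity) - 1) + 1)
        f[s] = np.sum((a + b * j / s) * severity[j] * f[s - j])
    return f

# Example: Poisson(lam) claim counts (a = 0, b = lam) and claims of size 1..3.
lam = 2.0
severity = np.array([0.0, 0.5, 0.3, 0.2])
f_S = panjer_compound_pmf(a=0.0, b=lam, p0=np.exp(-lam), severity=severity, s_max=50)
print(f_S.sum())                                            # close to 1 (truncated tail)
print((np.arange(51) * f_S).sum(), lam * (0.5 + 2 * 0.3 + 3 * 0.2))  # mean check
```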


1999, Vol. 13 (3), pp. 251-273
Author(s):  
Philip J. Fleming ◽  
Burton Simon

We consider an exponential queueing system with multiple stations, each of which has an infinite number of servers and a dedicated arrival stream of jobs. In addition, there is an arrival stream of jobs that choose a station based on the state of the system. In this paper we describe two heavy traffic approximations for the stationary joint probability mass function of the number of busy servers at each station. One of the approximations involves state-space collapse and is accurate for large traffic loads. The state-space in the second approximation does not collapse. It provides an accurate estimate of the stationary behavior of the system over a wide range of traffic loads.
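A hedged simulation sketch of the system described above, which can be used to estimate the stationary joint probability mass function of busy servers numerically and to compare it against heavy traffic approximations. Since the abstract does not specify the state-dependent routing rule or the rates, the "join the least-busy station" rule and all parameter values below are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def simulate_joint_pmf(lam_dedicated, lam_shared, mu, t_end=20_000.0, seed=0):
    """Estimate the stationary joint pmf of busy servers at infinite-server
    stations fed by dedicated Poisson streams plus one state-routed stream."""
    rng = np.random.default_rng(seed)
    K = len(lam_dedicated)
    n = np.zeros(K, dtype=int)                  # busy servers per station
    weights = defaultdict(float)                # time spent in each joint state
    t = 0.0
    while t < t_end:
        rates = np.concatenate([lam_dedicated,  # dedicated arrivals
                                [lam_shared],   # shared, state-routed arrival
                                n * mu])        # departures (one exp clock per busy server)
        total = rates.sum()
        dt = rng.exponential(1.0 / total)
        weights[tuple(n)] += dt
        t += dt
        e = rng.choice(len(rates), p=rates / total)
        if e < K:
            n[e] += 1                           # dedicated arrival at station e
        elif e == K:
            n[np.argmin(n)] += 1                # shared arrival joins least-busy station
        else:
            n[e - K] -= 1                       # departure (rate is 0 when n[i] == 0)
    return {state: w / t for state, w in weights.items()}

pmf = simulate_joint_pmf(lam_dedicated=np.array([2.0, 3.0]),
                         lam_shared=1.5, mu=np.array([1.0, 1.0]))
mean_busy = [sum(state[i] * w for state, w in pmf.items()) for i in (0, 1)]
print(mean_busy)   # per-station mean number of busy servers
```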


2021
Author(s):  
Jaekwang Shin ◽  
Ankush Bansal ◽  
Randy Cheng ◽  
Alan Taub ◽  
Mihaela Banu

Accurate prediction of the defects occurring in incrementally formed parts has been gaining attention in recent years. This interest stems from the fact that accurate predictions can remove a key barrier to the industrial-scale adoption of incremental forming, which has been held back by the cost and development time of trial-and-error methods. The finite element method has been widely used to predict defects in the formed part, e.g., the bulge. However, the computation time of these models and their mesh-size dependency in predicting forming defects are barriers to adopting them as part of CAD-FEM-CAE platforms. Thus, robust analytical and data-driven algorithms must be developed for the cost-effective design of complex parts. In this paper, a new analytical model is proposed to predict the bulge location and geometry in two-point incremental forming of the aerospace aluminum alloy AA7075-O for a 67° truncated cone. First, the algorithm calculates the region of interest based on the part geometry. A novel shape function and a weighted summation method are then used to calculate the amplitude of the instability produced by material accumulation during forming, which leads to a bulge on the unformed portion of the sample. It was found that the geometric profile of the part influences the shape function, which is constructed to incorporate the effects of the process parameters and boundary conditions. The calculated profiles in each direction are combined into one three-dimensional profile and compared with the experimental results for validation. The proposed model predicts the bulge profile with 95% accuracy relative to experiments, at less than 5% of the computational cost of FEM modeling.
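The sketch below is only a schematic of the "shape function plus weighted summation" idea: a placeholder Gaussian bump plays the role of the shape function over the region of interest, and per-pass weights scale the accumulated depth into a bulge amplitude. None of the functional forms or parameter values are taken from the paper.

```python
import numpy as np

def bulge_profile(roi_points, pass_depths, weights, center, width):
    """Schematic bulge height over a region of interest (ROI), built as a
    weighted summation over forming passes times a placeholder shape function.

    roi_points  : radial positions in the unformed region (mm)
    pass_depths : incremental depth of each forming pass (mm)
    weights     : per-pass weights standing in for process/boundary effects
    center, width : parameters of the placeholder Gaussian-bump shape function"""
    shape_fn = np.exp(-((roi_points - center) / width) ** 2)   # placeholder bump
    amplitude = np.sum(np.asarray(weights) * np.asarray(pass_depths))
    return amplitude * shape_fn

r = np.linspace(0.0, 20.0, 101)                    # unformed region next to the wall (mm)
profile = bulge_profile(r, pass_depths=[0.5] * 30,
                        weights=np.linspace(1.0, 0.3, 30),
                        center=5.0, width=4.0)
print(profile.max())                               # schematic peak bulge height (mm)
```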


Author(s):  
Panpan Zhang

In this paper, several properties of plane-oriented recursive trees (PORTs), a class of trees exhibiting the preferential attachment phenomenon, are uncovered. Specifically, we investigate the degree profile of a PORT by determining the exact probability mass function of the degree of a node with a fixed label. We compute the expectation and the variance of the degree variable via a Pólya urn approach. In addition, we study a topological index, the Zagreb index, of this class of trees. We calculate the exact first two moments of the Zagreb index of PORTs using recurrence methods. Lastly, we determine the limiting degree distribution in PORTs that grow in continuous time, where the embedding is done in a Poissonization framework; we show that it is exponential after proper scaling.
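As an empirical companion to these results, the sketch below grows a PORT with the standard gap-based attachment rule (a new node joins node v with probability proportional to outdeg(v) + 1) and reports the realized degree counts and the first Zagreb index (sum of squared degrees). Tree size and the single realization are illustrative; checking the exact pmf of the degree of a fixed-label node would require many replications.

```python
import numpy as np
from collections import Counter

def grow_port(n, rng):
    """Grow a plane-oriented recursive tree on n nodes: node `new` attaches to
    an existing node v with probability proportional to outdeg(v) + 1."""
    parent, outdeg = [-1], [0]                 # node 0 is the root
    for new in range(1, n):
        gaps = np.array(outdeg) + 1            # insertion gaps around each node
        v = int(rng.choice(new, p=gaps / gaps.sum()))
        parent.append(v)
        outdeg.append(0)
        outdeg[v] += 1
    return parent, outdeg

rng = np.random.default_rng(0)
parent, outdeg = grow_port(2000, rng)

# Total degree: outdegree plus one parent edge for every non-root node.
degree = [d + (1 if i > 0 else 0) for i, d in enumerate(outdeg)]
print(Counter(degree).most_common(5))          # most frequent degrees in this tree
print(sum(d * d for d in degree))              # first Zagreb index of this realization
```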

