Reformulating Task-Related Component Analysis for Reducing its Computational Complexity

2022 ◽
Author(s):  
Kuan-Jung Chiang ◽  
Chi Man Wong ◽  
Feng Wan ◽  
Tzyy-Ping Jung ◽  
Masaki Nakanishi

Numerical simulations with synthetic data were conducted.
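
The abstract above is only a stub, but the kind of reformulation the title refers to can be illustrated. In standard task-related component analysis (TRCA), the inter-trial covariance matrix S is a sum over all trial pairs, which is quadratic in the number of trials; the same matrix can be obtained from the covariance of the trial sum in linear time. The numpy sketch below shows this identity; whether it matches the paper's exact reformulation is an assumption.

```python
import numpy as np

def trca_weights(trials):
    """TRCA spatial filter; trials has shape (n_trials, n_channels, n_samples)."""
    n_trials, n_ch, n_samp = trials.shape
    X = trials - trials.mean(axis=2, keepdims=True)    # center each trial

    # Q: covariance of the concatenated data
    Xcat = X.transpose(1, 0, 2).reshape(n_ch, -1)
    Q = Xcat @ Xcat.T

    # Naive S sums covariances over all trial pairs, O(n_trials^2).
    # The identity  sum_{j != k} Cov(x_j, x_k) = Cov(sum_j x_j) - sum_j Cov(x_j)
    # yields the same matrix in O(n_trials).
    Xsum = X.sum(axis=0)                               # (n_ch, n_samp)
    S = Xsum @ Xsum.T - sum(x @ x.T for x in X)

    # dominant generalized eigenvector of (S, Q)
    vals, vecs = np.linalg.eig(np.linalg.solve(Q, S))
    return np.real(vecs[:, np.argmax(np.real(vals))])
```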


Computation ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 16
Author(s):  
George Tsakalidis ◽  
Kostas Georgoulakos ◽  
Dimitris Paganias ◽  
Kostas Vergidis

Business process optimization (BPO) has become an increasingly attractive subject in the wider area of business process intelligence and is considered the problem of composing feasible business process designs with optimal attribute values, such as execution time and cost. Although many approaches have produced promising results regarding the enhancement of attribute performance, little has been done to reduce the computational complexity arising from the size of the problem. The proposed approach introduces an elaborate preprocessing phase as a component of an established optimization framework (bpoF) that applies evolutionary multi-objective optimization algorithms (EMOAs) to generate a series of diverse optimized business process designs based on specific process requirements. The preprocessing phase follows a systematic rule-based algorithmic procedure for reducing the size of the library of candidate tasks. Experimental results on synthetic data demonstrate a considerable reduction of the library size and a positive influence on the performance of the EMOAs, expressed as the generation of an increasing number of nondominated solutions. An important feature of the proposed phase is that its effects are explicitly measured before the EMOAs are applied; the reduction in library size can thus be directly correlated with the improved performance of the EMOAs in terms of average execution time and nondominated-solution generation. The work presented in this paper intends to pave the way for addressing the abiding optimization challenge posed by the computational complexity of the search space by working on the problem specification at an earlier stage.
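
The abstract does not spell out the pruning rules, so the following Python sketch is purely hypothetical: it shows the general shape of a rule-based preprocessing pass that shrinks a candidate-task library before an EMOA runs, with three illustrative rules (unsatisfiable inputs, a per-task cost cap, and dominated duplicates). The field names are assumptions.

```python
def prune_library(tasks, process_inputs, cost_cap):
    """tasks: list of dicts with 'inputs', 'outputs' and 'cost' fields (hypothetical schema)."""
    producible = set(process_inputs)
    for t in tasks:
        producible |= set(t['outputs'])
    kept = []
    for t in tasks:
        if not set(t['inputs']) <= producible:
            continue              # rule 1: inputs can never be supplied by the process
        if t['cost'] > cost_cap:
            continue              # rule 2: task alone already violates the cost requirement
        kept.append(t)
    # rule 3: among tasks with identical I/O signatures, keep only the cheapest
    best = {}
    for t in kept:
        sig = (frozenset(t['inputs']), frozenset(t['outputs']))
        if sig not in best or t['cost'] < best[sig]['cost']:
            best[sig] = t
    return list(best.values())
```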


2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Van-Khoi Dinh ◽  
Minh-Tuan Le ◽  
Vu-Duc Ngo ◽  
Chi-Hieu Ta

In this paper, a low-complexity linear precoding algorithm based on the principal component analysis technique combined with conventional linear precoders, called the Principal Component Analysis Linear Precoder (PCA-LP), is proposed for massive MIMO systems. The proposed precoder consists of two components: the first minimizes the interference among neighboring users, and the second improves system performance by utilizing the principal component analysis (PCA) technique. Numerical and simulation results show that the proposed precoder has remarkably lower computational complexity than the low-complexity lattice-reduction-aided regularized block diagonalization with zero-forcing precoder (LC-RBD-LR-ZF) and lower computational complexity than the PCA-aided minimum mean square error combination with block diagonalization (PCA-MMSE-BD), while its bit error rate (BER) performance is comparable to that of both counterparts.
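
The exact PCA-LP construction is not reproduced here; the numpy sketch below only illustrates the two-stage idea under stated assumptions: a block-diagonalization-style projection onto the null space of the other users' stacked channels to suppress inter-user interference, followed by PCA (via SVD) of the effective channel to select the strongest transmit directions. It assumes more transmit antennas than total receive antennas, as in massive MIMO.

```python
import numpy as np

def pca_precoder(H_list, n_streams):
    """Hedged sketch of a two-stage PCA-aided precoder.
    H_list[k]: (n_rx_k, n_tx) channel matrix of user k."""
    precoders = []
    for k, Hk in enumerate(H_list):
        # Stage 1: suppress inter-user interference by projecting onto the
        # null space of the other users' stacked channels (BD-style).
        H_others = np.vstack([H for j, H in enumerate(H_list) if j != k])
        _, s, Vh = np.linalg.svd(H_others)
        null_dim = Vh.shape[0] - np.sum(s > 1e-10)
        B = Vh[-null_dim:].conj().T            # (n_tx, null_dim) null-space basis

        # Stage 2: PCA of the effective channel Hk @ B -- keep the principal
        # directions carrying the most signal energy.
        _, _, Vh_eff = np.linalg.svd(Hk @ B)
        P = Vh_eff[:n_streams].conj().T        # (null_dim, n_streams)
        precoders.append(B @ P)
    return precoders
```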


Geophysics ◽  
2016 ◽  
Vol 81 (1) ◽  
pp. V7-V16 ◽  
Author(s):  
Kenji Nose-Filho ◽  
André K. Takahata ◽  
Renato Lopes ◽  
João M. T. Romano

We have addressed blind deconvolution in a multichannel framework. Recently, a robust solution to this problem based on a Bayesian approach, called sparse multichannel blind deconvolution (SMBD), was proposed in the literature with interesting results. However, its computational complexity can be high. We have proposed a fast algorithm based on minimum entropy deconvolution, which is considerably less expensive. We designed the deconvolution filter to minimize a normalized version of the hybrid ℓ1/ℓ2-norm loss function. This is in contrast to SMBD, in which the hybrid ℓ1/ℓ2-norm function is used as a regularization term to directly determine the deconvolved signal. Results with synthetic data showed that the performance of the obtained deconvolution filter is similar to that of a filter obtained in a supervised framework. Similar results were also obtained for both techniques on a real marine data set.
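
As background for the filter-estimation viewpoint, here is a minimal single-channel sketch of the classic Wiggins minimum entropy deconvolution iteration, which maximizes a varimax (kurtosis-like) norm of the output; the paper's normalized hybrid ℓ1/ℓ2 loss and multichannel setting are not reproduced.

```python
import numpy as np

def med_filter(x, n_taps=30, n_iter=50):
    """Wiggins-style MED: find an FIR filter maximizing the varimax norm
    sum(y**4) / sum(y**2)**2 of the filtered trace y."""
    N = len(x)
    # Convolution matrix: column l holds the trace delayed by l samples
    X = np.stack([np.concatenate([np.zeros(l), x[:N - l]])
                  for l in range(n_taps)], axis=1)
    R = X.T @ X                                # Toeplitz autocorrelation matrix
    f = np.zeros(n_taps)
    f[n_taps // 2] = 1.0                       # spike initialization
    for _ in range(n_iter):
        y = X @ f
        b = X.T @ y**3                         # cross-correlation of input with y^3
        f = np.linalg.solve(R, b)              # fixed-point update (scale-free)
        f /= np.linalg.norm(f)
    return f, X @ f                            # filter and deconvolved trace
```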


2013 ◽  
Vol 347-350 ◽  
pp. 2390-2394
Author(s):  
Xiao Fang Liu ◽  
Chun Yang

Nonlinear feature extraction with the standard kernel principal component analysis (KPCA) method requires large amounts of memory and has high computational complexity on large datasets. A greedy kernel principal component analysis (GKPCA) method is applied to reduce the training data and to address the nonlinear feature extraction problem for large training sets in classification. First, a subset that approximates the original training data is selected from the full training data using the greedy technique of the GKPCA method. Then, the feature extraction model is trained on the subset instead of the full training data. Finally, the FCM algorithm classifies the features extracted by the GKPCA, KPCA and PCA methods, respectively. The simulation results indicate that the feature extraction performance of both the GKPCA and KPCA methods outperforms the PCA method. In addition to retaining the performance of the KPCA method, the GKPCA method reduces the computational complexity of classification thanks to the reduced training set.
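
The greedy selection step can be sketched as follows, assuming an RBF kernel and a Nyström-style max-residual rule (pick the point worst approximated, in feature space, by the span of the points selected so far); the paper's exact selection criterion may differ.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def greedy_subset(X, m, gamma=1.0):
    """Greedily pick m points whose feature-space span best approximates
    the whole training set; KPCA is then fit on X[selected] only."""
    selected = [0]                             # arbitrary first pick
    for _ in range(m - 1):
        S = X[selected]
        K_SS = rbf_kernel(S, S, gamma) + 1e-10 * np.eye(len(selected))
        K_xS = rbf_kernel(X, S, gamma)         # (n, |S|)
        # feature-space reconstruction error of every point given S:
        # k(x,x) - k_xS^T K_SS^{-1} k_xS, with k(x,x) = 1 for the RBF kernel
        err = 1.0 - np.einsum('ij,ij->i', K_xS,
                              np.linalg.solve(K_SS, K_xS.T).T)
        err[selected] = -np.inf                # never re-pick a selected point
        selected.append(int(np.argmax(err)))
    return selected
```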


2014 ◽  
Vol 926-930 ◽  
pp. 2964-2967
Author(s):  
Shou Cheng Zhang

One-unit independent component analysis with reference (ICA-R) is an efficient method capable of extracting a desired source signal by using a reference signal. In this paper, a new fast one-unit ICA-R algorithm is derived using a kurtosis contrast function based on the new constrained independent component analysis (cICA) theory. The proposed algorithm has lower computational complexity and extracts the desired source accurately. Experiments with synthetic signals demonstrate the efficacy and accuracy of the proposed algorithm.
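
A minimal sketch of the idea, assuming whitened data and a simple penalty pulling the output toward the reference; the paper's exact cICA update, constraint handling, and step-size rules are not reproduced.

```python
import numpy as np

def ica_r_one_unit(X, ref, mu=1.0, n_iter=200):
    """One-unit ICA with reference: maximize the kurtosis of w^T x while
    biasing the solution toward the reference signal (illustrative variant).
    X: (n_channels, n_samples) whitened data; ref: (n_samples,) reference."""
    n_ch, n_samp = X.shape
    ref = (ref - ref.mean()) / ref.std()
    w = X @ ref / n_samp                       # warm start from the reference
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = w @ X
        grad_kurt = X @ y**3 / n_samp - 3 * w  # FastICA cube (kurtosis) rule
        grad_ref = X @ ref / n_samp            # pulls the output toward ref
        w = grad_kurt + mu * grad_ref
        w /= np.linalg.norm(w)
    return w, w @ X                            # unmixing vector, extracted source
```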


2020 ◽  
Vol 223 (2) ◽  
pp. 934-943
Author(s):  
Alejandro Duran ◽  
Thomas Planès ◽  
Anne Obermann

SUMMARY
Probabilistic sensitivity kernels based on the analytical solutions of the diffusion and radiative transfer equations have been used to locate tiny changes detected in late-arriving coda waves. These analytical kernels accurately describe the sensitivity of coda waves to velocity changes located at a large distance from the sensors in the acoustic diffusive regime. They are also valid for describing the acoustic waveform distortions (decorrelations) induced by isotropically scattering perturbations. However, in elastic media, there is no analytical solution that describes the complex propagation of wave energy, including mode conversions, polarizations, etc. Here, we derive sensitivity kernels using numerical simulations of wave propagation in heterogeneous media in the acoustic and elastic regimes. We decompose the wavefield into P- and S-wave components at the perturbation location in order to construct separate P-to-P, S-to-S, P-to-S and S-to-P scattering sensitivity kernels. This allows us to describe the influence of P- and S-wave scattering perturbations separately. We test our approach using acoustic and elastic numerical simulations in which localized scattering perturbations are introduced. We validate the numerical sensitivity kernels by comparing them with analytical kernel predictions and with measurements of coda decorrelation on the synthetic data.
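
The analytical kernels that the numerical ones are validated against have a simple closed form in the diffusive regime: the sensitivity at a point x for lapse time t is the time convolution of two diffusion propagators, normalized by the direct propagator. A numpy evaluation for 2-D acoustic diffusion (D denotes the diffusivity; all parameters are illustrative):

```python
import numpy as np

def p2d(a, b, t, D):
    """2-D diffusion propagator (intensity Green's function)."""
    r2 = np.sum((np.asarray(a, float) - np.asarray(b, float)) ** 2)
    return np.exp(-r2 / (4 * D * t)) / (4 * np.pi * D * t)

def diffusion_kernel(x, src, rec, t, D, n=2000):
    """Coda-wave sensitivity at x for lapse time t:
    K = int_0^t p(src, x, u) * p(x, rec, t - u) du / p(src, rec, t)."""
    u = np.linspace(t / n, t * (1 - 1.0 / n), n)   # avoid endpoint singularities
    integrand = [p2d(src, x, ui, D) * p2d(x, rec, t - ui, D) for ui in u]
    return np.trapz(integrand, u) / p2d(src, rec, t, D)

# Example: sensitivity midway between a source-receiver pair 2 km apart
print(diffusion_kernel(x=(1000.0, 500.0), src=(0.0, 0.0),
                       rec=(2000.0, 0.0), t=10.0, D=1e5))
```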


2017 ◽  
Vol 2017 ◽  
pp. 1-10
Author(s):  
Feng Wang ◽  
Shuang Wei ◽  
Defu Jiang

A fractionally spaced blind equalizer (BE) based on the constant modulus criterion is exploited to compensate for channel-to-channel mismatch in a digital array radar. We apply recognition techniques to improve the stability and reliability of the BE: both the surveillance of the calibration signal and the convergence of the BE are described with recognition description words. A BE with this cognitive capability is appropriate for equalizing a digital array radar with thousands of channels and hundreds of working frequencies, where reliability is the foremost concern. Numerical simulations show improved performance in anomalous scenarios, at the cost of increased computational complexity.
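
For reference, the constant modulus update at the heart of such an equalizer is compact; the sketch below is symbol-spaced for brevity (the paper's equalizer is fractionally spaced) and omits the recognition layer entirely.

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, R=1.0):
    """Constant modulus algorithm: adapt FIR taps so that |y|^2 approaches R.
    x: complex baseband samples; R: the modulus of the reference constellation."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                       # center-spike initialization
    y = np.zeros(len(x) - n_taps + 1, dtype=complex)
    for n in range(len(y)):
        u = x[n:n + n_taps][::-1]              # regressor, most recent sample first
        y[n] = np.vdot(w, u)                   # equalizer output y = w^H u
        e = np.abs(y[n])**2 - R                # constant-modulus error
        w -= mu * e * np.conj(y[n]) * u        # stochastic-gradient update
    return w, y
```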


2014 ◽  
Vol 1030-1032 ◽  
pp. 1822-1827
Author(s):  
Ning Lv ◽  
Guang Yuan Bai ◽  
Lu Qi Yan ◽  
Yuan Jian Fu

In order to overcome the limitations of principal-component-analysis-based fault diagnosis models for nonlinear, time-varying processes, and to reduce the computational complexity of process monitoring based on nonlinear principal components, we introduce the kernel transformation theory of nonlinear spaces for feature extraction and propose a fault monitoring model based on kernel principal component analysis (KPCA) for constant-value detection. With proper selection of the kernel-function parameter values, the KPCA model can detect constant-value process faults and has lower computational complexity than other nonlinear algorithms. A fault detection experiment on a beer fermentation process shows that this method detects process faults in a timely manner and achieves good real-time performance and accuracy in slowly time-varying batch processes.
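
A hedged sketch of KPCA-based monitoring using scikit-learn: fit on normal operating data, then flag samples whose Hotelling T² in the kernel principal subspace exceeds an empirical control limit. The paper's control statistic, kernel parameters, and threshold rule are not specified in the abstract, so these choices are assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def fit_monitor(X_normal, n_components=5, gamma=0.1, alpha=0.99):
    """Fit KPCA on normal data and estimate a T^2 control limit
    from the empirical alpha-quantile of the training scores."""
    kpca = KernelPCA(n_components=n_components, kernel='rbf', gamma=gamma)
    T = kpca.fit_transform(X_normal)           # kernel principal scores
    var = T.var(axis=0)                        # per-component score variance
    t2_train = ((T**2) / var).sum(axis=1)      # Hotelling T^2 of training data
    limit = np.quantile(t2_train, alpha)       # empirical control limit
    return kpca, var, limit

def is_fault(kpca, var, limit, x_new):
    """Flag a new sample whose T^2 exceeds the control limit."""
    t = kpca.transform(x_new.reshape(1, -1))[0]
    return ((t**2) / var).sum() > limit
```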

