Retraction Note to: An Independent Reconstruction Error Using Randomized Quantization

Author(s): S. Arunadevi, S. Sathya
2010 · Vol 3 (1) · pp. 28-30
Author(s): S. Brandao, P. Figueiredo, P. Goncalves, J. P. Vilas-Boas, R. J. Fernandes

2021 · Vol 13 (2) · pp. 268
Author(s): Xiaochen Lv, Wenhong Wang, Hongfu Liu

Hyperspectral unmixing is an important technique for analyzing remote sensing images; it aims to obtain a collection of endmembers and their corresponding abundances. In recent years, non-negative matrix factorization (NMF) has received extensive attention due to its good adaptability to data with different degrees of mixing. The majority of existing NMF-based unmixing methods are developed by incorporating additional constraints into the standard NMF based on the spectral and spatial information of hyperspectral images. However, they neglect to exploit the imbalanced pixels in the data, which may cause pixels mixed with imbalanced endmembers to be ignored, so that the imbalanced endmembers generally cannot be accurately estimated owing to the statistical property of NMF. To exploit the information of imbalanced samples in hyperspectral data during the unmixing procedure, this paper proposes a cluster-wise weighted NMF (CW-NMF) method for the unmixing of hyperspectral images with imbalanced data. Specifically, based on the result of clustering the hyperspectral image, we construct a weight matrix and introduce it into the standard NMF model. The weight matrix assigns an appropriate weight to the reconstruction error between each original pixel and its reconstruction during the unmixing procedure. In this way, the adverse effect of imbalanced samples on the statistical accuracy of NMF is expected to be reduced by assigning larger weights to pixels containing imbalanced endmembers and smaller weights to pixels mixed from majority endmembers. In addition, we extend CW-NMF by introducing sparsity constraints on the abundances and graph-based regularization, respectively. Experimental results on both synthetic and real hyperspectral data are reported, and the effectiveness of the proposed methods is demonstrated by comparison with several state-of-the-art methods.
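
The core modelling step is a weighted NMF objective in which each pixel's reconstruction error is scaled by a cluster-derived weight. The sketch below illustrates that idea in Python; the inverse-cluster-size weighting and the multiplicative update rules are generic assumptions for a weighted NMF, not the exact CW-NMF formulation from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_weights(V, n_clusters=5):
    """V: (bands, pixels). Weight each pixel by the inverse size of its cluster."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(V.T)
    counts = np.bincount(labels, minlength=n_clusters).astype(float)
    w = 1.0 / counts[labels]            # pixels from small (imbalanced) clusters count more
    return w / w.mean()                 # normalize around 1

def weighted_nmf(V, rank, w, n_iter=200, eps=1e-9):
    """Minimize sum_ij W_ij * (V_ij - (AH)_ij)^2 with per-pixel weights w."""
    bands, pixels = V.shape
    rng = np.random.default_rng(0)
    A = rng.random((bands, rank))       # endmember signatures
    H = rng.random((rank, pixels))      # abundances
    W = np.tile(w, (bands, 1))          # broadcast pixel weights to every band
    for _ in range(n_iter):
        AH = A @ H
        A *= ((W * V) @ H.T) / ((W * AH) @ H.T + eps)
        AH = A @ H
        H *= (A.T @ (W * V)) / (A.T @ (W * AH) + eps)
    return A, H
```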


2021 · Vol 34 (1)
Author(s): Zhe Yang, Dejan Gjorgjevikj, Jianyu Long, Yanyang Zi, Shaohui Zhang, ...

Supervised fault diagnosis typically assumes that all types of machinery failure are known. In practice, however, unknown types of defect, i.e., novelties, may occur, and their detection is a challenging task. In this paper, a fault diagnostic method is developed for both diagnostics and the detection of novelties. To this end, a sparse autoencoder-based multi-head Deep Neural Network (DNN) is presented to jointly learn a shared encoding representation for both unsupervised reconstruction and supervised classification of the monitoring data. The detection of novelties is based on the reconstruction error. Moreover, the computational burden is reduced by directly training the multi-head DNN with the rectified linear unit activation function, instead of performing the pre-training and fine-tuning phases required for classical DNNs. The proposed method is applied to a benchmark bearing case study and to experimental data acquired from a delta 3D printer. The results show that its performance is satisfactory both in the detection of novelties and in fault diagnosis, outperforming other state-of-the-art methods. This research proposes a fault diagnostics method that can not only diagnose known types of defect but also detect unknown ones.
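
The architecture described above pairs a shared encoder with two heads: one reconstructing the input (used for novelty detection) and one classifying the known fault types. The following PyTorch sketch illustrates that joint setup; the layer sizes, the L1 sparsity proxy, and the error-threshold rule for flagging novelties are illustrative assumptions rather than the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadDiagnoser(nn.Module):
    def __init__(self, in_dim, latent_dim, n_classes):
        super().__init__()
        # shared encoder feeding both heads
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim), nn.ReLU())
        # reconstruction head (autoencoder branch)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))
        # classification head over the known fault types
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z), z

def joint_loss(x, y, x_hat, logits, z, alpha=1.0, beta=1e-3):
    recon = F.mse_loss(x_hat, x)          # unsupervised reconstruction term
    clf = F.cross_entropy(logits, y)      # supervised classification term
    sparsity = z.abs().mean()             # simple L1 proxy for the sparsity penalty
    return recon + alpha * clf + beta * sparsity

def flag_novelties(x, model, threshold):
    # a sample whose reconstruction error exceeds a calibrated threshold
    # is treated as an unknown (novel) fault type
    with torch.no_grad():
        x_hat, _, _ = model(x)
        err = ((x - x_hat) ** 2).mean(dim=1)
    return err > threshold
```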


2021 · Vol 11 (1)
Author(s): Guanglei Xu, William S. Oates

Restricted Boltzmann Machines (RBMs) have been proposed for developing neural networks for a variety of unsupervised machine learning applications such as image recognition, drug discovery, and materials design. The Boltzmann probability distribution is used as a model to identify network parameters by optimizing the likelihood of predicting an output given hidden states trained on available data. Training such networks often requires sampling over a large probability space that must be approximated during gradient-based optimization. Quantum annealing has been proposed as a means to search this space more efficiently, and has been experimentally investigated on D-Wave hardware. The D-Wave implementation requires selecting an effective inverse temperature or hyperparameter (β) within the Boltzmann distribution, which can strongly influence optimization. Here, we show how this parameter can be estimated as a hyperparameter applied to D-Wave hardware during neural network training by maximizing the likelihood or minimizing the Shannon entropy. We find that both methods improve RBM training, based on experimental validation on D-Wave hardware for an image recognition problem. Neural network image reconstruction errors are evaluated using Bayesian uncertainty analysis, which shows more than an order of magnitude lower image reconstruction error when using maximum likelihood rather than manually optimizing the hyperparameter. The maximum likelihood method is also shown to outperform minimizing the Shannon entropy for image reconstruction.
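
The key estimation step is fitting the effective inverse temperature β so that a Boltzmann distribution at that temperature best explains the states returned by the sampler. The sketch below shows a maximum-likelihood fit of β for a toy Ising-style energy with an exactly enumerable state space; the energy function and the use of scipy's scalar optimizer are illustrative assumptions, and no D-Wave API is involved.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ising_energy(states, J, h):
    # E(s) = -0.5 * s^T J s - h^T s, with J symmetric and zero on the diagonal
    return -0.5 * np.einsum('bi,ij,bj->b', states, J, states) - states @ h

def avg_neg_log_likelihood(beta, sample_energies, all_energies):
    # -(1/N) sum_s log p_beta(s) = beta * mean(E(s)) + log Z(beta)
    log_z = np.logaddexp.reduce(-beta * all_energies)
    return beta * sample_energies.mean() + log_z

def fit_beta(samples, J, h):
    """samples: (n_samples, n_spins) array of +/-1 states from the sampler."""
    n = J.shape[0]
    # enumerate all 2^n configurations; only feasible for small toy problems
    all_states = np.array([[1 if (k >> i) & 1 else -1 for i in range(n)]
                           for k in range(2 ** n)])
    all_energies = ising_energy(all_states, J, h)
    sample_energies = ising_energy(samples, J, h)
    result = minimize_scalar(avg_neg_log_likelihood, bounds=(1e-3, 10.0),
                             args=(sample_energies, all_energies),
                             method='bounded')
    return result.x
```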


Author(s): Zhi-Yong Liu, Hong Qiao, Lei Xu

By minimizing the mean square reconstruction error, multisets mixture learning (MML) provides a general approach to object detection in images. Because the object template is represented by a set of contour points, calculating each sample's reconstruction error requires MML to inefficiently enumerate the distances between the sample and all of the contour points. In this paper, we develop the line segment approximation (LSA) algorithm to calculate the reconstruction error, which is shown theoretically and experimentally to be more efficient than the enumeration method. It is also experimentally illustrated that the MML-based algorithm has better noise resistance than its generalized Hough transform (GHT)-based counterpart.
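
The speed-up comes from measuring the distance from a sample to a small number of line segments approximating the contour instead of to every contour point. The sketch below shows that geometric idea in Python; it is a generic point-to-segment comparison, not the paper's LSA algorithm.

```python
import numpy as np

def point_to_segment(p, a, b):
    """Euclidean distance from point p to the segment with endpoints a and b."""
    ab, ap = b - a, p - a
    t = np.clip(ap @ ab / (ab @ ab + 1e-12), 0.0, 1.0)  # clamp projection to the segment
    return np.linalg.norm(p - (a + t * ab))

def distance_via_segments(p, vertices):
    """vertices: (m, 2) polyline approximating the template contour, with m small."""
    return min(point_to_segment(p, vertices[i], vertices[i + 1])
               for i in range(len(vertices) - 1))

def distance_via_enumeration(p, contour_points):
    """Brute-force minimum distance over all densely sampled contour points."""
    return np.linalg.norm(contour_points - p, axis=1).min()
```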


Author(s): Shuhei Tarashima, Jingjing Pan, Go Irie, Takayuki Kurozumi, Tetsuya Kinebuchi

2021 · Vol 15 (3) · pp. 1-33
Author(s): Jingjing Wang, Wenjun Jiang, Kenli Li, Keqin Li

CANDECOMP/PARAFAC (CP) decomposition is widely used in various online social network (OSN) applications. However, it is inefficient when dealing with massive and incremental data. Incremental CP decomposition (ICP) methods have been proposed to improve efficiency and to process evolving data by updating the decomposition results according to newly added data. The ICP methods are efficient but inaccurate because of serious error accumulation caused by the approximation in the incremental updating. To promote the wide use of ICP, we strive to reduce its cumulative errors while keeping its high efficiency. We first differentiate all possible errors in ICP into two types: the cumulative reconstruction error and the prediction error. Next, we formulate two optimization problems for reducing these two errors. Then, we propose several restarting strategies to address the two problems. Finally, we test their effectiveness in three typical dynamic OSN applications. To the best of our knowledge, this is the first work on reducing the cumulative errors of ICP methods in dynamic OSNs.
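
A restarting strategy of this kind boils down to monitoring the reconstruction error after each incremental update and falling back to a full batch decomposition once the accumulated error grows too large. The sketch below, using the tensorly library, illustrates one such policy; the `icp_update` callback and the tolerance rule are placeholders standing in for whichever incremental update and restart criterion a concrete ICP method uses, not the paper's algorithms.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

def rel_error(tensor, cp):
    """Relative Frobenius reconstruction error of a CP model."""
    rec = tl.cp_to_tensor(cp)
    return tl.norm(tensor - rec) / tl.norm(tensor)

def icp_with_restarts(chunks, rank, icp_update, tol=0.05):
    """chunks: list of tensor slices arriving over time (appended along mode 0).
    icp_update(cp, new_slice) is a placeholder for the incremental update rule."""
    tensor = chunks[0]
    cp = parafac(tensor, rank=rank)                  # initial full decomposition
    base_err = rel_error(tensor, cp)
    for new_slice in chunks[1:]:
        tensor = np.concatenate([tensor, new_slice], axis=0)
        cp = icp_update(cp, new_slice)               # cheap incremental update
        err = rel_error(tensor, cp)
        if err > base_err + tol:                     # cumulative error has drifted too far
            cp = parafac(tensor, rank=rank)          # restart: full re-decomposition
            base_err = rel_error(tensor, cp)
    return cp
```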

