Decentralized Dimensionality Reduction for Distributed Tensor Data Across Sensor Networks

2016 ◽ Vol 27 (11) ◽ pp. 2174-2186
Author(s): Junli Liang ◽ Guoyang Yu ◽ Badong Chen ◽ Minghua Zhao
2021 ◽ pp. 1-18
Author(s): Ting Gao ◽ Zhengming Ma ◽ Wenxu Gao ◽ Shuyu Liu

There are three contributions in this paper. (1) A tensor version of LLE (Locally Linear Embedding), one of the best-known manifold learning algorithms, is derived and presented. Since LLE was first proposed, improvements to it have appeared steadily, but all of them apply only to vector data, not tensor data. The proposed tensor LLE can therefore also serve as a bridge for transferring these improvements from vector data to tensor data. (2) A framework for tensor dimensionality reduction based on the tensor mode product is proposed, in which the mode matrices can be determined according to specific criteria. (3) A novel dimensionality reduction algorithm for tensor data based on LLE and the mode product (LLEMP-TDR) is proposed, in which LLE serves as the criterion for determining the mode matrices. Benefiting from the locally acting LLE and the globally acting mode product, the proposed LLEMP-TDR preserves both local and global features of high-dimensional tensor data during dimensionality reduction. Experimental results on data clustering and classification tasks demonstrate that the method outperforms five related algorithms recently published in top academic journals.
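The mode-product framework of contribution (2) can be sketched as below. This is a generic illustration only: the mode matrices here are random projections, whereas in LLEMP-TDR they would be determined by the LLE criterion; the function names and shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply a tensor by a matrix along the given mode:
    unfold along `mode`, apply the matrix, fold back."""
    t = np.moveaxis(tensor, mode, 0)          # bring target mode to the front
    unfolded = t.reshape(t.shape[0], -1)      # mode-n unfolding
    result = matrix @ unfolded                # (r, I_n) @ (I_n, prod of rest)
    new_shape = (matrix.shape[0],) + t.shape[1:]
    return np.moveaxis(result.reshape(new_shape), 0, mode)

def reduce_tensor(tensor, mode_matrices):
    """Project a tensor through one mode matrix per mode (Tucker-style)."""
    out = tensor
    for mode, U in enumerate(mode_matrices):
        out = mode_n_product(out, U, mode)
    return out

# Reduce a 10x12x14 tensor to 3x4x5 with (illustrative) random mode matrices.
rng = np.random.default_rng(0)
X = rng.standard_normal((10, 12, 14))
Us = [rng.standard_normal((3, 10)),
      rng.standard_normal((4, 12)),
      rng.standard_normal((5, 14))]
Y = reduce_tensor(X, Us)
print(Y.shape)  # (3, 4, 5)
```

The framework's flexibility lies in how the mode matrices `Us` are chosen; any criterion (variance preservation, LLE reconstruction error, etc.) can be plugged in without changing the projection machinery.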


Author(s): Lambodar Jena ◽ Ramakrushna Swain ◽ N.K. Kamila

This paper proposes a layered modular architecture to adaptively perform data mining tasks in large sensor networks. The architecture consists of a lower layer, which performs data aggregation in a modular fashion, and an upper layer, which employs an adaptive local learning technique to extract a prediction model from the aggregated information. The rationale of the approach is that a modular aggregation of sensor data serves two purposes jointly: first, organizing the sensors into clusters, which reduces the communication effort; second, reducing the dimensionality of the data mining task, which improves the accuracy of the sensing task. We show that some of the algorithms developed within the artificial neural networks tradition can be easily adapted to wireless sensor network platforms and meet several of the constraints on data mining in sensor networks: limited communication bandwidth, limited computing resources, limited power supply, and the need for fault tolerance. Analysis of the dimensionality reduction obtained from the outputs of the neural network clustering algorithms shows that the communication costs of the proposed approach are significantly smaller, an important consideration in sensor networks due to the limited power supply. We present two possible implementations of the ART and FuzzyART neural network algorithms, which are unsupervised learning methods for categorizing sensory inputs. They are tested on data obtained from a set of several nodes, each equipped with several sensors.
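A minimal Fuzzy ART categorizer can be sketched as follows. This is a generic textbook-style version (complement coding, choice function, vigilance test), not the paper's sensor-network implementation; the parameter values and the example readings are illustrative assumptions.

```python
import numpy as np

class FuzzyART:
    """Minimal Fuzzy ART sketch: unsupervised categorization of
    sensor readings assumed to be normalized into [0, 1]."""

    def __init__(self, vigilance=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = vigilance, alpha, beta
        self.weights = []                     # one weight vector per category

    def _complement_code(self, x):
        # [x, 1 - x]: preserves amplitude info, keeps |I| constant
        return np.concatenate([x, 1.0 - x])

    def present(self, x):
        """Return the category index for input x, creating one if needed."""
        i = self._complement_code(np.asarray(x, dtype=float))
        # Rank existing categories by the choice function T_j.
        order = sorted(range(len(self.weights)),
                       key=lambda j: -np.minimum(i, self.weights[j]).sum()
                                      / (self.alpha + self.weights[j].sum()))
        for j in order:
            match = np.minimum(i, self.weights[j]).sum() / i.sum()
            if match >= self.rho:             # vigilance test passed: resonance
                self.weights[j] = (self.beta * np.minimum(i, self.weights[j])
                                   + (1 - self.beta) * self.weights[j])
                return j
        self.weights.append(i)                # no resonance: new category
        return len(self.weights) - 1

# Two similar low readings fall into one category; the outlier opens a new one.
art = FuzzyART(vigilance=0.8)
readings = [[0.1, 0.2], [0.12, 0.21], [0.9, 0.8]]
labels = [art.present(r) for r in readings]
print(labels)  # [0, 0, 1]
```

In a sensor-network setting, running such a categorizer locally at cluster heads means only category indices (a few bits) need to be transmitted upstream, which is the source of the communication savings the paper analyzes.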


2020 ◽ Vol 16 (4) ◽ pp. 155014772091640
Author(s): Chenquan Gan ◽ Junwei Mao ◽ Zufan Zhang ◽ Qingyi Zhu

Tensor compression algorithms play an important role in the processing of multidimensional signals. In previous work, tensor data structures are usually destroyed by vectorization operations, resulting in information loss and new noise. To this end, this article proposes a tensor compression algorithm using Tucker decomposition and dictionary dimensionality reduction, which mainly comprises three parts: tensor dictionary representation, dictionary preprocessing, and dictionary update. Specifically, sparse representation and Tucker decomposition are applied to the tensor, yielding the dictionary, the sparse coefficients, and the core tensor. The sparse representation can then be obtained from the relationship between the sparse coefficients and the core tensor. In addition, the dimensionality of the input tensor is reduced by the concentrated dictionary learning. Finally, experiments show that, compared with other algorithms, the proposed algorithm has clear advantages in preserving the original data information and in denoising ability.
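The Tucker decomposition underlying this approach can be illustrated with a truncated higher-order SVD (HOSVD). This is one standard way to compute a Tucker factorization, sketched here as background; it is not the article's full dictionary-based algorithm.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: the chosen mode becomes the rows."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mode_product(tensor, matrix, mode):
    """Tensor-times-matrix along `mode` (unfold, multiply, fold back)."""
    t = matrix @ unfold(tensor, mode)
    shape = (matrix.shape[0],) + tuple(np.delete(tensor.shape, mode))
    return np.moveaxis(t.reshape(shape), 0, mode)

def hosvd(tensor, ranks):
    """Truncated HOSVD: returns the core tensor and one factor per mode."""
    factors = []
    for mode, r in enumerate(ranks):
        # Leading left singular vectors of the mode-n unfolding.
        U, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = tensor
    for mode, U in enumerate(factors):
        core = mode_product(core, U.T, mode)  # project: core x_n U^T
    return core, factors

# With full ranks the decomposition is exact up to rounding error.
rng = np.random.default_rng(1)
X = rng.standard_normal((6, 7, 8))
core, factors = hosvd(X, ranks=(6, 7, 8))
recon = core
for mode, U in enumerate(factors):
    recon = mode_product(recon, U, mode)      # reconstruct: core x_n U
print(np.allclose(recon, X))  # True
```

Truncating the ranks below the tensor's dimensions gives the compressed representation (core plus small factor matrices); the article's contribution is to combine this with a learned dictionary so the sparse coefficients, rather than the raw core, carry the compressed signal.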

