Hyperspectral Image Processing in Internet of Things model using Clustering Algorithm

2021 ◽  
Vol 3 (2) ◽  
pp. 163-175
Author(s):  
Bindhu V ◽  
Ranganathan G

With the advent of technology, several domains have begun to rely on the Internet of Things (IoT). The hyperspectral sensors present in earth observation systems send hyperspectral images (HSIs) to the cloud for further processing. Artificial intelligence (AI) models are used to analyse data in edge servers, resulting in a faster response time and reduced cost. Hyperspectral images and other high-dimensional image data may be analysed by using a core AI model called subspace clustering. The existing subspace clustering algorithms are easily affected by noise since they are constructed based on a single model, and the connectivity and sparsity of the representation coefficient matrix are hardly ever balanced. In this paper, connectivity and sparsity factors are considered while proposing a subspace clustering algorithm with a post-process strategy. A non-dominated sorting algorithm is used for the selection of close neighbours, defined as neighbours with high coefficients and common neighbours. Further, pruning of useless, incorrect or reserved connections is performed based on the coefficients between the close and sample neighbours. Lastly, inter- and intra-subspace connections are preserved by the post-process strategy. In the field of IoT and image recognition, the conventional techniques are compared with the proposed post-processing strategy to verify its effectiveness and universality. As observed in the experimental results, the proposed strategy improves clustering accuracy in the IoT environment when processing noisy data.
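The abstract does not give the full algorithm, so nothing below reproduces the proposed post-process strategy. As a rough Python sketch of the underlying subspace clustering idea only, the snippet builds a self-expressive sparse coefficient matrix with a lasso solver and clusters its symmetrised affinity spectrally; the regularisation strength and the synthetic data are illustrative assumptions.

```python
# A minimal sparse subspace clustering (SSC) sketch -- illustrative only,
# not the post-processing strategy proposed in the paper.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def sparse_subspace_clustering(X, n_clusters, alpha=0.01):
    """X: (n_samples, n_features), e.g. flattened HSI pixels."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        # Express sample i as a sparse combination of all other samples.
        mask = np.ones(n, dtype=bool)
        mask[i] = False
        lasso = Lasso(alpha=alpha, max_iter=5000)
        lasso.fit(X[mask].T, X[i])
        C[i, mask] = lasso.coef_
    # Symmetrise coefficients into an affinity (connectivity) matrix.
    W = np.abs(C) + np.abs(C).T
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity='precomputed',
                                random_state=0).fit_predict(W)
    return labels, C

# Synthetic data standing in for HSI pixels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, size=(30, 50)) for m in (0.0, 3.0)])
labels, C = sparse_subspace_clustering(X, n_clusters=2)
```

The symmetrisation step is where the sparsity and connectivity of the coefficient matrix interact, which is the trade-off the paper's post-processing strategy is said to target.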

Author(s):  
Hind Bangui ◽  
Mouzhi Ge ◽  
Barbora Buhnova

Due to the massive increase in data across different Internet of Things (IoT) domains such as healthcare IoT and Smart City IoT, Big Data technologies have emerged as critical analytics tools for analyzing IoT data. Among these technologies, data clustering is one of the essential approaches to processing IoT data. However, how to select a suitable clustering algorithm for IoT data is still unclear. Furthermore, since Big Data technologies are still at an early stage in different IoT domains, it is valuable to propose and structure the research challenges between Big Data and IoT. Therefore, this article starts by reviewing and comparing the data clustering algorithms that can be applied to IoT datasets, and then extends the discussion to a broader IoT context such as IoT dynamics and IoT mobile networks. Finally, this article identifies a set of research challenges that form a research roadmap for Big Data research in IoT domains. The proposed research roadmap aims at bridging the research gaps between Big Data and various IoT contexts.


Author(s):  
Billy Peralta ◽  
◽  
Luis Alberto Caro

Generic object recognition algorithms usually require complex classification models because of intrinsic difficulties arising from problems such as changes in pose, lighting conditions, or partial occlusions. Decision trees present an inexpensive alternative for classification tasks and offer the advantage of being simple to understand. On the other hand, a common scheme for object recognition is given by the appearances of visual words, also known as the bag-of-words method. Although multiple co-occurrences of visual words are more informative regarding visual classes, a comprehensive evaluation of such combinations is unfeasible because it would result in a combinatorial explosion. In this paper, we propose to obtain the multiple co-occurrences of visual words using a variant of the CLIQUE subspace-clustering algorithm for improving the object recognition performance of simple decision trees. Experiments on standard object datasets show that our method improves the accuracy of generic object classification in comparison to traditional decision tree techniques, and is similar, in terms of accuracy, to ensemble techniques. In the future, we plan to evaluate other variants of decision trees and apply other subspace-clustering algorithms.
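The CLIQUE variant used by the authors is not spelled out in the abstract and is not reproduced here. As a hedged sketch of how visual-word co-occurrences can be turned into features for a simple decision tree, assuming local descriptors have already been extracted, one might proceed as follows (all sizes and the synthetic descriptors are placeholders):

```python
# Sketch: bag-of-visual-words plus pairwise co-occurrence features feeding
# a decision tree. The authors' CLIQUE-based selection of word combinations
# is not reproduced; descriptors here are synthetic stand-ins.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_images, n_desc, desc_dim, vocab_size = 40, 60, 16, 8

# Synthetic local descriptors per image and binary image labels.
descriptors = rng.normal(size=(n_images, n_desc, desc_dim))
labels = rng.integers(0, 2, size=n_images)

# 1. Build the visual vocabulary by clustering all descriptors.
vocab = KMeans(n_clusters=vocab_size, n_init=10, random_state=0)
vocab.fit(descriptors.reshape(-1, desc_dim))

features = []
for img in descriptors:
    words = vocab.predict(img)                       # visual word per descriptor
    hist = np.bincount(words, minlength=vocab_size)  # bag-of-words histogram
    present = (hist > 0).astype(float)
    # 2. Pairwise co-occurrence of visual words within the image.
    cooc = np.outer(present, present)[np.triu_indices(vocab_size, k=1)]
    features.append(np.concatenate([hist, cooc]))

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
clf.fit(np.array(features), labels)
```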


Author(s):  
Elmustafa Sayed Ali Ahmed ◽  
Zahraa Tagelsir Mohammed ◽  
Mona Bakri Hassan ◽  
Rashid A. Saeed

The Internet of Vehicles (IoV) has recently become an emerging and promising field of research due to the increasing number of vehicles each day. It is the part of the Internet of Things (IoT) that deals with vehicle communications. As vehicular nodes are almost always in motion, they cause frequent changes in the network topology. These changes raise issues in IoV such as scalability, dynamic topology changes, and finding the shortest path for routing. In this chapter, the authors discuss different optimization algorithms (i.e., clustering algorithms, ant colony optimization, the best interface selection [BIS] algorithm, the mobility adaptive density connected clustering algorithm, meta-heuristic algorithms, and quality of service [QoS]-based optimization). These algorithms play an important intelligent role in optimizing the operation of IoV networks and promise to enable new intelligent IoV applications.
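The chapter's algorithms are only named in the abstract, so none of them is reproduced here. As a minimal sketch of the density-connected clustering idea mentioned above, applied to vehicular nodes described by position and velocity, one could do something like the following (features, scales, and DBSCAN parameters are assumptions):

```python
# Sketch: density-connected clustering of vehicular nodes using position
# and velocity. Illustrative only -- not the mobility adaptive algorithm
# described in the chapter; features and eps are assumed values.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Each row: [x position (m), y position (m), speed (m/s), heading (rad)]
vehicles = np.vstack([
    rng.normal([100, 200, 15, 0.1], [20, 20, 2, 0.05], size=(30, 4)),
    rng.normal([800, 900, 25, 1.5], [20, 20, 2, 0.05], size=(30, 4)),
])

# Scale features so position and velocity contribute comparably, then form
# clusters of mutually reachable (density-connected) vehicles.
scaled = StandardScaler().fit_transform(vehicles)
clusters = DBSCAN(eps=0.7, min_samples=5).fit_predict(scaled)
print("cluster labels:", np.unique(clusters))
```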


2020 ◽  
Vol 12 (15) ◽  
pp. 2421
Author(s):  
Kasra Rafiezadeh Shahi ◽  
Mahdi Khodadadzadeh ◽  
Laura Tusa ◽  
Pedram Ghamisi ◽  
Raimon Tolosana-Delgado ◽  
...  

Hyperspectral imaging techniques are becoming one of the most important tools for remotely acquiring fine spectral information on different objects. However, hyperspectral images (HSIs) require dedicated processing for most applications. Therefore, several machine learning techniques have been proposed in recent decades. Among them, unsupervised learning techniques have become popular as they do not need any prior knowledge. Specifically, sparse subspace-based clustering algorithms have drawn special attention for clustering HSIs into meaningful groups, since such algorithms are able to handle high-dimensional and highly mixed data, as is the case in real-world applications. Nonetheless, sparse subspace-based clustering algorithms usually demand high computational power and can be time-consuming. In addition, the number of clusters usually has to be predefined. In this paper, we propose a new hierarchical sparse subspace-based clustering algorithm (HESSC), which handles the aforementioned problems in a robust and fast manner and estimates the number of clusters automatically. In the experiments, HESSC is applied to three real drill-core sample HSI datasets and one well-known rural benchmark (i.e., Trento). In order to evaluate the performance of HESSC, the proposed algorithm is quantitatively and qualitatively compared with state-of-the-art sparse subspace-based algorithms. In addition, to provide a comparison with conventional clustering algorithms, HESSC’s performance is compared with that of K-means and FCM. The obtained clustering results demonstrate that HESSC performs well when clustering HSIs compared with the other applied clustering algorithms.
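HESSC itself is not reproduced here. The sketch below only illustrates the general hierarchical idea of estimating the number of clusters automatically by splitting a group top-down until a split stops improving a quality score; the use of K-means for the split and the silhouette threshold of 0.2 are assumptions, not part of the paper.

```python
# Sketch of top-down hierarchical clustering that stops splitting when a
# split no longer improves the silhouette score. Illustrates the automatic
# estimation of the cluster count; it is not the HESSC algorithm.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def hierarchical_split(X, indices, labels, next_label, min_size=20):
    if len(indices) < 2 * min_size:
        return next_label
    sub = X[indices]
    split = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(sub)
    if silhouette_score(sub, split) < 0.2:
        return next_label  # splitting does not help; keep the current cluster
    left, right = indices[split == 0], indices[split == 1]
    labels[right] = next_label  # the left half keeps the parent's label
    next_label = hierarchical_split(X, left, labels, next_label + 1, min_size)
    return hierarchical_split(X, right, labels, next_label, min_size)

rng = np.random.default_rng(0)
pixels = np.vstack([rng.normal(m, 0.3, size=(100, 50)) for m in (0.0, 2.0, 4.0)])
labels = np.zeros(len(pixels), dtype=int)
hierarchical_split(pixels, np.arange(len(pixels)), labels, next_label=1)
print("estimated number of clusters:", len(np.unique(labels)))
```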


2013 ◽  
Vol 2013 ◽  
pp. 1-12 ◽  
Author(s):  
Singh Vijendra ◽  
Sahoo Laxman

Clustering high-dimensional data has been a major challenge due to the inherent sparsity of the points. Most existing clustering algorithms become substantially inefficient if the required similarity measure is computed between data points in the full-dimensional space. In this paper, we present a robust multi-objective subspace clustering (MOSCL) algorithm for the challenging problem of high-dimensional clustering. The first phase of MOSCL performs subspace relevance analysis by detecting dense and sparse regions and their locations in the data set. After detecting dense regions, it eliminates outliers. MOSCL then discovers subspaces in dense regions of the data set and produces subspace clusters. In thorough experiments on synthetic and real-world data sets, we demonstrate that MOSCL is superior to the PROCLUS clustering algorithm. Additionally, we investigate the effects of the first phase of detecting dense regions on the results of subspace clustering. Our results indicate that removing outliers improves the accuracy of subspace clustering. The clustering results are validated by the clustering error (CE) distance on various data sets. MOSCL can discover clusters in all subspaces with high quality, and its efficiency outperforms that of PROCLUS.
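MOSCL's first phase is described only at a high level in the abstract. The sketch below shows one hedged way such a phase could look: flagging dense one-dimensional regions with histograms and dropping points that fall mostly in sparse regions before subspace clustering. The bin count and density threshold are assumed values, not MOSCL's.

```python
# Sketch: per-dimension dense-region detection followed by outlier removal.
# Illustrates the kind of first-phase analysis the abstract describes; the
# bin count and density threshold are assumptions.
import numpy as np

def dense_region_mask(X, n_bins=10, density_factor=0.5):
    """Keep points that lie in a dense bin in at least half of the
    dimensions; the rest are treated as outliers."""
    n, d = X.shape
    in_dense = np.zeros((n, d), dtype=bool)
    expected = n / n_bins                      # density under uniform spread
    for j in range(d):
        counts, edges = np.histogram(X[:, j], bins=n_bins)
        bins = np.clip(np.digitize(X[:, j], edges[1:-1]), 0, n_bins - 1)
        in_dense[:, j] = counts[bins] >= density_factor * expected
    return in_dense.mean(axis=1) >= 0.5

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, size=(200, 6)),
                  rng.uniform(-10, 10, size=(20, 6))])   # scattered outliers
keep = dense_region_mask(data)
print("kept", keep.sum(), "of", len(data), "points for subspace clustering")
```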


2020 ◽  
Vol 10 (2) ◽  
pp. 117
Author(s):  
Chandrahas Reddy Addanki ◽  
Saraschandrika A ◽  
Viswanadha Reddy A

The data taken from hyperspectral images are discrete and hard to classify because they are arranged in contiguous spectral bands. We can easily detect and classify the data from spectral images if the number of attributes in the images is small, but it is very difficult to segregate the data if the number of classes is large. To make the segregation easier, we implement a procedure that utilizes clustering algorithms. This paper comprises two sections: first, unsupervised learning is performed using different types of clustering algorithms; second, the efficiency of the resulting clusterings is compared to determine which clustering method is best suited to hyperspectral imaging data. For this purpose, the DBSCAN, MiniBatch K-Means, and K-Means clustering algorithms were used. By comparing these techniques, I found that K-Means is the better choice for hyperspectral imaging data. To perform these calculations, the MATLAB data set from the Computational Intelligence Group was used.
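The paper's dataset and parameter settings are not given in the abstract, so the snippet below only sketches the comparison itself: the three algorithms are run on a pixels-by-bands matrix and scored against reference labels with the adjusted Rand index. The synthetic data and all parameters are placeholders.

```python
# Sketch comparing DBSCAN, MiniBatch K-Means and K-Means on a
# pixels-by-bands matrix. Data are synthetic stand-ins; the paper's
# dataset and parameter choices are not reproduced.
import numpy as np
from sklearn.cluster import DBSCAN, KMeans, MiniBatchKMeans
from sklearn.metrics import adjusted_rand_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_classes, bands = 4, 30
true_labels = np.repeat(np.arange(n_classes), 200)
pixels = np.vstack([rng.normal(c, 0.5, size=(200, bands)) for c in range(n_classes)])
X = StandardScaler().fit_transform(pixels)

methods = {
    "KMeans": KMeans(n_clusters=n_classes, n_init=10, random_state=0),
    "MiniBatchKMeans": MiniBatchKMeans(n_clusters=n_classes, random_state=0),
    "DBSCAN": DBSCAN(eps=4.0, min_samples=10),
}
for name, model in methods.items():
    pred = model.fit_predict(X)
    print(f"{name}: ARI = {adjusted_rand_score(true_labels, pred):.3f}")
```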


Author(s):  
Amitava Datta ◽  
Amardeep Kaur ◽  
Tobias Lauer ◽  
Sami Chabbouh

Abstract Finding clusters in high-dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of the dimensions of the data. But the exponential increase in the number of subspaces with the dimensionality of the data renders most of these algorithms inefficient as well as ineffective. Moreover, these algorithms have ingrained data dependency in the clustering process, which means that parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm which is scalable with the number of dimensions and contains independent processing steps which can be exploited through parallelism. In this paper, we aim to leverage the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation shows linear speedup. Moreover, we develop an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
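The SUBSCALE internals are not reproduced here. The sketch below only shows the coarse-grained pattern the paper exploits: per-dimension work that is independent can be farmed out to a process pool. The per-dimension function is a stand-in, not SUBSCALE's dense-unit computation.

```python
# Sketch of the coarse-grained parallelism pattern: the 1-D work for each
# dimension is independent, so it can be distributed over CPU cores.
import numpy as np
from multiprocessing import Pool

def dense_points_in_dimension(args):
    """Return indices of points lying in dense 1-D bins of one dimension."""
    column, n_bins, min_count = args
    counts, edges = np.histogram(column, bins=n_bins)
    bins = np.clip(np.digitize(column, edges[1:-1]), 0, n_bins - 1)
    return np.where(counts[bins] >= min_count)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=(10_000, 32))            # points x dimensions
    tasks = [(data[:, j], 20, 50) for j in range(data.shape[1])]
    with Pool(processes=4) as pool:
        per_dim_dense = pool.map(dense_points_in_dimension, tasks)
    print("dense point sets computed for", len(per_dim_dense), "dimensions")
```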


2021 ◽  
Author(s):  
Parul Agarwal ◽  
Shikha Mehta ◽  
Ajith Abraham

Abstract Subspace clustering is one of the efficient techniques for determining clusters in different subsets of dimensions. Ideally, these techniques should find all possible non-redundant clusters in which a data point participates. Unfortunately, existing hard subspace clustering algorithms fail to satisfy this property. Additionally, with an increase in the dimensions of the data, classical subspace algorithms become inefficient. This work presents a new density-based subspace clustering algorithm (S_FAD) to overcome the drawbacks of classical algorithms. S_FAD is based on a bottom-up approach and finds subspace clusters of varied density using different parameters of the DBSCAN algorithm. The algorithm optimizes the parameters of DBSCAN through a hybrid meta-heuristic algorithm and uses hashing concepts to discover all non-redundant subspace clusters. The efficacy of S_FAD is evaluated against various existing subspace clustering algorithms on artificial and real datasets in terms of F_Score and rand_index. Performance is assessed based on three parameters: average ranking, SRR ranking, and scalability over varied dimensions. Statistical analysis is performed through the Wilcoxon signed-rank test. Results reveal that S_FAD performs considerably better on the majority of the datasets and scales well up to 6400 dimensions on the actual dataset.
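S_FAD's hybrid meta-heuristic tuning is not reproduced here. As a hedged sketch of the two ingredients the abstract names, the snippet below runs DBSCAN with several parameter settings over candidate subspaces and hashes each cluster's membership set to discard redundant clusters; the plain parameter grid and the two-dimensional subspaces are assumptions.

```python
# Sketch: DBSCAN runs with varied parameters over candidate subspaces,
# with hashing of each cluster's member set used to drop redundant
# (duplicate) clusters. The meta-heuristic parameter search of S_FAD is
# replaced here by a plain grid, so this is illustrative only.
from itertools import combinations
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, size=(100, 5)),
                  rng.normal(3, 0.3, size=(100, 5))])

seen, subspace_clusters = set(), []
for dims in combinations(range(data.shape[1]), 2):      # candidate subspaces
    for eps in (0.3, 0.6, 0.9):                          # varied density levels
        labels = DBSCAN(eps=eps, min_samples=10).fit_predict(data[:, dims])
        for lab in set(labels) - {-1}:                   # -1 marks noise
            members = frozenset(np.where(labels == lab)[0])
            if members not in seen:   # set lookup hashes the member set
                seen.add(members)
                subspace_clusters.append((dims, members))

print("non-redundant subspace clusters found:", len(subspace_clusters))
```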


Author(s):  
Mohana Priya K ◽  
Pooja Ragavi S ◽  
Krishna Priya G

Clustering is the process of grouping objects into subsets that have meaning in the context of a particular problem. It does not rely on predefined classes, and is referred to as an unsupervised learning method because no information is provided about the "right answer" for any of the objects. Many clustering algorithms have been proposed and are used in different applications. Sentence clustering is one of the best clustering techniques. A hierarchical clustering algorithm is applied at multiple levels for accuracy. For tagging, a POS tagger and the Porter stemmer are used. The WordNet dictionary is utilized to determine similarity by invoking the Jiang-Conrath and cosine similarity measures. Grouping is performed with respect to the highest similarity measure value using a mean threshold. This paper incorporates many parameters for finding similarity between words. In order to identify disambiguated words, sense identification is performed for the adjectives and a comparison is carried out. SemCor and machine learning datasets are employed. Compared with previous results for WSD, our work shows a considerable improvement, achieving 91.2%.
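The full pipeline (tagging, stemming, multi-level hierarchical grouping) is not reproduced here. The sketch below only shows how WordNet's Jiang-Conrath measure and a cosine measure over context vectors can be combined into a single word-pair similarity, as the abstract describes; the equal weighting, the capping of the JCN score, and the Brown information-content file are assumptions.

```python
# Sketch: combining WordNet Jiang-Conrath similarity with a cosine
# similarity over small co-occurrence vectors. Weights and the Brown
# information-content corpus are assumptions; the paper's full pipeline
# (tagging, stemming, hierarchical grouping) is not reproduced.
import numpy as np
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

# Requires: nltk.download('wordnet'); nltk.download('wordnet_ic')
brown_ic = wordnet_ic.ic('ic-brown.dat')

def jcn_similarity(word1, word2):
    """Best Jiang-Conrath score over the noun senses of the two words."""
    synsets1 = wn.synsets(word1, pos=wn.NOUN)
    synsets2 = wn.synsets(word2, pos=wn.NOUN)
    scores = [s1.jcn_similarity(s2, brown_ic) for s1 in synsets1 for s2 in synsets2]
    return max(scores) if scores else 0.0

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy co-occurrence vectors standing in for sentence context features.
vec = {"car": np.array([3.0, 1.0, 0.0]), "automobile": np.array([2.5, 1.2, 0.1])}
w1, w2 = "car", "automobile"
combined = 0.5 * min(jcn_similarity(w1, w2), 1.0) + 0.5 * cosine(vec[w1], vec[w2])
print(f"combined similarity({w1}, {w2}) = {combined:.3f}")
```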

