The Information Geometry of Sensor Configuration

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5265
Author(s):  
Simon Williams ◽  
Arthur George Suvorov ◽  
Zengfu Wang ◽  
Bill Moran

In problems of parameter estimation from sensor data, the Fisher information provides a measure of the performance of the sensor; effectively, in an infinitesimal sense, how much information about the parameters can be obtained from the measurements. From the geometric viewpoint, it is a Riemannian metric on the manifold of parameters of the observed system. In this paper, we consider the case of parameterized sensors and answer the question, “How best to reconfigure a sensor (vary the parameters of the sensor) to optimize the information collected?” A change in the sensor parameters results in a corresponding change to the metric. We show that the change in information due to reconfiguration exactly corresponds to the natural metric on the infinite-dimensional space of Riemannian metrics on the parameter manifold, restricted to the finite-dimensional sub-manifold determined by the sensor parameters. The distance measure on this configuration manifold is shown to provide optimal, dynamic sensor reconfiguration based on an information criterion. Geodesics on the configuration manifold are shown to optimize the information gain, but only if the change is made at a certain rate. An example of configuring two bearings-only sensors to optimally locate a target is developed in detail to illustrate the mathematical machinery, with Fast Marching methods employed to efficiently calculate the geodesics and illustrate the practicality of using this approach.
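As a concrete instance of the Fisher information for bearings-only sensing, the sketch below accumulates the 2×2 information matrix that a set of bearing sensors provides about a planar target position. This is an illustrative reconstruction of the standard bearings-only model, not the paper's code; the function name `bearing_fim` and the Gaussian bearing-noise assumption are ours.

```python
import numpy as np

def bearing_fim(target, sensors, sigma=0.05):
    """Fisher information matrix for a planar target position observed by
    bearings-only sensors with Gaussian bearing noise (std sigma, radians).
    Each sensor contributes (1/sigma^2) * g g^T, where g is the gradient
    of the bearing angle with respect to the target position."""
    J = np.zeros((2, 2))
    for s in np.atleast_2d(sensors):
        d = target - s                      # displacement sensor -> target
        r2 = d @ d                          # squared range
        g = np.array([-d[1], d[0]]) / r2    # gradient of atan2 bearing
        J += np.outer(g, g) / sigma**2
    return J

# Two sensors with perpendicular lines of sight give a well-conditioned FIM.
J = bearing_fim(np.array([0.0, 0.0]),
                np.array([[10.0, 0.0], [0.0, 10.0]]))
print(np.linalg.det(J))
```

Maximizing a scalar function of this matrix (e.g. its determinant) over the sensor positions is the kind of information criterion the paper's configuration-manifold machinery optimizes continuously.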

Author(s):  
Parul Agarwal ◽  
Shikha Mehta

Subspace clustering approaches cluster high-dimensional data in different subspaces, i.e., they group the data using different relevant subsets of dimensions. The technique has become important because conventional distance measures lose discriminative power in high-dimensional spaces. This chapter presents SUBSPACE_DE, a novel evolutionary, bottom-up subspace clustering approach that scales to high-dimensional data. SUBSPACE_DE uses a self-adaptive DBSCAN algorithm to cluster the data instances of each attribute and of the maximal subspaces; the self-adaptive DBSCAN receives its input parameters from a differential evolution algorithm. The proposed SUBSPACE_DE algorithm is tested on 14 datasets, both real and synthetic, and compared with 11 existing subspace clustering algorithms using evaluation metrics such as F1_Measure and accuracy. On a success rate ratio ranking, the proposed algorithm performs considerably better in both accuracy and F1_Measure, and SUBSPACE_DE also shows potential scalability to high-dimensional datasets.
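The core idea of letting differential evolution supply DBSCAN's parameters can be sketched as follows. This is a minimal stand-in, not the chapter's SUBSPACE_DE implementation: it tunes only `eps` on synthetic full-space data, using the silhouette score as the DE objective.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.5, random_state=0)

def neg_silhouette(params):
    """DE objective: negative silhouette of the DBSCAN labelling."""
    labels = DBSCAN(eps=params[0], min_samples=5).fit_predict(X)
    # Penalise degenerate labellings (all noise or a single cluster).
    if len(set(labels) - {-1}) < 2:
        return 1.0
    return -silhouette_score(X, labels)

result = differential_evolution(neg_silhouette, bounds=[(0.05, 2.0)],
                                seed=0, maxiter=20, tol=1e-3)
best_eps = result.x[0]
labels = DBSCAN(eps=best_eps, min_samples=5).fit_predict(X)
print(best_eps, len(set(labels) - {-1}))
```

SUBSPACE_DE would run such a tuned DBSCAN per attribute and per maximal subspace; the sketch shows only the self-adaptation loop.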


2013 ◽  
Vol 2013 ◽  
pp. 1-7
Author(s):  
J. A. Tenreiro Machado

The paper formulates a genetic algorithm that evolves two types of objects in a plane. The fitness function promotes a relationship between the objects that is optimal when some kind of interface between them occurs. Furthermore, the algorithm adopts a hexagonal tessellation of the two-dimensional space, which gives an efficient method of modelling neighbourhoods. The genetic algorithm produces special patterns resembling those revealed in percolation phenomena or in the symbiosis found in lichens. Besides the analysis of the spatial layout, the time evolution is modelled by adopting a distance measure and by modelling in the Fourier domain from the perspective of fractional calculus. The results reveal a consistent, easy-to-interpret set of model parameters for distinct operating conditions.
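The hexagonal neighbourhood that makes the tessellation efficient can be expressed compactly in axial coordinates, where every cell has exactly six neighbours (unlike the 4- or 8-neighbourhoods of a square grid). A minimal sketch, assuming axial coordinates; the paper's actual GA encoding may differ:

```python
# The six axial-coordinate offsets of a hexagonal cell's neighbours.
AXIAL_DIRECTIONS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def hex_neighbours(q, r):
    """Return the six axial coordinates adjacent to cell (q, r)."""
    return [(q + dq, r + dr) for dq, dr in AXIAL_DIRECTIONS]

def hex_distance(a, b):
    """Grid distance between two hex cells in axial coordinates."""
    dq, dr = a[0] - b[0], a[1] - b[1]
    return (abs(dq) + abs(dr) + abs(dq + dr)) // 2

print(hex_neighbours(0, 0))
print(hex_distance((0, 0), (2, -1)))
```

The uniform six-cell neighbourhood is what makes neighbour-based fitness evaluation cheap and isotropic compared with a square grid.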


2016 ◽  
Vol 9 (1) ◽  
pp. 41
Author(s):  
Selka Sadiković ◽  
Dina Fesl ◽  
Petar Čolović

The aim of the research was to determine the number, characteristics, and level of convergence of personality types extracted in the space of three psycho-lexical conceptualizations of personality: the Big Five, HEXACO, and the Big Seven. The study was conducted on a sample of 343 participants (55.7% female), aged 18–60 (M = 33.99). The participants completed the IPIP-50 (Big Five model operationalization), the IPIP-HEXACO (HEXACO model operationalization), and the BF+2-70 (a short version of the questionnaire assessing seven lexical dimensions in Serbian). Latent profile analysis was conducted in the space of the dimension scores of the three questionnaires. The Bayesian information criterion suggested a three-class solution to be optimal in the space of all three questionnaires. Analyzing the structure of the latent profiles, the classes within the three models were interpreted as “resilient”, “reserved”, and “maladjusted”. The congruency of the classes was analyzed by multiple correspondence analysis, which indicated a high convergence of types in the two-dimensional space. The results indicate a distinct similarity between the extracted profiles and profiles from previous studies, generally pointing towards the stability of the three big personality prototypes.


2021 ◽  
Vol 2132 (1) ◽  
pp. 012024
Author(s):  
X C Sun ◽  
B Wei ◽  
J h Gao ◽  
J C Fu ◽  
Z G Li

Abstract This paper investigates the degree to which blast-furnace-related elements affect blast furnace gas (BFG) production. BFG is a by-product of the steel industry and one of an enterprise’s most essential energy resources, but because multiple factors affect its production, BFG output fluctuates widely. Most existing work focuses on finding a satisfactory method, or on improving the accuracy of existing methods, for predicting BFG production; there are no dedicated studies of the factors that affect it. Identifying the elements that influence BFG production benefits the production process and has economic significance. We propose a novel framework combining cross recurrence plots (CRP) and cross recurrence quantification analysis (CRQA). The framework also supplies a general method to convert time series of BFG-related data into a high-dimensional space, and it is the first analytical framework that attempts to reveal the inherent dynamic similarities of blast-furnace-gas-related elements. The experimental results demonstrate that the framework can visualize the time series and, through quantitative analysis, identify the factor with the greatest impact on blast furnace gas production.
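A cross recurrence plot and its simplest CRQA measure can be sketched in a few lines. This is a generic CRP on scalar series for brevity (time-delay-embedded vectors work the same way); it is not the paper's pipeline, and the threshold `eps` is an assumption.

```python
import numpy as np

def cross_recurrence_plot(x, y, eps=0.1):
    """Binary CRP: R[i, j] = 1 when |x_i - y_j| <= eps."""
    x, y = np.asarray(x), np.asarray(y)
    dist = np.abs(x[:, None] - y[None, :])   # pairwise distances
    return (dist <= eps).astype(int)

def recurrence_rate(R):
    """Simplest CRQA measure: fraction of recurrent points in the plot."""
    return R.mean()

# Two phase-shifted signals standing in for two furnace-related series.
t = np.linspace(0, 4 * np.pi, 200)
R = cross_recurrence_plot(np.sin(t), np.sin(t + 0.5), eps=0.2)
print(recurrence_rate(R))
```

Ranking furnace variables by such CRQA measures against the BFG output series is the quantitative-analysis step the abstract describes.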


2018 ◽  
Vol 7 (2) ◽  
pp. 939 ◽  
Author(s):  
Shivakumar B R ◽  
Rajashekararadhya S V

In the past two decades, a significant amount of research has been conducted on information extraction from heterogeneous remotely sensed (RS) datasets. However, it is arduous to predict exactly the behaviour of the classification technique employed, due to issues such as the type of dataset, the resolution of the imagery, the presence of mixed pixels, and spectral overlap between classes. In this paper, land cover classification of a heterogeneous dataset using classical and Fuzzy based Maximum Likelihood Classifiers (MLC) is presented and compared. Three decision parameters and their significance in pixel assignment are illustrated. The presented Fuzzy based MLC uses a weighted inverse distance measure for the defuzzification process. Ten pixels were randomly selected from the study area to illustrate pixel assignment for both classifiers. The study aims at enhancing the classification accuracy of heterogeneous multispectral remote sensor data characterized by spectrally overlapping classes and mixed pixels, and at obtaining classification results with a confidence level of 95% within a ±4% error margin. The classification success rate was analysed using accuracy assessment. The Fuzzy based MLC produced significantly higher classification accuracy than the classical MLC. The conducted research achieves the expected classification accuracy and proves to be a valuable technique for the classification of heterogeneous RS multispectral imagery.
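An inverse-distance fuzzy membership over classes, of the general kind the Fuzzy MLC uses before defuzzification, can be sketched as below. The Mahalanobis distance to each class and the inverse-distance weighting are standard choices here; the paper's exact decision parameters and weighting are not reproduced.

```python
import numpy as np

def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance from pixel x to a class distribution."""
    d = x - mean
    return d @ np.linalg.inv(cov) @ d

def fuzzy_memberships(x, means, covs, power=2):
    """Inverse-distance fuzzy memberships over classes; `power` controls
    how sharply membership concentrates on the nearest class."""
    d2 = np.array([mahalanobis_sq(x, m, c) for m, c in zip(means, covs)])
    w = 1.0 / np.maximum(d2, 1e-12) ** (power / 2)
    return w / w.sum()

# Two hypothetical spectral classes; a mixed pixel nearer class 0.
means = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
covs = [np.eye(2), np.eye(2)]
u = fuzzy_memberships(np.array([1.0, 1.0]), means, covs)
print(u, u.argmax())
```

Defuzzification then assigns the pixel to the class with the largest membership, while the full membership vector records how "mixed" the pixel is.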


Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1152 ◽  
Author(s):  
Sander Vanden Hautte ◽  
Pieter Moens ◽  
Joachim Van Herwegen ◽  
Dieter De Paepe ◽  
Bram Steenwinckel ◽  
...  

In industry, dashboards are often used to monitor fleets of assets, such as trains, machines or buildings. In such industrial fleets, the vast set of sensors evolves continuously, new sensor data exchange protocols and data formats are introduced, new visualization types may need to be introduced and existing dashboard visualizations may need to be updated in terms of displayed sensors. These requirements motivate the development of dynamic dashboarding applications. These, as opposed to fixed-structure dashboard applications, allow users to create visualizations at will and do not have hard-coded sensor bindings. The state-of-the-art in dynamic dashboarding does not cope well with the frequent additions and removals of sensors that must be monitored: these changes must still be configured in the implementation or at runtime by a user. Also, the user is presented with an overload of sensors, aggregations and visualizations to select from, which may sometimes even lead to the creation of dashboard widgets that do not make sense. In this paper, we present a dynamic dashboard that overcomes these problems. Sensors, visualizations and aggregations can be discovered automatically, since they are provided as RESTful Web Things on a Web Thing Model compliant gateway. The gateway also provides semantic annotations of the Web Things, describing what their abilities are. A semantic reasoner can derive visualization suggestions, given the Thing annotations, logic rules and a custom dashboard ontology. The resulting dashboarding application automatically presents the available sensors, visualizations and aggregations that can be used, without requiring sensor configuration, and assists the user in building dashboards that make sense. This way, the user can concentrate on interpreting the sensor data and detecting and solving operational problems early.
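The matching step, deriving visualization suggestions from a Thing's semantic annotations, can be caricatured with plain set logic. This is a deliberately simplified stand-in for the paper's ontology-plus-reasoner pipeline; the rule table and annotation names are invented for illustration.

```python
# Hypothetical rule table: each visualization lists the annotations a
# Web Thing must carry for the visualization to make sense.
VISUALIZATION_RULES = {
    "line-chart": {"numeric", "timeseries"},
    "gauge": {"numeric", "bounded-range"},
    "map-marker": {"geo-coordinates"},
}

def suggest_visualizations(thing_annotations):
    """Return visualizations whose required annotations the Thing carries."""
    tags = set(thing_annotations)
    return sorted(v for v, required in VISUALIZATION_RULES.items()
                  if required <= tags)

print(suggest_visualizations({"numeric", "timeseries", "bounded-range"}))
```

A real semantic reasoner generalizes this subset test with class hierarchies and logic rules, which is what lets the dashboard reject widget combinations that "do not make sense".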


2021 ◽  
Vol 7 (2) ◽  
pp. 121
Author(s):  
Raden Gunawan Santosa ◽  
Yuan Lukito ◽  
Antonius Rachmat Chrismanto

K-Means is one of the most widely used clustering algorithms; it requires the desired number of clusters as an input. In practice, the right number of clusters cannot be known in advance, so the choice of k depends on the researcher's subjectivity. Moreover, K-Means can only handle continuous numeric attributes, whereas real data may contain categorical attributes or a mixture of both. This study clusters students' academic data using the two-step clustering algorithm, which determines the number of clusters automatically and can handle categorical, continuous numeric, or mixed attributes. The two-step clustering method is applied to data of the 2008–2019 student cohorts, with the analysis carried out per cohort. The study produces clusters that reflect the degree of heterogeneity of each student cohort. The resulting clusters are optimal as measured by the Bayesian Information Criterion and a ratio distance measure.
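Choosing the number of clusters automatically with the Bayesian Information Criterion can be sketched with a Gaussian mixture model: fit candidate cluster counts and keep the one with the lowest BIC. This sketch covers only numeric attributes (the mixed categorical/numeric case handled by two-step clustering needs a different distance) and uses synthetic data, not the study's student records.

```python
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Synthetic data with four well-separated groups.
X, _ = make_blobs(n_samples=400,
                  centers=[[0, 0], [5, 5], [0, 5], [5, 0]],
                  cluster_std=0.5, random_state=1)

# Fit mixtures for k = 1..7 and pick the k minimizing the BIC.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 8)}
best_k = min(bics, key=bics.get)
print(best_k)
```

The BIC trades off model fit against the number of parameters, which is why it stops rewarding additional clusters once the true structure is captured.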


2021 ◽  
Author(s):  
Simone Müller ◽  
Dieter Kranzlmüller

Based on the depth perception of individual stereo cameras, spatial structures can be derived as point clouds. The quality of such three-dimensional data is technically restricted by sensor limitations, recording latency, and insufficient object reconstruction caused by surface representation. Additionally, external physical effects such as lighting conditions, material properties, and reflections can lead to deviations between real and virtual object perception. Such physical influences appear in rendered point clouds as geometrical imaging errors on surfaces and edges. We propose the simultaneous use of multiple, dynamically arranged cameras. The increased information density leads to more detail in the detection of the surroundings and in object representation. During a pre-processing phase, the collected data are merged and prepared. Subsequently, a logical analysis part examines the captured images and allocates them to three-dimensional space; for this purpose, a new metadata set consisting of image and localisation data must be created. The post-processing reworks and matches the locally assigned images. As a result, the dynamically moving images become comparable, so that a more accurate point cloud can be generated. For evaluation and better comparability, we decided to use synthetically generated data sets. Our approach builds the foundation for dynamic, real-time generation of digital twins with the aid of real sensor data.
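The allocation of per-camera data to a common three-dimensional space reduces, at its core, to transforming each camera's point cloud by that camera's pose. A minimal sketch with two hypothetical camera poses (the rotation, translation, and point values are invented; the paper's merging pipeline does considerably more):

```python
import numpy as np

def to_world(points_cam, R, t):
    """Transform an (N, 3) point cloud from camera to world coordinates
    using the camera pose (rotation R, translation t)."""
    return points_cam @ R.T + t

# Camera 1 at the world origin, camera 2 rotated 90 degrees about the
# vertical (y) axis and shifted along x.
R1, t1 = np.eye(3), np.array([0.0, 0.0, 0.0])
R2 = np.array([[0.0, 0.0, 1.0],
               [0.0, 1.0, 0.0],
               [-1.0, 0.0, 0.0]])
t2 = np.array([2.0, 0.0, 0.0])

cloud1 = np.array([[1.0, 0.0, 3.0]])
cloud2 = np.array([[1.0, 0.0, 1.0]])
merged = np.vstack([to_world(cloud1, R1, t1), to_world(cloud2, R2, t2)])
print(merged)
```

With dynamically moving cameras, R and t become time-dependent, which is exactly why the image-plus-localisation metadata set described above is needed.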


Author(s):  
Kimio Oguchi ◽  
Ryoya Ozawa

The recent rapid progress in ICT, such as smart/intelligent sensor devices, broadband/ubiquitous networks, and the Internet of Everything (IoT), has advanced the penetration of sensor networks and their applications. The requirements of daily human life (security, energy efficiency, safety, comfort, and ecology) can be met with the help of these networks and applications. Traditionally, if we want information on, for example, environmental status, a variety of dedicated sensors is needed. This increases the number of sensors installed and thus the system cost, sensor data traffic loads, and installation difficulty. Therefore, we need to find redundancies in the captured information, or interpret the semantics captured by non-dedicated sensors, to reduce sensor network overheads. This paper clarifies the feasibility of recognizing human presence in a space by processing information captured by sensors other than dedicated ones. It proposes a method and implements it as a cost-effective prototype sensor network for a university library. The method processes CO2 concentration, originally measured to check environmental status. In the experiment, training data are captured with none, one, or two subjects present. The information gain (IG) method is applied to the resulting data to set thresholds and thus judge the number of people. Human presence (none, one, or two people) is accurately recognized from the CO2 concentration data. The experiments clarify that a CO2 sensor set in a small room to check environmental status can recognize the number of humans in the room with more than 70% accuracy. This eliminates the need for an extra sensor, which reduces the sensor network cost.
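Setting an occupancy threshold on CO2 readings by information gain can be sketched as follows. The toy readings and occupant labels are invented for illustration; the paper's actual data and thresholding details are not reproduced.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy (bits) of a label vector."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(values, labels, threshold):
    """IG of splitting the readings at `threshold`."""
    mask = values <= threshold
    h = entropy(labels)
    for part in (labels[mask], labels[~mask]):
        if part.size:
            h -= part.size / labels.size * entropy(part)
    return h

# Toy CO2 readings (ppm) labelled with the true occupant count.
co2 = np.array([420, 430, 445, 600, 620, 640, 810, 830, 850])
people = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])

# Candidate thresholds: midpoints between consecutive readings.
candidates = (co2[:-1] + co2[1:]) / 2
best = max(candidates, key=lambda t: information_gain(co2, people, t))
print(best)
```

Applying the procedure twice (on each side of the first split) yields the two thresholds needed to distinguish none, one, or two occupants.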


Author(s):  
D. Shiny Irene ◽  
V. Surya ◽  
D. Kavitha ◽  
R. Shankar ◽  
S. John Justin Thangaraj

The objective of this research is to analyze and validate health records; securing the personal information of patients is a challenging issue in health record mining. The risk prediction task was formulated as a multi-class classification problem with the label Cause of Death (COD), which views health-related death as the “biggest risk.” The unlabeled data describe the health conditions of the participants during health examinations, which can differ tremendously between healthy and severely ill participants. In addition, the problems of privacy-preserving distributed secure data management are considered. The proposed health record mining proceeds in the following stages. In the first stage, effective features such as the Fisher score, Pearson correlation, and information gain are calculated from the patients’ health records, and the average values of the extracted features are computed. In the second stage, feature selection is performed on the averaged features by applying the Euclidean distance measure. In the third stage, the chosen features are clustered using a distance-adaptive fuzzy c-means clustering algorithm (DAFCM). In the fourth stage, an entropy-based graph is constructed to classify the data and categorize the patients’ records. In the last stage, privacy preservation is applied to the patients’ personal information for security. The performance is compared against existing methods and is found to be better.
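The clustering stage builds on fuzzy c-means, which can be sketched in its standard form. This is the textbook FCM update loop on synthetic data, not the paper's DAFCM (the distance-adaptive variant modifies the distance term); function and variable names are ours.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means: alternate between updating cluster centers
    (membership-weighted means) and memberships (inverse-distance rule)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # rows are fuzzy memberships
    for _ in range(iters):
        W = U ** m                               # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        d = np.maximum(d, 1e-12)                 # avoid division by zero
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Two synthetic groups standing in for patient feature clusters.
X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (20, 2)),
               np.random.default_rng(2).normal(4, 0.3, (20, 2))])
centers, U = fuzzy_c_means(X, c=2)
print(np.round(centers, 1))
```

Unlike hard k-means, each record keeps a graded membership in every cluster, which is what the subsequent entropy-based graph stage can exploit.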

