Study on Data Fusion Model of Mine Environmental Monitoring

2012 ◽  
Vol 518-523 ◽  
pp. 1334-1339
Author(s):  
Jian Rang Zhang ◽  
Qing Tao Shen

In view of the complexity, redundancy, and uncertainty of the measurement data generated by mine environmental monitoring systems, a two-level data fusion structure is presented: adaptive weighted fusion at the first level and grey correlation analysis at the second. This achieves fusion of monitoring data from homogeneous sensors as well as fusion of data from heterogeneous sources. Application examples show that the fusion model performs stably, offers strong interference resistance, and is easy to implement.
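
As a hedged illustration of the two fusion levels, the sketch below implements adaptive weighted fusion (weights inversely proportional to each sensor's sample variance) and the grey relational grade as they are conventionally formulated; the function names, the distinguishing coefficient of 0.5, and the synthetic readings are our assumptions, not details from the paper.

```python
import numpy as np

def adaptive_weighted_fusion(readings):
    """First-level fusion: combine repeated readings from homogeneous
    sensors, weighting each sensor inversely to its sample variance
    so that noisier sensors contribute less to the fused value."""
    var = readings.var(axis=1, ddof=1)
    weights = (1.0 / var) / np.sum(1.0 / var)
    return float(weights @ readings.mean(axis=1))

def grey_relational_grade(reference, candidates, rho=0.5):
    """Second-level fusion: grey correlation analysis. Returns the
    relational grade of each candidate sequence against a reference
    sequence; rho is the customary distinguishing coefficient."""
    delta = np.abs(candidates - reference)      # |x0(k) - xi(k)|
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)
    return xi.mean(axis=1)                      # one grade per candidate

# Synthetic example: three CH4 sensors, ten samples each
rng = np.random.default_rng(0)
sensors = np.stack([1.2 + rng.normal(0, s, 10) for s in (0.02, 0.05, 0.10)])
print(adaptive_weighted_fusion(sensors))
```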

2012 ◽  
Vol 591-593 ◽  
pp. 2046-2050
Author(s):  
Sheng Li ◽  
Chun Liang Zhang ◽  
Liang Bin Hu

To avoid losing useful information, this paper extracts feature information from the fault signals of rotating machinery across several domains: the amplitude domain, the time domain, and the time-frequency domain. Because multi-dimensional feature extraction is prone to the "curse of dimensionality", the FDR principle from data mining is introduced to rate the classification ability of each individual feature, and the cross-correlation coefficient is introduced to capture the interrelationships between features that per-feature analysis neglects; together these yield a new feature-level data fusion algorithm. Finally, according to the characteristics of the hidden Markov model (HMM), the support vector machine (SVM), and their hybrid, a new decision-level data fusion model is constructed.
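
Reading FDR as the Fisher discriminant ratio, as is common in fault-feature selection, the sketch below shows one plausible form of the feature-level step: score each feature's two-class separability by FDR, then greedily keep high-scoring features whose cross-correlation with already-kept features stays below a threshold. The function names and the 0.9 threshold are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def fisher_discriminant_ratio(x_a, x_b):
    """Two-class separability of a single feature:
    FDR = (mean_a - mean_b)^2 / (var_a + var_b)."""
    return (x_a.mean() - x_b.mean()) ** 2 / (x_a.var(ddof=1) + x_b.var(ddof=1))

def select_features(X_a, X_b, n_keep, corr_limit=0.9):
    """Greedy feature-level fusion: rank features by FDR, then skip any
    feature whose cross-correlation with an already-kept feature is too
    high, so redundant features do not crowd out informative ones."""
    fdr = np.array([fisher_discriminant_ratio(X_a[:, j], X_b[:, j])
                    for j in range(X_a.shape[1])])
    X_all = np.vstack([X_a, X_b])
    kept = []
    for j in np.argsort(fdr)[::-1]:             # best FDR first
        if all(abs(np.corrcoef(X_all[:, j], X_all[:, k])[0, 1]) < corr_limit
               for k in kept):
            kept.append(j)
        if len(kept) == n_keep:
            break
    return kept
```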


2021 ◽  
Vol 11 (17) ◽  
pp. 8272
Author(s):  
Chun Fu ◽  
Shao-Fei Jiang

Recently, a variety of intelligent structural damage identification algorithms have been developed and have attracted considerable attention worldwide owing to their reliable analysis and high efficiency. However, the performance of existing intelligent damage identification methods depends heavily on the signatures extracted from raw signals, which keeps these methods from being optimal solutions in practical applications. Furthermore, feature extraction and neural network training are time-consuming tasks that directly affect the real-time performance of the identification results. To address these problems, this paper proposes a new intelligent data fusion system for damage detection that combines the probabilistic neural network (PNN) and data fusion technology with the correlation fractal dimension (CFD). The system consists of three models, each specialized for a particular situation: an eigen-level fusion model, a decision-level fusion model, and a single-PNN classifier model. The eigen-level model suits measured data with large sample sizes and high uncertainty; when the confidence level of each sensor is determined in advance, the decision-level model is the best choice; and the single-PNN model is reserved for cases where the collected data are limited or few sensors have been installed. Numerical simulations of an in-service two-span concrete-filled steel tubular arch bridge and a laboratory seven-storey steel frame were used to validate the hybrid system by identifying both single- and multi-damage patterns. The results show that the hybrid data fusion system identifies damage excellently and also exhibits superior noise immunity and robustness.
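
The correlation fractal dimension underlying the feature extraction can be estimated from the correlation integral; the sketch below is a minimal single-variable version (the slope of log C(r) against log r), with the radii grid and input signal left as user-supplied assumptions rather than the paper's settings.

```python
import numpy as np

def correlation_fractal_dimension(signal, radii):
    """Estimate the correlation fractal dimension (CFD) of a signal as
    the slope of log C(r) vs. log r, where the correlation integral
    C(r) is the fraction of point pairs closer together than r."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    dists = np.abs(x[:, None] - x[None, :])
    # pair counts per radius, excluding the n zero-distance self-pairs
    c = np.array([(dists < r).sum() - n for r in radii]) / (n * (n - 1))
    mask = c > 0                     # keep radii inside the scaling region
    slope, _ = np.polyfit(np.log(np.asarray(radii)[mask]), np.log(c[mask]), 1)
    return slope
```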


2021 ◽  
Vol 30 (1) ◽  
pp. 947-965
Author(s):  
Shafiza Ariffin Kashinath ◽  
Salama A. Mostafa ◽  
David Lim ◽  
Aida Mustapha ◽  
Hanayanti Hafit ◽  
...  

Designing a data-responsive system requires accurate input to ensure efficient results. The growth of technology in sensing methods and the need for various kinds of data greatly affect data fusion (DF)-related study. A coordinative DF framework entails the participation of many subsystems or modules to produce coordinative features, which are used to facilitate and improve the solving of certain domain problems. Consequently, this paper proposes a general Multiple Coordinative Data Fusion Modules (MCDFM) framework for real-time and heterogeneous data sources. We develop the MCDFM framework to adapt to various DF application domains requiring macro and micro perspectives of the observed problems. The framework consists of preprocessing, filtering, and decision as its key DF processing phases. These three phases integrate specific-purpose algorithms or methods: data cleaning and windowing methods for preprocessing, the extended Kalman filter (EKF) for filtering, fuzzy logic for local decisions, and software agents for coordinative decisions. These methods perform tasks that help achieve local and coordinative decisions for each node in the network of the framework's application domain. We illustrate and discuss the proposed framework in detail using a stretch of road intersections controlled by a traffic light controller (TLC) as a case study, which gives a clearer view of how the proposed framework addresses traffic congestion as a domain problem. We identify traffic features that include the average vehicle count, average vehicle speed (km/h), average density (%), interval (s), and timestamp. The framework uses these features to identify three congestion periods: a nonpeak period with a congestion degree of 0.178 and a variance of 0.061, a medium peak period with a congestion degree of 0.588 and a variance of 0.0593, and a peak period with a congestion degree of 0.796 and a variance of 0.0296. The results of the TLC case study show that the framework provides flexible capabilities covering both micro and macro views of the observed scenarios and clearly presents viable solutions.
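
As a toy illustration of the local-decision phase, the sketch below fuzzifies average speed and density into a congestion degree in [0, 1]; the membership breakpoints and equal weighting are our assumptions, not the paper's fuzzy rule base.

```python
def congestion_degree(avg_speed_kmh, avg_density_pct):
    """Toy local decision: map average speed (km/h) and density (%)
    to a congestion degree in [0, 1]. Breakpoints are illustrative,
    not the MCDFM paper's actual fuzzy rules."""
    slow = min(max((60.0 - avg_speed_kmh) / 60.0, 0.0), 1.0)  # slower -> more congested
    dense = min(max(avg_density_pct / 100.0, 0.0), 1.0)
    return 0.5 * slow + 0.5 * dense   # equal-weight aggregation

print(congestion_degree(15.0, 70.0))  # ~0.73, a peak-period-like reading
```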


Author(s):  
Hye-Chung Kum

Objective
When analyzing population data, there is a need to link data about organizations. One challenge in linking organization-level data is that, unlike a person, an entity can have many definitions. For hospitals, for example, depending on the dataset, an entity might represent any one of the following similar but distinct semantic types: (1) physical units, (2) billing units, (3) legal units, (4) licensed units, or (5) reporting units. How these entities relate to each other can be complex: one billing unit can span many physical units, or multiple billing units can exist for one physical unit. Thus, linking organization-level data requires human involvement to sort through these issues in heterogeneous data sources and make informed decisions about the messy data. We design and evaluate a general framework for a hybrid human-machine process for the ongoing integration and cleaning of hospital-level data when no common identifiers exist, such that the decisions needing human judgement are highlighted and the full process is documented and tracked to ensure reproducibility. Such ongoing integration is often called incremental record linkage (RL).

Approach
Accurate linkage in big data requires well-defined tasks suited to either automatic or human processing. In the human-computer interaction (HCI) field, Human Intelligence Tasks (HITs) are defined as micro-tasks requiring human judgment and are often used in designing crowdsourcing systems. We designed HITs for linking organization-level data and embedded them into automatic deterministic linkage algorithms that support interactive stepwise RL. The hybrid system is a framework for reproducible incremental RL.

Results
We illustrate this framework by integrating four databases of hospitals in Texas from 2008 to 2014 (N = 664). The IDs used in the databases are the Texas Provider ID, the National Provider ID, the Medicare ID, and the Facility ID. We link the databases using provider names (including dba, i.e., "doing business as", names), addresses, and phone numbers. Similarities in hospital names and addresses and the dynamic nature of hospital attributes over time make it impossible to build a fully automated linkage system for hospitals. Using our system to iteratively standardize and clean the data, we linked the hospitals with 100% precision using HITs that required confirming 79 approximate linkages and manually linking 28 hospitals.

Conclusion
Effective software that supports the interactive and iterative process of RL with well-designed HITs can streamline linkage processes to support high-quality, replicable research using big data.
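
A minimal sketch of how one deterministic pass plus a HIT queue might look, assuming exact matches on normalized name and address link automatically while weaker (phone-only) matches are routed to a human; the record fields and matching rules are illustrative, not the authors' actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Hospital:
    name: str
    address: str
    phone: str

def normalize(s):
    """Light standardization before matching."""
    return " ".join(s.lower().split())

def link_pass(records_a, records_b):
    """One incremental-RL pass: deterministic links on name+address,
    approximate (phone-only) candidates queued as HITs for a human."""
    exact = {(normalize(r.name), normalize(r.address)): r for r in records_b}
    by_phone = {r.phone: r for r in records_b}
    auto, hits = [], []
    for a in records_a:
        key = (normalize(a.name), normalize(a.address))
        if key in exact:
            auto.append((a, exact[key]))         # machine-confident link
        elif a.phone in by_phone:
            hits.append((a, by_phone[a.phone]))  # needs human judgment
    return auto, hits
```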


2021 ◽  
Author(s):  
KMA Solaiman ◽  
Tao Sun ◽  
Alina Nesen ◽  
Bharat Bhargava ◽  
Michael Stonebraker

We present a system for integrating multiple sources of data to find missing persons. The system can assist authorities in finding children during AMBER alerts, mentally challenged persons who have wandered off, or persons of interest in an investigation. Authorities search for the person in question by reaching out to acquaintances, checking video feeds, or looking into prior histories relevant to the investigation. In the absence of any leads, authorities lean on public help from sources such as tweets or tip lines. A missing person investigation therefore requires combining information from multiple modalities and heterogeneous data sources.

Existing cross-modal fusion models use separate information models for each data modality and cannot utilize pre-existing object properties in an application domain. We develop a framework for multimodal information retrieval called Find-Them. It extracts features from different modalities and maps them into a standard schema for context-based data fusion. Find-Them can integrate application domains with previously derived object properties and can deliver data relevant to the mission objective based on the context and needs of the user. Measurements on a novel open-world cross-media dataset show the efficacy of our model. The objective of this work is to assist authorities in missing person investigations through the use of Find-Them.
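
A sketch of the mapping-to-standard-schema idea, assuming a hypothetical common record type and a stubbed text-modality extractor; the field names and extractor are ours, not Find-Them's published schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PersonObservation:
    """Hypothetical common schema that each modality is mapped into."""
    source: str                  # "tweet", "video", "tip_line", ...
    timestamp: float
    location: Optional[str]
    description: Optional[str]

def from_tweet(text, posted_at, geotag=None):
    """Stub text-modality extractor: a real system would run NLP here;
    the point is that every modality lands in the same schema, so
    context-based fusion can compare observations uniformly."""
    return PersonObservation(source="tweet", timestamp=posted_at,
                             location=geotag, description=text)
```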


2012 ◽  
Vol 263-266 ◽  
pp. 1947-1952
Author(s):  
Chun Ming Pei ◽  
Ling Li

This paper proposes an assessment method for the external insulation of contaminated insulators based on a tri-level data fusion model that combines principal component analysis (PCA), an artificial neural network (ANN), and evidence theory. When partial discharge (PD) occurs on contaminated insulators, effective information obtained from the sound emitted during PD is synthesized to evaluate the external insulation strength of insulators in operation. First, nine characteristic parameters that rapidly reflect the PD process are selected for image-level PCA fusion to reduce dimensionality, yielding two new parameters. These new parameters are then input to the ANN for feature-level fusion. Finally, the feature-level fusion output is used as the input to decision-level fusion and is fused by means of Dempster-Shafer (D-S) evidence theory to further reduce the uncertainty of the assessment. Artificial contamination experiments were conducted to verify the proposed method. The results indicate that the proposed model is more precise than the ANN model alone under the same conditions.
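
The decision-level step uses Dempster's rule of combination; below is a minimal sketch of that rule for two bodies of evidence, with the hypothesis set and mass values invented for illustration rather than taken from the experiments.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: combine two basic probability assignments
    (dicts of frozenset hypothesis -> mass), renormalizing away the
    conflict mass K, as in decision-level D-S fusion."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb            # mass assigned to disjoint sets
    if conflict >= 1.0:
        raise ValueError("total conflict: rule undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Illustrative: two evidence sources rating insulation state
GOOD, POOR = frozenset({"good"}), frozenset({"poor"})
print(dempster_combine({GOOD: 0.6, POOR: 0.4}, {GOOD: 0.7, POOR: 0.3}))
```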

