A Comparative Study of XML Change Detection Algorithms

Author(s):  
Grégory Cobéna ◽  
Talel Abdessalem

Change detection is an important part of version management for databases and document archives. The success of XML has recently renewed interest in change detection on trees and semi-structured data, and various algorithms have been proposed. We study different algorithms and representations of changes based on their formal definitions and on experiments conducted over XML data from the Web. Our goal is to evaluate the quality of the results and the performance of the tools and, based on this, to guide users in choosing the appropriate solution for their applications.
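As a concrete illustration of what such algorithms compute, the sketch below shows a deliberately naive tree diff over two XML versions. It matches children by position only, whereas the algorithms surveyed here use techniques such as hashing and move detection; the (operation, path, detail) output format is an illustrative assumption, not any tool's actual delta representation.

```python
import xml.etree.ElementTree as ET

def diff(old, new, path="/"):
    """Naive recursive tree diff emitting (operation, path, detail) tuples.
    Children are paired by position only; real XML diff algorithms also
    detect moves and match subtrees by content."""
    ops = []
    if old.tag != new.tag:
        ops.append(("rename", path, old.tag, new.tag))
    if (old.text or "").strip() != (new.text or "").strip():
        ops.append(("update-text", path, old.text, new.text))
    for i, (o, n) in enumerate(zip(list(old), list(new))):
        ops += diff(o, n, f"{path}{old.tag}[{i}]/")
    for extra in list(old)[len(new):]:        # children only in the old version
        ops.append(("delete", path, extra.tag))
    for extra in list(new)[len(old):]:        # children only in the new version
        ops.append(("insert", path, extra.tag))
    return ops

v1 = ET.fromstring("<doc><title>Old</title><p>text</p></doc>")
v2 = ET.fromstring("<doc><title>New</title></doc>")
print(diff(v1, v2))
# [('update-text', '/doc[0]/', 'Old', 'New'), ('delete', '/', 'p')]
```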

Author(s):  
Gulnaz Alimjan ◽  
Yiliyaer Jiaermuhamaiti ◽  
Huxidan Jumahong ◽  
Shuangling Zhu ◽  
Pazilat Nurmamat

Various UNet architecture-based image change detection algorithms have advanced the field, but some defects remain. First, under the encoder–decoder framework, low-level features are extracted repeatedly across multiple dimensions, which generates redundant information; second, the relationships between feature layers are not modeled thoroughly enough to produce an optimal feature-difference representation. This paper proposes a remote sensing image change detection algorithm based on a multi-feature self-attention fusion UNet network, abbreviated as MFSAF UNet (multi-feature self-attention fusion UNet). We add a multi-feature self-attention mechanism between the encoder and decoder of UNet to obtain richer context dependencies and overcome the two restrictions above. Since the capacity of a convolution-based UNet grows with network depth, and a deeper convolutional network means more training parameters, the convolution in each layer of UNet is replaced with a separable convolution, which makes the entire network lighter and gives slightly better execution efficiency than the traditional convolution operation. A further contribution of this paper is a preference-controlled loss function that meets differing demands on precision and recall. Simulation results verify the validity and robustness of the approach.
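The abstract names two concrete building blocks, separable convolution and a preference-controlled loss; the sketch below gives generic PyTorch versions of both, assuming a binary change map predicted as sigmoid probabilities. SeparableConv2d and preference_bce are illustrative names, not the MFSAF UNet code.

```python
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) conv
    followed by a 1x1 pointwise conv. For a k x k kernel this needs roughly
    1/out_ch + 1/k^2 of the parameters of a standard convolution."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

def preference_bce(pred, target, beta=1.0):
    """Preference-weighted binary cross-entropy over change probabilities:
    beta > 1 penalizes missed changes more (favoring recall),
    beta < 1 favors precision. A toy stand-in for the paper's loss."""
    eps = 1e-7
    pred = pred.clamp(eps, 1 - eps)
    loss = -(beta * target * pred.log() + (1 - target) * (1 - pred).log())
    return loss.mean()
```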


Author(s):  
Vijay R Sonawane ◽  
D.R. Rao

The efficient management of dynamic XML documents is a complex area of research. The changes to, and the size of, an XML document over its lifetime are unbounded. Change detection is an important part of version management, identifying the differences between successive versions of a document. Document content evolves continuously, and users want to query previous versions, query changes in documents, and retrieve a particular document version efficiently. In this paper we provide a comprehensive comparative analysis of various control schemes for change detection and for querying dynamic XML documents.
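To make the version-retrieval requirement concrete, here is a minimal delta-based version store in Python: the first version is kept in full and later versions are replayed from deltas. VersionStore and its callable deltas are illustrative assumptions, far simpler than the schemes compared in the paper.

```python
class VersionStore:
    """Toy delta-based version store: keep the base version in full and each
    later version as a delta (here, any callable mapping one version to the
    next). Version k is reconstructed by replaying deltas 0..k-1."""
    def __init__(self, base):
        self.base = base
        self.deltas = []                 # deltas[i] maps version i to i+1

    def commit(self, delta):
        self.deltas.append(delta)

    def checkout(self, k):
        doc = self.base
        for delta in self.deltas[:k]:
            doc = delta(doc)
        return doc

store = VersionStore("<doc><title>Old</title></doc>")
store.commit(lambda d: d.replace("Old", "New"))
print(store.checkout(0))   # original version
print(store.checkout(1))   # latest version
```

Real systems store structured edit scripts rather than opaque callables, which is what makes querying the changes themselves possible.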


Author(s):  
A. W. Lyda ◽  
X. Zhang ◽  
C. L. Glennie ◽  
K. Hudnut ◽  
B. A. Brooks

Remote sensing via LiDAR (Light Detection And Ranging) has proven extremely useful in both Earth science and hazard-related studies. Surveys taken before and after an earthquake, for example, can provide decimeter-level, 3D near-field estimates of land deformation that offer better spatial coverage of the near-field rupture zone than other geodetic methods (e.g., InSAR, GNSS, or alignment arrays). In this study, we compare and contrast estimates of deformation obtained from different pre- and post-event airborne laser scanning (ALS) data sets of the 2014 South Napa Earthquake using two change detection algorithms, Iterative Closest Point (ICP) and Particle Image Velocimetry (PIV). The ICP algorithm is a closest-point-based registration algorithm that can iteratively acquire three-dimensional deformations from airborne LiDAR data sets. By employing a newly proposed partition scheme, a “moving window,” to handle the large spatial scale of the point cloud over the earthquake rupture area, the ICP process applies a rigid registration to the data sets within each overlapping window to enhance the detection of local, spatially varying near-fault surface deformation. The other algorithm, PIV, is a well-established, two-dimensional image co-registration and correlation technique developed in fluid mechanics research and later applied to geotechnical studies. Adapted here for an earthquake with little vertical movement, the 3D point cloud is interpolated into a 2D DTM image, and horizontal deformation is determined by assessing the cross-correlation of interrogation areas within the images to find the most likely displacement between two areas. Both the PIV process and the ICP algorithm further benefit from a novel use of urban geodetic markers presented here. Analogous to the persistent scatterer technique employed with differential radar observations, this new LiDAR application exploits a classified point cloud data set to assist the change detection algorithms. Ground deformation results and statistics from these techniques are presented and discussed, with supplementary analyses of the differences between the techniques and of the effects of temporal spacing between LiDAR data sets. Results show that both change detection methods provide consistent near-field deformation comparable to field-observed offsets. The quality of the deformation estimates varies, but the estimated standard deviations are always below thirty-one centimeters. This variation in quality differentiates the methods and shows that factors such as geodetic markers and temporal spacing play major roles in the outcomes of ALS change detection surveys.
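For reference, a generic point-to-point ICP is sketched below in Python (NumPy/SciPy); it is not the authors' moving-window implementation and omits their partitioning scheme, per-window registration, and geodetic-marker assistance.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(A, B):
    """Least-squares rigid transform (R, t) mapping point set A onto B,
    solved in closed form via SVD of the cross-covariance matrix."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(source, target, iterations=20):
    """Classic point-to-point ICP: match each source point to its nearest
    target point, solve for the rigid transform, apply it, and repeat.
    Returns the accumulated rotation and translation."""
    src = source.copy()
    tree = cKDTree(target)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Run per window, the recovered translation serves as that window's 3D displacement estimate. PIV, by contrast, reduces to 2D cross-correlation of interrogation windows in the rasterized DTMs.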


Author(s):  
Heiko Paulheim ◽  
Christian Bizer

Linked Data on the Web is created either from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types to enhance the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither algorithm uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms have been used in building the DBpedia 3.9 release: with SDType, 3.4 million missing type statements were added, while with SDValidate, 13,000 erroneous RDF statements were removed from the knowledge base.
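The core SDType idea, weighted voting over per-property type distributions, can be conveyed in a few lines. The distributions, weights, and threshold below are toy assumptions for illustration, not DBpedia statistics or the authors' implementation.

```python
from collections import defaultdict

# Hypothetical P(type | subject uses property), as would be learned by
# counting over a knowledge base.
property_type_dist = {
    "dbo:birthPlace": {"dbo:Person": 0.95, "dbo:Place": 0.03},
    "dbo:author":     {"dbo:Book": 0.60, "dbo:Work": 0.35},
}

# Per-property weights reflecting how discriminative each property is
# (uniform here for simplicity; SDType weights by the distribution's shape).
property_weight = {"dbo:birthPlace": 1.0, "dbo:author": 1.0}

def sdtype_scores(properties):
    """Combine the per-property type distributions into one score per type
    via a weighted average over the subject's properties."""
    scores, total_w = defaultdict(float), 0.0
    for p in properties:
        w = property_weight.get(p, 0.0)
        total_w += w
        for t, prob in property_type_dist.get(p, {}).items():
            scores[t] += w * prob
    return {t: s / total_w for t, s in scores.items()} if total_w else {}

def infer_types(properties, threshold=0.5):
    """Assign every type whose combined score clears the threshold."""
    return [t for t, s in sdtype_scores(properties).items() if s >= threshold]

print(infer_types(["dbo:birthPlace"]))   # ['dbo:Person']
```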


Author(s):  
Ana C. F. Fabrin ◽  
Ricardo D. Molin ◽  
Dimas I. Alves ◽  
Renato Machado ◽  
Fabio M. Bayer ◽  
...  
