Measuring Similarity of Deforestation Patterns in Time and Space across Differences in Resolution

Geomatics ◽  
2021 ◽  
Vol 1 (4) ◽  
pp. 464-495
Author(s):  
Desi Suyamto ◽  
Lilik Prasetyo ◽  
Yudi Setiawan ◽  
Arief Wijaya ◽  
Kustiyo Kustiyo ◽  
...  

This article demonstrates an easily applicable method for measuring the similarity between a pair of point patterns, which applies to spatial or temporal data sets. The measurement uses similarity-based pattern analysis as an alternative to conventional approaches, which typically rely on straightforward point-to-point matching. In our approach, two geometric features (the distance and angle from the centroid) are calculated for each point data set and represented as probability density functions (PDFs). The PDF similarity of each geometric feature is measured using nine metrics, with values ranging from zero (very contrasting) to one (exactly the same). The overall similarity is defined as the average of the distance and angle similarities. In terms of sensibility, the method compared two pairs of hypothetical patterns at the level of human visual perception and produced reasonable results. In terms of sensitivity to spatial and temporal displacements from the hypothetical origin, the method also measures the similarity of spatial and temporal patterns consistently. The application of the method to assessing spatial and temporal pattern similarity between two deforestation data sets with different resolutions is also discussed.
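As a concrete illustration of the pipeline above, the sketch below computes the two centroid-based features for each pattern, bins them into PDFs, and averages their similarities. The bin count and the choice of histogram intersection as the metric are assumptions made here for illustration (the paper evaluates nine metrics):

```python
import numpy as np

def pattern_similarity(points_a, points_b, bins=16):
    """Similarity of two 2-D point patterns from centroid-based PDFs.

    Minimal sketch of the paper's idea: per pattern, the distance and
    angle of every point from the centroid are histogrammed as PDFs, and
    the two PDF similarities are averaged.
    """
    def feature_pdfs(points):
        pts = np.asarray(points, dtype=float)
        offsets = pts - pts.mean(axis=0)            # centre on the centroid
        dist = np.hypot(offsets[:, 0], offsets[:, 1])
        angle = np.arctan2(offsets[:, 1], offsets[:, 0])
        dist = dist / dist.max()                    # scale distances to [0, 1]
        d_hist, _ = np.histogram(dist, bins=bins, range=(0.0, 1.0))
        a_hist, _ = np.histogram(angle, bins=bins, range=(-np.pi, np.pi))
        return d_hist / d_hist.sum(), a_hist / a_hist.sum()

    da, aa = feature_pdfs(points_a)
    db, ab = feature_pdfs(points_b)
    # Histogram intersection: 0 = very contrasting, 1 = exactly the same.
    sim_dist = np.minimum(da, db).sum()
    sim_angle = np.minimum(aa, ab).sum()
    return 0.5 * (sim_dist + sim_angle)
```

Because the distance feature is normalised by its maximum, the score is invariant to uniform scaling, while anisotropic distortions lower it.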

1994 ◽  
Vol 24 (9) ◽  
pp. 1782-1790 ◽  
Author(s):  
Jean-François Dhôte ◽  
Éric de Hercé

A hyperbolic model is proposed for the construction of sets of height–diameter curves in even-aged stands. On the basis of 86 samples from pure stands of beech (Fagus sylvatica L.) and oak (Quercus petraea (Matt.) Liebl.), this model adequately fitted the geometry of the data sets. The qualitative behaviour is correct over the whole range of the independent variable, and each parameter characterizes a significant geometric feature of the curve: the three parameters correspond to the asymptote, the slope at the origin, and the curve shape (curvature). The latter two are fairly stable over a large range of ages (30–150 years) and stand densities. A fitting procedure is proposed, through step-by-step reductions of the model, to overcome the limitations of poorly conditioned samples; only the asymptote, which is very close to top height, needs to be estimated from each data set. The time series of estimates evolve satisfactorily over a large age interval. We interpret the shape of the curve sets as the consequence of dominance on height and diameter growth in hierarchized stands.
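The abstract does not reproduce the model's exact equation, so the sketch below fits a generic two-parameter hyperbola h = 1.3 + aD/(b + D), whose asymptote is 1.3 + a and whose slope at the origin is a/b, as a stand-in; the fit linearises the relation and solves it by ordinary least squares:

```python
import numpy as np

def fit_hyperbolic_hd(diam, height):
    """Fit h = 1.3 + a*D / (b + D) to height-diameter pairs.

    A stand-in for the paper's model (whose exact form the abstract does
    not give). Linearises 1/(h - 1.3) = (b/a)*(1/D) + 1/a and solves by
    ordinary least squares.
    """
    D = np.asarray(diam, float)
    h = np.asarray(height, float)
    slope, intercept = np.polyfit(1.0 / D, 1.0 / (h - 1.3), 1)
    a = 1.0 / intercept   # asymptotic height gain above breast height
    b = slope * a         # shape parameter; slope at the origin is a/b
    return a, b

# Usage: recover known parameters from noise-free synthetic data.
D = np.linspace(5, 60, 40)
h = 1.3 + 30.0 * D / (12.0 + D)
a, b = fit_hyperbolic_hd(D, h)
```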


2013 ◽  
Vol 2013 ◽  
pp. 1-12
Author(s):  
Yong Chen ◽  
Lei Shang ◽  
Eric Hu

To address the unsatisfactory accuracy of SIFT (scale-invariant feature transform) in complicated image matching, a novel matching method based on multiple layered strategies is proposed in this paper. First, the coarse data sets are filtered by Euclidean distance. Next, a geometric feature consistency constraint is adopted to refine the corresponding feature points, discarding points with uncoordinated slope values. Third, a scale and orientation clustering constraint method is proposed to precisely choose the matching points; the scale and orientation differences are employed as the elements of k-means clustering. Thus, two sets of feature points and the refined data set are obtained. Finally, the 3δ rule of the refined data set is used to search all the remaining points. These multiple layered strategies make full use of feature constraint rules to improve the matching accuracy of the SIFT algorithm. The proposed method is compared to the traditional SIFT descriptor in various tests. The experimental results show that it outperforms the traditional SIFT algorithm with respect to correction ratio and repeatability.
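The first two filtering layers can be sketched with plain NumPy. The distance threshold and the use of a median-slope test for the geometric consistency constraint are illustrative assumptions, not the authors' exact parameters:

```python
import numpy as np

def filter_matches(pts_a, pts_b, desc_a, desc_b,
                   dist_thresh=0.6, slope_tol=0.2):
    """Two of the paper's filtering layers, sketched with plain NumPy.

    pts_* are keypoint coordinates, desc_* their descriptor vectors
    (e.g. SIFT). Returns index pairs of the surviving matches.
    """
    # Layer 1: coarse filtering by descriptor Euclidean distance.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    keep = d[np.arange(len(desc_a)), nearest] < dist_thresh
    ia = np.flatnonzero(keep)
    ib = nearest[keep]

    # Layer 2: geometric consistency - discard pairs whose connecting-line
    # slope (as an angle) deviates from the median slope of all candidates.
    dx = pts_b[ib, 0] - pts_a[ia, 0]
    dy = pts_b[ib, 1] - pts_a[ia, 1]
    slopes = np.arctan2(dy, dx)
    ok = np.abs(slopes - np.median(slopes)) < slope_tol
    return ia[ok], ib[ok]
```

Under a pure translation between images, all correct matches share one slope, so a single geometric outlier is rejected by the second layer.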


2012 ◽  
Vol 433-440 ◽  
pp. 4725-4729
Author(s):  
Qi Ming Wei ◽  
Wei Yong Wu

Automatic 3-D unorganized point data registration, which maps 3-D data measured from multiple viewpoints into a common coordinate space, is a key technique for reverse engineering. In order to improve data matching speed, a parallel detection algorithm based on local geometric features and the CUDA architecture is proposed. First, local geometric feature points are extracted from the original data sets; then the correspondence between them is computed; finally, the registration algorithm is implemented in a parallel pattern on CUDA. Comparison experiments show that the algorithm is efficient and robust against noise.
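The correspondence step can be sketched serially as a nearest-neighbour search over feature vectors; the paper's CUDA parallelisation of this search is not reproduced here:

```python
import numpy as np

def correspondences(src_feat, dst_feat):
    """Nearest-neighbour correspondence between two feature-point sets.

    A serial NumPy stand-in for the paper's correspondence step: for
    each source feature, return the index of the closest target feature.
    """
    d = np.linalg.norm(src_feat[:, None, :] - dst_feat[None, :, :], axis=2)
    return d.argmin(axis=1)
```

On the GPU, each row of the distance matrix would be handled by an independent thread block, which is what makes the search parallelisable.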


2021 ◽  
Author(s):  
Tiziano Tirabassi ◽  
Daniela Buske

The recording of air pollution concentration values involves the measurement of a large volume of data. Generally, automatic selectors and explicators are provided by statistics. The use of the Representative Day allows the compilation of large amounts of data into a compact format that supplies meaningful information on the whole data set. The Representative Day (RD) is a real day that best represents (in the least squares sense) the set of daily trends of the considered time series. The Least Representative Day (LRD), on the contrary, is a real day that worst represents (in the least squares sense) the set of daily trends of the same time series. The identification of the RD and LRD can prove to be a very important tool for identifying both anomalous and standard behaviors of pollutants within the selected period and for establishing measures of prevention, limitation and control. Two application examples, in two different areas, are presented, relating to meteorological data and to SO2 and O3 concentration data sets.
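A minimal sketch of the RD/LRD selection, reading "best represents the set of daily trends" as the smallest total squared distance to all other days; this is an interpretation, since the abstract does not spell out the exact least squares criterion:

```python
import numpy as np

def representative_days(daily):
    """Pick the Representative Day (RD) and Least Representative Day (LRD).

    `daily` is an (n_days, n_values) array of daily profiles, e.g. hourly
    concentrations. RD minimises, and LRD maximises, the total squared
    distance to every other day's profile.
    """
    X = np.asarray(daily, float)
    # Pairwise sum-of-squares distance between every pair of days.
    diff = X[:, None, :] - X[None, :, :]
    cost = (diff ** 2).sum(axis=2).sum(axis=1)
    return int(cost.argmin()), int(cost.argmax())
```

Both outputs are indices of real days in the series, consistent with the RD/LRD definitions above.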


2020 ◽  
Vol 86 (1) ◽  
pp. 23-31
Author(s):  
Hessah Albanwan ◽  
Rongjun Qin ◽  
Xiaohu Lu ◽  
Mao Li ◽  
Desheng Liu ◽  
...  

The current practice in land cover/land use change analysis relies heavily on the individually classified maps of the multi-temporal data set. Due to varying acquisition conditions (e.g., illumination, sensors, seasonal differences), the classification maps yielded are often too inconsistent through time for robust statistical analysis. 3D geometric features have been shown to be stable for assessing differences across the temporal data set. Therefore, in this article we investigate the use of multi-temporal orthophotos and digital surface models derived from satellite data for spatiotemporal classification. Our approach consists of two major steps: generating per-class probability distribution maps using the random-forest classifier with limited training samples, and making spatiotemporal inferences using an iterative 3D spatiotemporal filter operating on the per-class probability maps. Our experimental results demonstrate that the proposed methods can consistently improve the individual classification results by 2%–6% and thus can be an important postclassification refinement approach.
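The second step can be sketched as an iterative box filter over the per-class probability stack. The real filter's weights and neighbourhood geometry are not given in the abstract, so this is only illustrative (note the wrap-around at array edges from `np.roll`, acceptable for a sketch):

```python
import numpy as np

def spatiotemporal_refine(prob, n_iter=3):
    """Iteratively smooth per-class probability maps over time and space.

    `prob` has shape (T, H, W, C): per-epoch, per-pixel class
    probabilities (e.g. from a random forest). Returns refined label
    maps of shape (T, H, W).
    """
    p = np.asarray(prob, float).copy()
    for _ in range(n_iter):
        # Temporal smoothing: average each epoch with its neighbours.
        t_avg = (np.roll(p, 1, axis=0) + p + np.roll(p, -1, axis=0)) / 3.0
        # Spatial smoothing: 4-neighbour average within each map.
        p = (np.roll(t_avg, 1, axis=1) + np.roll(t_avg, -1, axis=1)
             + np.roll(t_avg, 1, axis=2) + np.roll(t_avg, -1, axis=2)
             + t_avg) / 5.0
        p = p / p.sum(axis=-1, keepdims=True)  # renormalise to probabilities
    return p.argmax(axis=-1)
```

A pixel misclassified in one epoch but surrounded (in space and time) by confident agreement gets pulled back to the consensus label, which is the intuition behind the 2%–6% gain reported above.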


2021 ◽  
Vol 55 (1) ◽  
pp. 55-71
Author(s):  
R. Kirsten ◽  
I. N. Fabris-Rotelli

Two spatial data sets are considered similar if they originate from the same stochastic process in terms of their spatial structure. Many tests have been developed over recent years to test the similarity of certain types of spatial data, such as spatial point patterns, geostatistical data and images. This research proposes a generic spatial similarity test able to handle various types of spatial data, for example images (modelled spatially), point patterns, marked point patterns, geostatistical data and lattice patterns. A simulation study is conducted for each type of spatial data set. The simulation study showed that the proposed test is not sensitive to the user-defined resolution of the pixel image representation, and that it performs well on lattice data, on some of the unmarked point patterns, and on marked point patterns with discrete marks. We illustrate the test on property prices in the City of Cape Town and the City of Johannesburg, South Africa.


2018 ◽  
Vol 154 (2) ◽  
pp. 149-155
Author(s):  
Michael Archer

1. Yearly records of worker Vespula germanica (Fabricius) taken in suction traps at Silwood Park (28 years) and at Rothamsted Research (39 years) are examined. 2. Using the autocorrelation function (ACF), a significant negative 1-year lag followed by a lesser non-significant positive 2-year lag was found in all, or parts of, each data set, indicating an underlying population dynamic of a 2-year cycle with a damped waveform. 3. The minimum number of years before the 2-year cycle with damped waveform appeared varied between 17 and 26, or the cycle was not found in some data sets. 4. Ecological factors delaying or preventing the occurrence of the 2-year cycle are considered.
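The diagnostic in point 2 can be reproduced with the standard sample ACF estimator (textbook code, not the paper's): a damped 2-year cycle shows up as a negative lag-1 value followed by a smaller positive lag-2 value.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function of a yearly count series.

    Returns correlations for lags 0..max_lag; lag 0 is 1 by definition.
    """
    x = np.asarray(x, float) - np.mean(x)
    denom = np.sum(x ** 2)
    return np.array([np.sum(x[: len(x) - k] * x[k:]) / denom
                     for k in range(max_lag + 1)])
```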


2018 ◽  
Vol 21 (2) ◽  
pp. 117-124 ◽  
Author(s):  
Bakhtyar Sepehri ◽  
Nematollah Omidikia ◽  
Mohsen Kompany-Zareh ◽  
Raouf Ghavami

Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Materials & Methods: Three data sets, including 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for all three data sets, CoMFA models with all CoMFA descriptors were created; then, by applying each variable selection method, a new CoMFA model was developed, so that 9 CoMFA models were built for each data set. The results show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying 5 variable selection approaches, including FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Result & Conclusion: Among them, SPA-jackknife removes most of the variables, while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS run in a few seconds. Also, applying FFD, SRD-FFD, IVE-PLS, or SRD-UVE-PLS preserves CoMFA contour map information for both fields.


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, which is motivated by the Fourier conversion. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependency of the signal and produces encoded sequences. The sequences, once arranged into a 2D array, can represent the fingerprints of the signals. The benefit of such a transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model is therefore a combination of LSTM and CNN. We evaluate the model on two data sets. For the first data set, which is more standardized than the other, our model outperforms previous works or is at least their equal. For the second data set, we devise schemes to generate training and testing data by varying the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that the accuracy is over 95% in some cases. We also analyze the effect of the parameters on the performance.
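The 1D-to-2D re-arrangement that lets an image CNN consume the sensor signal can be sketched as below. Plain sliding windows stand in for the LSTM encoder here, which is a simplification: in the paper each row would be an LSTM-encoded sequence rather than a raw window.

```python
import numpy as np

def signal_to_2d(signal, width):
    """Arrange a 1-D sensor signal into a 2-D 'fingerprint' array.

    Each row is a window of `width` consecutive samples; stacking the
    windows yields an image-like array a CNN can classify.
    """
    x = np.asarray(signal, float)
    n_rows = len(x) - width + 1
    return np.stack([x[i:i + width] for i in range(n_rows)])
```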


2019 ◽  
Vol 73 (8) ◽  
pp. 893-901
Author(s):  
Sinead J. Barton ◽  
Bryan M. Hennelly

Cosmic ray artifacts may be present in all photo-electric readout systems. In spectroscopy, they present as random unidirectional sharp spikes that distort spectra and may have an effect on post-processing, possibly affecting the results of multivariate statistical classification. A number of methods have previously been proposed to remove cosmic ray artifacts from spectra, but the goal of removing the artifacts while making no other change to the underlying spectrum is challenging. One of the most successful and commonly applied methods for the removal of cosmic ray artifacts involves the capture of two sequential spectra that are compared in order to identify spikes. The disadvantage of this approach is that at least two recordings are necessary, which may be problematic for dynamically changing spectra, and which can reduce the signal-to-noise (S/N) ratio when compared with a single recording of equivalent duration, due to the inclusion of two instances of read noise. In this paper, a cosmic ray artifact removal algorithm is proposed that works in a similar way to the double acquisition method but requires only a single capture, so long as a data set of similar spectra is available. The method employs normalized covariance in order to identify a similar spectrum in the data set, from which a direct comparison reveals the presence of cosmic ray artifacts, which are then replaced with the corresponding values from the matching spectrum. The advantage of the proposed method over the double acquisition method is investigated in the context of the S/N ratio, and the method is applied to various data sets of Raman spectra recorded from biological cells.
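A minimal sketch of the single-capture approach described above, using Pearson correlation as the normalised covariance and a robust spike threshold whose value is an assumption made here:

```python
import numpy as np

def remove_cosmic_rays(target, library, thresh=5.0):
    """Single-capture cosmic-ray removal by comparison with a similar spectrum.

    Finds the library spectrum most similar to `target` by normalised
    covariance, flags points where the target exceeds the match by
    `thresh` robust standard deviations of the residual (spikes are
    unidirectional, so only positive excursions are flagged), and
    replaces them with the match's values.
    """
    t = np.asarray(target, float)
    L = np.asarray(library, float)
    # Normalised covariance (correlation) with each candidate spectrum.
    tc = t - t.mean()
    Lc = L - L.mean(axis=1, keepdims=True)
    corr = (Lc @ tc) / (np.linalg.norm(Lc, axis=1) * np.linalg.norm(tc))
    match = L[corr.argmax()]
    resid = t - match
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust SD
    spikes = resid > thresh * sigma
    cleaned = t.copy()
    cleaned[spikes] = match[spikes]
    return cleaned, spikes
```

The robust (median-based) scale estimate keeps the spike itself from inflating the threshold, which an ordinary standard deviation would do.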

