On Geometric Alignment in Low Doubling Dimension

Author(s):  
Hu Ding ◽  
Mingquan Ye

Many real-world problems can be formulated as the alignment of two geometric patterns. Previously, a great deal of research focused on the alignment of 2D or 3D patterns, especially in the field of computer vision. Recently, the alignment of geometric patterns in high dimensions has found several novel applications and attracted increasing attention. However, the research is still rather limited in terms of algorithms. To the best of our knowledge, most existing approaches for high-dimensional alignment are simple extensions of their 2D and 3D counterparts and often suffer from issues such as high complexity. In this paper, we propose an effective framework to compress high-dimensional geometric patterns while approximately preserving the alignment quality. As a consequence, existing alignment approaches can be applied to the compressed patterns, and the time complexity is significantly reduced. Our idea is inspired by the observation that high-dimensional data often has a low intrinsic dimension. We adopt the widely used notion of "doubling dimension" to measure the extent of our compression and the resulting approximation. Finally, we test our method on both random and real datasets; the experimental results reveal that running the alignment algorithm on compressed patterns achieves quality similar to that on the original patterns, while the running times (including the time for compression) are substantially lower.
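The compression idea can be illustrated with a toy grid-snapping sketch. This is an assumption for illustration only, not the paper's framework: one weighted representative is kept per occupied grid cell, and if the data has low doubling dimension the number of occupied cells stays small.

```python
import math
from collections import defaultdict

def grid_compress(points, eps):
    """Snap points to an eps-grid and keep one weighted representative
    per occupied cell -- a toy stand-in for intrinsic-dimension-aware
    compression (hypothetical helper, not the paper's algorithm)."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(math.floor(c / eps) for c in p)
        cells[key].append(p)
    reps = []
    for members in cells.values():
        d = len(members[0])
        centroid = tuple(sum(m[i] for m in members) / len(members)
                         for i in range(d))
        reps.append((centroid, len(members)))  # (representative, weight)
    return reps

# Two tight clusters collapse to two weighted representatives.
pts = [(0.01, 0.02), (0.03, 0.01), (0.6, 0.6), (0.7, 0.55)]
reps = grid_compress(pts, eps=0.5)
```

An alignment algorithm run on the weighted representatives then touches far fewer points than on the original pattern.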

2021 ◽  
Author(s):  
R. Priyadarshini ◽  
K. Anuratha ◽  
N. Rajendran ◽  
S. Sujeetha

An anomaly is an uncommon observation, an outlier, i.e., a nonconforming case. The Oxford Dictionary of Mathematics defines an anomaly as an unusual and erroneous observation that does not follow the general pattern of the drawn population. Anomaly detection is a data-mining process that aims at finding data points or patterns that do not conform to the overall pattern of the data. The behaviour and impact of anomalies have been studied in areas such as network security, finance, healthcare, and earth sciences. Proper detection and prediction of anomalies are of great importance, as these rare observations may carry significant information. In today's financial world, enterprise data is digitized and stored in the cloud, so there is a significant need to detect anomalies in financial data to help enterprises deal with the huge volume of auditing. Corporations and enterprises conduct audits on large numbers of ledgers and journal entries, and the monitoring of these audits is mostly performed manually. Proper anomaly detection is needed for the high-dimensional data published in ledger format for auditing purposes. This work aims at analyzing and predicting unusual fraudulent financial transactions by employing several machine learning and deep learning methods. If an anomaly such as manipulation or tampering of data is detected, such anomalies and errors can be identified and marked with proper proof with the help of the machine-learning-based algorithms. The accuracy of the prediction is increased by 7% by implementing the proposed prediction models.
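As a point of reference for what "nonconforming case" means in practice, a minimal statistical baseline (an illustrative z-score rule, not the paper's ML/DL models) flags ledger amounts that deviate strongly from the bulk of the data:

```python
import statistics

def flag_anomalies(amounts, z_thresh=2.0):
    """Flag indices of ledger amounts whose z-score exceeds a threshold.
    A toy baseline for anomaly detection; real audit pipelines would use
    far richer features and learned models."""
    mu = statistics.mean(amounts)
    sigma = statistics.pstdev(amounts)
    if sigma == 0:
        return []
    return [i for i, a in enumerate(amounts)
            if abs(a - mu) / sigma > z_thresh]

# A single tampered entry dwarfs the routine transactions.
ledger = [120, 115, 130, 118, 122, 125, 9000]
flagged = flag_anomalies(ledger, z_thresh=2.0)
```

Learned models replace the fixed threshold with a decision boundary fitted to historical audit data.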


Author(s):  
Lavanya K ◽  
L.S.S. Reddy ◽  
B. Eswara Reddy

Multiple imputation (MI) is predominantly applied in processes that involve the transfer of huge chunks of missing data. Multivariate data that follow traditional statistical models suffer greatly from the inadequate availability of pertinent data. The field of distributed-computing research faces its biggest hurdle in the form of insufficient high-dimensional multivariate data. It mainly deals with the analysis of parallel input problems found in cloud-computing networks in general and the evaluation of high-performance computing in particular. In fact, it is a tough task to utilize parallel multiple-imputation methods in a way that achieves remarkable performance while allowing huge datasets to scale. In this regard, it is essential that a credible data system is developed and a decomposition strategy is used to partition the workload with minimum data dependence. Subsequently, moderate synchronization and/or meagre communication overhead is required when deploying parallel imputation methods, so as to achieve scale as well as more processes. The present article proposes several novel applications for better efficiency. First, it suggests distributed serial regression multiple imputation to enhance the efficiency of the imputation task on high-dimensional multivariate normal data. Next, the process is run with three diverse parallel back ends: multiple imputation using the socket method to serve serial regression, the fork method to distribute work over workers, and the same experiments in a dynamic structure with a load-balance mechanism. In the end, the set of distributed MI methods is used to experimentally analyze the amplitude of imputation scores across three probable scenarios in the range of 1:500.
Further, the study makes an important observation that, owing to the efficiency of the imputation methods, the data can be handled across missing rates of 10% to 50%, low to high, for datasets of between 1,000 and 100,000 samples. The experiments are conducted in a cloud environment and demonstrate that a decent speed-up can be generated by lessening the repetitive communication between processors.
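The core serial-regression imputation step can be sketched in a single-chain form. This is a minimal sketch under simplifying assumptions (one predictor, least-squares fill-in); the paper's distributed socket/fork back ends and load balancing are omitted:

```python
def regression_impute(x, y):
    """Impute missing y values (None) via simple least-squares regression
    on x -- a toy single-chain version of regression imputation."""
    obs = [(xi, yi) for xi, yi in zip(x, y) if yi is not None]
    n = len(obs)
    mx = sum(xi for xi, _ in obs) / n
    my = sum(yi for _, yi in obs) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in obs)
         / sum((xi - mx) ** 2 for xi, _ in obs))
    a = my - b * mx
    # Fill each gap with the fitted value; observed entries pass through.
    return [yi if yi is not None else a + b * xi for xi, yi in zip(x, y)]

x = [1, 2, 3, 4, 5]
y = [2, 4, None, 8, 10]
filled = regression_impute(x, y)
```

A distributed variant would partition the columns (or imputation chains) across workers and synchronize only the fitted coefficients, which is where the communication savings reported above come from.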


2019 ◽  
Vol 11 (2) ◽  
pp. 47-62 ◽  
Author(s):  
Xinchao Huang ◽  
Zihan Liu ◽  
Wei Lu ◽  
Hongmei Liu ◽  
Shijun Xiang

Detecting digital audio forgeries is a significant research focus in the field of audio forensics. In this article, the authors focus on a special form of digital audio forgery, copy-move, and propose a fast and effective method to detect doctored audio. First, the method segments the input audio into syllables by voice activity detection and syllable detection. Second, it takes points in the frequency domain as features by applying the discrete Fourier transform (DFT) to each audio segment. It then sorts the segments by these features to obtain a sorted list of audio segments. In the end, each segment is compared only with a few adjacent segments in the sorted list, so the time complexity is decreased. Comparisons with other state-of-the-art methods show that the proposed method can verify the authenticity of the input audio and locate the forged position quickly and effectively.
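The sort-then-compare-neighbours trick can be sketched as follows. This is a toy version under stated assumptions (naive DFT, exact feature equality as the match test); the paper's syllable segmentation and robust matching are not reproduced:

```python
import cmath

def dft_mag(seg):
    """Magnitude spectrum of one segment (naive O(n^2) DFT, fine for a sketch)."""
    n = len(seg)
    return tuple(
        round(abs(sum(seg[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                      for t in range(n))), 6)
        for k in range(n))

def find_copy_move(segments):
    """Sort segments by their DFT features, then compare only adjacent
    entries in the sorted list: duplicated (copy-moved) segments land
    next to each other, avoiding an all-pairs comparison."""
    feats = sorted((dft_mag(s), i) for i, s in enumerate(segments))
    pairs = []
    for (f1, i), (f2, j) in zip(feats, feats[1:]):
        if f1 == f2:
            pairs.append(tuple(sorted((i, j))))
    return pairs

# Segment 2 is a copy of segment 0.
segs = [[0, 1, 0, -1], [1, 1, 1, 1], [0, 1, 0, -1], [2, 0, 2, 0]]
pairs = find_copy_move(segs)
```

Sorting costs O(m log m) in the number of segments, so the dominant all-pairs O(m²) comparison cost disappears, which is the complexity reduction the abstract refers to.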


2013 ◽  
Vol 705 ◽  
pp. 596-601
Author(s):  
Xiao Jun Zhang ◽  
Chong Kang ◽  
Yi Chao Zhao ◽  
Yu Yuan Liu ◽  
Li Ming Fan ◽  
...  

In the field of geophysical prospecting, how to obtain high-precision, low-scale geomagnetic charts has always been a research focus. In this article, an application based on the ordinary Kriging method is validated for interpolation. By interpolating and evaluating the original geomagnetic grid with the Kriging algorithm, a new geomagnetic grid can be estimated by exploiting the transitivity of the data. After that, a high-precision, low-scale geomagnetic chart can finally be drawn. At the same time, the feasibility of this kind of method is verified.
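A minimal 1D simple-kriging sketch conveys the mechanics: weights come from a covariance linear system, and the prediction is a weighted sum of samples. The Gaussian covariance model and zero-mean assumption here are illustrative choices, not details taken from the article:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def simple_krige(xs, zs, x0, cov):
    """Estimate the field at x0 from samples (xs, zs): weights w solve
    C w = c0, and the prediction is sum(w_i * z_i) (zero-mean form)."""
    C = [[cov(abs(a - b)) for b in xs] for a in xs]
    c0 = [cov(abs(a - x0)) for a in xs]
    w = solve(C, c0)
    return sum(wi * zi for wi, zi in zip(w, zs))

cov = lambda h: math.exp(-(h / 2.0) ** 2)  # assumed Gaussian covariance
z_hat = simple_krige([0.0, 1.0, 3.0], [5.0, 7.0, 4.0], 0.0, cov)
z_far = simple_krige([0.0, 1.0, 3.0], [5.0, 7.0, 4.0], 100.0, cov)
```

Kriging honours the data exactly at sample locations and relaxes toward the mean far from them, which is what makes it attractive for densifying a geomagnetic grid.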


2018 ◽  
Vol 27 (147) ◽  
pp. 170097 ◽  
Author(s):  
Harm A.W.M. Tiddens ◽  
Wieying Kuo ◽  
Marcel van Straten ◽  
Pierluigi Ciet

Until recently, functional tests were the most important tools for the diagnosis and monitoring of lung diseases in the paediatric population. Chest imaging has gained considerable importance for paediatric pulmonology as a diagnostic and monitoring tool to evaluate lung structure over the past decade. Since January 2016, a large number of papers have been published on innovations in chest computed tomography (CT) and/or magnetic resonance imaging (MRI) technology, acquisition techniques, image analysis strategies and their application in different disease areas. Together, these papers underline the importance and potential of chest imaging and image analysis for today's paediatric pulmonology practice. The focus of this review is chest CT and MRI, as these are, and will be, the modalities that will be increasingly used by most practices. Special attention is given to standardisation of image acquisition, image analysis and novel applications in chest MRI. The publications discussed underline the need for the paediatric pulmonology community to implement and integrate state-of-the-art imaging and image analysis modalities into their structure–function laboratory for the benefit of their patients.


Author(s):  
Wei He ◽  
Liyuan Zhang ◽  
Huamin Yang ◽  
Zhengang Jiang ◽  
Huimao Zhang ◽  
...  

Graph cuts is an image segmentation method in which the region and boundary information of objects can be exploited comprehensively. Because of the complex spatial characteristics of high-dimensional images, the time complexity and segmentation accuracy of graph-cuts methods for high-dimensional images need to be improved. This paper proposes a new three-dimensional multilevel banded graph-cuts model to increase accuracy and reduce complexity. First, the three-dimensional image is viewed as a high-dimensional space in which three-dimensional network graphs are constructed, and a pyramid image sequence is created by a Gaussian-pyramid downsampling procedure. Then, a new energy function is built according to the spatial characteristics of the three-dimensional image, in which adjacent points are expressed using a 26-connected system. Finally, the banded graph is constructed on a narrow band around the object/background boundary, and the graph-cuts method is performed on the banded graph layer by layer to obtain the object region sequentially. To verify the proposed method, we performed an experiment on a set of three-dimensional colon CT images and compared the results with the local-region active contour and Chan–Vese models. The experimental results demonstrate that the proposed method can accurately segment colon tissues from three-dimensional abdominal CT images. The segmentation accuracy is increased to 95.1%, and the time complexity is reduced by about 30% relative to the other two methods.
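The multilevel banded strategy can be sketched in 2D with a toy stand-in: segment coarsely on a downsampled image, then re-decide labels only in a narrow band around the coarse boundary. Thresholding here stands in for the graph-cuts solve, and the 2D/8-connected setting stands in for the paper's 3D/26-connected one; both substitutions are assumptions for illustration:

```python
def downsample(img):
    """Halve each dimension by 2x2 block averaging (toy pyramid step)."""
    h, w = len(img), len(img[0])
    return [[(img[2*i][2*j] + img[2*i][2*j+1]
              + img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 4.0
             for j in range(w // 2)] for i in range(h // 2)]

def band_refine(img, coarse_mask, thresh):
    """Upsample the coarse mask and re-decide labels only in a narrow band
    around the coarse boundary; pixels far from it keep the coarse label.
    Thresholding stands in for running graph cuts on the banded graph."""
    h, w = len(img), len(img[0])
    fine = [[coarse_mask[i // 2][j // 2] for j in range(w)] for i in range(h)]
    out = [row[:] for row in fine]
    for i in range(h):
        for j in range(w):
            near_boundary = any(
                0 <= i + di < h and 0 <= j + dj < w
                and fine[i + di][j + dj] != fine[i][j]
                for di in (-1, 0, 1) for dj in (-1, 0, 1))
            if near_boundary:
                out[i][j] = 1 if img[i][j] > thresh else 0
    return out

img = [[9, 9, 1, 1],
       [9, 9, 1, 1],
       [1, 1, 1, 1],
       [1, 1, 1, 1]]
coarse = [[1 if v > 5 else 0 for v in row] for row in downsample(img)]
refined = band_refine(img, coarse, 5)
```

Because the expensive per-pixel decision runs only on the band, the work per pyramid level scales with the boundary size rather than the full volume, which is the source of the reported complexity reduction.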


Author(s):  
Desmond Schmidt

Multi-Version Documents or MVDs, as described in Schmidt and Colomb (Schm09), provide a simple format for representing overlapping structures in digital text. They permit the reuse of existing technologies, such as XML, to encode the content of individual versions, while allowing overlapping hierarchies (separate, partial or conditional) and textual variation (insertions, deletions, alternatives and transpositions) to exist within the same document. Most desired operations on MVDs may be performed by simple algorithms in linear time. However, creating and editing MVDs is a much harder and more complex operation that resembles the multiple-sequence alignment problem in biology. The inclusion of the transposition operation into the alignment process makes this a hard problem, with no solutions known to be both optimal and practical. However, a suitable heuristic algorithm can be devised, based in part on the most recent biological alignment programs, whose time complexity is quadratic in the worst case, and is often much faster. The results are satisfactory both in terms of speed and alignment quality. This means that MVDs can be considered as a practical and editable format suitable for representing many cases of overlapping structure in digital text.
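The flavour of merging versions into a single variant structure can be conveyed with a toy sketch. This is not Schmidt's algorithm: Python's `difflib` matcher stands in for the pairwise alignment heuristic, transpositions are not handled, and the `(fragment, versions)` segments are a simplified stand-in for an MVD:

```python
from difflib import SequenceMatcher

def merge_versions(a, b):
    """Merge two token lists into (fragment, versions) segments -- a toy
    variant graph. Shared runs get both version labels; divergent runs
    get only the label of the version they occur in."""
    sm = SequenceMatcher(None, a, b, autojunk=False)
    segs = []
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            segs.append((tuple(a[i1:i2]), {"A", "B"}))
        else:
            if i1 < i2:
                segs.append((tuple(a[i1:i2]), {"A"}))
            if j1 < j2:
                segs.append((tuple(b[j1:j2]), {"B"}))
    return segs

segs = merge_versions("the quick brown fox".split(),
                      "the quick red fox".split())
```

Each version can be reconstructed in linear time by concatenating the fragments carrying its label, which mirrors the "simple algorithms in linear time" property of MVD read operations.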


2019 ◽  
Vol 2019 ◽  
pp. 1-13 ◽  
Author(s):  
Haitao Song ◽  
Guangming Tang ◽  
Yifeng Sun ◽  
Zhanzhan Gao

Steganographic security is the research focus of steganography. Current steganography research emphasizes the design of steganographic algorithms, but theoretical research on steganographic security measures is relatively lagging. This paper proposes a feasible image steganographic security measure based on high-dimensional KL divergence. It is proved that a steganographic security measure based on higher-dimensional KL divergence is more accurate. The correlation between neighbourhood pixels is analyzed from the principles of the imaging process and content characteristics, and it is concluded that 9-dimensional probability statistics are effective enough to serve as a steganographic security measure. Then, to reduce the computational complexity of high-dimensional probability statistics and improve the feasibility of the security measure, a dimension-reduction scheme is proposed that applies gradients to describe image textures. Experiments show that the proposed steganographic security measure is feasible, effective, and more accurate than the measure based on 4-dimensional probability statistics.
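The underlying measure can be illustrated at the lowest useful dimension: the KL divergence between joint distributions of adjacent pixel pairs in a cover and a stego image. The 2-dimensional pair statistic and the toy images are illustrative assumptions; the paper argues for statistics up to 9 dimensions:

```python
import math
from collections import Counter

def pair_histogram(img):
    """Joint distribution of horizontally adjacent pixel pairs
    (a 2-dimensional statistic standing in for higher-dimensional ones)."""
    pairs = Counter((row[j], row[j + 1])
                    for row in img for j in range(len(row) - 1))
    total = sum(pairs.values())
    return {k: v / total for k, v in pairs.items()}

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) with a small floor so unseen pairs don't blow up."""
    keys = set(p) | set(q)
    return sum(p.get(k, eps) * math.log(p.get(k, eps) / q.get(k, eps))
               for k in keys)

cover = [[10, 10, 11, 11], [10, 11, 11, 12]]
stego = [[11, 10, 10, 11], [10, 10, 11, 13]]  # toy LSB-style perturbation
pc, ps = pair_histogram(cover), pair_histogram(stego)
```

A secure embedding keeps the divergence between cover and stego statistics small; higher-dimensional joint statistics catch correlations that a first-order histogram misses, which is why the higher-dimensional measure is more accurate.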


2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Alex J. Washburn ◽  
Ward C. Wheeler

Abstract Background Given a binary tree $\mathcal{T}$ of n leaves, each leaf labeled by a string of length at most k, and a binary string alignment function ⊗, an implied alignment can be generated to describe the alignment of a dynamic homology for $\mathcal{T}$. This is done by first decorating each node of $\mathcal{T}$ with an alignment context using ⊗ in a post-order traversal, then, during a subsequent pre-order traversal, inferring on which edges insertion and deletion events occurred using those internal node decorations. Results Previous descriptions of the implied alignment algorithm suggest a technique of “back-propagation” with time complexity $\mathcal{O}(k^{2} * n^{2})$. Here we describe an implied alignment algorithm with complexity $\mathcal{O}(k * n^{2})$. For well-behaved data, such as molecular sequences, the runtime approaches the best-case complexity of $\Omega(k * n)$. Conclusions The reduction in the time complexity of the algorithm dramatically improves both its utility in generating multiple sequence alignments and its heuristic utility.


1996 ◽  
Vol 07 (04) ◽  
pp. 429-435 ◽  
Author(s):  
XING PEI ◽  
FRANK MOSS

We discuss the well-known problems associated with efforts to detect and characterize chaos and other low-dimensional dynamics in biological settings. We propose a new method which shows promise for addressing these problems, and we demonstrate its effectiveness in an experiment with the crayfish sensory system. Recordings of action potentials in this system are the data. We begin with a pair of assumptions: that the times of firings of neural action potentials are largely determined by high-dimensional random processes or “noise”; and that most biological data files are nonstationary, so that only relatively short files can be obtained under approximately constant conditions. The method is thus statistical in nature. It is designed to recognize individual “events” in the form of particular sequences of time intervals between action potentials which are the signatures of certain well-defined dynamical behaviors. We show that chaos can be distinguished from limit cycles, even when the dynamics is heavily contaminated with noise. Extracellular recordings from the crayfish caudal photoreceptor, obtained while hydrodynamically stimulating the array of hair receptors on the tailfan, are used to illustrate the method.
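The event-counting idea can be sketched minimally: convert spike times to interspike intervals, then slide a template interval sequence along them and count near-matches. The template, tolerance, and spike train below are illustrative assumptions, not data from the crayfish experiment:

```python
def interspike_intervals(spike_times):
    """Differences between consecutive spike times."""
    return [b - a for a, b in zip(spike_times, spike_times[1:])]

def count_events(intervals, pattern, tol=0.05):
    """Count sliding-window occurrences of an interval-sequence 'event':
    a window matches when every interval is within tol of the template."""
    k = len(pattern)
    return sum(
        all(abs(intervals[i + j] - pattern[j]) <= tol for j in range(k))
        for i in range(len(intervals) - k + 1))

# A short-long interval signature recurring in a noisy spike train.
spikes = [0.0, 0.1, 0.4, 0.5, 0.8, 2.0, 2.1, 2.4]
isis = interspike_intervals(spikes)
n_events = count_events(isis, [0.1, 0.3], tol=0.05)
```

Counting such signatures over short recordings sidesteps the long stationary time series that classical chaos diagnostics (e.g. attractor reconstruction) would require.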

