Minimal Cycle Representatives in Persistent Homology Using Linear Programming: An Empirical Study With User’s Guide

2021 · Vol 4
Author(s): Lu Li, Connor Thompson, Gregory Henselman-Petrusek, Chad Giusti, Lori Ziegelmeier

Cycle representatives of persistent homology classes can be used to provide descriptions of topological features in data. However, the non-uniqueness of these representatives creates ambiguity and can lead to many different interpretations of the same set of classes. One approach to this problem is to optimize the choice of representative against some measure that is meaningful in the context of the data. In this work, we study the effectiveness and computational cost of several ℓ1-minimization procedures for constructing homological cycle bases for persistent homology with rational coefficients in dimension one, including uniform-weighted and length-weighted edge-loss algorithms as well as uniform-weighted and area-weighted triangle-loss algorithms. We conduct these optimizations via standard linear programming methods, applying general-purpose solvers to optimize over column bases of simplicial boundary matrices. Our key findings are: 1) optimization is effective in reducing the size of cycle representatives, though the extent of the reduction varies with the dimension and distribution of the underlying data; 2) in most data sets we consider, the computational cost of optimizing a basis of cycle representatives exceeds the cost of computing such a basis; 3) the choice of linear solver strongly affects the computation time of optimizing cycles; 4) with the Gurobi solver, the computation time of solving an integer program is not significantly longer than that of solving a linear program for most of the cycle representatives; 5) strikingly, whether or not integer solutions are required, we almost always obtain a solution with the same cost, and almost all solutions found have entries in {-1, 0, 1} and are therefore also solutions to a restricted ℓ0 optimization problem; and 6) we obtain qualitatively different results for generators in Erdős–Rényi random clique complexes than in real-world and synthetic point cloud data.
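
As a concrete illustration of the edge-loss idea, the sketch below sets up a uniform-weighted ℓ1 minimization over a tiny hypothetical complex (a square with one filled triangle) as a linear program: minimize ||x||₁ subject to x = z + ∂₂y, splitting x into positive and negative parts. The toy complex, edge ordering, and use of scipy.optimize.linprog are illustrative assumptions, not the paper's implementation or data.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy complex (not from the paper): square 0-1-2-3 with the
# diagonal edge (0,2) and the single triangle (0,1,2) filled in.
# Edge order: e0=(0,1), e1=(0,2), e2=(0,3), e3=(1,2), e4=(2,3).
D = np.array([[1.0], [-1.0], [0.0], [1.0], [0.0]])  # boundary of triangle (0,1,2)
z = np.array([1.0, 0.0, -1.0, 1.0, 1.0])            # 4-edge cycle 0-1-2-3-0

# Minimise ||x||_1 subject to x = z + D y (x homologous to z).
# Split x = xp - xn with xp, xn >= 0; the variable vector is [xp, xn, y].
m, k = D.shape
c = np.concatenate([np.ones(2 * m), np.zeros(k)])
A_eq = np.hstack([np.eye(m), -np.eye(m), -D])
bounds = [(0, None)] * (2 * m) + [(None, None)] * k
res = linprog(c, A_eq=A_eq, b_eq=z, bounds=bounds, method="highs")
x_opt = res.x[:m] - res.x[m:2 * m]
print(x_opt)   # expected: the 3-edge cycle 0-2-3-0, l1 norm 3 instead of 4
```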

Author(s): Anita Chaudhari, Rajesh Bansode

Cloud services are now ubiquitous, and users upload their sensitive data to the cloud in encrypted form. If a user wants to perform any computation on that data, the user must share credentials with the cloud administrator, which puts data privacy at risk. If the user does not share credentials with the cloud provider, all data must first be downloaded before decryption and computation can be performed. This research focuses on an ECC-based homomorphic encryption scheme, which is attractive in terms of communication and computational cost. Many ECC-based schemes have been proposed to provide data privacy; these approaches are analyzed here using a common set of parameters. Based on this analysis, the minimum computation time required for ECC-based homomorphic encryption (HE) is 0.25 seconds.
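
As a rough sketch of how an additively homomorphic ECC scheme lets a third party compute on ciphertexts without the private key, the toy example below implements additively homomorphic EC-ElGamal on a small textbook curve (y² = x³ + 2x + 2 over F₁₇ with generator (5, 1) of order 19). The curve, key sizes, and brute-force decryption of small plaintext sums are illustrative assumptions only; they do not reproduce the specific scheme or parameters compared in this work.

```python
import random

# Toy curve y^2 = x^3 + 2x + 2 over F_17, base point G = (5, 1) of order 19.
# Illustrative only; real deployments use standardised curves and large fields.
P, A, B = 17, 2, 2
G, ORDER = (5, 1), 19

def inv_mod(x, p=P):
    return pow(x, p - 2, p)

def ec_add(p1, p2):
    # Point addition; None represents the point at infinity.
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * inv_mod(2 * y1) % P
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    # Double-and-add scalar multiplication.
    result = None
    while k:
        if k & 1:
            result = ec_add(result, pt)
        pt, k = ec_add(pt, pt), k >> 1
    return result

# Additively homomorphic EC-ElGamal: Enc(m) = (r*G, m*G + r*Q).
d = random.randrange(1, ORDER)          # private key
Q = ec_mul(d, G)                        # public key

def encrypt(m):
    r = random.randrange(1, ORDER)
    return (ec_mul(r, G), ec_add(ec_mul(m, G), ec_mul(r, Q)))

def add_ciphertexts(c1, c2):
    # Component-wise addition encrypts the sum of the plaintexts.
    return (ec_add(c1[0], c2[0]), ec_add(c1[1], c2[1]))

def decrypt(c, max_m=18):
    M = ec_add(c[1], ec_mul(ORDER - d, c[0]))   # M = (m)*G
    for m in range(max_m + 1):                  # small brute-force discrete log
        if ec_mul(m, G) == M:
            return m

c = add_ciphertexts(encrypt(3), encrypt(4))
print(decrypt(c))    # -> 7, computed without decrypting the individual addends
```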


2012 · Vol 2 (1) · pp. 7-9
Author(s): Satinderjit Singh

Median filtering is a commonly used technique in image processing. The main problem of the median filter is its high computational cost: sorting N pixels has temporal complexity O(N·log N), even with the most efficient sorting algorithms. When the median filter must be carried out in real time, a software implementation on general-purpose processors does not usually give good results. This paper presents an efficient algorithm for median filtering with a 3x3 filter kernel that requires only about nine comparisons per pixel, using spatial coherence between neighboring filter computations. The basic algorithm calculates two medians in one step and reuses sorted slices of three vertically neighboring pixels. An extension of this algorithm to 2D spatial coherence is also examined, which calculates four medians per step.
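
A minimal sketch of the column-reuse idea is shown below: each vertical triple of pixels is sorted once and shared between horizontally adjacent windows, and the window median is recovered from the column minima, medians, and maxima. This is a plain per-pixel Python illustration of the general technique, not the paper's optimized implementation; it does not reproduce the exact comparison count or the two-medians-per-step variant.

```python
import numpy as np

def sort3(a, b, c):
    # Sort three values with at most three comparisons.
    if a > b: a, b = b, a
    if b > c: b, c = c, b
    if a > b: a, b = b, a
    return a, b, c

def median3(a, b, c):
    # Median of three values.
    return max(min(a, b), min(max(a, b), c))

def median_filter_3x3(img):
    """Sliding 3x3 median that reuses the sorted vertical triples shared by
    neighbouring windows, so only one new column is sorted per step."""
    h, w = img.shape
    out = img.copy()
    for y in range(1, h - 1):
        # Pre-sort the first two columns of the row's initial window.
        cols = [sort3(img[y-1, x], img[y, x], img[y+1, x]) for x in (0, 1)]
        for x in range(1, w - 1):
            cols.append(sort3(img[y-1, x+1], img[y, x+1], img[y+1, x+1]))
            (lo0, me0, hi0), (lo1, me1, hi1), (lo2, me2, hi2) = cols[-3:]
            # Median of 9 from column minima, medians, and maxima.
            out[y, x] = median3(max(lo0, lo1, lo2),
                                median3(me0, me1, me2),
                                min(hi0, hi1, hi2))
    return out
```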


2020 · Vol 21 (S21)
Author(s): Jin Li, Chenyuan Bian, Dandan Chen, Xianglian Meng, ...

Background: Although genetic risk factors and network-level neuroimaging abnormalities have shown effects on cognitive performance and brain atrophy in Alzheimer’s disease (AD), little is understood about how the apolipoprotein E (APOE) ε4 allele, the best-known genetic risk factor for AD, affects brain connectivity before the onset of symptomatic AD. This study aims to investigate APOE ε4 effects on brain connectivity from the perspective of the multimodal connectome. Results: Here, we propose a novel multimodal brain network modeling framework and a network quantification method based on persistent homology for identifying APOE ε4-related network differences. Specifically, we employ sparse representation to integrate multimodal brain network information derived from both resting-state functional magnetic resonance imaging (rs-fMRI) data and diffusion-weighted magnetic resonance imaging (dw-MRI) data. Moreover, persistent homology is used to avoid the ad hoc selection of a specific regularization parameter and to capture valuable brain connectivity patterns from a topological perspective. The experimental results demonstrate that our method outperforms the competing methods and yields connectomic patterns specific to APOE ε4 carriers and non-carriers. Conclusions: We have proposed a multimodal framework that integrates structural and functional connectivity information for constructing a fused brain network with greater discriminative power. Using persistent homology to extract topological features from the fused brain network, our method can effectively identify APOE ε4-related brain connectomic biomarkers.
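
To make the role of persistent homology concrete here, the sketch below computes persistence diagrams from a weighted connectivity matrix by converting connection strength into a dissimilarity and sweeping the threshold, so no single sparsity/regularization level has to be fixed in advance. The dissimilarity transform and the use of the ripser package are assumptions for illustration; this is not the authors' fusion framework.

```python
import numpy as np
from ripser import ripser          # assumes the `ripser` package is installed

def network_persistence(C, maxdim=1):
    """Persistent homology of a weighted connectivity matrix C (values in [0, 1]).
    Stronger connections become shorter distances, so sweeping the distance
    threshold plays the role of sweeping a regularisation/sparsity parameter."""
    D = 1.0 - np.abs(C)            # hypothetical dissimilarity transform
    np.fill_diagonal(D, 0.0)
    return ripser(D, maxdim=maxdim, distance_matrix=True)["dgms"]
```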


Geophysics · 2018 · Vol 83 (2) · pp. V99-V113
Author(s): Zhong-Xiao Li, Zhen-Chun Li

After multiple prediction, adaptive multiple subtraction is essential for the success of multiple removal. The 3D blind separation of convolved mixtures (3D BSCM) method, which is effective for adaptive multiple subtraction, requires solving an optimization problem with L1-norm minimization constraints on the primaries using the iterative reweighted least-squares (IRLS) algorithm. The 3D BSCM method separates primaries and multiples better than the 1D/2D BSCM method and than methods with energy-minimization constraints on the primaries. However, the 3D BSCM method has a high computational cost because the IRLS algorithm achieves nonquadratic optimization by solving an LS optimization problem in each iteration. A faster 3D BSCM method is therefore desirable. To improve its adaptability to field-data processing, the fast iterative shrinkage-thresholding algorithm (FISTA) is introduced into the 3D BSCM method. The proximity operator in FISTA solves the L1-norm minimization problem efficiently. We demonstrate that our FISTA-based 3D BSCM method estimates primaries with accuracy similar to that of the reference IRLS-based 3D BSCM method. Furthermore, our FISTA-based 3D BSCM method reduces computation time by approximately 60% compared with the reference IRLS-based method in the synthetic and field data examples.
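
For reference, the sketch below shows generic FISTA for an L1-regularized least-squares problem, where the proximity operator reduces to elementwise soft thresholding. The operator A, regularization weight, and step size are placeholders, not the 3D BSCM separation operator or the paper's parameter choices.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximity operator of the l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)
        x_new = soft_threshold(z - grad / L, lam / L)   # proximal gradient step
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)     # momentum extrapolation
        x, t = x_new, t_new
    return x
```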


Author(s): Álinson S. Xavier, Ricardo Fukasawa, Laurent Poirrier

When generating multirow intersection cuts for mixed-integer linear optimization problems, an important practical question is deciding which intersection cuts to use. Even when restricted to cuts that are facet defining for the corner relaxation, the number of potential candidates is still very large, especially for large instances. In this paper, we introduce a subset of intersection cuts based on the infinity norm that is very small, works for relaxations having an arbitrary number of rows, and, unlike many subclasses studied in the literature, takes into account the entire data from the simplex tableau. We describe an algorithm for generating these inequalities and run extensive computational experiments to evaluate their practical effectiveness on real-world instances. We conclude that this subset of inequalities yields, in terms of gap closure, around 50% of the benefit of using all valid inequalities for the corner relaxation simultaneously, but at a small fraction of the computational cost and with a very small number of cuts. Summary of Contribution: Cutting planes are one of the most important techniques used by modern mixed-integer linear programming solvers when solving a variety of challenging operations research problems. This paper advances the state of the art on general-purpose multirow intersection cuts by proposing a practical and computationally friendly method to generate them.


2005 · Vol 15 (1) · pp. 15-24
Author(s): Leo Liberti, Edoardo Amaldi, Francesco Maffioli, Nelson Maculan

The problem of finding a fundamental cycle basis with minimum total cost in a graph arises in many application fields. In this paper we present several integer linear programming formulations and compare their performance in terms of instance size, CPU time required for solution, and quality of the lower bounds obtained from their continuous relaxations. Since only very small instances can be solved to optimality with these formulations, while very large instances arise in a number of applications, we also present a new constructive heuristic and compare it with alternative heuristics.
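
For orientation, the sketch below builds the classical spanning-tree fundamental cycle basis of a weighted graph (each non-tree edge closes exactly one cycle through the tree) and tallies its total cost. This is the textbook construction underlying such heuristics, not the new constructive heuristic or the ILP formulations proposed in the paper, and the networkx usage is an assumption.

```python
import networkx as nx

def fundamental_cycle_basis(G, weight="weight"):
    """Fundamental cycle basis induced by a minimum spanning tree of G:
    each non-tree edge (u, v) closes exactly one cycle through the tree."""
    T = nx.minimum_spanning_tree(G, weight=weight)
    basis, total = [], 0
    for u, v, data in G.edges(data=True):
        if T.has_edge(u, v):
            continue
        path = nx.shortest_path(T, u, v)         # unique tree path u -> v
        cycle = path + [u]                       # close the cycle with (v, u)
        cost = data.get(weight, 1) + sum(
            T[a][b].get(weight, 1) for a, b in zip(path, path[1:]))
        basis.append(cycle)
        total += cost
    return basis, total
```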


2021
Author(s): Jaekwang Shin, Ankush Bansal, Randy Cheng, Alan Taub, Mihaela Banu

Accurate prediction of the defects occurring in incrementally formed parts has been gaining attention in recent years, because accurate predictions can overcome a key limitation to the industrial-scale adoption of incremental forming, which has been held back by the cost and development time of trial-and-error methods. The finite element method has been widely used to predict defects in the formed part, e.g., the bulge. However, the computation time of running these models and their mesh-size dependency in predicting forming defects are barriers to adopting them as part of CAD-FEM-CAE platforms. Thus, robust analytical and data-driven algorithms must be developed for the cost-effective design of complex parts. In this paper, a new analytical model is proposed to predict the bulge location and geometry in two-point incremental forming of the aerospace aluminum alloy AA7075-O for a 67° truncated cone. First, the algorithm calculates the region of interest based on the part geometry. A novel shape function and a weighted summation method are then used to calculate the amplitude of the instability produced by material accumulation during forming, which leads to a bulge on the unformed portion of the sample. It was found that the geometric profile of the part influences the shape function, a function constructed to incorporate the effects of process parameters and boundary conditions. The calculated profiles in each direction are combined into a single three-dimensional profile, which is compared with experimental results for validation. The proposed model predicts the bulge profile with 95% accuracy relative to experiments, at less than 5% of the computational cost of FEM modeling.


2021
Author(s): Dong Quan Ngoc Nguyen, Phuong Dong Tan Le, Lin Xing, Lizhen Lin

Methods for analyzing similarities among DNA sequences play a fundamental role in computational biology and have a variety of applications in public health and in the field of genetics. In this paper, a novel geometric and topological method for analyzing similarities among DNA sequences is developed, based on persistent homology from algebraic topology in combination with chaos geometry in 4-dimensional space as a graphical representation of DNA sequences. Our topological framework for DNA similarity analysis is general, alignment-free, and can deal with DNA sequences of various lengths, while providing first-of-its-kind visualization features for direct visual inspection of DNA sequences, based on topological features of the point clouds that represent them. As an application, we test our methods on three datasets including genome sequences of different types of Hantavirus, Influenza A viruses, and Human Papillomavirus.
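
One common way to turn a DNA sequence into a point cloud is a chaos-game iteration in which each base pulls the current point halfway toward a fixed vertex. The sketch below uses one such assignment of A, C, G, T to the four unit vectors of R⁴ and then, optionally, feeds the resulting cloud to a persistence package. The vertex assignment and the use of ripser are illustrative assumptions, not the specific 4-dimensional chaos geometry defined in the paper.

```python
import numpy as np

# Hypothetical vertex assignment: each base maps to a unit vector in R^4.
VERTICES = {"A": np.array([1.0, 0, 0, 0]),
            "C": np.array([0, 1.0, 0, 0]),
            "G": np.array([0, 0, 1.0, 0]),
            "T": np.array([0, 0, 0, 1.0])}

def chaos_game_4d(seq):
    """4-dimensional chaos-game point cloud for a DNA sequence: each point is
    the midpoint between the previous point and the vertex of the current base."""
    point = np.full(4, 0.25)                  # start at the centroid
    cloud = []
    for base in seq.upper():
        if base not in VERTICES:
            continue                          # skip ambiguous symbols
        point = (point + VERTICES[base]) / 2.0
        cloud.append(point.copy())
    return np.array(cloud)

# Persistent homology of the point cloud (assumes the `ripser` package):
# from ripser import ripser
# diagrams = ripser(chaos_game_4d("ACGTTGCAAGT"), maxdim=1)["dgms"]
```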


2021
Author(s): Ho Yin Yuen, Jesper Jansson

Background: Protein-protein interaction (PPI) data is an important type of data used in functional genomics. However, inaccuracies in high-throughput experiments often result in incomplete PPI data. Computational techniques are thus used to infer missing data and to evaluate confidence scores, with link prediction being one such approach that uses the structure of the network of PPIs known so far to find good candidates for missing PPIs. Recently, a new idea called the L3 principle introduced biological motivation into PPI link prediction, yielding predictors that are superior to general-purpose link predictors for complex networks. However, previously developed L3-based link predictors only approximately implement the L3 principle. As a result, not only is the full potential of the L3 principle unrealized, but candidate PPIs that otherwise fit the L3 principle may even be penalized. Results: In this article, we propose a formulation of link predictors without approximation, called ExactL3 (L3E), that addresses the missing elements of existing L3 predictors from a network-modeling perspective. Through statistical and biological metrics, we show that L3E predictors generally perform better than the previously proposed methods on seven datasets across two organisms (human and yeast) using a reasonable amount of computation time. In addition to ranking the PPIs more accurately, L3-based predictors, including L3E, predicted a different pool of real PPIs than the general-purpose link predictors. This suggests that different types of PPIs can be predicted based on different topological assumptions, and that even better PPI link predictors may be obtained in the future through improved network modeling.
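
For context, the sketch below computes one common degree-normalized formulation of a baseline L3 score, which counts paths of length three between a candidate pair while penalizing high-degree intermediate nodes. Whether this matches the exact baseline referenced in the article is an assumption, and it is not the proposed L3E predictor.

```python
import numpy as np

def l3_scores(A):
    """One common degree-normalised L3 score for an undirected, unweighted
    adjacency matrix A: score(x, y) = sum over u, v of
    A[x, u] * A[u, v] * A[v, y] / sqrt(deg(u) * deg(v))."""
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)
    inv_sqrt = np.zeros_like(deg)
    inv_sqrt[deg > 0] = 1.0 / np.sqrt(deg[deg > 0])
    # Normalise the middle hop of every length-3 path by its endpoint degrees.
    B = A * np.outer(inv_sqrt, inv_sqrt)
    S = A @ B @ A
    np.fill_diagonal(S, 0.0)      # a node is not a candidate partner of itself
    return S                      # rank non-edges by S to propose missing PPIs
```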


Author(s): Anil Kakarla, Sanjeev Agarwal, Sanjay Kumar Madria

Information processing and collaborative computing using agents over a distributed network of heterogeneous platforms are important for many defense and civil applications. In this chapter, a mobile-agent-based collaborative and distributed computing framework for network-centric information processing is presented using a military application. In this environment, the challenge is to continue processing efficiently while satisfying multiple constraints such as computational cost, communication bandwidth, and energy in a distributed network. The authors use mobile agent technology for distributed computing to speed up data processing using the available system resources in the network. The proposed framework provides a mechanism to bridge the gap between computation resources and dispersed data sources under variable bandwidth constraints. For every computation task raised in the network, a viable system that has the resources and data to compute the task is identified, and the task is sent to that system for completion. An experimental evaluation on a real platform is reported. It shows that, despite an increase in communication load compared with other solutions, the proposed framework reduces computation time.

