Beyond Topological Persistence: Starting from Networks

Mathematics ◽  
2021 ◽  
Vol 9 (23) ◽  
pp. 3079
Author(s):  
Mattia G. Bergomi ◽  
Massimo Ferri ◽  
Pietro Vertechi ◽  
Lorenzo Zuffi

Persistent homology enables fast and computable comparison of topological objects. We give some instances of a recent extension of the theory of persistence that guarantees robustness and computability for relevant data types such as simple graphs and digraphs. We focus on categorical persistence functions that allow us to study, in full generality, strong notions of connectedness (clique communities, k-vertex, and k-edge connectedness) directly on simple graphs, as well as strong connectedness in digraphs.
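
The kind of persistence function described here can be sketched in a few lines. The following illustrative snippet (not the authors' implementation; it assumes the networkx library, and the helper name k_edge_persistence is hypothetical) tracks how the number of k-edge-connected components of a weighted simple graph evolves along an edge-weight filtration:

```python
# Minimal sketch of a categorical-persistence-style invariant on graphs:
# count k-edge-connected components of the subgraph of edges with
# weight <= t, for increasing thresholds t. Requires networkx.
import networkx as nx

def k_edge_persistence(G, k, thresholds):
    """For each threshold t, count the k-edge-connected components of the
    subgraph containing only edges of weight <= t."""
    counts = []
    for t in thresholds:
        H = nx.Graph((u, v) for u, v, w in G.edges(data="weight") if w <= t)
        H.add_nodes_from(G)  # isolated vertices still count as components
        counts.append(sum(1 for _ in nx.k_edge_components(H, k)))
    return counts

# Toy example: two triangles joined later by a single bridge edge.
G = nx.Graph()
G.add_weighted_edges_from([
    (0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0),   # triangle 1
    (3, 4, 1.0), (4, 5, 1.0), (3, 5, 1.0),   # triangle 2
    (2, 3, 2.0),                             # bridge appears last
])
print(k_edge_persistence(G, k=2, thresholds=[0.5, 1.0, 2.0]))  # [6, 2, 2]
```

At the lowest threshold every vertex is isolated; the two triangles become 2-edge-connected components at weight 1.0, and the later bridge never merges them, since a single edge cannot provide 2-edge-connectedness.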

2021 ◽  
Vol 12 (1) ◽  
pp. 50
Author(s):  
Andrey Fedotov ◽  
Pavel Grishin ◽  
Dmitriy Ivonin ◽  
Mikhail Chernyavskiy ◽  
Eugene Grachev

Materials science now relies on powerful 3D imaging techniques, such as X-ray computed tomography, that generate high-resolution images of internal structures. These methods are widely used to reveal the internal structure of geological cores, so modern approaches are needed for the quantitative analysis, comparison, and classification of the resulting images. Topological persistence is a useful technique for characterizing the internal structure of 3D images. We show how persistence-based data analysis provides a useful tool for classifying porous media structure from 3D computed-tomography images of hydrocarbon reservoirs. We propose a methodology for 3D structure classification based on combined geometric-topological analysis via persistent homology.
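
As an illustration of this kind of pipeline (a sketch assuming the gudhi library; the random volume stands in for a real CT image and is not the study's data), a persistence diagram can be computed from a 3D grayscale volume via a cubical complex filtered by voxel intensity:

```python
# Sketch: persistent homology of a 3D image via a cubical complex.
# Feature vectors built from the resulting diagrams (e.g. lifetimes)
# can then feed a standard classifier.
import numpy as np
import gudhi

volume = np.random.rand(32, 32, 32)  # placeholder for a CT scan of a core

cc = gudhi.CubicalComplex(top_dimensional_cells=volume)
cc.persistence()  # compute all persistence pairs

# Lifetimes of 1-dimensional features (loops/throats in a pore network).
intervals = cc.persistence_intervals_in_dimension(1)
lifetimes = intervals[:, 1] - intervals[:, 0]
print("H1 features:", len(intervals), "max lifetime:", lifetimes.max())
```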


Author(s):  
J. Vauhkonen

Reconstruction of three-dimensional (3D) forest canopy is described and quantified using airborne laser scanning (ALS) data with densities of 0.6–0.8 points m⁻² and field measurements aggregated at resolutions of 400–900 m². The reconstruction was based on computational geometry, topological connectivity, and numerical optimization. More precisely, triangulations of the point data and their filtrations, i.e. ordered sets of simplices belonging to the triangulations, were analyzed. Triangulating the ALS point data corresponds to subdividing the underlying space of the points into weighted simplicial complexes, with weights quantifying the (empty) space delimited by the points. Reconstructing the canopy volume populated by biomass thus likely requires filtering out the volume of canopy voids. Two approaches were applied for this purpose: (i) optimizing the degree of filtration with respect to the field measurements, and (ii) predicting this degree by analyzing the persistent homology of the obtained triangulations, applied here for the first time to vegetation point clouds. When derived from optimized filtrations, the total tetrahedral volume had a high coefficient of determination (R²) with stem volume, both alone (R² = 0.65) and together with other predictors (R² = 0.78). When derived by analyzing the topological persistence of the point data, without any field input, the R² values were lower, but the predictions still correlated with the field-measured stem volumes. Finally, the production of realistic visualizations of a forested landscape using the persistent homology approach is demonstrated.
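
The triangulation-and-filtration step described above corresponds closely to an alpha-complex filtration, which can be sketched as follows (again assuming the gudhi library; the random points stand in for ALS echoes and are not the study's data):

```python
# Sketch: build a filtered triangulation (alpha complex) of a 3D point
# cloud and read off its persistent homology.
import numpy as np
import gudhi

points = np.random.rand(500, 3)  # placeholder for ALS echoes (x, y, z)

alpha = gudhi.AlphaComplex(points=points)
st = alpha.create_simplex_tree()  # filtration by squared circumradius
diag = st.persistence()

# H2 classes correspond to enclosed voids; long-lived ones hint at large
# empty pockets that a canopy-volume filtration might want to exclude.
voids = st.persistence_intervals_in_dimension(2)
print("voids detected:", len(voids))
```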


Author(s):  
Prakash Rao

Image shifts in out-of-focus dark field images have been used in the past to determine, for example, epitaxial relationships in thin films. A recent extension of the use of dark field image shifts applies out-of-focus images in conjunction with stereoviewing to produce an artificial stereo image effect. The technique, called through-focus dark field electron microscopy or 2-1/2D microscopy, involves obtaining two beam-tilted dark field images, one slightly over-focus and the other slightly under-focus, followed by examination of the two images through a conventional stereoviewer. The elevation differences so produced are usually unrelated to object positions in the thin foil, and no specimen tilting is required. In order to produce this artificial stereo effect for the purpose of phase separation and identification, it is first necessary to select, with the objective aperture, a region of the diffraction pattern containing more than one discrete spot.


Author(s):  
M. A. Perumal ◽  
S. Navaneethakrishnan ◽  
A. Nagaraja ◽  
S. Arockiaraj

2018 ◽  
Author(s):  
Prathiba Natesan ◽  
Smita Mehta

Single case experimental designs (SCEDs) have become an indispensable methodology where randomized controlled trials may be impossible or even inappropriate. However, the nature of SCED data presents challenges for both visual and statistical analyses. Small sample sizes, autocorrelation, data types, and design types render many parametric statistical analyses and maximum likelihood approaches ineffective, and the presence of autocorrelation decreases interrater reliability in visual analysis. The purpose of the present study is to demonstrate a newly developed model, the Bayesian unknown change-point (BUCP) model, which overcomes all of the above-mentioned data-analytic challenges. This is the first study to formulate and demonstrate a rate ratio effect size for autocorrelated data, which had remained an open question in SCED research. This expository study also compares and contrasts the results from the BUCP model with visual analysis, and the rate ratio effect size with the nonoverlap of all pairs (NAP) effect size. Data from a comprehensive behavioral intervention are used for the demonstration.
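
For reference, the NAP effect size used here as a comparison point has a simple closed form: the proportion of all baseline-treatment pairs in which the treatment observation exceeds the baseline one, with ties counted as half. A minimal sketch (toy data, not from the study; this is the standard NAP computation, not the authors' BUCP code):

```python
# Nonoverlap of all pairs (NAP): compare every baseline observation with
# every treatment observation; score 1 for an improvement, 0.5 for a tie.
def nap(baseline, treatment):
    pairs = [(a, b) for a in baseline for b in treatment]
    wins = sum(1.0 for a, b in pairs if b > a)
    ties = sum(0.5 for a, b in pairs if b == a)
    return (wins + ties) / len(pairs)

# Hypothetical AB-design data (higher = better outcome).
baseline = [2, 3, 3, 4, 2]
treatment = [5, 6, 4, 7, 6, 5]
print(round(nap(baseline, treatment), 3))  # 0.983
```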


2012 ◽  
Vol 10 (4) ◽  
pp. 202-215
Author(s):  
Manoel Agamemnon Lopes ◽  
Roberta Vilhena Vieira Lopes

2020 ◽  
Vol 15 ◽  
Author(s):  
Deeksha Saxena ◽  
Mohammed Haris Siddiqui ◽  
Rajnish Kumar

Background: Deep learning (DL) is an artificial-neural-network-driven framework with multiple levels of representation, in which non-linear modules are combined so that the representation is transformed from a low level to a progressively more abstract one. Although DL is used in almost every field, it has brought particular breakthroughs in the biological sciences, where it is applied to disease diagnosis and clinical trials. DL can be combined with other machine learning techniques, though the two are also used separately. DL is often the better platform, as it does not require an intermediate feature-extraction step and works well with larger datasets. It is one of the most widely discussed approaches among scientists and researchers for diagnosing and solving various biological problems. However, deep learning models still need refinement and experimental validation to become more productive.
Objective: To review the available DL models and datasets used in disease diagnosis.
Methods: Available DL models and their applications in disease diagnosis were reviewed, discussed, and tabulated. Types of datasets and some of the popular disease-related data sources for DL were highlighted.
Results: We analyzed the frequently used DL methods and data types, and discussed some of the recent deep learning models used for solving different biological problems.
Conclusion: The review presents useful insights about DL methods, data types, and the selection of DL models for disease diagnosis.
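
As a concrete illustration of the feature-extraction point above, the following minimal sketch (assuming PyTorch; the architecture and sizes are illustrative and not taken from any reviewed model) shows a small feed-forward classifier that maps raw input features directly to a diagnostic label, with no hand-crafted feature-extraction stage:

```python
# Sketch: stacked non-linear modules, each lifting the representation to
# a more abstract level, ending in a binary diagnostic decision.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(100, 64),   # raw input features -> first representation
    nn.ReLU(),
    nn.Linear(64, 32),    # more abstract intermediate representation
    nn.ReLU(),
    nn.Linear(32, 2),     # two classes: disease present / absent
)

x = torch.randn(8, 100)                        # batch of 8 synthetic inputs
logits = model(x)
labels = torch.randint(0, 2, (8,))             # synthetic diagnostic labels
loss = nn.CrossEntropyLoss()(logits, labels)
loss.backward()                                # gradients for one step
print(loss.item())
```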

