Robust Visibility Surface Determination in Object Space via Plücker Coordinates

2021 ◽  
Vol 7 (6) ◽  
pp. 96
Author(s):  
Alessandro Rossi ◽  
Marco Barbiero ◽  
Paolo Scremin ◽  
Ruggero Carli

Industrial 3D models are usually characterized by a large number of hidden faces, so simplifying them is very important. Visible-surface determination methods provide one of the most common solutions to the visibility problem. This study presents a robust technique to address the global visibility problem in object space that guarantees theoretical convergence to the optimal result. More specifically, we propose a strategy that, in a finite number of steps, determines whether each face of the mesh is globally visible or not. The proposed method is based on Plücker coordinates, which provide an efficient way to determine the intersection between a ray and a triangle. The algorithm requires no pre-computation, such as estimating the normal of each face, which makes it resilient to inconsistent normal orientations. We compared the performance of the proposed algorithm against a state-of-the-art technique. Results showed that our approach is more robust in terms of convergence to the maximum lossless compression.
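As context for the abstract's central primitive, here is a minimal sketch of the Plücker-coordinate ray-triangle test, assuming NumPy; all function names are illustrative. It classifies whether the ray's supporting line pierces the triangle from the sign pattern of permuted inner products; a full implementation would additionally check that the hit lies on the positive side of the ray.

```python
import numpy as np

def plucker(a, b):
    """Plücker coordinates (direction, moment) of the line through a and b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return b - a, np.cross(a, b)

def side(l1, l2):
    """Permuted inner product; zero means the two lines intersect,
    and the sign encodes their relative orientation (screw sense)."""
    (u1, v1), (u2, v2) = l1, l2
    return np.dot(u1, v2) + np.dot(u2, v1)

def line_crosses_triangle(origin, direction, tri, eps=1e-12):
    """The ray's supporting line pierces the triangle's interior iff its
    side products with the three directed edges share a common sign."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    ray = (d, np.cross(o, o + d))
    s = [side(ray, plucker(tri[i], tri[(i + 1) % 3])) for i in range(3)]
    return all(x > eps for x in s) or all(x < -eps for x in s)

# Example: a line along +z through (0.2, 0.2) crosses the unit triangle at z=1.
tri = [[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
print(line_crosses_triangle([0.2, 0.2, 0.0], [0.0, 0.0, 1.0], tri))  # True
```

Because the test never touches face normals, it behaves identically however the mesh's normals are oriented, which is consistent with the resilience the abstract claims.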

Author(s):  
Nurshazwani Muhamad Mahfuz ◽  
Marina Yusoff ◽  
Zakiah Ahmad

Clustering plays an important role as an unsupervised learning method in data analytics, assisting many real-world problems such as image segmentation, object recognition, and information retrieval. Traditional clustering techniques often struggle to reach an optimal result in the presence of outliers and noisy data. This paper reviews single clustering methods that have been applied in various domains, with the aim of identifying potentially suitable applications and aspects in which the methods could be improved. Three categories of single clustering methods are suggested; this should help researchers understand the relevant clustering aspects and determine which clustering method a given application requires, based on the state of the art of previous research findings.
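As a hedged illustration of the outlier sensitivity the review points to, the following sketch (assuming scikit-learn; the data is synthetic) shows how a single distant outlier can capture a k-means centroid of its own, merging the genuine clusters:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two tight synthetic clusters plus one distant outlier.
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal([0, 0], 0.3, (50, 2)),
    rng.normal([5, 5], 0.3, (50, 2)),
    [[40.0, 40.0]],  # a single outlier
])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
# The outlier hijacks one centroid, so the two genuine clusters
# end up merged under the remaining centroid.
print(km.cluster_centers_)
```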


Author(s):  
Roberto Cipolla ◽  
Kwan-Yee K. Wong

This chapter discusses profiles, or outlines, which are dominant features of images. Profiles can be extracted easily and reliably from images and can provide information on the shape and motion of an object. Classical techniques for motion estimation and model reconstruction depend heavily on point and line correspondences, so they cannot be applied directly to profiles, which are viewpoint dependent. The limitations of classical techniques paved the way for algorithms designed specifically for profiles. This chapter focuses on state-of-the-art algorithms for model reconstruction and motion estimation from profiles. These algorithms can reconstruct any kind of object, including smooth and textureless surfaces, and they render convincing 3D models, reinforcing their practicality.
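As a rough illustration of how easily a profile can be extracted, here is a sketch using OpenCV, assuming a single object photographed against a roughly uniform background; the filename and thresholding choice are illustrative:

```python
import cv2

# Segment the object from the background with Otsu thresholding.
image = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# External contours trace the object's profile as seen from this viewpoint.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
profile = max(contours, key=cv2.contourArea)  # keep the dominant outline
print(f"profile has {len(profile)} boundary points")
```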


2016 ◽  
Vol 26 (4) ◽  
pp. 885-903 ◽  
Author(s):  
Emiliano Pérez ◽  
Santiago Salamanca ◽  
Pilar Merchán ◽  
Antonio Adán

This paper presents a review of the most relevant current techniques that deal with hole-filling in 3D models. Contrary to earlier reports, which approach mesh repairing in a sparse and global manner, the objective of this review is twofold. First, a specific and comprehensive review of hole-filling techniques (a relevant part of the field of mesh repairing) is carried out. We present a brief summary of each technique, with attention paid to its algorithmic essence, main contributions, and limitations. Second, a solid comparison between 34 methods is established. To do this, we define 19 meaningful features and properties that can be found in a generic hole-filling process. We then use these features to assess the virtues and deficiencies of each method and to build comparative tables. The purpose of this review is to make a comparative state of the art of hole-filling available to researchers, showing pros and cons in a common framework.
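For orientation, the common first step of a generic hole-filling process is locating the holes themselves. A minimal, library-free sketch (illustrative, not drawn from any of the 34 surveyed methods) identifies boundary edges as those incident to exactly one triangle; chaining them yields the hole loops to be filled:

```python
from collections import Counter

def boundary_edges(triangles):
    """Return the edges incident to exactly one triangle; in a watertight
    mesh this list is empty, otherwise the edges chain into hole loops."""
    count = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            count[(min(u, v), max(u, v))] += 1
    return [edge for edge, n in count.items() if n == 1]

# Two triangles sharing one edge: the four outer edges form the boundary.
print(boundary_edges([(0, 1, 2), (1, 3, 2)]))
```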


Author(s):  
Hao Zheng ◽  
Yizhe Zhang ◽  
Lin Yang ◽  
Peixian Liang ◽  
Zhuo Zhao ◽  
...  

3D image segmentation plays an important role in biomedical image analysis. Many 2D and 3D deep learning models have achieved state-of-the-art segmentation performance on 3D biomedical image datasets. Yet, 2D and 3D models have their own strengths and weaknesses, and by unifying them one may be able to achieve more accurate results. In this paper, we propose a new ensemble learning framework for 3D biomedical image segmentation that combines the merits of 2D and 3D models. First, we develop a fully convolutional network based meta-learner that learns how to improve the results of the 2D and 3D models (base-learners). Then, to minimize over-fitting of our sophisticated meta-learner, we devise a new training method that uses the results of the base-learners as multiple versions of "ground truths". Furthermore, since our new meta-learner training scheme does not depend on manual annotation, it can utilize abundant unlabeled 3D image data to further improve the model. Extensive experiments on two public datasets (the HVSMR 2016 Challenge dataset and the mouse piriform cortex dataset) show that our approach is effective under fully-supervised, semi-supervised, and transductive settings, and attains superior performance over state-of-the-art image segmentation methods.
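As a hedged sketch of the kind of fully convolutional meta-learner described, assuming PyTorch (layer sizes and names are illustrative, not the paper's architecture), the meta-learner consumes the per-voxel probability maps of the 2D and 3D base-learners and predicts a refined segmentation:

```python
import torch
import torch.nn as nn

class MetaLearner(nn.Module):
    """Minimal FCN meta-learner sketch: fuse the stacked probability maps
    of a 2D and a 3D base-learner into a refined segmentation."""

    def __init__(self, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2 * n_classes, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, n_classes, kernel_size=1),
        )

    def forward(self, probs_2d, probs_3d):
        # Each input: (batch, n_classes, D, H, W) maps from one base-learner.
        return self.net(torch.cat([probs_2d, probs_3d], dim=1))
```

Because the targets during meta-learner training are the base-learners' own outputs rather than manual labels, such a model can, as the abstract notes, also be trained on unlabeled volumes.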


2019 ◽  
Vol 8 (5) ◽  
pp. 221 ◽  
Author(s):  
Arttu Julin ◽  
Kaisa Jaalama ◽  
Juho-Pekka Virtanen ◽  
Mikko Maksimainen ◽  
Matti Kurkela ◽  
...  

The Internet has become a major dissemination and sharing platform for 3D content. 3D measurement methods can drastically increase the production efficiency of 3D content in a growing number of use cases where 3D documentation of real-life objects or environments is required. We demonstrate a highly automated and integrated content-creation process for providing reality-based, photorealistic 3D models for the web. Close-range photogrammetry, terrestrial laser scanning (TLS), and their combination are compared using available state-of-the-art tools in a real-life project setting with real-life limitations. Integrating photogrammetry and TLS is a good compromise between geometric and texture quality. Compared to approaches using only photogrammetry or only TLS, it is slower and more resource-heavy, but it combines the complementary advantages of each method, such as direct scale determination from TLS and the superior image quality typical of photogrammetry. The integration is not only beneficial but clearly feasible in production using available state-of-the-art tools, which have become increasingly accessible to non-expert users. Despite the high degree of automation, some manual editing steps are still required in practice to achieve satisfactory results in terms of adequate visual quality, mainly due to the current limitations of WebGL technology.


Author(s):  
Amatur Rahman ◽  
Paul Medvedev

Given the popularity and elegance of k-mer based tools, finding a space-efficient way to represent a set of k-mers is important for improving the scalability of bioinformatics analyses. One popular approach is to convert the set of k-mers into the more compact set of unitigs. We generalize this approach and formulate it as the problem of finding a smallest spectrum-preserving string set (SPSS) representation. We show that this problem is equivalent to finding a smallest path cover in a compacted de Bruijn graph. Using this reduction, we prove a lower bound on the size of the optimal SPSS and propose a greedy method called UST that results in a smaller representation than unitigs and is nearly optimal with respect to our lower bound. We demonstrate the usefulness of the SPSS formulation with two applications of UST. The first one is a compression algorithm, UST-Compress, which we show can store a set of k-mers using an order-of-magnitude less disk space than other lossless compression tools. The second one is an exact static k-mer membership index, UST-FM, which we show improves index size by 10-44% compared to other state-of-the-art low memory indices. Our tool is publicly available at: https://github.com/medvedevgroup/UST/.
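To make the SPSS idea concrete, here is a toy greedy sketch in the spirit of, but far simpler than, UST: it walks unvisited k-mers, extends each path greedily through the de Bruijn graph, and emits one string per path, so every input k-mer appears exactly once across the output strings. All names are illustrative.

```python
def greedy_spss(kmers, k):
    """Toy greedy SPSS: repeatedly pick an unvisited k-mer and extend it
    rightwards through unvisited successors, emitting one string per path.
    Each input k-mer appears exactly once across the returned strings."""
    kmers = set(kmers)
    by_prefix = {}
    for km in kmers:  # index k-mers by their (k-1)-mer prefix
        by_prefix.setdefault(km[:k - 1], []).append(km)
    visited, strings = set(), []
    for start in kmers:
        if start in visited:
            continue
        visited.add(start)
        s = start
        while True:  # follow an unvisited successor while one exists
            nxt = next((km for km in by_prefix.get(s[-(k - 1):], [])
                        if km not in visited), None)
            if nxt is None:
                break
            visited.add(nxt)
            s += nxt[-1]
        strings.append(s)
    return strings

# The 3-mers of "ACGTAC" collapse back into very few strings (often one).
print(greedy_spss({"ACG", "CGT", "GTA", "TAC"}, k=3))
```

The output strings cost roughly one character per k-mer plus (k-1) per string, which is why fewer, longer paths directly translate into a smaller representation.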


2020 ◽  
Vol 34 (07) ◽  
pp. 12516-12523
Author(s):  
Qingshan Xu ◽  
Wenbing Tao

The completeness of 3D models is still a challenging problem in multi-view stereo (MVS) due to unreliable photometric consistency in low-textured areas. Since low-textured areas usually exhibit strong planarity, planar models are advantageous for the depth estimation of low-textured areas. On the other hand, PatchMatch multi-view stereo is very efficient thanks to its sampling and propagation scheme. By taking advantage of both planar models and PatchMatch multi-view stereo, we propose a planar prior assisted PatchMatch multi-view stereo framework in this paper. In detail, we utilize a probabilistic graphical model to embed planar models into PatchMatch multi-view stereo and contribute a novel multi-view aggregated matching cost. This cost takes both photometric consistency and planar compatibility into consideration, making it suited to the depth estimation of both non-planar and planar regions. Experimental results demonstrate that our method can efficiently recover the depth information of extremely low-textured areas, thus obtaining highly complete 3D models and achieving state-of-the-art performance.
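For background, here is a minimal sketch of the PatchMatch propagation scheme the abstract builds on; it is illustrative only and omits the paper's planar prior, graphical model, and multi-view aggregated cost:

```python
import numpy as np

def propagate_left_to_right(depth, cost):
    """One PatchMatch propagation sweep: each pixel adopts its left
    neighbour's depth hypothesis whenever that hypothesis scores better.
    `cost(y, x, d)` evaluates hypothesis d at pixel (y, x); a full
    pipeline alternates sweep directions and adds random refinement."""
    h, w = depth.shape
    for y in range(h):
        for x in range(1, w):
            candidate = depth[y, x - 1]
            if cost(y, x, candidate) < cost(y, x, depth[y, x]):
                depth[y, x] = candidate
    return depth
```

Good hypotheses spread quickly across large regions under this scheme, which is what makes it attractive for propagating plane hypotheses into low-textured areas.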


2016 ◽  
Vol 13 (1) ◽  
pp. 52-81 ◽  
Author(s):  
Richard Röttger

Clustering is a long-standing problem in computer science and is applied in virtually every scientific field for exploring the inherent structure of datasets. In biomedical research, clustering tools have been utilized in many areas, among others expression analysis, disease subtyping, and protein research. A plethora of different approaches have been developed, but there is little guidance on which approach is optimal in which situation. Furthermore, a typical cluster analysis is an entire process with several highly interconnected steps, from preprocessing and proximity calculation through the actual clustering to evaluation and optimization. Only when all steps work together seamlessly can an optimal result be achieved. This renders a cluster analysis tiresome and error-prone, especially for non-experts. A mere trial-and-error approach becomes increasingly infeasible given the tremendous growth of available datasets; thus, a strategic and thoughtful course of action is crucial for a cluster analysis. This manuscript provides an overview of the crucial steps and the most common techniques involved in conducting a state-of-the-art cluster analysis of biomedical datasets.
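As a hedged sketch of how those interconnected steps chain together, assuming scikit-learn and SciPy (the metric, linkage, and evaluation choices are illustrative, not recommendations):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

def cluster_pipeline(data, n_clusters):
    """Chain the steps of a cluster analysis: preprocessing ->
    proximity calculation -> clustering -> evaluation."""
    scaled = StandardScaler().fit_transform(data)      # preprocessing
    distances = pdist(scaled, metric="euclidean")      # proximity
    tree = linkage(distances, method="average")        # clustering
    labels = fcluster(tree, n_clusters, criterion="maxclust")
    return labels, silhouette_score(scaled, labels)    # evaluation

# Evaluation feeds back into optimization: sweep k and keep the best score.
data = np.random.default_rng(1).normal(size=(60, 4))
best_k = max(range(2, 8), key=lambda k: cluster_pipeline(data, k)[1])
print(best_k)
```

A poor choice at any single step (say, skipping scaling) silently degrades every step downstream, which is exactly why the manuscript treats the analysis as one end-to-end process.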


2021 ◽  
Author(s):  
Qingxi Meng ◽  
Shubham Chandak ◽  
Yifan Zhu ◽  
Tsachy Weissman

Motivation: The amount of data produced by genome sequencing experiments has been growing rapidly over the past several years, making compression important for efficient storage, transfer, and analysis of the data. In recent years, nanopore sequencing technologies have seen increasing adoption since they are portable, operate in real time, and provide long reads. However, there has been limited progress on the compression of nanopore sequencing reads obtained in FASTQ files. The previous tool ENANO focuses mostly on quality score compression and does not achieve significant gains for the compression of read sequences over general-purpose compressors. RENANO achieves significantly better compression for read sequences but is limited to aligned data with an available reference. Results: We present NanoSpring, a reference-free compressor for nanopore sequencing reads that relies on an approximate assembly approach. We evaluate NanoSpring on a variety of datasets including bacterial, metagenomic, plant, animal, and human whole genome data. For recently basecalled high quality nanopore datasets, NanoSpring achieves close to a 3x improvement in compression over state-of-the-art reference-free compressors. The computational requirements of NanoSpring are practical, although it uses more time and memory during compression than previous tools to achieve the compression gains. Availability: NanoSpring is available on GitHub at https://github.com/qm2/NanoSpring.

