Robust surface segmentation and edge feature lines extraction from fractured fragments of relics

2015 ◽  
Vol 2 (2) ◽  
pp. 79-87 ◽  
Author(s):  
Jiangyong Xu ◽  
Mingquan Zhou ◽  
Zhongke Wu ◽  
Wuyang Shui ◽  
Sajid Ali

Surface segmentation and edge feature lines extraction from fractured fragments of relics are essential steps in the computer-assisted restoration of fragmented relics. Because these fragments are heavily eroded, segmenting their surfaces and extracting edge feature lines is challenging. This paper presents a novel method to segment surfaces and extract edge feature lines from triangular meshes of irregular fractured fragments. Firstly, a rough surface segmentation is obtained with a clustering algorithm based on the vertex normal vectors. Secondly, to differentiate between original and fracture faces, a novel integral invariant is introduced to compute surface roughness. Thirdly, an accurate surface segmentation is obtained by merging faces based on face normal vectors and roughness. Finally, edge feature lines are extracted from the surface segmentation. Experiments are presented and analyzed, and the results show that our method achieves surface segmentation and edge extraction effectively.
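The rough segmentation step (clustering vertices by the direction of their normals) can be sketched as plain k-means over unit normal vectors, with the dot product as the similarity. This is an illustrative stand-in, not the authors' implementation, and the toy normals are invented:

```python
import math
import random

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def kmeans_normals(normals, k, iters=20, seed=0):
    """Cluster unit normal vectors by direction with plain k-means;
    similarity is the dot product (larger dot = smaller angle)."""
    rng = random.Random(seed)
    centers = [normals[i] for i in rng.sample(range(len(normals)), k)]
    labels = [0] * len(normals)
    for _ in range(iters):
        # assign each normal to the center with the largest dot product
        for i, nv in enumerate(normals):
            labels[i] = max(range(k),
                            key=lambda j: sum(a * b for a, b in zip(nv, centers[j])))
        # move each center to the normalized mean of its cluster
        for j in range(k):
            members = [normals[i] for i in range(len(normals)) if labels[i] == j]
            if members:
                mean = tuple(sum(m[d] for m in members) / len(members) for d in range(3))
                centers[j] = normalize(mean)
    return labels, centers

# toy mesh normals: two near the +z axis, two near the +x axis
normals = [normalize(v) for v in [(0.1, 0.0, 1.0), (0.0, 0.1, 1.0),
                                  (1.0, 0.0, 0.1), (1.0, 0.1, 0.0)]]
labels, centers = kmeans_normals(normals, k=2)
```

In the full pipeline, the resulting vertex clusters only give the rough patches; the roughness-based merging then separates original from fracture faces.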

2013 ◽  
Vol 756-759 ◽  
pp. 4026-4030 ◽  
Author(s):  
Jian Bin Lin ◽  
Ming Quan Zhou ◽  
Zhong Ke Wu

This paper presents a novel method to extract edge lines from point clouds of eroded, rough fractured fragments. Firstly, a principal component analysis based method is used to extract feature points, which are then clustered. Secondly, a local feature-line fragment is constructed for each cluster, and each local fragment is then smoothed and pruned of noise. Thirdly, the separated local feature-line fragments are connected and bridged to eliminate the gaps caused by eroded regions and to construct complete global feature lines. Fourthly, a final noise pruning step is performed. The output of the method is a set of complete, smoothed edge feature lines. We illustrate the performance of our method on a number of real-world examples.
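The PCA step can be illustrated with the standard "surface variation" measure: the smallest eigenvalue of a neighborhood's covariance matrix divided by the sum of all three. It is near zero on smooth regions and noticeably larger near creases, which is why it flags candidate edge points. A generic numpy sketch on synthetic patches, not the paper's exact criterion:

```python
import numpy as np

def surface_variation(nbrs):
    """Surface variation of a local point neighborhood: smallest
    covariance eigenvalue over the sum of all three eigenvalues."""
    eig = np.sort(np.linalg.eigvalsh(np.cov(nbrs.T)))
    return eig[0] / eig.sum()

rng = np.random.default_rng(0)
# flat patch: z = 0 everywhere -> variation ~ 0
flat = np.c_[rng.uniform(-1, 1, (200, 2)), np.zeros(200)]
# crease patch: z = |x| (two planes meeting in an edge) -> variation > 0
x, y = rng.uniform(-1, 1, 400), rng.uniform(-1, 1, 400)
crease = np.c_[x, y, np.abs(x)]
sv_flat, sv_crease = surface_variation(flat), surface_variation(crease)
```

Thresholding this quantity over k-neighborhoods (found with a kd-tree at scale) yields the feature points that are then clustered into local feature-line fragments.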


2021 ◽  
Vol 13 (9) ◽  
pp. 4648
Author(s):  
Rana Muhammad Adnan ◽  
Kulwinder Singh Parmar ◽  
Salim Heddam ◽  
Shamsuddin Shahid ◽  
Ozgur Kisi

The accurate estimation of suspended sediments (SSs) is significant in determining dam storage volume, river carrying capacity, pollution susceptibility, soil erosion potential, aquatic ecological impacts, and the design and operation of hydraulic structures. The presented study proposes a new method for accurately estimating daily SSs using antecedent discharge and sediment information. The novel method is developed by hybridizing the multivariate adaptive regression spline (MARS) and the k-means clustering algorithm (MARS–KM). The proposed method's efficacy is established by comparing its performance with the adaptive neuro-fuzzy system (ANFIS), MARS, and M5 tree (M5Tree) models in predicting SSs at two stations situated on the Yangtze River of China, according to three assessment measures: RMSE, MAE, and NSE. Two modeling scenarios are employed: in the first, the data are divided 50–50% into model training and testing sets; in the second, the training and test sets are swapped. At Guangyuan Station, MARS–KM improved on the ANFIS, MARS, and M5Tree methods in terms of RMSE by 39%, 30%, and 18% in the first scenario and by 24%, 22%, and 8% in the second scenario, respectively, while at Beibei Station the improvement in RMSE over ANFIS, MARS, and M5Tree was 34%, 26%, and 27% in the first scenario and 7%, 16%, and 6% in the second, respectively. Additionally, the MARS–KM models provided much more satisfactory estimates using only discharge values as inputs.
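The cluster-then-fit idea behind MARS–KM can be sketched as follows: partition the inputs with k-means, fit one regression model per cluster, and predict with the model of the nearest cluster center. In this sketch, ordinary least squares stands in for the per-cluster MARS fits, and the piecewise-linear discharge-to-sediment relation is synthetic:

```python
import numpy as np

def fit_cluster_models(X, y, k=2, iters=30, seed=0):
    """k-means on the inputs, then one least-squares linear model per
    cluster (a simple stand-in for the per-cluster MARS fits)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    models = []
    for j in range(k):
        A = np.c_[X[labels == j], np.ones((labels == j).sum())]  # bias column
        coef, *_ = np.linalg.lstsq(A, y[labels == j], rcond=None)
        models.append(coef)
    return centers, models

def predict(x, centers, models):
    """Predict with the model of the nearest cluster center."""
    j = int(np.argmin(((centers - x) ** 2).sum(-1)))
    return float(np.r_[x, 1.0] @ models[j])

# synthetic piecewise-linear discharge-to-sediment relation
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, (200, 1))
y = np.where(X[:, 0] < 5, 2 * X[:, 0], 10 + 5 * (X[:, 0] - 5))
centers, models = fit_cluster_models(X, y, k=2)
```

The point of the hybrid is visible even in this toy: a single global linear fit cannot capture the regime change at a discharge of 5, whereas the two cluster-local models can.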


Author(s):  
Shigang Wang ◽  
Shuai Peng ◽  
Jiawen He

Because the point cloud of an oral denture scan contains a large amount of data and many redundant points, a feature-preserving point cloud simplification algorithm is proposed to address incomplete feature preservation and the holes that appear in relatively flat regions during simplification. Firstly, the algorithm uses a kd-tree to build the spatial topology of the point cloud and search the k-neighborhood of each sampling point. On this basis it computes, for each point, the curvature, the angle between normal vectors, the distance from the point to the neighborhood centroid, and the standard deviation and average of the distances from the point to its neighbors; the detailed features of the point cloud are then extracted by multi-feature extraction and threshold determination. For the non-feature region, the non-feature point cloud is spatially partitioned with an octree to obtain the k value and initial cluster centers for the k-means clustering algorithm, and the simplified result for the non-feature region is obtained after further subdivision. Finally, the extracted detail features and the reduced non-feature region are merged to obtain the final simplification result. The experimental results show that the algorithm better retains the characteristic information of the point cloud model and effectively avoids holes during simplification. The simplified results have good smoothness, simplicity and precision, and are of high practical value.
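The octree-based treatment of non-feature regions can be illustrated with a single level of spatial bucketing: points falling in the same axis-aligned cell are replaced by their centroid. This shows only the partitioning idea (the algorithm above additionally derives the k-means k value and seed centers from the occupied cells); the dense patch is synthetic:

```python
import numpy as np

def simplify_nonfeature(points, cell=0.25):
    """One octree-style level: bucket points into axis-aligned cubic
    cells and replace each occupied cell by the centroid of its points."""
    keys = np.floor(points / cell).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    counts = np.bincount(inv)
    out = np.empty((counts.size, points.shape[1]))
    for d in range(points.shape[1]):
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out

rng = np.random.default_rng(0)
dense = rng.uniform(0, 1, (5000, 3))            # dense non-feature patch
sparse = simplify_nonfeature(dense, cell=0.25)  # at most 4*4*4 = 64 cells
```

Because feature points are handled separately by the multi-feature thresholds, this aggressive reduction is applied only where the surface is flat, which is what prevents the hole artifacts the paper targets.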


2013 ◽  
Vol 11 (01) ◽  
pp. 1340012 ◽  
Author(s):  
SEYED SHAHRIAR ARAB ◽  
MOHAMMADBAGHER PARSA GHARAMALEKI ◽  
ZAIDDODINE PASHANDI ◽  
REZVAN MOBASSERI

Computer-assisted assignment of protein domains is an important issue in structural bioinformatics. The exponential increase in the number of known three-dimensional protein structures and the significant role of proteins in biology, medicine and pharmacology illustrate the necessity of a reliable method to automatically detect structural domains as protein units. To this end, we have developed a program based on the accessible surface area (ASA) and the hydrogen bond energy in the protein backbone (HBE). PUTracer (Protein Unit Tracer) is built on a fast top-down approach that cuts a chain into its (contiguous) domains with minimal change in ASA as well as HBE. Performance was assessed on a comprehensive benchmark dataset of 124 protein chains, which is based on agreement among experts (e.g. CATH, SCOP) and was expanded to include structures with different types of domain combinations. An equal number of domains and at least 90% agreement in critical boundary accuracy were taken as the conditions for a correct assignment. PUTracer assigned domains correctly in 81.45% of the protein chains. Although low critical boundary accuracy in the remaining 18.55% leads to incorrect assignments, adjusting the scales improves the performance to 89.5%. We discuss here the success or failure of adjusting the scales with the provided evidence. Availability: PUTracer is available at http://bioinf.modares.ac.ir/software/PUTracer/
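The top-down cutting idea (choose the split position that least disturbs the interactions between the two resulting units) can be caricatured with a contact-count criterion standing in for the ASA and HBE terms. The contact map below is synthetic and the scoring is deliberately simplified; it is not PUTracer's actual cost:

```python
import numpy as np

def best_split(contacts, min_len=10):
    """Return the chain position whose two-segment split cuts the fewest
    residue-residue contacts (toy stand-in for the ASA/HBE criterion)."""
    n = len(contacts)
    costs = [contacts[:i, i:].sum() for i in range(min_len, n - min_len)]
    return min_len + int(np.argmin(costs))

# synthetic 60-residue contact map: two 30-residue blocks with dense
# intra-block contacts and sparse inter-block contacts
rng = np.random.default_rng(0)
contacts = (rng.uniform(size=(60, 60)) < 0.02).astype(float)
contacts[:30, :30] += rng.uniform(size=(30, 30)) < 0.3
contacts[30:, 30:] += rng.uniform(size=(30, 30)) < 0.3
contacts = np.triu(contacts, 1)   # count each contact once
split = best_split(contacts)
```

Recursing on each segment until no split lowers the cost yields the contiguous-domain decomposition that a top-down assigner produces.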


Author(s):  
A.S. Li ◽  
A.J.C. Trappey ◽  
C.V. Trappey

A registered trademark distinctively identifies a company, its products or services. A trademark (TM) is a type of intellectual property (IP) protected by the laws of the country where the trademark is officially registered. TM owners may take legal action when their IP rights are infringed upon. TM legal cases have grown in pace with the increasing number of TMs registered globally. In this paper, an intelligent recommender system automatically identifies similar TM case precedents for any given target case to support IP legal research. This study constructs a semantic network representing the TM legal scope and terminology. A system is built to identify similar cases based on machine-readable, frame-based knowledge representations of the judgments/documents. In this research, 4,835 US TM legal cases litigated in the US district and federal courts are collected as the experimental dataset. The computer-assisted system extracts critical features based on the ontology schema. The recommender identifies similar prior cases according to the values of the features embedded in these legal documents, which include the case facts, issues under dispute, judgment holdings, and applicable rules and laws. Term frequency–inverse document frequency (TF-IDF) is used for text mining to discover the critical features of the litigated cases. A soft clustering algorithm, e.g., Latent Dirichlet Allocation, is applied to generate topics and assign cases to them, so that similar cases under each topic can be identified for reference. Through this analysis of case similarity based on TM legal semantic analysis, the intelligent recommender provides precedents to support TM legal action and strategic planning.
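The TF-IDF similarity backbone of such a recommender can be sketched in a few lines: weight terms by tf-idf and rank candidate precedents by cosine similarity. The tiny "case documents" are invented for illustration:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """tf-idf vectors (as sparse dicts) for a tokenized corpus."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))
    idf = {t: math.log(n / c) for t, c in df.items()}
    return [{t: tf * idf[t] for t, tf in Counter(d).items()} for d in docs]

def cosine(a, b):
    """Cosine similarity between two sparse tf-idf vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# invented toy "case documents": two about TM dilution, one about patents
cases = [
    "trademark dilution famous mark blurring".split(),
    "dilution claim famous trademark tarnishment".split(),
    "patent claim construction infringement damages".split(),
]
vecs = tfidf_vectors(cases)
sims = [cosine(vecs[0], vecs[j]) for j in (1, 2)]  # similarity to case 0
```

In the paper's pipeline these weighted terms feed the ontology-based feature extraction and the LDA topic model; the cosine ranking above is only the retrieval core.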


Author(s):  
Zhongjie Long ◽  
Kouki Nagamune ◽  
Ryosuke Kuroda ◽  
Masahiro Kurosaka ◽  
...  

Three-dimensional (3D) navigation using a computer-assisted technique is being increasingly performed in minimally invasive surgical procedures because it can provide stereoscopic information regarding the operating field to the surgeon. In this paper, the development of a real-time arthroscopic system utilizing an endoscopic camera and optical fiber to navigate a normal vector for a reconstructed knee joint surface is described. A specific navigation approach suitable for use in a rendered surface was presented in extenso. A small-sized endoscopic tube was utilized arthroscopically on a cadaveric knee joint to show the potential application of the developed system. Experimental results of underwater navigation on a synthetic knee joint showed that our system allows for a higher accuracy than a freehand technique. The mean angle of navigation for the proposed technique is 9.5° (range, 5° to 17°; SD, 2.86°) versus 14.8° (range, 6° to 26°; SD, 7.53°) and 12.6° (range, 4° to 17°; SD, 3.98°) for two sites using a freehand technique.


PLoS ONE ◽  
2016 ◽  
Vol 11 (1) ◽  
pp. e0146352 ◽  
Author(s):  
Daniel M. de Brito ◽  
Vinicius Maracaja-Coutinho ◽  
Savio T. de Farias ◽  
Leonardo V. Batista ◽  
Thaís G. do Rêgo

2007 ◽  
Vol 19 (06) ◽  
pp. 395-407
Author(s):  
K. Bommanna Raja ◽  
M. Madheswaran ◽  
K. Thyagarajah

A study on ultrasound kidney images using the proposed dominant Gabor wavelet is made for the automated diagnosis and classification of a few important kidney categories, namely normal (NR), medical renal diseases (MRD) and cortical cyst (CC). The acquired images are initially preprocessed to retain the pixels of the kidney region. Out of 30 Gabor wavelets, a unique dominant Gabor wavelet is determined by estimating the similarity metrics between the original and the reconstructed Gabor image. The Gabor features are then evaluated for each image. These derived features are mapped onto a 2D feature space using the k-means clustering algorithm to group the data of similar classes. Decision boundaries between the data sets of the three kidney categories are formulated using a linear discriminant function. A k-NN classifier module is used to identify the category of a query ultrasound kidney image. The results show that the proposed dominant Gabor wavelet provides a classification efficiency of 87.33% for NR, 76.66% for MRD and 83.33% for CC. The overall classification efficiency improves by 18.89% compared to a classifier trained with features obtained by considering all the Gabor wavelets. The outputs of the proposed decision support system are validated by a medical expert to measure the actual efficiency, and the overall discriminating ability of the system is assessed with the f-score performance measure. It has been observed that the dominant Gabor wavelet improves the classification efficiency appreciably. Hence, the proposed method enhances objective classification and explores the possibility of implementing a computer-aided diagnosis system exclusively for ultrasound kidney images.
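The final k-NN stage can be sketched with a plain majority-vote classifier over feature vectors. The 2-D "Gabor feature" values and class labels below are invented for illustration:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Majority vote among the k nearest training vectors (Euclidean)."""
    nearest = sorted((math.dist(f, query), label) for f, label in train)[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# invented 2-D "Gabor feature" vectors for the three kidney classes
train = [
    ((0.20, 0.10), "NR"),  ((0.25, 0.15), "NR"),  ((0.30, 0.10), "NR"),
    ((0.80, 0.20), "MRD"), ((0.85, 0.25), "MRD"), ((0.90, 0.20), "MRD"),
    ((0.50, 0.90), "CC"),  ((0.55, 0.85), "CC"),  ((0.60, 0.90), "CC"),
]
pred = knn_classify(train, (0.27, 0.12))
```

In the paper, the training vectors are the dominant-Gabor features grouped by k-means, and the query is the feature vector of a new ultrasound image.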

