Efficiently Computing Geodesic Loops for Interactive Segmentation of 3D Meshes

Author(s):  
Yun Meng ◽  
Shaojun Zhu ◽  
Bangquan Liu ◽  
Dechao Sun ◽  
Li Liu ◽  
...  

Introduction: Shape segmentation is a fundamental problem in computer graphics and geometric modeling. Although shape segmentation algorithms have been widely studied in the mathematics community, little progress has been made on computing segmentations of polygonal surfaces interactively using geodesic loops. Method: We compute geodesic distance fields with the improved Fast Marching Method (FMM) proposed by Xin and Wang. We propose a new algorithm to compute geodesic loops over a triangulated surface, as well as a new interactive shape segmentation approach on triangulated surfaces. Result: The average computation time on a 50K-vertex model is less than 0.08 s. Discussion: In the future, we will use a more accurate geodesic algorithm and parallel computing techniques to improve our algorithm and obtain smoother geodesic loops. Conclusion: Extensive experimental results show that the proposed algorithm effectively computes high-precision geodesic loop paths, and our method can also be used for interactive shape segmentation in real time.
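As a rough illustration of the distance-field stage, the sketch below propagates a geodesic distance field over mesh vertices Dijkstra-style. The paper uses the improved Fast Marching Method of Xin and Wang, which propagates distance windows across triangle faces; this edge-based approximation and its function names are assumptions for illustration only.

```python
# Hypothetical sketch: Dijkstra-style propagation of a distance field
# over mesh vertices. This edge-based approximation only illustrates
# the front-propagation idea behind FMM-like methods, not the exact
# window-based scheme of Xin and Wang.
import heapq
import math

def geodesic_distance_field(vertices, adjacency, source):
    """vertices: list of (x, y, z); adjacency: {vertex: [neighbor, ...]}."""
    dist = {v: math.inf for v in range(len(vertices))}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry, already improved
        for v in adjacency[u]:
            w = math.dist(vertices[u], vertices[v])  # Euclidean edge length
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist
```

A geodesic loop around a mesh feature can then be traced by following the distance field; the true FMM yields smoother fields than this edge-restricted approximation.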

Author(s):  
Ervina Varijki ◽  
Bambang Krismono Triwijoyo

Breast cancer is one type of cancer that can be identified using MRI technology. Breast cancer is still a leading cause of death worldwide; therefore, early detection of this disease is needed. In identifying breast cancer, a doctor or radiologist analyzes magnetic resonance images stored in the Digital Imaging and Communications in Medicine (DICOM) format. Sufficient skill and experience are required for an appropriate and accurate diagnosis, so it is necessary to create a digital image processing application that uses object segmentation and edge detection to assist the physician or radiologist in identifying breast cancer. The MRI image segmentation uses edge detection to identify breast cancer in stages: converting the image to grayscale, thresholding it to a binary image, and detecting edges with the Roberts operator. For the 20 tested input images, the method produces images in which the boundary of each region or object is clearly visible with no broken edges, with an average computation time of less than one minute.
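A minimal sketch of the described pipeline (grayscale conversion, binary thresholding, Roberts cross edge detection) might look as follows in NumPy; the function names and the fixed threshold value are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the stated pipeline: grayscale -> binary threshold
# -> Roberts cross edge detection. Threshold value is an assumption.
import numpy as np

def to_grayscale(rgb):
    # Luminance-weighted combination of the RGB channels.
    return rgb @ np.array([0.299, 0.587, 0.114])

def binarize(gray, t=128):
    # Simple global threshold to a binary image.
    return (gray > t).astype(np.float64)

def roberts_edges(img):
    # Roberts cross: diagonal differences over 2x2 neighborhoods.
    gx = img[:-1, :-1] - img[1:, 1:]
    gy = img[:-1, 1:] - img[1:, :-1]
    return np.sqrt(gx ** 2 + gy ** 2)
```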


Author(s):  
FATHALLAH NOUBOUD ◽  
RÉJEAN PLAMONDON

This paper presents a real-time constraint-free handprinted character recognition system based on a structural approach. After the preprocessing operation, a chain code is extracted to represent the character. The classification is based on the use of a processor dedicated to string comparison. The average computation time to recognize a character is about 0.07 seconds. During the learning step, the user can define any set of characters or symbols to be recognized by the system; thus there are no constraints on the handprinting. The experimental tests show a high degree of accuracy (96%) for writer-dependent applications. Comparisons with other systems and methods are discussed. We also present a comparison between the processor used in this system and the Wagner and Fischer algorithm. Finally, we describe some applications of the system.
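The Wagner and Fischer algorithm referenced in the comparison is the classic dynamic-programming edit distance; a straightforward version for comparing two chain-code strings could look like this (the chain codes in the usage note are made-up examples):

```python
# Wagner-Fischer edit distance between two chain-code strings: the
# minimum number of insertions, deletions, and substitutions needed
# to turn one string into the other.
def wagner_fischer(a, b):
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

# e.g. comparing two 8-direction chain codes:
# wagner_fischer("0712", "0772") == 1
```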


2007 ◽  
Vol 46 (03) ◽  
pp. 324-331 ◽  
Author(s):  
P. Jäger ◽  
S. Vogel ◽  
A. Knepper ◽  
T. Kraus ◽  
T. Aach ◽  
...  

Summary Objectives: Pleural thickenings, as a biomarker of exposure to asbestos, may evolve into malignant pleural mesothelioma. For its early stage, pleurectomy with perioperative treatment can reduce morbidity and mortality. The diagnosis is based on a visual investigation of CT images, which is a time-consuming and subjective procedure. Our aim is to develop an automatic image processing approach to detect and quantitatively assess pleural thickenings. Methods: We first segment the lung areas and identify the pleural contours. A convexity model is then used together with a Hounsfield unit threshold to detect pleural thickenings. The assessment of the detected pleural thickenings is based on a spline-based model of the healthy pleura. Results: Tests were carried out on 14 data sets from three patients. In all cases, pleural contours were reliably identified and pleural thickenings detected. Computation times on a PC were 85 min for a data set of 716 slices, 35 min for 401 slices, and 4 min for 75 slices, resulting in an average computation time of about 5.2 s per slice. Visualizations of the pleurae and the detected thickenings were provided. Conclusion: Results obtained so far indicate that our approach is able to assist physicians in the tedious task of finding and quantifying pleural thickenings in CT data. In the next step, our system will undergo an evaluation in a clinical test setting using routine CT data to quantify its performance.
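As a hedged illustration of the detection rule (convexity model plus Hounsfield unit threshold), one might test contour points roughly as below; the threshold value, the concavity test, and the data layout are all assumptions, not the authors' implementation.

```python
# Illustrative sketch only: flag pleural contour points that are
# locally non-convex and whose CT value exceeds a Hounsfield unit
# threshold. Threshold and orientation convention are assumptions.
import numpy as np

def detect_thickenings(contour, hu_image, hu_threshold=-500):
    """contour: (N, 2) int array of pleural contour points (row, col)."""
    candidates = []
    for i in range(1, len(contour) - 1):
        p_prev, p, p_next = contour[i - 1], contour[i], contour[i + 1]
        v1, v2 = p - p_prev, p_next - p
        # 2D cross product; a negative value marks a locally concave
        # point (sign depends on the contour winding direction).
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        if cross < 0 and hu_image[tuple(p)] > hu_threshold:
            candidates.append(i)
    return candidates
```

In the paper, the flagged candidates are then measured against a spline model of the healthy pleura to quantify each thickening.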


2010 ◽  
Vol 3 (6) ◽  
pp. 1555-1568 ◽  
Author(s):  
B. Mijling ◽  
O. N. E. Tuinder ◽  
R. F. van Oss ◽  
R. J. van der A

Abstract. The Ozone Profile Algorithm (OPERA), developed at KNMI, retrieves the vertical ozone distribution from nadir spectral satellite measurements of backscattered sunlight in the ultraviolet and visible wavelength range. To produce consistent global datasets the algorithm needs good global performance, while a short computation time facilitates the use of the algorithm in near-real-time applications. To test the global performance of the algorithm, we look at the convergence behaviour as a diagnostic tool of the ozone profile retrievals from the GOME instrument (on board ERS-2) for February and October 1998. In this way, we uncover different classes of retrieval problems, related to the South Atlantic Anomaly, low cloud fractions over deserts, desert dust outflow over the ocean, and the intertropical convergence zone. The influence of the first guess and of the external input data, including the ozone cross-sections and the ozone climatologies, on the retrieval performance is also investigated. By using a priori ozone profiles which are selected on the expected total ozone column, retrieval problems due to anomalous ozone distributions (such as in the ozone hole) can be avoided. By applying these algorithm adaptations, the convergence statistics improve considerably, not only increasing the number of successful retrievals but also reducing the average computation time, owing to fewer iteration steps per retrieval. For February 1998, non-convergence was brought down from 10.7% to 2.1%, while the mean number of iteration steps (which dominates the computation time) dropped 26%, from 5.11 to 3.79.
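The a priori selection step described above could, in the simplest reading, be a nearest-total-column lookup into a climatology; the data structure here is an assumption for illustration.

```python
# Sketch of column-classified a priori selection: pick the
# climatological ozone profile whose total column is closest to the
# expected total ozone column. The climatology layout is assumed.
def select_a_priori(climatology, expected_total_column):
    """climatology: list of (total_column_DU, profile) pairs."""
    return min(climatology,
               key=lambda entry: abs(entry[0] - expected_total_column))[1]
```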


Author(s):  
T Kavitha ◽  
K. Jayasankar

Compression techniques are adopted to solve various big data problems such as storage and transmission. The growth of the cloud computing and smartphone industries has led to the generation of huge volumes of digital data. Digital data can take various forms: audio, video, images, and documents. These digital data are generally compressed and stored in cloud storage environments, and efficient storage and retrieval of digital data through a good compression technique reduces cost. Compression techniques are divided into lossy and lossless categories. Here we consider lossless image compression, where minimizing the number of bits used for encoding improves coding efficiency and yields high compression. Fixed-length coding cannot guarantee minimal bit lengths; to minimize the number of bits, variable-length prefix-free codes are preferred. However, existing compression models induce high computational overhead. To address this issue, this work presents an efficient modified Huffman technique that improves the compression factor by up to 33.44% for Bi-level images and 32.578% for Half-tone images. The average computation time for both encoding and decoding shows an improvement of 20.73% for Bi-level images and 28.71% for Half-tone images. The proposed work achieves an overall 2% increase in coding efficiency and reduces memory usage by 0.435% for Bi-level images and 0.19% for Half-tone images. The overall results show that the proposed model can be adopted to support ubiquitous access to digital data.
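For reference, the baseline Huffman construction that the modified technique builds on can be sketched as below; the paper's specific modification is not reproduced here, and the function shown is a generic textbook version.

```python
# Textbook Huffman construction: repeatedly merge the two least
# frequent subtrees, prepending one bit to every code in each.
# The result is a prefix-free variable-length code table.
import heapq
from collections import Counter

def huffman_codes(data):
    """Return a prefix-free code table {symbol: bitstring}."""
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in
            enumerate(Counter(data).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# e.g. huffman_codes(b"aaabbc") assigns the shortest code to the
# most frequent symbol, which is what drives the compression factor.
```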


2011 ◽  
Vol 368-373 ◽  
pp. 3113-3116
Author(s):  
Liang Zou ◽  
Ling Xiang Zhu

Current public transportation guidance models are static and based on travel time, travel distance, and travel cost. However, recent surveys show that travel time has become the key factor in passengers' route selection in big cities. A dynamic public transportation guidance model based on travel time and waiting time is proposed, and the effectiveness of this model is proved in this paper. To solve this model efficiently, this paper proposes applying the A* algorithm, using the straight-line distance between two bus stops in electronic maps as prior knowledge. Finally, the developed model and algorithm were implemented with 50 random OD pairs based on Guangzhou's public transportation network (containing 471 public transportation routes and 1040 stops) and Guangzhou's electronic map, and their computational performance was analyzed experimentally. The results indicate that the proposed model and algorithm are very efficient: the average computation time of the proposed algorithm is 0.154 s, and the average number of nodes expanded is 194.2.
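A sketch of the described A* search, with the straight-line distance between stops as the heuristic, might look as follows; the graph layout, coordinate units, and the assumed maximum bus speed (used to keep the time-based heuristic admissible) are illustrative assumptions.

```python
# Sketch of A* over a transit network: straight-line distance from
# the electronic map, divided by an assumed maximum bus speed, gives
# an admissible lower bound on remaining travel time.
import heapq
import math

MAX_SPEED = 16.7  # m/s, assumed upper bound on bus speed

def a_star(graph, coords, start, goal):
    """graph: {stop: [(neighbor, travel_time_s), ...]};
    coords: {stop: (x, y)} in metres from the electronic map."""
    def h(n):
        return math.dist(coords[n], coords[goal]) / MAX_SPEED
    g = {start: 0.0}
    heap = [(h(start), start)]
    visited = set()
    while heap:
        _, u = heapq.heappop(heap)
        if u == goal:
            return g[u]
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u]:
            if g[u] + w < g.get(v, math.inf):
                g[v] = g[u] + w
                heapq.heappush(heap, (g[v] + h(v), v))
    return math.inf
```

The heuristic is what keeps the number of expanded nodes small (194.2 on average in the paper's experiments) compared with an uninformed search.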


Author(s):  
K. Liu ◽  
J. Boehm

Point cloud segmentation is a fundamental problem in point processing. Segmenting a point cloud fully automatically is very challenging due to the properties of point clouds as well as the differing requirements of distinct users. In this paper, an interactive segmentation method for point clouds is proposed. Only two strokes need to be drawn intuitively to indicate the target object and the background, respectively. The drawn strokes are sparse and need not cover the whole object. Given the strokes, a weighted graph is built and the segmentation is formulated as a minimization problem. The problem is solved efficiently using the Max Flow Min Cut algorithm. In the experiments, mobile mapping data of a city area are utilized. The resulting segmentations demonstrate the efficiency of the method, which can potentially be applied to general point clouds.
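The min-cut formulation can be sketched with an off-the-shelf max-flow solver; the neighborhood construction and similarity weights below are simplified assumptions standing in for the paper's weighted graph.

```python
# Sketch of stroke-seeded graph-cut segmentation using NetworkX's
# max-flow/min-cut solver. Similarity weights and neighborhoods are
# simplified stand-ins for the paper's weighted graph construction.
import networkx as nx

def segment(neighbors, similarity, fg_seeds, bg_seeds):
    """neighbors: {i: [j, ...]}; similarity(i, j) -> edge weight;
    fg_seeds/bg_seeds: point indices covered by the two strokes."""
    G = nx.Graph()
    for i, nbrs in neighbors.items():
        for j in nbrs:
            G.add_edge(i, j, capacity=similarity(i, j))
    # Hard constraints: stroke points are tied to the terminals.
    for i in fg_seeds:
        G.add_edge("S", i, capacity=float("inf"))
    for i in bg_seeds:
        G.add_edge(i, "T", capacity=float("inf"))
    _, (object_side, background_side) = nx.minimum_cut(G, "S", "T")
    return object_side - {"S"}, background_side - {"T"}
```

Because the strokes are sparse, the cut boundary is decided by the similarity weights rather than by exhaustive labeling, which is what makes the two-stroke interaction sufficient.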


Author(s):  
Matthew Piper ◽  
Pranav Bhounsule ◽  
Krystel K. Castillo-Villar

Flappy Bird is a mobile game that involves tapping the screen to navigate a bird through a gap between pairs of vertical pipes. When the bird passes through the gap, the score increments by one, and the game ends when the bird hits the floor or a pipe. Surprisingly, Flappy Bird is a very difficult game, and scores in single digits are not uncommon even after extensive practice. In this paper, we create three controllers to play the game autonomously. The controllers are: (1) a manually tuned controller that flaps the bird based on a vertical set-point condition; (2) an optimization-based controller that plans and executes an optimal path between consecutive tubes; (3) a model-based predictive controller (MPC). Our results showed that, on average, the optimization-based controller scored highest, followed closely by the MPC, while the manually tuned controller scored the least. A key insight was that choosing a planning horizon slightly beyond consecutive tubes was critical for achieving high scores. The average computation time per iteration for the MPC was half that of the optimization-based controller, but the worst-case time (maximum time) per iteration for the MPC was three times that of the optimization-based controller. The success of the optimization-based controller was due to the intuitive tuning of the terminal position and velocity constraints, while for the MPC the important parameters were the prediction and control horizons. The MPC was straightforward to tune compared to the other two controllers. Our conclusion is that MPC provides the best compromise between performance and computation speed without requiring elaborate tuning.
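A toy version of the MPC idea, replanning a short binary flap sequence every frame, might look like the sketch below; all dynamics constants and the quadratic tracking cost are assumptions, not the paper's model.

```python
# Toy receding-horizon sketch in the spirit of controller (3):
# enumerate binary flap sequences over a short horizon, simulate
# simple point-mass dynamics, and apply the first action of the best
# sequence. All constants are illustrative assumptions.
from itertools import product

GRAVITY, FLAP_VY, DT = 9.8, -3.0, 0.05  # assumed game dynamics

def step(y, vy, flap):
    vy = FLAP_VY if flap else vy + GRAVITY * DT
    return y + vy * DT, vy

def mpc_action(y, vy, gap_y, horizon=8):
    best_cost, best_first = float("inf"), 0
    for seq in product((0, 1), repeat=horizon):
        yy, vv, cost = y, vy, 0.0
        for flap in seq:
            yy, vv = step(yy, vv, flap)
            cost += (yy - gap_y) ** 2  # track the gap centerline
        if cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first  # replan from the new state next frame
```

The horizon length plays the role the abstract highlights: too short and the controller reacts late to the next pipe, too long and the per-frame computation grows exponentially.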


2018 ◽  
Vol 11 (3) ◽  
pp. 1529-1547 ◽  
Author(s):  
Antti Lipponen ◽  
Tero Mielonen ◽  
Mikko R. A. Pitkänen ◽  
Robert C. Levy ◽  
Virginia R. Sawyer ◽  
...  

Abstract. We have developed a Bayesian aerosol retrieval (BAR) algorithm for the retrieval of aerosol optical depth (AOD) over land from the Moderate Resolution Imaging Spectroradiometer (MODIS). In the BAR algorithm, we simultaneously retrieve all dark land pixels in a granule, utilize spatial correlation models for the unknown aerosol parameters, use a statistical prior model for the surface reflectance, and take into account the uncertainties due to fixed aerosol models. The retrieved parameters are total AOD at 0.55 µm, fine-mode fraction (FMF), and surface reflectances at four different wavelengths (0.47, 0.55, 0.64, and 2.1 µm). The accuracy of the new algorithm is evaluated by comparing the AOD retrievals to Aerosol Robotic Network (AERONET) AOD. The results show that the BAR significantly improves the accuracy of AOD retrievals over the operational Dark Target (DT) algorithm. A reduction of about 29 % in the AOD root mean square error and a decrease of about 80 % in the median bias of AOD were found globally when the BAR was used instead of the DT algorithm. Furthermore, the fraction of AOD retrievals inside the ±(0.05+15 %) expected error envelope increased from 55 % to 76 %. In addition to retrieving the values of AOD, FMF, and surface reflectance, the BAR also gives pixel-level posterior uncertainty estimates for the retrieved parameters. The BAR algorithm always results in physical, non-negative AOD values, and the average computation time for a single granule was less than a minute on a modern personal computer.
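To illustrate how a Bayesian retrieval produces both an estimate and pixel-level uncertainties, here is a linearized Gaussian MAP sketch; the actual BAR forward model is nonlinear and includes spatial correlation priors, so this shows only the underlying machinery, not the algorithm itself.

```python
# Linear-Gaussian MAP sketch: for y = K x + noise with Gaussian prior
# and noise, the posterior mean is the retrieval and the posterior
# covariance supplies per-parameter uncertainty estimates.
import numpy as np

def map_retrieval(K, y, x_prior, C_prior, C_noise):
    """Returns the posterior mean and covariance of x given y."""
    C_noise_inv = np.linalg.inv(C_noise)
    C_post = np.linalg.inv(K.T @ C_noise_inv @ K + np.linalg.inv(C_prior))
    x_map = x_prior + C_post @ K.T @ C_noise_inv @ (y - K @ x_prior)
    # The diagonal of C_post gives pixel-level uncertainty estimates.
    return x_map, C_post
```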


Author(s):  
Jun-Xia Liu ◽  
Zhen-Hong Jia

Telecommunication traffic prediction is an important aspect of data analysis and processing in communication networks. In this study, we utilize the least-squares support vector machine (LSSVM) prediction method to improve the prediction performance for telecommunication traffic. As the parameters of the LSSVM are difficult to determine, we propose to optimize them using an improved artificial bee colony (IABC) algorithm based on a fitness-prediction strategy (FP-IABC). We employ real traffic data collected on site to establish a telecommunication traffic forecasting model based on FP-IABC-optimized LSSVM (FP-IABC-LSSVM). The experimental results indicate that, with no increase in computational complexity, the proposed forecasting model based on FP-IABC-LSSVM has higher prediction accuracy than the prediction models based on ABC-optimized LSSVM (ABC-LSSVM), particle swarm optimization LSSVM (PSO-LSSVM), and genetic algorithm LSSVM (GA-LSSVM). Further, with respect to the root mean square error and the average computation time, the proposed FP-IABC-LSSVM is the best of all the compared methods. The proposed prediction method not only improves prediction accuracy but also reduces the average computation time.
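For context, LSSVM training reduces to solving a single linear system, which is what makes repeated evaluation inside a metaheuristic like FP-IABC affordable; the kernel choice and parameter values below are illustrative placeholders for the quantities the paper tunes.

```python
# LSSVM regression sketch: unlike a standard SVM, the dual problem
# reduces to one linear system. gamma (kernel width) and C
# (regularization) are the kind of parameters tuned by FP-IABC;
# the values here are placeholders.
import numpy as np

def rbf_kernel(X, Z, gamma):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_train(X, y, gamma=0.5, C=10.0):
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    # Block system: [[0, 1^T], [1, K + I/C]] [b, alpha]^T = [0, y]^T
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha + b
```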

