SOLVING MEDICAL AKZO NOBEL PROBLEM USING FUNCTIONAL LOAD BALANCING ALGORITHM OF 4(3) DIRK METHOD

2012 ◽  
Vol 09 ◽  
pp. 480-487 ◽  
Author(s):  
UMMUL KHAIR SALMA DIN ◽  
FUDZIAH ISMAIL ◽  
ZANARIAH ABDUL MAJID ◽  
ROKIAH ROZITA AHMAD

The Medical Akzo Nobel problem (MEDAKZO) is known for its tendency to incur high computational cost. Originating from the penetration of radio-labeled antibodies into tissue infected by a tumor, the problem has been derived from a one-dimensional partial differential equation into a large system of ordinary differential equations, thus generating a large-scale problem to be solved. This paper presents the performance of a new 4(3) diagonally implicit Runge-Kutta (DIRK) method well suited to solving the MEDAKZO problem, which is stiff in nature. The sparsity pattern designed into the method enables the function evaluations to be computed simultaneously on two processors. This functional load balancing can be profitable, especially in solving large problems.
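To make the DIRK idea concrete, here is a minimal sketch of a diagonally implicit Runge-Kutta step on a stiff test equation. This is a generic 2-stage, second-order SDIRK method (Alexander's L-stable scheme), not the paper's 4(3) pair, and the test problem is a stand-in for MEDAKZO; all names and numbers below are illustrative assumptions.

```python
import math

def dirk2_step(f, dfdy, t, y, h, gamma=1 - 1/math.sqrt(2)):
    """One step of a generic 2-stage, 2nd-order SDIRK method
    (Alexander's L-stable method), NOT the paper's 4(3) pair.
    Because every diagonal Butcher entry equals gamma, each stage
    reduces to a single scalar Newton solve."""
    def solve_stage(tc, base):
        # Newton iteration for the stage equation k = f(tc, base + h*gamma*k)
        k = f(tc, base)
        for _ in range(50):
            g = k - f(tc, base + h * gamma * k)
            dg = 1.0 - h * gamma * dfdy(tc, base + h * gamma * k)
            k_new = k - g / dg
            if abs(k_new - k) < 1e-12:
                return k_new
            k = k_new
        return k

    k1 = solve_stage(t + gamma * h, y)
    k2 = solve_stage(t + h, y + h * (1 - gamma) * k1)
    return y + h * ((1 - gamma) * k1 + gamma * k2)

# Stiff scalar test problem (a stand-in for MEDAKZO): y' = -50*(y - cos t)
f = lambda t, y: -50.0 * (y - math.cos(t))
dfdy = lambda t, y: -50.0
t, y, h = 0.0, 1.0, 0.1   # step size far beyond the explicit stability limit
for _ in range(10):
    y = dirk2_step(f, dfdy, t, y, h)
    t += h
```

An explicit method of the same order would need a far smaller step for stability here; the implicit stage solves are what make the large step usable on stiff problems.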

2006 ◽  
Vol 04 (03) ◽  
pp. 639-647 ◽  
Author(s):  
ELEAZAR ESKIN ◽  
RODED SHARAN ◽  
ERAN HALPERIN

The common approaches to haplotype inference from genotype data are targeted toward phasing short genomic regions. Longer regions are often tackled in a heuristic manner due to the high computational cost. Here, we describe a novel approach for phasing genotypes over long regions, based on combining information from local predictions on short, overlapping regions. The phasing is done in a way that maximizes a natural maximum-likelihood criterion. Among other things, this criterion takes into account the physical distance between neighboring single nucleotide polymorphisms. The approach is very efficient; it has been applied to several large-scale datasets and shown to be successful in two recent benchmarking studies (Zaitlen et al., in press; Marchini et al., in preparation). Our method is publicly available via a webserver at .


2019 ◽  
Vol 34 (1) ◽  
pp. 101-123 ◽  
Author(s):  
Taito Lee ◽  
Shin Matsushima ◽  
Kenji Yamanishi

We consider the class of linear predictors over all logical conjunctions of binary attributes, which we refer to as the class of combinatorial binary models (CBMs). CBMs offer high knowledge interpretability, but naïve learning of them from labeled data incurs computational cost that grows exponentially with the length of the conjunctions. On the other hand, for large-scale datasets, long conjunctions are effective for learning predictors. To overcome this computational difficulty, we propose an algorithm, GRAfting for Binary datasets (GRAB), which efficiently learns CBMs within the L1-regularized loss minimization framework. The key idea of GRAB is to adopt weighted frequent itemset mining for the most time-consuming step of the grafting algorithm, which is designed to solve large-scale L1-regularized empirical risk minimization (L1-RERM) problems iteratively. Furthermore, we experimentally show that linear predictors over CBMs are effective in terms of prediction accuracy and knowledge discovery.
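A minimal sketch of the grafting idea on conjunction features, under stated simplifications: this is my own illustration, not GRAB itself. The real algorithm replaces the brute-force scan over inactive features below with weighted frequent itemset mining, and solves the full L1 subproblem rather than a plain least-squares refit on the active set.

```python
import itertools
import numpy as np

def conjunction_features(X):
    """Enumerate conjunctions of up to two binary attributes.
    (GRAB avoids this explicit enumeration; we brute-force it here.)"""
    n, d = X.shape
    feats = [X[:, j] for j in range(d)]
    names = [(j,) for j in range(d)]
    for i, j in itertools.combinations(range(d), 2):
        feats.append(X[:, i] * X[:, j])
        names.append((i, j))
    return np.column_stack(feats), names

def grafting_fit(F, y, lam=0.05, max_features=5):
    """Grafting for L1-regularized least squares: repeatedly add the
    inactive feature whose loss-gradient magnitude exceeds lam, then
    refit on the active set (the refit is a simplification of the
    exact L1 subproblem)."""
    n, m = F.shape
    active, w = [], np.zeros(m)
    for _ in range(max_features):
        grad = F.T @ (F @ w - y) / n       # gradient of 0.5 * MSE
        grad[active] = 0.0
        j = int(np.argmax(np.abs(grad)))
        if abs(grad[j]) <= lam:
            break                          # no inactive feature violates the L1 condition
        active.append(j)
        w_act, *_ = np.linalg.lstsq(F[:, active], y, rcond=None)
        w = np.zeros(m)
        w[active] = w_act
    return w, active

# Synthetic binary data whose target is the conjunction x0 AND x1
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 4)).astype(float)
y = X[:, 0] * X[:, 1]
F, names = conjunction_features(X)
w, active = grafting_fit(F, y)
```

The point of grafting is that the gradient test only ever touches a handful of features per iteration; mining lets GRAB find the maximal-gradient conjunction without materializing the exponential feature space.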


F1000Research ◽  
2017 ◽  
Vol 5 ◽  
pp. 1987 ◽  
Author(s):  
Jasper J. Koehorst ◽  
Edoardo Saccenti ◽  
Peter J. Schaap ◽  
Vitor A. P. Martins dos Santos ◽  
Maria Suarez-Diez

A functional comparative genome analysis is essential to understand the mechanisms underlying bacterial evolution and adaptation. Detection of functional orthologs using standard global sequence similarity methods faces several problems: the need to define arbitrary acceptance thresholds for similarity and alignment length, lateral gene acquisition, and the high computational cost of finding bi-directional best matches at a large scale. We investigated the use of protein domain architectures for large-scale functional comparative analysis as an alternative method. The performance of both approaches was assessed through functional comparison of 446 bacterial genomes sampled at different taxonomic levels. We show that protein domain architectures provide a fast and efficient alternative to methods based on sequence similarity for identifying groups of functionally equivalent proteins within and across taxonomic boundaries, and that the approach is suitable for large-scale comparative analysis. Running both methods in parallel pinpoints potential functional adaptations that may add to bacterial fitness.
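The core of the domain-architecture approach can be sketched as hash-based grouping: each protein is reduced to its ordered tuple of domains, and identical tuples are grouped in a single pass. The genome and domain names below are invented toy data, not from the study.

```python
from collections import defaultdict

def group_by_architecture(genomes):
    """Group proteins by identical domain architecture.
    `genomes` maps genome -> {protein_id: (domain, ...)} with domains
    in N- to C-terminal order. Hash-based grouping is linear in the
    total number of proteins, unlike all-vs-all bidirectional best
    hits, whose cost grows quadratically with the number of genomes."""
    groups = defaultdict(list)
    for genome, proteins in genomes.items():
        for pid, arch in proteins.items():
            groups[arch].append((genome, pid))
    return dict(groups)

# Hypothetical toy input; the Pfam-like domain names are made up.
genomes = {
    "genomeA": {"pA1": ("PF_kinase", "PF_SH2"), "pA2": ("PF_abc",)},
    "genomeB": {"pB1": ("PF_kinase", "PF_SH2"), "pB2": ("PF_xyz",)},
}
groups = group_by_architecture(genomes)
```

Proteins sharing the architecture ("PF_kinase", "PF_SH2") end up in one group regardless of which genome they came from, with no pairwise alignment or similarity threshold involved.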


F1000Research ◽  
2016 ◽  
Vol 5 ◽  
pp. 1987 ◽  
Author(s):  
Jasper J. Koehorst ◽  
Edoardo Saccenti ◽  
Peter J. Schaap ◽  
Vitor A. P. Martins dos Santos ◽  
Maria Suarez-Diez

A functional comparative genome analysis is essential to understand the mechanisms underlying bacterial evolution and adaptation. Detection of functional orthologs using standard global sequence similarity methods faces several problems: the need to define arbitrary acceptance thresholds for similarity and alignment length, lateral gene acquisition, and the high computational cost of finding bi-directional best matches at a large scale. We investigated the use of protein domain architectures for large-scale functional comparative analysis as an alternative method. The performance of both approaches was assessed through functional comparison of 446 bacterial genomes sampled at different taxonomic levels. We show that protein domain architectures provide a fast and efficient alternative to methods based on sequence similarity for identifying groups of functionally equivalent proteins within and across taxonomic boundaries. As the computational cost scales linearly, rather than quadratically, with the number of genomes, the approach is suitable for large-scale comparative analysis. Running both methods in parallel pinpoints potential functional adaptations that may add to bacterial fitness.


2020 ◽  
Vol 9 (11) ◽  
pp. 656
Author(s):  
Muhammad Hamid Chaudhry ◽  
Anuar Ahmad ◽  
Qudsia Gulzar

Unmanned Aerial Vehicles (UAVs) as a surveying tool are mainly characterized by large data volumes and high computational cost. This research investigates using less data, at lower computational cost, to obtain more accurate three-dimensional (3D) photogrammetric products by manipulating UAV surveying parameters such as the flight-line pattern and image overlap percentages. Sixteen photogrammetric projects with perpendicular flight plans and side and forward overlaps varying from 55% to 85% were processed in Pix4DMapper. For UAV data georeferencing and accuracy assessment, 10 Ground Control Points (GCPs) and 18 Check Points (CPs) were used. The comparative analysis incorporated the median number of tie points, the size of the 3D point cloud, horizontal and vertical Root Mean Square Error (RMSE), and large-scale topographic variations. The results show that increasing the forward overlap increases the median number of tie points, and increasing both side and forward overlap increases the number of points in the cloud. The horizontal accuracy of the 16 projects varies from ±0.13 m to ±0.17 m, whereas the vertical accuracy varies from ±0.09 m to ±0.32 m. However, the lowest vertical RMSE was not obtained at the highest overlap percentage. A trade-off among UAV surveying parameters can therefore yield highly accurate products at lower computational cost.
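The link between overlap percentages and data volume is simple geometry, and a back-of-envelope sketch makes the trade-off tangible. The site dimensions and image footprint below are hypothetical values, not taken from the study.

```python
import math

def flight_plan_photos(area_w, area_h, footprint_w, footprint_h,
                       side_overlap, forward_overlap):
    """Rough photo count for a simple grid flight over a rectangular
    area. Flight-line spacing shrinks as side overlap grows, and
    along-track spacing shrinks as forward overlap grows, so the
    photo count (and hence processing cost) rises sharply with both."""
    line_spacing = footprint_w * (1.0 - side_overlap)
    photo_spacing = footprint_h * (1.0 - forward_overlap)
    n_lines = math.ceil(area_w / line_spacing) + 1
    per_line = math.ceil(area_h / photo_spacing) + 1
    return n_lines * per_line

# Hypothetical 500 m x 500 m site with a 100 m x 75 m ground footprint
low = flight_plan_photos(500, 500, 100, 75, 0.55, 0.55)   # 55%/55% overlap
high = flight_plan_photos(500, 500, 100, 75, 0.85, 0.85)  # 85%/85% overlap
```

Moving from 55% to 85% overlap on both axes multiplies the photo count several times over, which is why the paper's finding that the highest overlap is not always needed for the best vertical accuracy matters for processing cost.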


2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Weihua Lv ◽  
Huapu Sun ◽  
Xiaofei Zhang ◽  
Dazhuan Xu

The direction of arrival (DOA) estimation problem for noncircular (NC) signals, which are widely used in communications, is investigated. A reduced-dimension NC-Capon algorithm is proposed for the DOA estimation of noncircular signals. The proposed algorithm requires only a one-dimensional search, avoiding the high computational cost of the two-dimensional NC-Capon algorithm. Its angle estimation performance is much better than that of the conventional Capon algorithm and very close to that of the two-dimensional NC-Capon algorithm, which has much higher complexity than the proposed algorithm. Furthermore, the proposed algorithm can be applied to arbitrary arrays and works well without estimating the noncircular phases. Simulation results verify the effectiveness and the improvement offered by the proposed algorithm.
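For context, here is the classical one-dimensional Capon (MVDR) search that serves as the baseline; the reduced-dimension NC-Capon itself, which additionally exploits noncircularity, is not reproduced here. The array geometry, source angles, and noise level are made-up simulation choices.

```python
import numpy as np

def capon_spectrum(R, n_sensors, grid_deg, d_over_lambda=0.5):
    """Classical Capon (MVDR) spatial spectrum for a uniform linear
    array: P(theta) = 1 / (a(theta)^H R^{-1} a(theta)), scanned over
    a one-dimensional angle grid."""
    Rinv = np.linalg.inv(R)
    k = np.arange(n_sensors)
    P = []
    for theta in np.deg2rad(grid_deg):
        a = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(theta))
        P.append(1.0 / np.real(a.conj() @ Rinv @ a))
    return np.array(P)

# Hypothetical scenario: two BPSK-like (noncircular) sources at -20 and
# 30 degrees, 8-element half-wavelength ULA, 400 snapshots.
rng = np.random.default_rng(1)
M, N = 8, 400
doas = np.deg2rad([-20.0, 30.0])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas)))
S = rng.choice([-1.0, 1.0], size=(2, N))          # real symbols -> noncircular
X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
R = X @ X.conj().T / N                            # sample covariance

grid = np.arange(-90.0, 90.5, 0.5)
P = capon_spectrum(R, M, grid)
peaks = [i for i in range(1, len(P) - 1) if P[i - 1] < P[i] > P[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: P[i])[-2:])
est = grid[top2]                                  # estimated DOAs in degrees
```

The conventional spectrum uses only the covariance R; NC methods additionally use the elliptic covariance E[x xᵀ], which is nonzero for real-valued symbols like those above, and that extra information is what the proposed algorithm taps without a second search dimension.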


2013 ◽  
Vol 22 (04) ◽  
pp. 1341001 ◽  
Author(s):  
QI YU

Clustering techniques offer a systematic approach to organizing the diverse and fast-growing body of Web services by assigning related services to homogeneous service communities. However, the ever-increasing number of Web services poses key challenges for building large-scale service communities. In this paper, we tackle the scalability issue in service clustering, aiming to accurately and efficiently discover service communities over very large-scale services. A key observation is that service descriptions are usually represented by long but very sparse term vectors, as each service is described by only a limited number of terms. This inspires us to seek a new service representation that is economical to store, efficient to process, and intuitive to interpret, and that enables service clustering to scale to massive numbers of services. More specifically, a set of anchor services is identified so that each service can be represented as a linear combination of a small number of anchor services. In this way, the large collection of services is encoded in a much more compact anchor-service space. Although service clustering can be performed much more efficiently in this compact space, discovering anchor services from large-scale service descriptions may itself incur high computational cost. We therefore develop principled optimization strategies for efficient anchor service discovery. Extensive experiments are conducted on real-world service data to assess both the effectiveness and the efficiency of the proposed approach. Results on a dataset with over 3,700 Web services clearly demonstrate the good scalability of the sparse functional representation and the efficiency of the optimization algorithms for anchor service discovery.
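The "linear combination of a few anchors" step can be sketched with a generic greedy sparse coder (orthogonal-matching-pursuit style). This is a stand-in under my own assumptions; the paper's actual anchor-discovery and encoding optimizations are not reproduced here, and the vectors below are toy data rather than real service term vectors.

```python
import numpy as np

def encode_with_anchors(A, x, k=2):
    """Greedy sparse coding: represent a service vector x as a
    combination of at most k anchor services (columns of A).
    Each round picks the anchor most correlated with the residual,
    then refits coefficients on the selected anchors."""
    residual, selected = x.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        corr = np.abs(A.T @ residual)
        corr[selected] = -1.0              # never pick an anchor twice
        j = int(np.argmax(corr))
        if corr[j] <= 1e-12:
            break                          # residual already explained
        selected.append(j)
        coef, *_ = np.linalg.lstsq(A[:, selected], x, rcond=None)
        residual = x - A[:, selected] @ coef
    return selected, coef

# Toy anchor matrix: 4 anchor services as orthogonal term vectors in R^6
A = np.eye(6)[:, :4]
x = 0.7 * A[:, 0] + 0.3 * A[:, 2]          # a service built from anchors 0 and 2
selected, coef = encode_with_anchors(A, x, k=2)
```

Storing only (anchor index, coefficient) pairs per service is what makes the representation economical: cluster computations then run in the small anchor space instead of the full term space.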



