parallel implementation
Recently Published Documents


TOTAL DOCUMENTS: 2711 (FIVE YEARS: 402)

H-INDEX: 47 (FIVE YEARS: 8)

PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0261771
Author(s):  
Dan Zhu ◽  
Yaoyao Wei ◽  
Hainan Huang ◽  
Tian Xie

The outbreak of unconventional emergencies leads to a surge in demand for emergency supplies, so it is important to arrange emergency production processes effectively and improve production efficiency. Emergency manufacturing systems are typically complex systems that are difficult to analyze through physical experiments. Based on the theory of Random Service Systems (RSS) and Parallel Emergency Management Systems (PeMS), a parallel simulation and optimization framework for production processes under surging demand for emergency supplies is constructed. Under this novel framework, an artificial system model running in parallel with the real scenario is established and optimized through parallel implementation processes. Furthermore, a concrete example, the mask shortage at Huoshenshan Hospital during the COVID-19 pandemic, verifies the feasibility of this method.
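The RSS side of this framework rests on queueing theory. As an illustrative sketch only (the paper's artificial system model is far richer), the Erlang-C formula for an M/M/c queue shows how the number of parallel production lines affects how long surging orders wait; the arrival and service rates below are hypothetical:

```python
import math

def erlang_c(c, lam, mu):
    """Probability an arriving order must wait in an M/M/c queue
    (Erlang-C formula); lam = order arrival rate, mu = completion
    rate of each production line."""
    a = lam / mu                      # offered load
    rho = a / c                       # utilization, must be < 1
    assert rho < 1, "system is unstable"
    num = a**c / (math.factorial(c) * (1 - rho))
    den = sum(a**k / math.factorial(k) for k in range(c)) + num
    return num / den

def mean_wait(c, lam, mu):
    """Mean time an order waits before production starts (W_q)."""
    return erlang_c(c, lam, mu) / (c * mu - lam)

# Hypothetical surge: orders arrive at 9/hour, each line finishes 2/hour.
waits = {c: mean_wait(c, 9.0, 2.0) for c in (5, 6, 8)}
```

Adding lines shrinks the expected wait sharply near saturation, which is exactly the trade-off a simulation-and-optimization loop over the artificial system would explore.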


2022 ◽  
Vol 2022 (1) ◽  
Author(s):  
Jing Lin ◽  
Laurent L. Njilla ◽  
Kaiqi Xiong

Abstract Deep neural networks (DNNs) are widely used to handle many difficult tasks, such as image classification and malware detection, and achieve outstanding performance. However, recent studies on adversarial examples, inputs carrying malicious perturbations that are imperceptible to the human eye yet mislead machine learning models, show that such models are vulnerable to security attacks. Though various adversarial retraining techniques have been developed in the past few years, none of them is scalable. In this paper, we propose a new iterative adversarial retraining approach that robustifies the model and reduces the effectiveness of adversarial inputs on DNN models. The proposed method retrains the model with both Gaussian noise augmentation and adversarial generation techniques for better generalization. Furthermore, an ensemble model is used during the testing phase to increase the robust test accuracy. The results from our extensive experiments demonstrate that the proposed approach increases the robustness of the DNN model against various adversarial attacks, specifically the fast gradient sign attack, the Carlini and Wagner (C&W) attack, the Projected Gradient Descent (PGD) attack, and the DeepFool attack. To be precise, the robust classifier obtained by our proposed approach maintains a performance accuracy of 99% on average on the standard test set. Moreover, we empirically evaluate the runtime of two of the most effective adversarial attacks, the C&W attack and the Basic Iterative Method (BIM) attack, and find that the C&W attack can exploit the GPU for faster adversarial example generation than the BIM attack can. For this reason, we further develop a parallel implementation of the proposed approach, which makes it scalable for large datasets and complex models.
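As a minimal sketch of the attack being defended against, the fast gradient sign method can be demonstrated on a toy logistic-regression classifier; the weights, sample, and epsilon below are made up for illustration and stand in for a real DNN's gradient:

```python
import numpy as np

# Hypothetical trained weights of a toy logistic-regression classifier;
# any real model's input gradient would take their place.
w = np.array([2.0, -1.5, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, eps):
    """Fast gradient sign method: step x by eps in the sign of the loss
    gradient. For logistic regression with binary cross-entropy loss,
    dL/dx = (sigmoid(w.x + b) - y) * w."""
    grad = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad)

def gaussian_augment(x, sigma, rng):
    """Gaussian noise augmentation, used alongside adversarial examples
    during retraining for better generalization."""
    return x + rng.normal(scale=sigma, size=x.shape)

x = np.array([0.3, 0.2, -0.1])   # clean sample, true label y = 1
x_adv = fgsm(x, 1.0, eps=0.25)   # small step, crosses the decision boundary
rng = np.random.default_rng(0)
x_aug = gaussian_augment(x, 0.1, rng)
```

Retraining on batches mixing `x_adv`-style and `x_aug`-style inputs is the general pattern the paper's iterative scheme follows, though its generation step uses the stronger attacks listed above.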


2022 ◽  
Vol 15 (1) ◽  
pp. 63
Author(s):  
Natarajan Arul Murugan ◽  
Artur Podobas ◽  
Davide Gadioli ◽  
Emanuele Vitali ◽  
Gianluca Palermo ◽  
...  

Drug discovery is the most expensive, time-demanding, and challenging project in biopharmaceutical companies; it aims at the identification and optimization of lead compounds from large chemical libraries. Lead compounds should bind a disease-associated target with high affinity and specificity and, in addition, should have favorable pharmacodynamic and pharmacokinetic properties (grouped as ADMET properties). Overall, drug discovery is a multivariable optimization problem that can be carried out on supercomputers using a reliable scoring function, a measure of the binding affinity or inhibition potential of a drug-like compound. The major problem is that the number of compounds in chemical space is huge, making computational drug discovery very demanding, although it remains cheaper and less time-consuming than experimental high-throughput screening. As the problem is to find the most stable (global) minima for numerous protein–ligand complexes (on the order of 10^6 to 10^12), parallel implementations of in silico virtual screening can be exploited to complete drug discovery in affordable time. In this review, we discuss such implementations of parallelization algorithms in virtual screening programs. The nature of different scoring functions and search algorithms is discussed, together with a performance analysis of several docking software packages ported to high-performance computing architectures.
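The parallel-screening pattern the review surveys reduces to mapping a scoring function over a ligand library and keeping the top-k hits. In this sketch, `docking_score` is a hypothetical stand-in for a real docking engine (which would typically run as an external process per compound, so a thread pool suffices):

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def docking_score(ligand):
    """Stand-in for an expensive docking-score evaluation (lower is
    better); a real screen would invoke a docking engine here."""
    name, affinity_estimate = ligand
    return affinity_estimate, name

def screen(ligands, top_k=3, workers=4):
    """Embarrassingly parallel virtual screen: score every ligand in a
    worker pool and keep the top_k best (lowest) scores."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scored = pool.map(docking_score, ligands)
        return heapq.nsmallest(top_k, scored)

# Hypothetical library with synthetic affinity estimates.
library = [(f"cmpd{i}", (i * 37 % 101) / 10.0) for i in range(1000)]
hits = screen(library)
```

Real pipelines shard the library across cluster nodes the same way, with each node running the map-and-reduce step above over its shard.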


2022 ◽  
Vol 2161 (1) ◽  
pp. 012028
Author(s):  
Karamjeet Kaur ◽  
Sudeshna Chakraborty ◽  
Manoj Kumar Gupta

Abstract In bioinformatics, sequence alignment is a very important task for comparing and finding similarity between biological sequences. The Smith–Waterman algorithm is the most widely used algorithm for the alignment process, but it has quadratic time complexity, and because it uses a sequential approach, alignment takes too much time as the number of biological sequences grows. In this paper, a parallel version of the Smith–Waterman algorithm is proposed and implemented for the architecture of the graphics processing unit (GPU) using CUDA, combining the GPU with the CPU in such a way that the alignment process is three times faster than the sequential implementation, accelerating the performance of sequence alignment. This paper describes the parallel implementation of sequence alignment on the GPU; this intra-task parallelization strategy reduces the execution time, and the results show significant runtime savings on the GPU.
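A serial reference implementation clarifies what the GPU version parallelizes: each cell of the scoring matrix H depends only on its left, upper, and upper-left neighbors, so all cells on the same anti-diagonal are mutually independent and a CUDA kernel can fill an entire anti-diagonal concurrently. A minimal scoring-only sketch (no traceback), assuming the common linear gap penalty:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local alignment score via O(len(a)*len(b))
    dynamic programming. Cells on one anti-diagonal (i + j constant)
    are independent, which is what a GPU kernel exploits."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                  # local alignment floor
                          H[i - 1][j - 1] + s,  # match / mismatch
                          H[i - 1][j] + gap,    # deletion
                          H[i][j - 1] + gap)    # insertion
            best = max(best, H[i][j])
    return best
```

The scoring parameters here are illustrative defaults, not the ones used in the paper's experiments.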


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0262056
Author(s):  
Meghana Venkata Palukuri ◽  
Edward M. Marcotte

Characterization of protein complexes, i.e., sets of proteins assembling into a single larger physical entity, is important, as such assemblies play many essential roles in cells, such as gene regulation. From networks of protein-protein interactions, potential protein complexes can be identified computationally through community detection methods, which flag groups of entities interacting with each other in certain patterns. Most community detection algorithms are unsupervised and assume that communities are dense network subgraphs, which is not always true, as protein complexes can exhibit diverse network topologies. The few existing supervised machine learning methods are serial and can potentially be improved in accuracy and scalability by using better-suited machine learning models and parallel algorithms. Here, we present Super.Complex, a distributed, supervised AutoML-based pipeline for overlapping community detection in weighted networks. We also propose three new evaluation measures for the outstanding issue of satisfactorily comparing sets of learned and known communities. Super.Complex learns a community fitness function from known communities using an AutoML method and applies this fitness function to detect new communities. A heuristic local search algorithm finds maximally scoring communities, and a parallel implementation can be run on a computer cluster to scale to large networks. On a yeast protein-interaction network, Super.Complex outperforms 6 other supervised and 4 unsupervised methods. Applying Super.Complex to a human protein-interaction network with ~8k nodes and ~60k edges yields 1,028 protein complexes, with 234 complexes linked to SARS-CoV-2, the COVID-19 virus, and 111 uncharacterized proteins present in 103 learned complexes. Super.Complex is generalizable, with the ability to improve results by incorporating domain-specific features, and learned community characteristics can be transferred from existing applications to detect communities in a new application with no known communities. Code and interactive visualizations of learned human protein complexes are freely available at: https://sites.google.com/view/supercomplex/super-complex-v3-0.
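The core loop of such a pipeline, greatly simplified, pairs a community fitness function with greedy seed expansion. The density-based fitness below is a stand-in assumption (Super.Complex instead learns its fitness function from known complexes with AutoML), and independent seeds can be expanded in parallel across a cluster:

```python
def fitness(nodes, edges):
    """Stand-in community fitness: weighted edge density inside the
    candidate community. edges maps (u, v) pairs to weights."""
    nodes = set(nodes)
    inside = sum(w for (u, v), w in edges.items()
                 if u in nodes and v in nodes)
    possible = len(nodes) * (len(nodes) - 1) / 2
    return inside / possible if possible else 0.0

def grow_community(seed, edges, min_gain=1e-9):
    """Greedy heuristic local search: from a seed pair, repeatedly add
    the node that most improves fitness; stop when nothing helps."""
    community = set(seed)
    score = fitness(community, edges)
    while True:
        frontier = {x for (u, v) in edges for x in (u, v)} - community
        gains = [(fitness(community | {x}, edges), x) for x in frontier]
        if not gains:
            break
        best_score, best_node = max(gains)
        if best_score <= score + min_gain:
            break
        community.add(best_node)
        score = best_score
    return community, score
```

On a toy weighted graph, a seed inside a tight triangle stays put, while a weakly attached seed pair keeps absorbing neighbors; the actual pipeline adds pruning, overlap handling, and merging of near-duplicate communities.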


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Biqiu Li ◽  
Jiabin Wang ◽  
Xueli Liu

Data is an important source of knowledge discovery, but the existence of similar duplicate data not only increases the redundancy of the database but also affects subsequent data mining work, so cleaning similar duplicate data helps improve work efficiency. Given the complexity of the Chinese language and the performance bottleneck of single-machine systems on large-scale data, this paper proposes a Chinese data cleaning method that combines the BERT model with the k-means clustering algorithm and gives a parallel implementation scheme for the algorithm. In the text-to-vector process, a position vector is introduced to capture the context features of words, and each vector is adjusted dynamically according to the semantics, so that polysemous words obtain different vector representations in different contexts. The parallel implementation of this process is designed on Hadoop. The k-means clustering algorithm is then used to cluster similar duplicate data to achieve the cleaning goal. Experimental results on a variety of data sets show that the proposed parallel cleaning algorithm not only has good speedup and scalability but also improves the precision and recall of similar duplicate data cleaning, which is of great significance for subsequent data mining.
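The clustering stage can be sketched with plain Lloyd's k-means over sentence vectors. In the paper's pipeline the vectors come from BERT and the steps run distributed on Hadoop; the NumPy version below is a serial stand-in with made-up 2-D vectors in place of embeddings:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm). In the paper's pipeline X
    would hold BERT sentence vectors, and each cluster groups
    near-duplicate records."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)              # nearest center per point
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):          # converged
            break
        centers = new
    return labels, centers

def dedupe(records, labels):
    """Keep the first record of each cluster; later cluster members are
    treated as candidate near-duplicates and dropped."""
    seen, kept = set(), []
    for rec, c in zip(records, labels):
        if int(c) not in seen:
            seen.add(int(c))
            kept.append(rec)
    return kept

# Two tight "embedding" blobs standing in for two groups of duplicates.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.1]])
labels, _ = kmeans(X, k=2)
kept = dedupe(["rec_a", "rec_a_dup", "rec_b", "rec_b_dup"], labels)
```

The Hadoop version parallelizes the assignment step (each mapper labels a shard of points) and the center update (reducers average per cluster), which is the classic MapReduce formulation of k-means.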


Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3278
Author(s):  
Petr Pařík ◽  
Jin-Gyun Kim ◽  
Martin Isoz ◽  
Chang-uk Ahn

The enhanced Craig–Bampton (ECB) method is a novel extension of the original Craig–Bampton (CB) method, which has been widely used for component mode synthesis (CMS). By compensating for the residual modes that the CB method neglects, the ECB method provides a dramatic accuracy improvement of the reduced matrices without increasing the number of eigenbases. However, it also requires additional computation to treat the residual flexibility. In this paper, an efficient parallelization of the ECB method is presented to handle this issue and extend its applicability to large-scale structural vibration problems. A new ECB formulation within a substructuring strategy is derived to achieve better scalability. The parallel implementation is based on the OpenMP parallel architecture; METIS graph partitioning and the Linear Algebra Package (LAPACK) are used for automated algebraic partitioning and the computational linear algebra, respectively. Numerical examples are presented to evaluate the accuracy, scalability, and capability of the proposed parallel ECB method. Based on this work, one can expect efficient computation with the ECB method as well as improved accuracy.
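For orientation, the classic CB reduction that ECB extends can be sketched with dense linear algebra. This minimal NumPy/SciPy version omits the ECB residual-flexibility terms and the OpenMP parallelism; it only shows the static constraint modes, the fixed-interface modes, and the projection itself:

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, boundary, n_modes):
    """Classic Craig-Bampton reduction (the baseline that ECB extends
    with residual-flexibility compensation). K, M are symmetric
    stiffness/mass matrices; boundary lists the retained boundary DOFs."""
    n = K.shape[0]
    bset = set(boundary)
    interior = [d for d in range(n) if d not in bset]
    Kii = K[np.ix_(interior, interior)]
    Kib = K[np.ix_(interior, boundary)]
    Mii = M[np.ix_(interior, interior)]
    # Static constraint modes: interior response to unit boundary motion.
    Psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes: lowest n_modes of the clamped interior.
    _, Phi = eigh(Kii, Mii)
    Phi = Phi[:, :n_modes]
    # Assemble the transformation T = [[I, 0], [Psi, Phi]] in global order.
    nb = len(boundary)
    T = np.zeros((n, nb + n_modes))
    T[np.ix_(boundary, range(nb))] = np.eye(nb)
    T[np.ix_(interior, range(nb))] = Psi
    T[np.ix_(interior, range(nb, nb + n_modes))] = Phi
    return T.T @ K @ T, T.T @ M @ T
```

On a 5-DOF spring-mass chain with the two end DOFs as boundary and two fixed-interface modes kept, the reduced 4x4 system closely reproduces the full system's lowest eigenvalue, which is the accuracy behavior the ECB residual terms then improve further.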

