Comparison of Serial and Parallel Computation on Predicting Missing Data with EM Algorithm

2021 ◽  
Vol 18 (1) ◽  
pp. 22-30
Author(s):  
Erna Nurmawati ◽  
Robby Hasan Pangaribuan ◽  
Ibnu Santoso

One way to deal with missing values or incomplete data is to impute the data using the EM algorithm. Because large volumes of data must be processed quickly, it is worthwhile to parallelize the serial EM program. In the parallel program architecture of the EM algorithm in this study, the controller interacts only with the EM module, while the EM module itself uses the matrix and vector modules intensively. Parallelization is done with OpenMP in the EM module, which makes the parallel program's compute time shorter than the serial program's. Compared with two threads, parallel computing with four threads increases the speed-up and reduces the compute time, but lowers the efficiency.
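As a rough illustration of the scheme described above (the paper itself parallelizes C modules with OpenMP), here is a minimal Python sketch of EM-based imputation for multivariate-normal data whose per-row E-step is spread across a thread pool; the conditional-covariance correction of full EM is omitted for brevity, and all names are ours:

```python
# EM-style imputation with a parallelized E-step. Threads stand in
# for the paper's OpenMP parallelism (numpy's linear algebra releases
# the GIL, so the per-row work can overlap).
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def _impute_row(x, mu, sigma):
    m = np.isnan(x)                      # mask of missing cells
    if not m.any():
        return x
    o = ~m
    # E[x_m | x_o] = mu_m + S_mo S_oo^{-1} (x_o - mu_o)
    s_oo_inv = np.linalg.pinv(sigma[np.ix_(o, o)])
    x = x.copy()
    x[m] = mu[m] + sigma[np.ix_(m, o)] @ s_oo_inv @ (x[o] - mu[o])
    return x

def em_impute(X, n_iter=50, n_threads=4):
    X = np.asarray(X, dtype=float)
    filled = np.where(np.isnan(X), np.nanmean(X, axis=0), X)  # init
    for _ in range(n_iter):
        mu = filled.mean(axis=0)                   # M-step
        sigma = np.cov(filled, rowvar=False)
        with ThreadPoolExecutor(max_workers=n_threads) as ex:
            filled = np.vstack(list(ex.map(        # parallel E-step
                lambda row: _impute_row(row, mu, sigma), X)))
    return filled
```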

2015 ◽  
Vol 4 (2) ◽  
pp. 74
Author(s):  
MADE SUSILAWATI ◽  
KARTIKA SARI

Missing data often occur in agricultural and animal husbandry experiments. Missing data in an experimental design make the information obtained less complete. In this research, the missing data were estimated with the Yates method and with the Expectation Maximization (EM) algorithm. The basic concept of the Yates method is to minimize the error sum of squares (JKG), whereas the basic concept of the EM algorithm is to maximize the likelihood function. The research used a Balanced Lattice Design with 9 treatments, 4 replications, and 3 groups in each replication. The estimation results showed that the Yates method was better for two missing values positioned in the same treatment, in the same column, or at random, whereas the EM algorithm was better for estimating one missing value and for two missing values positioned in the same group or the same replication. Comparing the error sums of squares from the ANOVA showed that the JKG of the incomplete data was larger than the JKG of the data completed with the estimates, which suggests that estimating the missing data is worthwhile.
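For context, the Yates principle for a single missing plot has a well-known closed form in a randomized complete block design (the balanced lattice case studied here generalizes it); a sketch, with all names ours:

```python
# Yates estimate of one missing plot in a randomized complete block
# design: the value that minimizes the error sum of squares (JKG).
def yates_estimate(t, r, T, B, G):
    """t: number of treatments, r: number of blocks,
    T: observed total of the treatment containing the missing plot,
    B: observed total of its block, G: observed grand total."""
    return (t * T + r * B - G) / ((t - 1) * (r - 1))

# Example with made-up totals: 5 treatments, 4 blocks.
print(yates_estimate(t=5, r=4, T=20.0, B=15.0, G=100.0))  # -> 5.0
```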


2003 ◽  
Vol 13 (03) ◽  
pp. 473-484 ◽  
Author(s):  
KONRAD HINSEN

One of the main obstacles to a more widespread use of parallel computing in computational science is the difficulty of implementing, testing, and maintaining parallel programs. The combination of a simple parallel computation model, BSP, and a high-level programming language, Python, simplifies these tasks significantly. It allows the rapid development facilities of Python to be applied to parallel programs, providing interactive development as well as interactive debugging of parallel code.
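As a hedged illustration of the BSP model the paper builds on (this sketch uses mpi4py rather than the paper's own Scientific.BSP API), a superstep is local computation followed by communication and a barrier:

```python
# Generic illustration of a BSP superstep in Python via mpi4py.
# Run with e.g.: mpiexec -n 4 python bsp_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Superstep 1: local computation on this process's slice of the data.
local = sum(range(rank * 100, (rank + 1) * 100))

# Communication phase: exchange partial results among all processes.
partials = comm.allgather(local)
comm.Barrier()   # end of superstep (allgather already synchronizes;
                 # the barrier just marks the BSP structure explicitly)

# Superstep 2: every process now holds all partials.
if rank == 0:
    print("total =", sum(partials))
```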


2014 ◽  
Vol 1049-1050 ◽  
pp. 1320-1326
Author(s):  
Xiao Jin ◽  
Xing Jin Zhang ◽  
Zhi Yun Zheng ◽  
Quan Min Li ◽  
Li Ping Lu

This paper proposes a novel parallel computing method for semantic similarity in linked data to address problems of low efficiency and data dispersion. It combines existing similarity calculation methods with the MapReduce parallel computation framework to design an appropriate parallel method for computing similarity. First, three typical similarity computing methods and the parallel programming models are introduced. Then, using the MapReduce programming techniques of cloud computing, a parallel computation of similarity in linked data is proposed. The experimental results show that, compared with traditional platforms, the parallel similarity computation on a Hadoop cluster not only improves capacity and efficiency in processing massive data, but also achieves a better speed-up ratio and scalability.
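A minimal single-process simulation of the MapReduce flow sketched above, assuming hypothetical linked-data property sets and a Jaccard-style similarity (the paper's actual similarity measures may differ):

```python
# Mappers emit ((resource_a, resource_b), 1) for each shared property,
# a simulated shuffle groups the emissions by key, and reducers turn
# the counts into a Jaccard-style similarity score.
from collections import defaultdict
from itertools import combinations

resources = {                       # toy linked-data descriptions
    "db:Berlin":  {"type:City", "country:DE", "hasMayor"},
    "db:Hamburg": {"type:City", "country:DE", "hasPort"},
    "db:Rhine":   {"type:River", "country:DE"},
}

def mapper(pair):
    (a, props_a), (b, props_b) = pair
    for _ in props_a & props_b:     # one emission per shared property
        yield (a, b), 1

def reducer(a, b, counts):
    union = len(resources[a] | resources[b])
    return (a, b), sum(counts) / union   # Jaccard similarity

grouped = defaultdict(list)         # simulated shuffle/sort phase
for pair in combinations(resources.items(), 2):
    for key, value in mapper(pair):
        grouped[key].append(value)

for (a, b), counts in grouped.items():
    print(reducer(a, b, counts))
```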


2018 ◽  
Vol 17 (3) ◽  
pp. 439
Author(s):  
I Putu Adi Pradnyana Wibawa ◽  
IA Dwi Giriantari ◽  
Made Sudarma

The growth of technology drives data growth beyond the limits of conventional database management tools. One such system is a hospital information management system, where the load of data makes problem solving highly complex. Parallel computing is one of the techniques used in HPC. This research focuses on the design of parallel computing with the message-passing model as the search process in a hospital information management system for finding patient data. The design distributes the computation across a number of CPUs (master and slaves), configured using the stages of the Foster method: partitioning, communication, agglomeration, and mapping. The tests compare the data-processing time of the sequential and parallel versions, and the parallel design is evaluated with speed-up and efficiency calculations. The results of designing and testing the message-passing parallel program proved that its patient-data processing speed is able to outperform a single CPU running the sequential program on the same network topology. The speed-up tests indicate an increase in data-transfer speed-up with 3 CPUs, while the efficiency peaks with 2 and 3 CPUs. The drop in speed-up and efficiency beyond that point was caused by the amount of data being too small when handled by 7 CPUs. The conclusion is that increasing the number of CPUs involved in parallel data processing does not proportionally reduce the processing time, because for a given amount of data each processing task has an ideal number of CPUs.
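A hedged sketch of the master/slave message-passing search described above, using mpi4py; the record layout and the search key are hypothetical:

```python
# The master partitions the patient records (Foster's partitioning
# step), each process searches its own chunk, and the matches are
# gathered back at the master.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    records = [{"id": i, "name": f"patient-{i}"} for i in range(1000)]
    chunks = [records[i::size] for i in range(size)]   # partitioning
else:
    chunks = None

chunk = comm.scatter(chunks, root=0)        # communication: distribute
hits = [r for r in chunk if r["name"] == "patient-42"]  # local search
all_hits = comm.gather(hits, root=0)        # communication: collect

if rank == 0:
    print([h for part in all_hits for h in part])
```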


METRON ◽  
2021 ◽  
Author(s):  
Paolo Mariani ◽  
Andrea Marletta

Social media has become a widespread element of people's everyday life, used to communicate and to generate content. Among the several ways to express a reaction to social media content, the "Likes" are critical. Indeed, they convey preferences, which drive existing markets or allow the creation of new ones. Nevertheless, the appreciation indicators have some complex features, such as the interpretation of the absence of "Likes". In this case, the lack of approval may be considered a specific behaviour. The present study aimed to define whether the absence of "Likes" may indicate the presence of a specific behaviour, through the contextualization of the treatment of missing data applied to real cases. We provided a practical strategy for extracting more knowledge from social media data, whose synthesis raises several measurement problems. We proposed an approach based on the disambiguation of missing data into two modalities: "Dislike" and "Nothing". Finally, a data pre-processing technique was suggested to increase the signal of social media data.
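The abstract does not spell out the rule that separates the two modalities, so the sketch below assumes a hypothetical exposure flag: a missing "Like" from a user who saw the content is read as "Dislike", while one without exposure remains uninformative ("Nothing"). All field names are illustrative only:

```python
# Hypothetical disambiguation of missing "Likes" into two modalities.
import pandas as pd

posts = pd.DataFrame({
    "user":    ["a", "b", "c", "d"],
    "liked":   [1, None, None, 1],      # None = no "Like" recorded
    "exposed": [True, True, False, True],
})

def disambiguate(row):
    if row["liked"] == 1:
        return "Like"
    # Missing value: exposure without a "Like" is read as "Dislike";
    # no exposure at all stays uninformative ("Nothing").
    return "Dislike" if row["exposed"] else "Nothing"

posts["reaction"] = posts.apply(disambiguate, axis=1)
print(posts)
```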


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Nishith Kumar ◽  
Md. Aminul Hoque ◽  
Masahiro Sugimoto

Mass spectrometry is a modern and sophisticated high-throughput analytical technique that enables large-scale metabolomic analyses. It yields a high-dimensional large-scale matrix (samples × metabolites) of quantified data that often contains missing cells as well as outliers originating from several technical and biological sources. Although several missing data imputation techniques are described in the literature, the existing conventional techniques address only the missing values; they do not mitigate the outliers, which therefore decrease the accuracy of the imputation. We developed a new kernel weight function-based missing data imputation technique that resolves the problems of both missing values and outliers. We evaluated the performance of the proposed method and of other conventional and recently developed missing imputation techniques using both artificially generated data and experimentally measured data, in both the absence and the presence of different rates of outliers. Performances based on both artificial data and real metabolomics data indicate the superiority of our proposed kernel weight-based missing data imputation technique over the existing alternatives. For user convenience, an R package of the proposed technique was developed, which is available at https://github.com/NishithPaul/tWLSA.
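As a generic illustration of the idea (not the authors' tWLSA algorithm), a kernel-weighted mean imputation down-weights outlying values so they contribute little to the estimate:

```python
# Each missing cell is filled with a weighted mean of its column;
# Gaussian kernel weights built on a robust scale (MAD) suppress
# outlying observations.
import numpy as np

def kernel_weighted_impute(X, bandwidth=1.5):
    X = np.asarray(X, dtype=float).copy()
    for j in range(X.shape[1]):
        col = X[:, j]
        obs = col[~np.isnan(col)]
        med = np.median(obs)
        mad = np.median(np.abs(obs - med)) + 1e-12    # robust scale
        w = np.exp(-((obs - med) / (bandwidth * mad)) ** 2)
        col[np.isnan(col)] = np.sum(w * obs) / np.sum(w)
        X[:, j] = col
    return X

X = np.array([[1.0, 2.0], [np.nan, 100.0], [1.2, 2.1], [0.9, np.nan]])
print(kernel_weighted_impute(X))   # the 100.0 outlier barely matters
```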


Author(s):  
Yasuhito Takahashi ◽  
Koji Fujiwara ◽  
Takeshi Iwashita ◽  
Hiroshi Nakashima

Purpose: This paper aims to propose a parallel-in-space-time finite-element method (FEM) for transient motor starting analyses. Although the domain decomposition method (DDM) is suitable for solving large-scale problems, and parallel-in-time (PinT) integration methods such as Parareal and the time-domain parallel FEM (TDPFEM) are effective for problems with many time steps, their parallel performance saturates as the number of processes increases. To overcome this difficulty, a hybrid approach in which both the DDM and a PinT integration method are used is investigated in a highly parallel computing environment.

Design/methodology/approach: First, the parallel performances of the DDM, Parareal and TDPFEM were compared, because the scalability of these methods in highly parallel computation had not been discussed in depth. Then, the combination of the DDM and Parareal was investigated as a parallel-in-space-time FEM. The effectiveness of the developed method was demonstrated in transient starting analyses of induction motors.

Findings: Combining Parareal with the DDM can improve the parallel performance in cases where the parallel performance of the DDM, TDPFEM or Parareal alone saturates in highly parallel computation. Where the number of unknowns is large and the number of available processes is limited, the DDM alone is the most effective in terms of computational cost.

Originality/value: This paper newly develops the parallel-in-space-time FEM and demonstrates its effectiveness in nonlinear magnetoquasistatic field analyses of electric machines. This finding is significant because it clarifies a new direction for parallel computing techniques and the great potential for their further development.
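For intuition about the PinT layer, here is a minimal sketch of the Parareal iteration on a toy ODE u' = -u (the fine solves over each time window are independent and would run in parallel; the FEM spatial part of the paper is not modeled):

```python
# Parareal: a cheap coarse propagator sweeps sequentially, expensive
# fine propagators run (potentially in parallel) over each window,
# and a correction sweep combines the two.
import numpy as np

def coarse(u, dt):                 # one large backward-Euler step
    return u / (1 + dt)

def fine(u, dt, m=20):             # m small steps across the window
    for _ in range(m):
        u = u / (1 + dt / m)
    return u

def parareal(u0, T=1.0, n_windows=10, n_iter=5):
    dt = T / n_windows
    U = np.empty(n_windows + 1)
    U[0] = u0
    for n in range(n_windows):     # initial coarse sweep
        U[n + 1] = coarse(U[n], dt)
    for _ in range(n_iter):
        F = [fine(U[n], dt) for n in range(n_windows)]  # parallelizable
        Unew = U.copy()
        for n in range(n_windows):                      # correction sweep
            Unew[n + 1] = coarse(Unew[n], dt) + F[n] - coarse(U[n], dt)
        U = Unew
    return U

print(parareal(1.0))               # approaches exp(-t) at the windows
```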


2013 ◽  
Vol 411-414 ◽  
pp. 585-588
Author(s):  
Liu Yang ◽  
Tie Ying Liu

This paper introduces the parallel features of the GPU and uses GPU parallel computation to parallelize the path-search process of particle swarm optimization (PSO), reducing PSO's increasingly severe time and space complexity. The experimental results show that, compared with the CPU mode, the GPU platform improves the search rate and shortens the calculation time.
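A sketch of the data-parallel step that the GPU accelerates: all particles' velocity updates and fitness evaluations happen at once. NumPy stands in for the GPU here (on real hardware the same array code could run via CuPy), and the sphere function is a toy stand-in for the path-search objective:

```python
# Vectorized PSO: every array operation updates all particles at once,
# which is exactly the work a GPU would perform in parallel.
import numpy as np

rng = np.random.default_rng(0)
n, dim, iters = 64, 2, 100
w, c1, c2 = 0.7, 1.5, 1.5

x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), np.sum(x**2, axis=1)  # fitness of all particles
gbest = pbest[pbest_f.argmin()]

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)  # all velocities
    x = x + v
    f = np.sum(x**2, axis=1)                         # parallel fitness
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[pbest_f.argmin()]

print(gbest, pbest_f.min())        # should be near the origin / zero
```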


2013 ◽  
Vol 284-287 ◽  
pp. 3428-3432 ◽  
Author(s):  
Yu Hsiu Huang ◽  
Richard Chun Hung Lin ◽  
Ying Chih Lin ◽  
Cheng Yi Lin

Most applications of traditional full-text search, e.g., webpage search, are offline: they exploit a text search engine to preview the texts and build a related index. Applications of online realtime full-text search, e.g., network intrusion detection and prevention systems (IDPS), are hard to implement with commodity hardware: dedicated solutions are expensive and inflexible in the face of ever more new virus patterns, the text cannot be previewed, and the search must be completed online in real time. Additionally, an IDPS needs multi-pattern matching, so that malicious packets can be removed immediately from normal ones without degrading network performance. Considering the problem of realtime multi-pattern matching, we implement two sequential algorithms, Wu-Manber and Aho-Corasick, on a GPU parallel computation platform. Both pattern-matching algorithms are well suited to cases with a large number of patterns, and both extend easily on the GPU platform to satisfy realtime requirements. Our experimental results show that the throughput of the GPU implementation is about five to seven times that of the CPU. Pattern matching on the GPU therefore offers an attractive IDPS solution for speeding up the detection of malicious packets among normal traffic, considering its lower cost, easy expansion, and better performance.
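For reference, a compact serial Python version of the Aho-Corasick automaton used above; it shows the multi-pattern matching itself, not the GPU mapping:

```python
# Aho-Corasick: build a trie of the patterns, add failure links via
# BFS, then scan the text once, reporting every pattern occurrence.
from collections import deque

def build(patterns):
    goto, fail, out = [{}], [0], [set()]
    for p in patterns:
        s = 0
        for ch in p:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(p)
    q = deque(goto[0].values())      # depth-1 states fail to the root
    while q:
        s = q.popleft()
        for ch, t in goto[s].items():
            q.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]          # follow failure links
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]   # inherit matches from suffix state
    return goto, fail, out

def search(text, patterns):
    goto, fail, out = build(patterns)
    s, hits = 0, []
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits += [(i - len(p) + 1, p) for p in out[s]]
    return hits

print(search("ushers", ["he", "she", "his", "hers"]))
# -> matches for "she", "he", "hers" with their start offsets
```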


Author(s):  
Caio Ribeiro ◽  
Alex A. Freitas

Longitudinal datasets from human ageing studies usually have a high volume of missing data, and one way to handle missing values in a dataset is to replace them with estimations. However, there are many methods to estimate missing values, and no single method is the best for all datasets. In this article, we propose a data-driven missing value imputation approach that performs a feature-wise selection of the best imputation method, using known information in the dataset to rank the five methods we selected based on their estimation error rates. We evaluated the proposed approach in two sets of experiments: a classifier-independent scenario, where we compared the applicabilities and error rates of each imputation method; and a classifier-dependent scenario, where we compared the predictive accuracy of Random Forest classifiers generated with datasets prepared using each imputation method and a baseline approach of doing no imputation (letting the classification algorithm handle the missing values internally). Based on our results from both sets of experiments, we concluded that the proposed data-driven missing value imputation approach generally resulted in models with more accurate estimations for missing data and better performing classifiers in longitudinal datasets of human ageing. We also observed that imputation methods devised specifically for longitudinal data produced very accurate estimations. This reinforces the idea that using the temporal information intrinsic to longitudinal data is a worthwhile endeavour for machine learning applications, and that this can be achieved through the proposed data-driven approach.
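A minimal sketch of the feature-wise selection idea: for each column, hide some known cells, let each candidate method estimate them, and keep the method with the lowest error. The two candidates here are toy stand-ins for the paper's five methods, and all names are ours:

```python
# Rank candidate imputation methods per feature by masking known
# values and measuring each method's estimation error on them.
import numpy as np

def rank_methods(col, methods, mask_frac=0.2, seed=0):
    rng = np.random.default_rng(seed)
    known = np.flatnonzero(~np.isnan(col))
    held = rng.choice(known, max(1, int(mask_frac * known.size)),
                      replace=False)
    probe = col.copy()
    probe[held] = np.nan                       # hide known cells
    errors = {name: np.mean(np.abs(f(probe, held) - col[held]))
              for name, f in methods.items()}
    return min(errors, key=errors.get), errors

# Each candidate returns estimates for the requested positions.
methods = {
    "mean":   lambda c, idx: np.full(idx.size, np.nanmean(c)),
    "median": lambda c, idx: np.full(idx.size, np.nanmedian(c)),
}

col = np.array([1.0, 2.0, np.nan, 4.0, 5.0, 100.0, 3.0, np.nan, 2.5, 3.5])
print(rank_methods(col, methods))  # best method name and error table
```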

