FACC: A Novel Finite Automaton Based on Cloud Computing for the Multiple Longest Common Subsequences Search

2012 ◽  
Vol 2012 ◽  
pp. 1-17 ◽  
Author(s):  
Yanni Li ◽  
Yuping Wang ◽  
Liang Bao

Searching for the multiple longest common subsequences (MLCS) has significant applications in bioinformatics, information processing, data mining, and other areas. Although a few parallel MLCS algorithms have been proposed, their efficiency and effectiveness are not satisfactory given the increasing complexity and size of biological data. To overcome the shortcomings of existing MLCS algorithms, and considering that the MapReduce parallel framework of cloud computing is a promising technology for cost-effective high-performance parallel computing, a novel finite automaton (FA) based on cloud computing, called FACC, is proposed under the MapReduce parallel framework, so as to obtain a more efficient and effective general parallel MLCS algorithm. FACC adopts the ideas of matched pairs and finite automata: it preprocesses the sequences, constructs successor tables, and builds a common-subsequence finite automaton to search for the MLCS. Simulation experiments on a set of benchmarks from both real DNA and amino acid sequences have been conducted, and the results show that the proposed FACC algorithm outperforms the current leading parallel MLCS algorithm, FAST-MLCS.
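The successor tables mentioned in the abstract are a standard building block of subsequence-search algorithms. A minimal sketch of the idea, in Python (the function names and the exact table layout here are illustrative assumptions, not the paper's API): for a sequence S, the table stores, for every position i and every character c, the first occurrence of c at or after position i, so that testing whether a candidate string is a common subsequence of many sequences takes one table lookup per character.

```python
def build_successor_table(seq, alphabet):
    """table[i][c] = smallest index j >= i with seq[j] == c, or None.

    table has len(seq) + 1 rows; the last row is all None (no successors
    past the end of the sequence).
    """
    n = len(seq)
    table = [dict.fromkeys(alphabet) for _ in range(n + 1)]
    # Fill from right to left: row i is row i+1 with seq[i] overridden.
    for i in range(n - 1, -1, -1):
        table[i] = dict(table[i + 1])
        table[i][seq[i]] = i
    return table


def is_common_subsequence(pattern, seqs, alphabet):
    """Check that pattern is a subsequence of every sequence in seqs."""
    for s in seqs:
        table = build_successor_table(s, alphabet)
        pos = 0
        for c in pattern:
            nxt = table[pos][c]
            if nxt is None:
                return False  # c does not occur at or after pos in s
            pos = nxt + 1     # continue matching after the match point
    return True


# Example: "AT" is a common subsequence of both strings; "TA" is not.
assert is_common_subsequence("AT", ["ACTG", "AGT"], "ACGT")
assert not is_common_subsequence("TA", ["ACTG", "AGT"], "ACGT")
```

An FA-style MLCS search would extend candidate subsequences character by character, using such tables to prune candidates that cannot be extended in every input sequence.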

2018 ◽  
Vol 7 (4.6) ◽  
pp. 13
Author(s):  
Mekala Sandhya ◽  
Ashish Ladda ◽  
Dr. Uma N Dulhare

In this generation of the Internet, information and data are growing continuously. Across various Internet services and applications, the amount of information is increasing rapidly; hundreds of billions, even trillions, of web indexes exist. Such large data brings people a mass of information and, at the same time, more difficulty in discovering useful knowledge within it. Cloud computing can provide the infrastructure for large data. Cloud computing has two significant characteristics of distributed computing: scalability and high availability. Scalability means the system can seamlessly extend to large-scale clusters. Availability means that cloud computing can tolerate node errors: node failures will not prevent a program from running correctly. Cloud computing combined with data mining performs significant data processing through high-performance machines. Mass data storage and distributed computing provide a new method for mass data mining and become an effective solution to distributed storage and efficient computing in data mining.
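The distributed data-mining pattern the passage alludes to is typically expressed as MapReduce: map over data shards, shuffle results by key, then reduce each key group. A minimal single-process sketch in plain Python (a stand-in for a real cloud framework such as Hadoop; the shard contents below are made up for illustration):

```python
from collections import defaultdict


def map_phase(shard):
    """Map: emit (token, 1) for every token in every record of a shard."""
    for record in shard:
        for token in record.split():
            yield token, 1


def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups


def reduce_phase(groups):
    """Reduce: aggregate each key's values (here, a simple count)."""
    return {key: sum(values) for key, values in groups.items()}


# Shards would live on different nodes in a real cluster.
shards = [["cloud data mining", "big data"], ["data storage"]]
pairs = [p for shard in shards for p in map_phase(shard)]
counts = reduce_phase(shuffle(pairs))
# counts["data"] == 3
```

Because the map phase is independent per shard and the reduce phase is independent per key, both parallelize across cluster nodes, which is what gives the scalability and node-failure tolerance described above.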


Author(s):  
Patrick Dreher ◽  
Mladen Vouk

This chapter describes an economical and scalable open source cloud computing technology suitable for a university environment where the need is to simultaneously serve a diverse spectrum of educational and research missions. In particular, this chapter reviews a cloud computing technology called Virtual Computing Lab (VCL, http://vcl.ncsu.edu). This open source technology was originally designed and built at North Carolina State University (NCSU), and it seamlessly supports both the electronic teaching and learning needs of the university and a robust environment for faculty high-performance computing research. Extensive data from the NCSU VCL production system show the economic scalability of the solution. The authors discuss the economic viability of the solution and the trade-off analysis needed to understand how much equipment, virtualization, and workload balancing among on-demand and background workloads is required.


2020 ◽  
Vol 17 (9) ◽  
pp. 4411-4418
Author(s):  
S. Jagannatha ◽  
B. N. Tulasimala

In the world of information and communication technology (ICT), the term cloud computing has been the buzzword. Cloud computing keeps changing its definition as technocrats use it according to their environment. As a definition, cloud computing remains contentious: it is stated relative to a particular application, with no unanimous definition, making it altogether elusive. In spite of this, it is this technology that is revolutionizing the traditional use of computer hardware, software, data storage media, and processing mechanisms, with more benefits to the stakeholders. In the past, the use of autonomous computers, and of nodes interconnected into computer networks with shared software resources, had minimized the cost of hardware and, to a certain extent, of software. Evolutionary changes in computing technology over a few decades have thus brought platform and environment changes in machine architecture, operating systems, network connectivity, and application workload, making the commercial use of technology more predominant. Instead of centralized systems, parallel and distributed systems will be preferred for solving computational problems in the business domain. Such hardware is ideal for solving large-scale problems over the Internet; this computing model is data-intensive and network-centric. Most organizations using ICT used to find storing huge volumes of data, maintaining and processing them, and communicating through the Internet to automate the entire process a challenge. In this paper we explore the growth of cloud computing technology over several years: how high-performance computing systems and high-throughput computing systems enhance computational performance, and how cloud computing technology, according to various experts, the scientific community, and the service providers, is going to be more cost-effective through different dimensions of the business aspects.


Author(s):  
Kiran Kumar S V N Madupu

Big Data has a terrific influence on scientific discovery and value creation. This paper presents approaches to data mining and modern technologies for Big Data. Difficulties of data mining in general, and of data mining with big data, are discussed. Some technological developments in data mining, and in data mining with big data, are additionally presented.


TAPPI Journal ◽  
2018 ◽  
Vol 17 (09) ◽  
pp. 507-515 ◽  
Author(s):  
David Skuse ◽  
Mark Windebank ◽  
Tafadzwa Motsi ◽  
Guillaume Tellier

When pulp and minerals are co-processed in aqueous suspension, the mineral acts as a grinding aid, facilitating the cost-effective production of fibrils. Furthermore, this processing allows the utilization of robust industrial milling equipment. There are 40,000 dry metric tons of mineral/microfibrillated cellulose (MFC) composite production capacity in operation across three continents. These mineral/MFC products have been cleared by the FDA for use as a dry and wet strength agent in coated and uncoated food contact paper and paperboard applications. We have previously reported that the use of these mineral/MFC composite materials in fiber-based applications generally improves wet and dry mechanical properties, with concomitant opportunities for cost savings, property improvements, or grade developments, and that the materials can be prepared using a range of fibers and minerals. Here, we: (1) report the development of new products that offer improved performance, (2) compare the performance of these new materials with that of a range of other nanocellulosic material types, (3) illustrate the performance of these new materials in reinforcement (paper and board) and viscosification applications, and (4) discuss product form requirements for different applications.


2011 ◽  
Vol 39 (3) ◽  
pp. 193-209 ◽  
Author(s):  
H. Surendranath ◽  
M. Dunbar

Abstract Over the last few decades, finite element analysis has become an integral part of the overall tire design process. Engineers need to perform a number of different simulations to evaluate new designs and study the effect of proposed design changes. However, tires pose formidable simulation challenges due to the presence of highly nonlinear rubber compounds, embedded reinforcements, complex tread geometries, rolling contact, and large deformations. Accurate simulation requires careful consideration of these factors, resulting in extensive turnaround times that often prolong the design cycle. Therefore, it is extremely critical to explore means of reducing turnaround time while producing reliable results. Compute clusters have recently become a cost-effective means to perform high-performance computing (HPC). Distributed memory parallel solvers designed to take advantage of compute clusters have become increasingly popular. In this paper, we examine the use of HPC for various tire simulations and demonstrate how it can significantly reduce simulation turnaround time. Abaqus/Standard is used for routine tire simulations like footprint and steady-state rolling. Abaqus/Explicit is used for transient rolling and hydroplaning simulations. The run times and scaling data corresponding to models of various sizes and complexity are presented.
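Scaling data of the kind this abstract describes is usually summarized as speedup and parallel efficiency, with Amdahl's law giving the ceiling imposed by the serial fraction of a job. A small hedged sketch of those calculations (the numbers in the example are made up, not the paper's measured run times):

```python
def speedup(t_serial, t_parallel):
    """Observed speedup from measured wall-clock times."""
    return t_serial / t_parallel


def efficiency(t_serial, t_parallel, n_cores):
    """Parallel efficiency: observed speedup per core (1.0 is ideal)."""
    return speedup(t_serial, t_parallel) / n_cores


def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: best possible speedup when only a fraction
    of the work parallelizes."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_cores)


# Hypothetical run: 100 h on one core, 25 h on 8 cores.
s = speedup(100.0, 25.0)        # 4.0x
e = efficiency(100.0, 25.0, 8)  # 0.5, i.e. 50% efficiency
# If 90% of the solve parallelizes, 8 cores can give at most ~4.7x.
cap = amdahl_speedup(0.9, 8)
```

Tabulating these quantities across model sizes is how one reads "scaling data": efficiency well below 1.0 at high core counts signals that serial portions (or communication) dominate.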

