Graph Based Workload Driven Partitioning System by using MongoDB

Author(s):  
Arvind Sahu ◽  
Swati Ahirrao

<p>Enterprise web applications and websites are accessed by huge numbers of users who expect reliability and high availability. Social networking sites generate exponentially large amounts of data, and storing that data efficiently is a challenging task. SQL and NoSQL are the stores most often used. Because an RDBMS cannot handle unstructured data or very large volumes of data, NoSQL is the better choice for web applications. A graph database is one of the efficient NoSQL ways to store data: it stores data in the form of relationships, where each tuple is represented by a node and each relationship by an edge. However, handling exponentially growing data on a single server decreases performance and increases response time. Data partitioning is a good way to maintain moderate performance even as the workload increases. There are many data partitioning techniques, such as range, hash, and round-robin, but they are not efficient for small transactions that access only a few tuples. NoSQL data stores provide scalability and availability by using various partitioning methods. Graph partitioning is an efficient way to achieve scalability, because the data can be easily represented and processed as a graph. To balance the load, the data is partitioned horizontally and allocated across geographically available data stores. If the partitions are not formed properly, the result is distributed transactions that are expensive in terms of response time, so tuples should be partitioned based on their relationships. The proposed system uses Schism, a workload-aware graph partitioning technique, for partitioning the graph, so that after partitioning the related tuples end up in a single partition. Each individual node of the graph is mapped to a unique partition.
The overall aim of graph partitioning is to place nodes onto different distributed partitions so that related data ends up in the same cluster.</p>
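The workload-driven partitioning idea can be sketched in a few lines: transactions become weighted edges between the tuples they co-access, and a min-cut bisection keeps heavily co-accessed tuples on the same partition. This is an illustrative sketch only, not the paper's code; the sample workload is invented, and networkx's Kernighan–Lin bisection stands in for the METIS partitioner that Schism itself uses.

```python
# Schism-style sketch: tuples are nodes, co-accessed tuples get weighted
# edges, and a min-cut bisection assigns related tuples to one partition.
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

# Each transaction lists the tuple ids it touches (invented workload).
workload = [
    ["u1", "u2"], ["u1", "u2"], ["u3", "u4"],
    ["u3", "u4"], ["u1", "u2"], ["u2", "u5"],
]

G = nx.Graph()
for txn in workload:
    for i in range(len(txn)):
        for j in range(i + 1, len(txn)):
            a, b = txn[i], txn[j]
            if G.has_edge(a, b):
                G[a][b]["weight"] += 1   # heavier edge = more co-access
            else:
                G.add_edge(a, b, weight=1)

# Min-cut bisection keeps heavily co-accessed tuples together, which
# minimizes the number of distributed transactions.
part_a, part_b = kernighan_lin_bisection(G, weight="weight", seed=1)
print(sorted(part_a), sorted(part_b))
```

A real system would then build a lookup table from tuple id to partition, which is the mapping the routing layer consults at transaction time.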

Author(s):  
Anagha Bhunje ◽  
Swati Ahirrao

<p><span lang="EN-US">Numerous applications are deployed on the web as the popularity of the internet grows, including 1) banking applications, 2) gaming applications, and 3) e-commerce web applications. These different applications rely on OLTP (Online Transaction Processing) systems, which need to be scalable and require fast responses. Modern web applications generate huge amounts of data that a single machine and relational databases cannot handle. E-commerce applications in particular face the challenge of improving the scalability of the system. Data partitioning is used to improve scalability: the data is distributed among different machines, which supports a growing number of transactions. A workload-aware incremental repartitioning approach is used to balance the load among the partitions and to reduce the number of distributed transactions. A hypergraph representation is used to model the entire transactional workload in graph form. In this technique, frequently co-accessed items are collected and grouped using the Fuzzy C-means clustering algorithm. A tuple classification and migration algorithm then maps clusters to partitions, after which the tuples are migrated efficiently.</span></p>
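The clustering step described above can be sketched with a minimal Fuzzy C-means implementation: each tuple gets a feature vector of access counts, soft cluster memberships are computed, and each tuple is then mapped to the partition where its membership is highest. This is an illustrative sketch, not the paper's code; the feature vectors and cluster count are invented.

```python
# Minimal Fuzzy C-means, then a hard cluster-to-partition assignment.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    """Return (cluster centers, membership matrix U) for data X."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per row
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)                  # avoid division by zero
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Rows: tuples; columns: access counts from three transaction groups
# (made-up numbers; co-accessed tuples have similar rows).
X = np.array([[9., 1., 0.], [8., 2., 1.], [0., 9., 1.],
              [1., 8., 0.], [0., 1., 9.], [1., 0., 8.]])
centers, U = fuzzy_c_means(X, c=3)
partition_of = U.argmax(axis=1)                # hard assignment per tuple
print(partition_of)
```

The soft memberships in `U` are what make the repartitioning incremental-friendly: a tuple whose top two memberships are close is a cheap migration candidate when the load shifts.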


Author(s):  
Saifuddin Saifuddin ◽  
Royyana Muslim Ijtihadie ◽  
Baskoro Adi Pratomo

A large proportion of service providers' websites run on the Linux operating system. When one of the websites on a shared web server is taken over, the other websites will most likely also be compromised by reading the config files that connect them to their databases. The mechanism used to read a config file relies on a command available in Linux by default: “ln -s”, also known as a “symlink”, which can read the directory of another web application even when its config resides in a different directory. The results show that the configs of web applications in directories on a single server can be read using this method, but the user, password, and dbname cannot be recovered when they are encoded, because authorization only permits decoding from directories that are already listed. Performance was tested for latency, memory, and CPU, and compared against the previous system. Using the cache, the response time when accessed simultaneously at 20 clicks per user was 941.4 ms for the old system versus 786.6 ms.
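The symlink technique described above can be demonstrated safely in a temporary directory rather than on a real shared server. The sketch below mirrors what `ln -s` does: one tenant creates a symbolic link to another tenant's config file and reads the credentials through it. All paths and file contents are invented for illustration.

```python
# Self-contained demo (in a temp dir) of reading another site's config
# through a symlink, the attack vector described in the abstract.
import os
import tempfile

root = tempfile.mkdtemp()
victim = os.path.join(root, "victim-site")
attacker = os.path.join(root, "attacker-site")
os.makedirs(victim)
os.makedirs(attacker)

# The victim's database config, readable by the shared web-server user.
config = os.path.join(victim, "config.php")
with open(config, "w") as f:
    f.write("<?php $db_user='app'; $db_pass='secret'; $db_name='shop'; ?>")

# Equivalent of: ln -s ../victim-site/config.php leaked.txt
link = os.path.join(attacker, "leaked.txt")
os.symlink(config, link)

# Reading through the symlink exposes the credentials across site boundaries.
with open(link) as f:
    leaked = f.read()
print(leaked)
```

Mitigations on real shared hosts typically restrict the filesystem view per tenant (for example with per-site users and open_basedir-style confinement), so the symlink resolves but the read is denied.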


2018 ◽  
Vol 30 (3) ◽  
pp. 328-338 ◽  
Author(s):  
Maria Bertling ◽  
Jonathan P. Weeks

Author(s):  
Amit Sharma

Distributed Denial of Service attacks are a significant threat to today's web applications and web services. These attacks are moving toward the application layer in order to acquire, and then waste, the maximum number of CPU cycles. By requesting resources from web services in huge amounts using rapid-fire requests, an attacker's automated programs consume all the processing capacity of a single-server application or a distributed application environment. The phases of the proposed scheme are user-behavior monitoring and detection. In the first phase, information about user behavior is gathered, each individual user's trust score is computed, and the entropy of the same user is calculated. HTTP Unbearable Load King (HULK) attacks are also evaluated. Building on the first phase, the detection phase observes variations in entropy and identifies malicious users. A rate limiter is also introduced to stop, or scale down, service to the malicious clients. This paper introduces the FAÇADE layer for detecting and blocking unauthorized users attacking the system.
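The entropy-based detection idea above can be sketched briefly: compute the Shannon entropy of each client's request distribution over URLs, and flag clients whose profile deviates sharply from normal browsing, as a HULK-style flooder hammering one endpoint does. This is an illustrative sketch, not the paper's scheme; the request logs and the threshold are invented and would be tuned in practice.

```python
# Flag clients by the Shannon entropy of their requested-URL distribution.
import math
from collections import Counter

def request_entropy(urls):
    """Shannon entropy (bits) of a client's requested-URL distribution."""
    counts = Counter(urls)
    total = len(urls)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A normal user browses a handful of pages with varied frequency.
normal = ["/", "/login", "/home", "/items", "/items/3", "/home", "/logout"]
# A flooding bot fires the same endpoint in rapid succession.
bot = ["/search?q=x"] * 200

LOW_ENTROPY_THRESHOLD = 0.5   # illustrative cutoff
for name, urls in [("normal", normal), ("bot", bot)]:
    h = request_entropy(urls)
    flagged = h < LOW_ENTROPY_THRESHOLD
    print(name, round(h, 2), "flagged" if flagged else "ok")
```

A flagged client would then be handed to the rate limiter, which throttles or drops its requests rather than serving them at full capacity.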


Author(s):  
Faried Effendy ◽  
Taufik ◽  
Bramantyo Adhilaksono

Substantial research has been conducted to compare web servers or to compare databases, but very little research combines the two. Node.js and Golang (Go) are popular platforms for both web and mobile application back-ends, whereas MySQL and MongoDB are among the best open-source databases, with different characteristics. Using MySQL and MongoDB as databases, this study compares the performance of Go and Node.js as web application back-ends in terms of response time, CPU utilization, and memory usage. To simulate an actual web server workload, the flow of data traffic to the server follows a Poisson distribution. The results show that the combination of Go and MySQL is superior in CPU utilization and memory usage, while the Node.js and MySQL combination is superior in response time.
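Poisson-distributed traffic, as used in the benchmark above, is straightforward to generate: a Poisson arrival process has exponentially distributed inter-arrival gaps, so drawing gaps from an exponential with the desired rate yields the request timestamps. This is a generic sketch of the technique, not the study's load generator; the rate and duration are arbitrary examples.

```python
# Generate request timestamps for a Poisson arrival process.
import random

def poisson_arrival_times(rate_per_sec, duration_sec, seed=42):
    """Timestamps of requests for a Poisson process with the given rate."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_per_sec)   # exponential inter-arrival gap
        if t >= duration_sec:
            return times
        times.append(t)

arrivals = poisson_arrival_times(rate_per_sec=50, duration_sec=10)
print(len(arrivals))   # roughly rate * duration requests on average
```

A load generator would sleep until each timestamp and fire an HTTP request, so the server sees bursty, realistic arrivals rather than a fixed-interval drumbeat.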


Author(s):  
Ágnes Bogárdi-Mészöly ◽  
Zoltán Szitás ◽  
Tihamér Levendovszky ◽  
Hassan Charaf

2021 ◽  
Vol 155 (A2) ◽  
Author(s):  
R Brown ◽  
E R Galea ◽  
S Deere ◽  
L Filippidis

The paper consists of 27 figures, numerous equations, and 12 notes/references, many of which are written by the authors of this paper. Whilst this may indicate a lack of “reading around the subject”, it also indicates the unique nature of the topic and that little exists at present in the public domain about it. Indeed, the authors and the research group they represent are the main contributors to the IMO's discussions and circulars on this subject. Against that background, the paper is very detailed and consists of comparisons between the evacuation times of 3 passenger ships, 2 being Ro-Pax vessels and 1 a cruise liner. On-board evacuation time statistics have been gathered from significant populations, enabling the authors to draw significant conclusions about evacuation times in the presented scenarios. The paper is therefore a useful addition to the debate on this subject, which is of major relevance to the understanding of evacuation times in passenger vessels. Data and research in this area are difficult to obtain, so the authors should be congratulated for their work.


Author(s):  
Jibitesh Mishra ◽  
Kabita Rani Naik

Web 2.0 is a new generation of web applications in which users are able to participate, collaborate, and share the artefacts they create. Web 2.0 is all about collective intelligence, and its applications are widely used for educational, professional, business, and entertainment purposes. However, a methodology for quantitative evaluation of Web 2.0 application quality is still not available. As web technology advances, the dimensions for evaluating Web 2.0 application quality keep changing. This study therefore first selects a quality model suited to Web 2.0 applications, and then carries out quantitative analysis on the basis of a questionnaire method and statistical formulae. Quantitative analysis is necessary to identify the weaknesses and strengths of a website and then improve its quality; quantitative evaluation can also be used to compare two or more websites. In this study, quantitative analysis is performed for each quality attribute of two social networking sites, and the two sites are then compared on the basis of their quantitative quality values.
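The questionnaire-based evaluation described above reduces to a weighted mean: each quality attribute gets a weight, respondents score it, and the weighted sum gives one comparable number per site. The sketch below illustrates that arithmetic only; the attribute names, weights, and scores are all invented, not the study's data.

```python
# Weighted-mean quality score per site from per-attribute questionnaire scores.

# Weights per quality attribute (summing to 1 for a normalized score).
weights = {"usability": 0.3, "functionality": 0.3,
           "reliability": 0.2, "efficiency": 0.2}

# Mean questionnaire scores (1-5 scale) for two hypothetical sites.
site_scores = {
    "site_a": {"usability": 4.2, "functionality": 3.8,
               "reliability": 4.0, "efficiency": 3.5},
    "site_b": {"usability": 3.6, "functionality": 4.1,
               "reliability": 3.9, "efficiency": 4.0},
}

def quality_score(scores, weights):
    """Weighted mean of per-attribute scores."""
    return sum(weights[a] * scores[a] for a in weights)

for site, scores in site_scores.items():
    print(site, round(quality_score(scores, weights), 2))
```

Comparing sites is then a comparison of these scalars, while the per-attribute terms show where each site is weak or strong.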

