Exploiting Cloud Computing and Web Services to Achieve Data Consistency, Availability, and Partition Tolerance in the Large-Scale Pervasive Systems

Author(s):  
Ashraf Ahmed Fadelelmoula

This article presents a new comprehensive approach to realizing a sufficient trade-off between the CAP properties (i.e., consistency, availability, and partition tolerance) in large-scale pervasive information systems. To achieve these critical properties, the capabilities of both cloud computing and web services were exploited in developing the components of the proposed approach. These components include a cloud-based replication architecture for ensuring high data availability and achieving partition tolerance, a web services-based middleware for maintaining eventual consistency, and a data caching scheme that enables the mobile computing elements to conduct update transactions during disconnection periods. The evaluation of the performance aspects revealed that the proposed approach achieves better load balance, lower propagation delay, and a higher cache hit ratio compared to other baseline approaches.
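As a rough illustration of the caching component only (the paper's actual middleware is web services-based; the class and `push_update` name below are hypothetical), the following sketch shows a client-side cache that queues update transactions while disconnected and replays them to a cloud replica on reconnection, which is the essence of eventual consistency under disconnection:

```python
# A minimal sketch, assuming a single cloud replica endpoint; not the
# authors' middleware. `push_update` is a hypothetical propagation call.
from collections import deque

class DisconnectedCache:
    def __init__(self):
        self.data = {}            # locally cached items
        self.pending = deque()    # updates made while offline
        self.online = False

    def update(self, key, value):
        self.data[key] = value
        if self.online:
            self.push_update(key, value)
        else:
            self.pending.append((key, value))  # defer until reconnection

    def reconnect(self):
        # Replay deferred updates in order; replicas converge eventually.
        self.online = True
        while self.pending:
            self.push_update(*self.pending.popleft())

    def push_update(self, key, value):
        print(f"propagating {key}={value} to cloud replica")

cache = DisconnectedCache()
cache.update("doc1", "v1")   # queued while offline
cache.reconnect()            # replayed to the replication architecture
```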

Author(s):  
Dr. Manish Jivtode

Cloud computing is viewed as one of the most promising technologies in computing today. It is a new concept of large-scale distributed computing that provides an open platform to every user on a pay-per-use basis. Cloud computing offers a number of interfaces and APIs for interacting with the services provided to users. With the development of web services and distributed applications, the security of data across the various layers of distributed computing has become another important subject. This study describes the security of data accessed in a distributed environment across these layers.


2017 ◽  
Vol 2017 ◽  
pp. 1-8 ◽  
Author(s):  
Xing Guo ◽  
Shanshan Chen ◽  
Yiwen Zhang ◽  
Wei Li

Web service composition is one of the core technologies for realizing service-oriented computing. It satisfies user requirements by composing existing services into new value-added services. As cloud computing develops, the emergence of Web services with similar functionality but different quality has brought new challenges to the service composition optimization problem, and solving large-scale service composition in the cloud computing environment has become urgent. To tackle this issue, this paper proposes a parallel optimization approach based on the Spark distributed environment. First, a parallel covering algorithm is used to cluster the Web services. Next, the resulting cluster centers are used as the starting points of the particles to improve the diversity of the initial population. Then, according to the parallel data coding rules of the resilient distributed dataset (RDD), large-scale composite services are generated with the proposed algorithm, named the Spark Particle Swarm Optimization algorithm (SPSO). Finally, an elite particle selection strategy removes inert particles to improve the performance of service selection. The validity of the proposed method is demonstrated on the real-world WS-Dream data set through a large number of experiments.
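As a rough illustration only (the paper's SPSO, its covering-algorithm initialization, and its RDD coding rules are not reproduced here), the sketch below shows how particle fitness evaluation for service composition can be distributed over a Spark RDD. The QoS matrix, `fitness` function, and integer particle encoding are illustrative assumptions:

```python
# A minimal sketch of parallel fitness evaluation for QoS-aware service
# composition on Spark; an assumption-laden illustration, not the SPSO paper's
# implementation. Each particle picks one concrete service per abstract task.
import numpy as np
from pyspark import SparkContext

sc = SparkContext(appName="spso-sketch")

N_PARTICLES, N_TASKS, N_CANDIDATES = 100, 10, 50
rng = np.random.default_rng(0)
# Hypothetical QoS utility of each candidate service for each abstract task.
qos = rng.random((N_TASKS, N_CANDIDATES))

def fitness(particle):
    # Sum of the QoS utilities of the services chosen for each task.
    return sum(qos[t, s] for t, s in enumerate(particle))

# Integer-coded swarm, distributed as an RDD and scored in parallel.
swarm = [tuple(rng.integers(0, N_CANDIDATES, N_TASKS)) for _ in range(N_PARTICLES)]
scores = sc.parallelize(swarm).map(lambda p: (fitness(p), p))

# The global best would guide the next velocity/position update (omitted).
g_best_score, g_best = scores.max(key=lambda kv: kv[0])
print(g_best_score, g_best)
sc.stop()
```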


2014 ◽  
Vol 624 ◽  
pp. 553-556
Author(s):  
Jing Bo Yang ◽  
Shu Huang ◽  
Pan Jiang

With the development of cloud computing, the data center has also improved. A cloud computing data center contains hundreds or even millions of servers or PCs, with many heterogeneous resources, and is key to ensuring the high scalability and resource utilization of cloud computing. In addition, replication is introduced into the data center as an important method for improving availability and performance. This paper studies a distributed storage algorithm for cloud computing. The algorithm uses system-level storage indicators within a classification-based mechanism for massive data storage to solve the problem of allocating data consistently across data centers, and it sends communication packets between data centers through the cloud. The resulting storage scheme achieves complete local storage of each data stream and solves the allocation problem of unusually large-scale data storage.
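The abstract leaves the allocation mechanism unspecified, so the following is a minimal, hypothetical sketch of classification-driven replica placement across data centers; the data center names, the size-based `classify` rule, and the hash placement are assumptions for illustration, not the paper's algorithm:

```python
# A minimal sketch of deterministic, classification-based replica placement.
# All names and thresholds here are illustrative assumptions.
import hashlib

DATACENTERS = ["dc-east", "dc-west", "dc-north"]
HOT_REPLICAS, COLD_REPLICAS = 3, 1  # per-class replication degree

def classify(key: str, size_bytes: int) -> int:
    # Classify a data stream by size: very large streams get one local copy.
    return COLD_REPLICAS if size_bytes > 1 << 30 else HOT_REPLICAS

def place(key: str, size_bytes: int) -> list[str]:
    # Deterministic hash placement lets every data center compute the same
    # replica locations for a key without central coordination.
    start = int(hashlib.sha1(key.encode()).hexdigest(), 16) % len(DATACENTERS)
    n = classify(key, size_bytes)
    return [DATACENTERS[(start + i) % len(DATACENTERS)] for i in range(n)]

print(place("stream-42", 2 << 30))  # large stream -> single local replica
print(place("stream-7", 1 << 20))   # small stream -> three replicas
```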


Author(s):  
Mahdy Saedy ◽  
Brian Kelley

Clock synchronization is an important requirement of wireless sensor networks (WSNs). Synchronization is crucial for maintaining data consistency and coordination and for performing fundamental operations. Many application scenarios exist where external clock synchronization may be required, because the WSN itself may not include an infrastructure for distributing the clock reference. In distributed systems, the clock of a reference node is synchronized to a GPS time tag or UTC as a conventional external clock source, and the remaining nodes estimate their offset and drift based on a synchronization protocol. For vast WSNs, where the topology introduces propagation delay and clocks drift quickly over sampling periods, synchronizing the nodes and maintaining that synchronization is difficult. To maintain accurate synchronization across the WSN, the authors propose a cooperative synchronization method that uses Constant Amplitude Zero Auto Correlation (CAZAC) sequences for OFDM symbols. The proposed method belongs to a class of distributed methods known as Gossip or Consensus protocols, which are robust and self-correcting under topology changes and link failures. In this paper, the authors introduce a specific type of power-law topology called scale-free and compare the synchronization performance of the proposed method in random and scale-free topologies.
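The gossip/consensus protocol class the authors build on can be illustrated with a short simulation (not their CAZAC/OFDM method): each node repeatedly nudges its clock offset toward those of its neighbors, and the network converges to a common time. The random topology and step size below are illustrative assumptions:

```python
# A minimal sketch of consensus-style clock offset averaging on an arbitrary
# topology; it shows the protocol class, not the paper's physical-layer method.
import numpy as np

rng = np.random.default_rng(1)
n = 20
offsets = rng.normal(0.0, 1e-3, n)        # initial clock offsets (seconds)
adj = rng.random((n, n)) < 0.2            # random links (illustrative)
adj = np.triu(adj, 1)
adj = adj | adj.T                         # symmetric, no self-loops

eps = 0.05                                # consensus step size
for _ in range(200):
    # Each node moves toward the average of its neighbors' clocks.
    diffs = adj * (offsets[None, :] - offsets[:, None])
    offsets = offsets + eps * diffs.sum(axis=1)

print("offset spread after consensus:", offsets.max() - offsets.min())
```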


Author(s):  
Zulaile Mabni ◽  
Rohaya Latip ◽  
Hamidah Ibrahim ◽  
Azizol Abdullah

Data replication is widely used to provide high data availability and to increase the performance of distributed systems. Many replica control protocols that achieve both high performance and availability have been proposed for distributed and grid environments. However, the previously proposed protocols still require a large number of replicas for read and write operations, which is not suitable for a large-scale system such as a data grid. In this paper, a new replica control protocol called Clustering-based Hybrid (CBH) is proposed for managing data in grid environments. We analyzed the communication cost and data availability of the operations and compared the CBH protocol with two recently proposed replica control protocols, the Dynamic Hybrid (DH) protocol and the Diagonal Replication in 2D Mesh (DR2M) protocol. To evaluate the CBH protocol, a simulation model was implemented in Java. Our results show that for read operations, CBH improves communication cost and data availability compared to the DH and DR2M protocols.
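For context, quorum-based replica control protocols of this kind trade quorum sizes (and hence communication cost) against correctness constraints. The sketch below checks the standard quorum-intersection rule rather than CBH's cluster construction, which the abstract does not specify; it is a generic illustration:

```python
# A minimal sketch of the quorum-intersection rule that replica control
# protocols such as CBH, DH, and DR2M must satisfy; not CBH itself.
def quorums_valid(n_replicas: int, read_q: int, write_q: int) -> bool:
    # Any read quorum must overlap any write quorum (reads see fresh data),
    # and any two write quorums must overlap (no conflicting writes).
    return read_q + write_q > n_replicas and 2 * write_q > n_replicas

# Smaller quorums mean lower communication cost per operation, but the
# intersection constraints bound how small they can get.
for r, w in [(3, 3), (2, 4), (2, 2)]:
    print(f"R={r}, W={w}, N=5 ->", quorums_valid(5, r, w))
```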


10.14311/300 ◽  
2002 ◽  
Vol 42 (1) ◽  
Author(s):  
V. Dynda ◽  
P. Rydlo

This paper deals with design issues of a global file system, aiming to provide transparent data availability, security against loss and disclosure, and support for mobile and disconnected clients. First, the paper surveys general challenges and requirements for large-scale file systems; then the design of the elementary parts of the proposed file system is presented. This includes the design of the raw system architecture, the design of dynamic file replication with appropriate data consistency, file location, and data security. Our proposed system is called Gaston and will be referred to further in the text under this name or its abbreviation GFS (Gaston File System).


2018 ◽  
Vol 31 (5-6) ◽  
pp. 227-233
Author(s):  
Weitao Wang ◽  
Baoshan Wang ◽  
Xiufen Zheng

2020 ◽  
Vol 47 (3) ◽  
pp. 547-560 ◽  
Author(s):  
Darush Yazdanfar ◽  
Peter Öhman

Purpose – The purpose of this study is to empirically investigate determinants of financial distress among small and medium-sized enterprises (SMEs) during the global financial crisis and post-crisis periods.
Design/methodology/approach – Several statistical methods, including multiple binary logistic regression, were used to analyse a longitudinal cross-sectional panel data set of 3,865 Swedish SMEs operating in five industries over the 2008–2015 period.
Findings – The results suggest that financial distress is influenced by macroeconomic conditions (i.e. the global financial crisis) and, in particular, by various firm-specific characteristics (i.e. performance, financial leverage and financial distress in the previous year). However, firm size and industry affiliation have no significant relationship with financial distress.
Research limitations – Due to data availability, this study is limited to a sample of Swedish SMEs in five industries covering eight years. Further research could examine the generalizability of these findings by investigating other firms operating in other industries and other countries.
Originality/value – This study is the first to examine determinants of financial distress among SMEs operating in Sweden using data from a large-scale longitudinal cross-sectional database.


2020 ◽  
Vol 29 (2) ◽  
pp. 1-24
Author(s):  
Yangguang Li ◽  
Zhen Ming (Jack) Jiang ◽  
Heng Li ◽  
Ahmed E. Hassan ◽  
Cheng He ◽  
...  
