Distributed Database System Optimization for Improved Service Delivery in Mobile and Cloud BigData Applications

Author(s):  
O.T Jinadu ◽  
O.V. Johnson ◽  
M. Ganiyu

Managing a centralized database raises many issues, including data isolation, redundancy, inconsistency, and non-atomic updates; distributed database implementation over high-performance compute nodes, by contrast, maximizes the value of information across networks. Moreover, analysis of the big data generated and consumed over the mobile Internet, the Internet of Things (IoT), and cloud computations necessitates low-latency reads and updates over cloud clusters. Services in distributed systems conventionally demand optimized transactions. This paper examines transaction generation over a distributed storage pool (DSP), using suggested reference architectures of fragmentation with hybrid semi-join operations to offer mobility transparency as an additional ingredient of the integrity transparency a DDBMS provides. The DSP is simulated over a configured WLAN that activates multiple concurrent file transfers, engaging mobile nodes and large file sizes. The major functionality desired in the storage pool is provided by storage virtualization, whereby a global-schema query optimizer drives transaction management to characterize the latency-driven throughput achieved by jointly optimizing network and storage virtualization. Measurements and evaluations showed the best overall performance of low-latency reads and updates using the provisioned mobile transmission control protocol (M-TCP). Appreciable improvement in service delivery is obtained using a DSP facilitated with hybridized RAID construction and copy mechanisms. Improved response time and transmission speed-up evidently showed low-latency read and update transactions, indicating improved service delivery. Evaluating the DDBMS model simulated in the DSP architecture, the complexity (overheads) associated with conventional shared systems was minimized.
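The fragmentation strategy above relies on semi-join operations. The paper's own operator is not reproduced here; the following is a minimal sketch of the general semi-join idea, with illustrative table names and data (`orders`, `cust_id`) that are assumptions, not taken from the paper. The bandwidth saving comes from shipping only the remote fragment's join-key column instead of its full rows.

```python
# Hypothetical illustration of a semi-join between two fragments held
# on different nodes. Only the join-key column of the remote fragment
# is shipped, which is the saving a semi-join strategy exploits.

def semi_join(local_rows, remote_keys, key):
    """Keep local rows whose join key appears at the remote site."""
    remote_set = set(remote_keys)  # shipped instead of full remote rows
    return [row for row in local_rows if row[key] in remote_set]

orders = [
    {"order_id": 1, "cust_id": "A"},
    {"order_id": 2, "cust_id": "B"},
    {"order_id": 3, "cust_id": "C"},
]
# The remote site sends only its customer keys, not whole records.
active_customers = ["A", "C"]

print(semi_join(orders, active_customers, "cust_id"))
# [{'order_id': 1, 'cust_id': 'A'}, {'order_id': 3, 'cust_id': 'C'}]
```

In a real DDBMS the optimizer would choose between this and a full join based on estimated fragment and key-column sizes.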

2017 ◽  
Vol 898 ◽  
pp. 082045 ◽  
Author(s):  
R Ammendola ◽  
A Biagioni ◽  
P Cretaro ◽  
O Frezza ◽  
F Lo Cicero ◽  
...  

2013 ◽  
Vol 380-384 ◽  
pp. 421-424
Author(s):  
Jing Liu ◽  
Yu Chi Zhao ◽  
Xiao Hua Shi ◽  
Su Juan Liu

In recent years, using neural networks to control computers has become a very active research direction. The neural network is a burgeoning interdisciplinary subject whose way of processing information differs from traditional symbolic-logic systems, and it has some unique properties: distributed storage and parallel processing of information, the unification of information storage with information processing, and the ability to self-organize and self-learn. It has been applied widely in pattern recognition, signal processing, knowledge processing, expert systems, optimization, intelligent control, and so on. Neural networks can handle problems involving complicated environmental information, fuzzy background knowledge, and undefined inference rules, and they tolerate samples with relatively large defects and distortions, so a neural-network recognition method is a very good choice. This paper discusses the application of neural networks in computer control.
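The self-learning property mentioned above can be illustrated with the smallest possible example: a single-neuron perceptron adjusting its weights from labelled samples. The task (learning logical AND), the learning rate, and the epoch count are illustrative assumptions, not taken from the paper.

```python
# Minimal single-neuron (perceptron) learning sketch, illustrating the
# "self-learning" property: weights are adjusted from training error.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - out          # learning signal
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Learn the logical AND function from four labelled samples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in data]
print(preds)  # expect [0, 0, 0, 1]
```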


2013 ◽  
Vol 5 (1) ◽  
pp. 53-69
Author(s):  
Jacques Jorda ◽  
Aurélien Ortiz ◽  
Abdelaziz M’zoughi ◽  
Salam Traboulsi

Grid computing is commonly used for large-scale applications requiring huge computation capabilities. In such distributed architectures, data storage on the distributed storage resources must be handled by a dedicated storage system to ensure the required quality of service. To simplify data placement on nodes and to increase application performance, a storage virtualization layer can be used. This layer can be a single parallel filesystem (such as GPFS) or a more complex middleware; the latter is preferred, as it allows data placement on the nodes to be tuned to increase both the reliability and the performance of data access. In such a middleware, a dedicated monitoring system must be used to ensure optimal performance. In this paper, the authors briefly introduce Visage, a middleware for storage virtualization. They present the most broadly used grid monitoring systems and explain why these are not adequate for virtualized storage monitoring. The authors then present the architecture of their monitoring system dedicated to storage virtualization, introduce the workload prediction model used to select the best node for data placement, and show its accuracy in a simple experiment.
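The abstract does not reproduce Visage's actual prediction model, so the following is only a hedged sketch of the general idea behind monitoring-driven placement: predict each storage node's near-future load with an exponential moving average and place data on the node with the lowest predicted load. The smoothing factor, node names, and load samples are all illustrative assumptions.

```python
# Hypothetical workload-prediction sketch (not Visage's actual model):
# one-step-ahead load prediction via exponential smoothing, used to
# rank candidate storage nodes for data placement.

def ewma_predict(samples, alpha=0.5):
    """One-step-ahead load prediction via exponential smoothing."""
    pred = samples[0]
    for s in samples[1:]:
        pred = alpha * s + (1 - alpha) * pred
    return pred

def best_node(load_history):
    """Pick the node whose predicted load is lowest."""
    return min(load_history, key=lambda n: ewma_predict(load_history[n]))

history = {
    "node-a": [0.9, 0.8, 0.85],   # consistently busy
    "node-b": [0.2, 0.6, 0.3],    # moderately loaded
    "node-c": [0.5, 0.4, 0.1],    # trending idle
}
print(best_node(history))  # node-c
```

A real monitor would feed such a predictor with the grid metrics (CPU, I/O, network) the paper's monitoring system collects.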


Author(s):  
G. Latha

Blockchain systems store transaction data in the form of a distributed database in which each peer maintains an identical copy. In this respect blockchain systems resemble repetition codes and incur high storage cost. Recently, distributed storage blockchain (DSB) systems have been proposed to improve storage efficiency by incorporating secret sharing, private-key encryption, and information dispersal algorithms. However, DSB incurs significant communication cost when peer failures occur due to denial-of-service attacks. In this project, we propose a new DSB approach based on a local secret sharing (LSS) scheme with a hierarchical secret structure of one global secret node and several local secret nodes. The proposed DSB approach with LSS improves both the storage cost and the recovery communication cost.
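The LSS construction itself is not described in the abstract, so the sketch below shows only the basic secret-sharing primitive DSB systems build on: a minimal XOR-based (n, n) scheme in which no single peer stores the transaction data, yet all shares together recover it exactly. The block contents and share count are illustrative assumptions; a production scheme would use a threshold construction such as Shamir's.

```python
import os

# Minimal XOR-based (n, n) secret-sharing sketch (not the paper's LSS
# scheme): n-1 random shares plus one share that XORs the secret with
# them, so all n shares are needed to reconstruct the data.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int):
    """Split secret into n shares; all n are needed to recover it."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shares:
        last = xor_bytes(last, s)
    return shares + [last]

def recover(shares):
    out = bytes(len(shares[0]))
    for s in shares:
        out = xor_bytes(out, s)
    return out

block = b"txn:alice->bob:5"
shares = split(block, 4)           # distributed to four peers
assert recover(shares) == block    # the shares reconstruct the block
print(recover(shares))
```

Each peer stores one share of the block instead of a full copy, which is the storage saving over plain repetition.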


Author(s):  
V. Khashchanskiy ◽  
A. Kustov ◽  
J. Lang

Providing mobile Internet access in GPRS and UMTS networks is not an easy task. The main problem lies in the rather challenging network conditions (Inamura, Montenegro, Ludwig, Gurtov, & Khafizov, 2003). Latency in these networks can be an order of magnitude higher than in wired networks, with round-trip time (RTT) reaching up to one second. Moreover, delay spikes occur in the network, during which latency can exceed the average RTT several times over (Gurtov, 2004). Furthermore, the risk of packet loss in wireless networks is considerably higher than in wired networks: packets are easily lost to corruption during deep fading, leading to burst losses, or to cell reselections, resulting in link black-out conditions. Such characteristics of wireless cellular networks significantly affect the performance of the principal Internet transport protocol, TCP, which was designed for low-latency, reliable networks.
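The harm done by delay spikes can be made concrete with TCP's standard retransmission-timeout estimator (RFC 6298): the RTO is SRTT + 4·RTTVAR, and after a run of steady RTTs the variance term decays, so a spike several times the average RTT overshoots the current RTO and triggers a spurious retransmission. The RTT trace below is an illustrative cellular-like sequence, not data from the article.

```python
# Sketch of TCP's retransmission-timeout estimator (RFC 6298):
# RTO = SRTT + 4 * RTTVAR, with exponential smoothing of both terms.

def rto_trace(rtts, alpha=0.125, beta=0.25):
    srtt, rttvar = rtts[0], rtts[0] / 2
    rtos = [srtt + 4 * rttvar]
    for r in rtts[1:]:
        rttvar = (1 - beta) * rttvar + beta * abs(srtt - r)
        srtt = (1 - alpha) * srtt + alpha * r
        rtos.append(srtt + 4 * rttvar)
    return rtos

rtts = [1.0, 1.0, 1.0, 1.0, 4.0]   # seconds; last sample is a delay spike
rtos = rto_trace(rtts)
# Steady 1 s RTTs shrink the variance, pulling the RTO down toward
# SRTT, so the 4 s spike exceeds the RTO computed just before it
# arrived and would be (wrongly) treated as a loss.
print(rtos)
```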


Author(s):  
Bhaskar Sardar ◽  
Debashis Saha

Transmission Control Protocol (TCP), the most popular transport-layer communication protocol for the Internet, was originally designed for wired networks, where the bit error rate (BER) is low and congestion is the primary cause of packet loss. Since mobile access networks are prone to substantial non-congestive losses due to high BER, host motion, and handoff mechanisms, they often disturb TCP's traffic control mechanisms. The research literature therefore abounds in TCP enhancements intended to make it survive in the mobile Internet environment, where mobile devices face temporary and unannounced loss of network connectivity when they move. Device mobility causes varying, increased delays and packet losses, which TCP incorrectly interprets as signs of network congestion, invoking unnecessary control mechanisms and degrading the end-to-end goodput. This chapter provides an in-depth survey of TCP enhancements that aim to redress these issues and hence are specifically targeted at mobile Internet applications.
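The goodput degradation described above can be illustrated with a toy AIMD (additive-increase, multiplicative-decrease) model: the window grows by one segment per round and is halved on every loss, even when the loss was wireless corruption rather than congestion. The round counts, window cap, and loss pattern are illustrative assumptions.

```python
# Toy AIMD model: every loss halves the congestion window, so
# non-congestive (bit-error) losses on a wireless link still cut
# sending rate as if the network were congested.

def aimd_windows(rounds, loss_rounds, max_cwnd=64):
    cwnd, trace = 1, []
    for r in range(rounds):
        if r in loss_rounds:
            cwnd = max(1, cwnd // 2)   # misinterpreted as congestion
        else:
            cwnd = min(max_cwnd, cwnd + 1)
        trace.append(cwnd)
    return trace

clean = aimd_windows(20, loss_rounds=set())
noisy = aimd_windows(20, loss_rounds={5, 10, 15})  # bit-error losses
print(sum(clean), sum(noisy))  # segments sent: 230 vs 95
```

The enhancements the chapter surveys aim precisely at avoiding those unnecessary window reductions.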


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5737
Author(s):  
Fátima Fernández ◽  
Mihail Zverev ◽  
Pablo Garrido ◽  
José R. Juárez ◽  
Josu Bilbao ◽  
...  

In this paper we analyze the performance of QUIC as a transport alternative for Internet of Things (IoT) services based on the Message Queuing Telemetry Transport (MQTT) protocol. QUIC is a novel protocol promoted by Google, originally conceived to tackle the limitations of the traditional Transmission Control Protocol (TCP), specifically aiming to reduce the latency caused by connection establishment. QUIC is not yet widespread in IoT environments, so it is interesting to characterize its performance in such scenarios. We used an emulation-based platform in which we integrated QUIC and MQTT (using Go-based implementations) and compared their combined performance with that of the traditional TCP/TLS approach. We used Linux containers as end devices and the ns-3 simulator to emulate different network technologies, such as WiFi, cellular, and satellite, under varying conditions. The results show that QUIC is indeed an appropriate protocol for guaranteeing robust, secure, and low-latency communications in IoT scenarios.
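The connection-establishment argument can be sketched with back-of-the-envelope arithmetic: TCP plus TLS 1.2 needs roughly 3 RTTs before application data (an MQTT CONNECT) flows, TCP plus TLS 1.3 about 2 RTTs, and QUIC 1 RTT (0 RTT on resumption). The round-trip times below are illustrative of the emulated link types, not measurements from the paper.

```python
# Back-of-the-envelope connection-setup model: handshake round trips
# before the first application byte, multiplied by the link RTT.

SETUP_RTTS = {"tcp+tls1.2": 3, "tcp+tls1.3": 2, "quic": 1}

def setup_delay_ms(stack, rtt_ms):
    return SETUP_RTTS[stack] * rtt_ms

for link, rtt in [("wifi", 5), ("cellular", 60), ("satellite", 600)]:
    row = {s: setup_delay_ms(s, rtt) for s in SETUP_RTTS}
    print(link, row)
# The gap widens with RTT: on a ~600 ms satellite path QUIC saves
# roughly 1.2 s versus TCP+TLS 1.2 before the first MQTT message.
```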


2011 ◽  
pp. 211-241 ◽  
Author(s):  
Sushant Goel ◽  
Rajkumar Buyya

Effective data management in today's competitive enterprise environment is an important issue. Data is information, and information is knowledge, so fast and effective access to data is very important. Replication is a widely used technique in distributed environments, where data is stored at more than one site for performance and reliability reasons. Applications and architectures of distributed computing have changed drastically during the last decade, and so have replication protocols; different replication protocols may suit different applications. In this chapter, we present a survey of replication algorithms for different distributed storage and content-management systems, including distributed database-management systems, service-oriented data grids, peer-to-peer (P2P) systems, and storage area networks. We discuss the replication algorithms of the more recent architectures, data grids and P2P systems, in detail, and briefly discuss replication in storage area networks and on the Internet.
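One classic protocol family such a survey covers is quorum replication: with N replicas, requiring W write acknowledgements and R read replies with R + W > N guarantees that every read quorum overlaps the latest write quorum. The sketch below is a hedged illustration of that invariant; the replica state layout and example values are assumptions, not taken from the chapter.

```python
# Minimal majority-quorum replication sketch: replica state is a
# (version, value) pair, and R + W > N ensures a read quorum always
# intersects the most recent write quorum.

N, W, R = 5, 3, 3                  # R + W > N
replicas = [(0, None)] * N         # (version, value) per replica

def quorum_write(replicas, version, value, acked):
    """Apply the write on a write quorum of replicas (by index)."""
    assert len(acked) >= W
    return [(version, value) if i in acked else s
            for i, s in enumerate(replicas)]

def quorum_read(replicas, contacted):
    """Return the newest (version, value) seen in a read quorum."""
    assert len(contacted) >= R
    return max(replicas[i] for i in contacted)

replicas = quorum_write(replicas, 1, "x=42", acked={0, 1, 2})
# Any read quorum, e.g. {2, 3, 4}, intersects the write quorum at 2,
# so the latest version is always observed.
print(quorum_read(replicas, contacted={2, 3, 4}))  # (1, 'x=42')
```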


Author(s):  
Tien-Chien Chen ◽  
James V. Krogmeier ◽  
Darcy M. Bullock

Most transportation agencies and departments have deployed Transmission Control Protocol–Internet Protocol (TCP-IP) applications in their offices and are beginning to deploy them in remote or satellite field offices. Deploying TCP-IP applications where broadband access is not available can be quite challenging. Satellite-based communication offers an opportunity to provide high-bandwidth connections quickly; however, satellite communications incur significant propagation delay that can result in poor performance of applications designed for a low-latency environment. This paper presents an evaluation of the AASHTO SiteManager software suite with two different satellite broadband providers. SiteManager clients performed poorly in the high-latency environment, in some cases up to 50 times slower than SiteManager running on a low-latency terrestrial network with equivalent bandwidth. In general, the performance of SiteManager was relatively insensitive to the bandwidth provided by the satellite provider; in most tasks, SiteManager performed better over a 50-kbps dial-up connection than over a 384-kbps satellite connection. In an alternative architecture, in which SiteManager was operated remotely via a terminal emulation service over a satellite connection, the performance was observed to be robust. This architecture requires considerably more equipment, software, and technical support, and the delay before some keystrokes and cursor movements appear can be somewhat awkward for the user. However, given the extensive bursts of short messages between SiteManager clients and the server, the high-latency constraints of a satellite network make terminal emulation the only viable method of deploying SiteManager via a commercial satellite IP service.
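Why a 384-kbps satellite link can lose to 50-kbps dial-up follows from a simple transfer-time model: for a chatty protocol, total time is dominated by round trips, not bandwidth. The exchange count, payload size, and RTT values below are illustrative assumptions (only the two link bandwidths come from the paper).

```python
# Simple session-time model: per-exchange round-trip cost plus serial
# transmission time. A chatty protocol is latency-bound, so the
# high-RTT satellite link loses despite its higher bandwidth.

def session_time_s(round_trips, bytes_total, rtt_s, bandwidth_bps):
    return round_trips * rtt_s + (bytes_total * 8) / bandwidth_bps

exchanges, payload = 200, 100_000   # 200 short exchanges, ~100 kB total

dialup = session_time_s(exchanges, payload, rtt_s=0.15,
                        bandwidth_bps=50_000)
satellite = session_time_s(exchanges, payload, rtt_s=0.60,
                           bandwidth_bps=384_000)

print(round(dialup, 1), round(satellite, 1))
# Dial-up: 200*0.15 + 16.0 = 46.0 s; satellite: 200*0.60 + ~2.1 s.
```

Terminal emulation wins under this model because it collapses the application's many round trips into screen updates over a single session.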

