Reusing Transaction Models for Dependable Cloud Computing

Author(s):  
Barbara Gallina ◽  
Nicolas Guelfi

Cloud computing represents a technological change in computing. Despite this change, however, the quality of the computation, and in particular its dependability, remains a fundamental requirement. Transaction models are an effective means of ensuring dependability, and more specifically reliability. Several transaction models exist in the literature. Choosing (reusing entirely) or introducing (reusing partially) transaction models for cloud computing is not an easy task. The task is difficult because it requires a deep understanding of the properties that characterize transaction models in order to discriminate reusable from non-reusable properties with respect to cloud computing characteristics. To ease this task, the PRISMA process is introduced. PRISMA is a Process for Requirements Identification, Specification and Machine-supported Analysis that targets transaction models. PRISMA is then applied to engineer reusable requirements suitable for deriving adequate transaction models for cloud computing.

Author(s):  
Neha Thakur ◽  
Aman Kumar Sharma

Cloud computing has been envisioned as the definitive solution to the rising storage costs of IT enterprises. There are many cloud computing initiatives from IT giants such as Google, Amazon, Microsoft, and IBM. Integrity monitoring is essential in cloud storage for the same reasons that data integrity is critical for any data centre. Data integrity is defined as the accuracy and consistency of stored data, in the absence of any alteration to the data between two updates of a file or record. In order to ensure the integrity and availability of data in the cloud and to enforce the quality of cloud storage services, efficient methods that enable on-demand data correctness verification on behalf of cloud users have to be designed. To address the data integrity problem, many techniques have been proposed under different system and security models. This paper examines some of these integrity-proving techniques in detail, along with their advantages and disadvantages.
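
The survey compares schemes published in the literature; as a rough, hedged illustration of the challenge-response idea that underlies many of them (not a reproduction of any specific scheme discussed in the paper), the following Python sketch lets a data owner precompute a small stock of (block index, nonce, expected digest) challenges before outsourcing a file, so that data correctness can later be spot-checked on demand without downloading the data. The block size, function names, and audit protocol are illustrative assumptions.

```python
import hashlib
import random
import secrets

BLOCK_SIZE = 4096  # bytes per block; an illustrative choice

def split_blocks(data: bytes):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

# Owner side: before outsourcing, precompute a limited stock of challenges.
# Only these small tuples are kept locally, not the file itself.
def precompute_challenges(data: bytes, how_many: int):
    blocks = split_blocks(data)
    challenges = []
    for _ in range(how_many):
        idx = random.randrange(len(blocks))
        nonce = secrets.token_bytes(16)
        expected = hashlib.sha256(nonce + blocks[idx]).digest()
        challenges.append((idx, nonce, expected))
    return challenges

# Server side: answer a challenge using the stored copy of the file.
def prove(stored_data: bytes, idx: int, nonce: bytes):
    return hashlib.sha256(nonce + split_blocks(stored_data)[idx]).digest()

# Verifier side: one precomputed challenge is consumed per audit round.
def audit(challenges, ask_server):
    idx, nonce, expected = challenges.pop()
    return ask_server(idx, nonce) == expected

if __name__ == "__main__":
    original = secrets.token_bytes(5 * BLOCK_SIZE)
    stock = precompute_challenges(original, how_many=10)
    print(audit(stock, lambda i, n: prove(original, i, n)))      # True: data intact
    tampered = bytes(b ^ 1 for b in original)                    # every block altered
    print(audit(stock, lambda i, n: prove(tampered, i, n)))      # False: alteration detected
```

Each precomputed challenge can be used only once, which is one of the trade-offs (storage at the verifier versus unlimited audits) that distinguishes the published techniques the survey compares.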


Author(s):  
Monika ◽
Pardeep Kumar ◽  
Sanjay Tyagi

In a cloud computing environment, Quality of Service (QoS) and cost are the key elements to be taken care of. Today, in the era of big data, data must be handled properly while satisfying user requests. In particular, when handling large-data or scientific-application requests, the flow of information must be sustained. In this paper, a brief introduction to workflow scheduling is given, and a detailed survey of various scheduling algorithms is performed using various parameters.


Author(s):  
Edgars Rencis ◽  
Janis Barzdins ◽  
Sergejs Kozlovics

Towards Open Graphical Tool-Building Framework

Nowadays, there are many frameworks for developing domain-specific tools. However, creating a really sophisticated tool with specific functionality requirements is not always an easy task. Although tool-building platforms offer some means for extending tool functionality and accessing it from external applications, doing so usually requires a deep understanding of various technical implementation details. In this paper we take one step closer to a truly open graphical tool-building framework that allows both changing the behavior of the tool and accessing the tool from the outside easily. We start by defining a specialization of metamodels, which is a powerful facility in itself. We then show how it can be applied in the field of graphical domain-specific tool building. The approach is demonstrated on a subset of UML activity diagrams. The benefits of the approach are clearly indicated: a natural and intuitive definition of tools, a strict logic/presentation separation, and openness to extensions as well as to external applications.


2018 ◽  
Vol 1 ◽  
pp. 198
Author(s):  
Lusy Tunik Muharlisiani ◽  
Henny Sukrisno ◽  
Emmy Wahyuningtyas ◽  
Shofiya Syidada ◽  
Dina Chamidah

Service at the “Kelurahan” is a very important part of determining the success of development, especially in public service. The problems faced are the limited skills of the “Kelurahan” apparatus in the face of increasingly dynamic community demands, and an archive management system that is still conventional and manual: the identity of each archive is written into the agenda book, expedition book, control card, and borrowed-archive card. A more practical, effective, and efficient electronic system is therefore needed, and staff are required to develop themselves in order to improve public services. Conventional administration and archive management must be transformed into cloud-based (digital) computing; archive managers should always be responsive to these developments and, wherever possible, utilize them for archival activities. With greater access, archives are expected to serve as evidence, to speak about historical facts and events, and to give meaning and benefit to human life, so that archives which were previously visible and readable only at archival centers can now be accessed online, with services even moving toward automated service systems. The system is built using Microsoft Access, whose main function is to handle data manipulation and the construction of the system, so that it can run in the cloud; the cloud itself is a paradigm in which information is stored permanently on servers on the internet. The purpose of this program is the implementation of cloud-based (digital) administrative management, which is expected to be a solution for managing archives: once designed and programmed, the archives can be stored on the computer, benefiting the “Kelurahan” apparatus and contributing to the field of archive management in the form of improved quality of service to the community, greater facilitation, and scientific publications.


Author(s):  
Ge Weiqing ◽  
Cui Yanru

Background: In order to make up for the shortcomings of the traditional algorithm, the Min-Min and Max-Min algorithms are combined with the traditional genetic algorithm. Methods: In this paper, a new cloud computing task scheduling algorithm is proposed, which introduces the Min-Min and Max-Min algorithms to generate the initial population and selects task completion time and load balancing as a double fitness function, thereby improving the quality of the initial population, the search ability of the algorithm, and its convergence speed. Results: The simulation results show that the algorithm is superior to the traditional genetic algorithm and is an effective cloud computing task scheduling algorithm. Conclusion: Finally, this paper discusses the possibility of fusing the two improved algorithms and completes a preliminary fusion, but the simulation results of the new algorithm are not ideal and need further study.
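
As a hedged illustration of the seeding strategy described above (a minimal sketch, not the authors' implementation), the code below builds a GA initial population from Min-Min and Max-Min schedules plus random chromosomes, and evaluates the double fitness of makespan and load balance. The expected-execution-time matrix `etc` and all function names are assumptions made for the example.

```python
import random

def min_min_schedule(etc):
    """Min-Min heuristic: repeatedly pick the unassigned task whose best
    completion time is smallest and map it to the VM achieving that time."""
    n_tasks, n_vms = len(etc), len(etc[0])
    ready, chromosome = [0.0] * n_vms, [0] * n_tasks
    unassigned = set(range(n_tasks))
    while unassigned:
        task, vm, finish = min(
            ((t, v, ready[v] + etc[t][v]) for t in unassigned for v in range(n_vms)),
            key=lambda x: x[2])
        chromosome[task], ready[vm] = vm, finish
        unassigned.remove(task)
    return chromosome

def max_min_schedule(etc):
    """Max-Min heuristic: among each task's best completion time, schedule
    the task with the largest such time first (large tasks go early)."""
    n_tasks, n_vms = len(etc), len(etc[0])
    ready, chromosome = [0.0] * n_vms, [0] * n_tasks
    unassigned = set(range(n_tasks))
    while unassigned:
        best_vm = {t: min(range(n_vms), key=lambda v: ready[v] + etc[t][v])
                   for t in unassigned}
        task = max(unassigned, key=lambda t: ready[best_vm[t]] + etc[t][best_vm[t]])
        vm = best_vm[task]
        ready[vm] += etc[task][vm]
        chromosome[task] = vm
        unassigned.remove(task)
    return chromosome

def double_fitness(chromosome, etc):
    """The paper's two objectives: task completion time (makespan) and load
    balance (standard deviation of VM loads); lower is better for both."""
    n_vms = len(etc[0])
    loads = [0.0] * n_vms
    for task, vm in enumerate(chromosome):
        loads[vm] += etc[task][vm]
    mean = sum(loads) / n_vms
    balance = (sum((l - mean) ** 2 for l in loads) / n_vms) ** 0.5
    return max(loads), balance

def initial_population(etc, size):
    """Seed the GA with the two heuristic schedules, then fill with random ones."""
    n_tasks, n_vms = len(etc), len(etc[0])
    population = [min_min_schedule(etc), max_min_schedule(etc)]
    while len(population) < size:
        population.append([random.randrange(n_vms) for _ in range(n_tasks)])
    return population

if __name__ == "__main__":
    etc = [[random.uniform(5, 50) for _ in range(4)] for _ in range(20)]  # 20 tasks, 4 VMs
    pop = initial_population(etc, size=30)
    print(double_fitness(pop[0], etc), double_fitness(pop[1], etc))
```

The heuristic individuals give the genetic algorithm a good starting point, which is the abstract's stated reason for the improved search ability and convergence speed; standard selection, crossover, and mutation operators would then act on this population.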


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1400
Author(s):  
Muhammad Adnan ◽  
Jawaid Iqbal ◽  
Abdul Waheed ◽  
Noor Ul Amin ◽  
Mahdi Zareei ◽  
...  

Modern vehicles are equipped with various sensors, onboard units, and devices such as the Application Unit (AU) that support routing and communication. In VANETs, traffic management and Quality of Service (QoS) are the main research dimensions to be considered while designing VANET architectures. To cope with the QoS issues faced by VANETs, we design an efficient SDN-based architecture focused on the QoS of VANETs. In this paper, QoS is achieved by a priority-based scheduling algorithm in which traffic flow messages are prioritized in a safety queue and a non-safety queue. In the safety queue, messages are prioritized based on deadline and size using the New Deadline and Size of data (NDS) method, with constrained location and deadline. In contrast, the non-safety queue is served on a First Come First Serve (FCFS) basis. For the simulation of our proposed scheduling algorithm, we use the well-known cloud computing framework CloudSim. The simulation results show that safety messages achieve better performance than non-safety messages in terms of execution time.
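
A hedged sketch of the two-queue idea follows: safety messages enter a priority queue ordered by deadline and size (a guess at how an NDS-style ordering could look, since the abstract does not give the exact weighting), non-safety messages are served FCFS, and the scheduler always drains the safety queue first. All names are illustrative; the authors evaluate their algorithm in CloudSim rather than in plain Python.

```python
import heapq
from collections import deque
from dataclasses import dataclass
from itertools import count

@dataclass
class Message:
    msg_id: int
    size: int        # payload size in bytes
    deadline: float  # seconds until the message becomes useless
    safety: bool

def nds_priority(msg):
    # Assumed ordering: tighter deadline first, then smaller message first.
    return (msg.deadline, msg.size)

class PriorityScheduler:
    def __init__(self):
        self._safety = []            # min-heap ordered by the NDS-style priority
        self._non_safety = deque()   # FCFS queue
        self._tie = count()          # tie-breaker for equal priorities

    def submit(self, msg):
        if msg.safety:
            heapq.heappush(self._safety, (nds_priority(msg), next(self._tie), msg))
        else:
            self._non_safety.append(msg)

    def next_message(self):
        """Safety messages always pre-empt non-safety ones."""
        if self._safety:
            return heapq.heappop(self._safety)[2]
        if self._non_safety:
            return self._non_safety.popleft()
        return None

if __name__ == "__main__":
    sched = PriorityScheduler()
    sched.submit(Message(1, size=800, deadline=0.10, safety=True))       # collision warning
    sched.submit(Message(2, size=500_000, deadline=5.0, safety=False))   # infotainment download
    sched.submit(Message(3, size=200, deadline=0.05, safety=True))       # more urgent beacon
    print([sched.next_message().msg_id for _ in range(3)])               # -> [3, 1, 2]
```

Serving the tight-deadline safety messages ahead of the FCFS non-safety traffic is consistent with the reported result that safety messages show shorter execution times.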


Mathematics ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 864
Author(s):  
Qingzheng Xu ◽  
Na Wang ◽  
Lei Wang ◽  
Wei Li ◽  
Qian Sun

Traditional evolutionary algorithms tend to start the search from scratch. However, real-world problems seldom exist in isolation, and humans effectively manage and execute multiple tasks at the same time. Inspired by this observation, the paradigm of multi-task evolutionary computation (MTEC) has recently emerged as an effective means of facilitating implicit or explicit knowledge transfer across optimization tasks, thereby potentially accelerating convergence and improving the quality of solutions for multi-task optimization problems. An increasing number of works have been proposed since 2016. The authors collect the abundant specialized literature on this novel optimization paradigm published in the past five years. The quantity of papers, the nationality of authors, and the main publication venues are analyzed statistically. As a survey of the state of the art on this topic, this review article covers the basic concepts, theoretical foundations, basic implementation approaches of MTEC, related extensions of MTEC, and typical application fields in science and engineering. In particular, several approaches to chromosome encoding and decoding, intra-population reproduction, inter-population reproduction, and evaluation and selection are reviewed as ingredients of an effective MTEC algorithm. A number of open challenges, along with promising directions for future work, are also discussed in light of the current state of the field. The principal purpose is to provide researchers in this community with a comprehensive review and examination of MTEC, and to encourage more practitioners working in related fields to become involved in this fascinating territory.
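
Purely as an illustration of the basic implementation approaches the survey enumerates (a unified chromosome encoding, skill factors for evaluation, inter-task mating for knowledge transfer, and per-task selection), here is a compact multifactorial-EA-style loop on two toy functions. The tasks, parameters, and the simple selection rule are assumptions made for this example, not a reference MTEC implementation.

```python
import math
import random

# Two toy minimization tasks sharing a unified [0, 1]^DIM search space.
def task_a(x):
    return sum((xi - 0.5) ** 2 for xi in x)

def task_b(x):
    return sum((xi - 0.2) ** 2 - 0.1 * math.cos(6 * math.pi * (xi - 0.2)) for xi in x)

TASKS = [task_a, task_b]
DIM, POP, GENS, RMP = 10, 40, 200, 0.3   # RMP: random mating probability

def evaluate(ind):
    # Skill factor: each individual is evaluated only on the task it is skilled at.
    return TASKS[ind["skill"]](ind["x"])

def crossover(xa, xb):
    return [a if random.random() < 0.5 else b for a, b in zip(xa, xb)]

def mutate(x, rate=0.1, step=0.05):
    return [min(1.0, max(0.0, xi + random.gauss(0.0, step))) if random.random() < rate else xi
            for xi in x]

def mfea():
    pop = [{"x": [random.random() for _ in range(DIM)], "skill": i % len(TASKS)}
           for i in range(POP)]
    for ind in pop:
        ind["fit"] = evaluate(ind)
    for _ in range(GENS):
        offspring = []
        while len(offspring) < POP:
            pa, pb = random.sample(pop, 2)
            if pa["skill"] == pb["skill"] or random.random() < RMP:
                # Intra-task mating, or inter-task knowledge transfer with probability RMP.
                child_x = mutate(crossover(pa["x"], pb["x"]))
                skill = random.choice([pa["skill"], pb["skill"]])
            else:
                # Otherwise the parent simply produces a mutated clone of itself.
                child_x = mutate(pa["x"])
                skill = pa["skill"]
            child = {"x": child_x, "skill": skill}
            child["fit"] = evaluate(child)
            offspring.append(child)
        # Per-task elitist selection keeps both tasks represented in the population.
        merged = pop + offspring
        pop = []
        for t in range(len(TASKS)):
            same_task = sorted((i for i in merged if i["skill"] == t), key=lambda i: i["fit"])
            pop.extend(same_task[: POP // len(TASKS)])
    return {TASKS[t].__name__: min(i["fit"] for i in pop if i["skill"] == t)
            for t in range(len(TASKS))}

if __name__ == "__main__":
    print(mfea())
```

The single population optimizes both tasks at once, and the occasional cross-task mating is the implicit knowledge-transfer mechanism that the MTEC literature studies in many variations.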


Author(s):  
Tianqi Jing ◽  
Shiwen He ◽  
Fei Yu ◽  
Yongming Huang ◽  
Luxi Yang ◽  
...  

Cooperation between mobile edge computing (MEC) and mobile cloud computing (MCC) in offloading computation can improve the quality of service (QoS) of user equipments (UEs) with computation-intensive tasks. In this paper, in order to minimize the expected charge, we focus on the problem of how to offload a computation-intensive task from a resource-scarce UE to access points (APs) and the cloud, and on the density allocation of APs at the mobile edge. We consider three offloading computing modes and focus on the coverage probability of each mode and the corresponding ergodic rates. The resulting optimization problem is mixed-integer and non-convex in both the objective function and the constraints. We propose a low-complexity suboptimal algorithm called Iteration of Convex Optimization and Nonlinear Programming (ICONP) to solve it. Numerical results verify the better performance of our proposed algorithm: optimal computing ratios and AP density allocation contribute to charge saving.


2014 ◽  
Vol 571-572 ◽  
pp. 105-108
Author(s):  
Lin Xu

This paper proposes a new framework that combines reinforcement learning with a cloud computing digital library. Unified self-learning algorithms, which include reinforcement learning and other artificial intelligence methods, have led to many essential advances. Given the current status of highly-available models, analysts urgently desire the deployment of write-ahead logging. In this paper we examine how DNS can be applied to the investigation of superblocks, and introduce reinforcement learning to improve the quality of the current cloud computing digital library. The experimental results show that the method works more efficiently.
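
The abstract does not say how the reinforcement learner is wired into the digital library, so the following is only a generic tabular Q-learning skeleton under an assumed, hypothetical decision (whether to cache a requested document at an edge node); none of the states, actions, or rewards come from the paper.

```python
import random
from collections import defaultdict

ACTIONS = ("serve_from_origin", "cache_at_edge")   # hypothetical actions
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1              # learning rate, discount, exploration

q_table = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose_action(state):
    # Epsilon-greedy policy over the learned action values.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(q_table[state], key=q_table[state].get)

def update(state, action, reward, next_state):
    # Standard Q-learning update rule.
    best_next = max(q_table[next_state].values())
    q_table[state][action] += ALPHA * (reward + GAMMA * best_next - q_table[state][action])

def toy_reward(state, action):
    # Hypothetical reward: caching popular documents lowers latency,
    # caching unpopular ones wastes edge storage.
    if action == "cache_at_edge":
        return 1.0 if state == "popular" else -0.2
    return 0.2

if __name__ == "__main__":
    for _ in range(5000):
        state = random.choice(("popular", "unpopular"))
        action = choose_action(state)
        reward = toy_reward(state, action)
        update(state, action, reward, random.choice(("popular", "unpopular")))
    print({s: max(acts, key=acts.get) for s, acts in q_table.items()})
```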

