Reinforcement Learning for Cloud Computing Digital Library

2014 ◽  
Vol 571-572 ◽  
pp. 105-108
Author(s):  
Lin Xu

This paper proposes a new framework that combines reinforcement learning with a cloud computing digital library. Unified self-learning algorithms, which include reinforcement learning and other artificial intelligence techniques, have led to many essential advances. Given the current status of highly available models, analysts urgently desire the deployment of write-ahead logging. In this paper we examine how DNS can be applied to the investigation of superblocks, and introduce reinforcement learning to improve the quality of the current cloud computing digital library. The experimental results show that the method is more efficient.
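
The abstract does not specify which reinforcement learning algorithm is used. As a minimal, hypothetical sketch of the kind of self-learning component it describes, tabular Q-learning could tune a library-side caching policy; the states, actions, and reward below are invented placeholders, not details from the paper.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch (illustrative only).
# States and actions are placeholders; a real digital-library deployment
# would define them from its own workload (e.g. cache levels, replica counts).
STATES = ["low_load", "medium_load", "high_load"]
ACTIONS = ["keep_cache", "grow_cache", "shrink_cache"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q_table = defaultdict(float)  # (state, action) -> estimated value

def simulated_reward(state, action):
    """Toy reward: growing the cache helps under high load, shrinking helps under low load."""
    if state == "high_load" and action == "grow_cache":
        return 1.0
    if state == "low_load" and action == "shrink_cache":
        return 1.0
    return -0.1

def choose_action(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])

for episode in range(1000):
    state = random.choice(STATES)
    for _ in range(10):
        action = choose_action(state)
        reward = simulated_reward(state, action)
        next_state = random.choice(STATES)  # toy transition model
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        # Standard Q-learning update
        q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])
        state = next_state

# Learned greedy policy per state
print({s: max(ACTIONS, key=lambda a: q_table[(s, a)]) for s in STATES})
```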

2021 ◽  
Author(s):  
Samat Ramatullayev ◽  
Shi Su ◽  
Coriolan Rat ◽  
Alaa Maarouf ◽  
Monica Mihai ◽  
...  

Abstract Brownfield field development plans (FDP) must be revisited on a regular basis to ensure the generation of production enhancement opportunities and to unlock challenging untapped reserves. However, for decades, the conventional workflows have remained largely unchanged, inefficient, and time-consuming. The aim of this paper is to demonstrate that combining cutting-edge cloud computing technology with artificial intelligence (AI) and machine learning (ML) solutions enables an optimization plan to be delivered in weeks rather than months, with higher confidence. During this FDP optimization process, every stage necessitates the use of smart components (AI and ML techniques), from reservoir/production data analytics to history matching and forecasting. A combined cloud computing and AI solution is introduced. First, several static and dynamic uncertainty parameters are identified, which are inherited from static modelling and the history match. Second, elastic cloud computing technology is harnessed to perform hundreds to thousands of history-match scenarios with the uncertainty parameters in a much shorter period. Then AI techniques are applied to extract the dominant key features and determine the most likely values. During the FDP optimization process, data liberation paved the way for intelligent well placement, which identifies the "sweet spots" using a probabilistic approach, facilitating the identification and quantification of by-passed oil. The use of AI-assisted analytics revealed how the gas-oil ratio behavior of wells drilled at various locations in the field changed over time. It also explained why this behavior was observed in one region of the reservoir while another nearby reservoir did not suffer from the same phenomenon. The cloud computing technology made it possible to screen hundreds of uncertainty cases with a high-resolution reservoir simulator within an hour. The results of the screening runs were fed into an AI optimizer, which produced the best possible combination of uncertainty parameters, resulting in an ensemble of history-matched cases with the lowest mismatch objective functions. We used an intuitive history-matching analysis solution that can visualize the mismatch quality of all wells across various parameters in an automated manner to determine the history-matching quality of an ensemble of cases. Finally, the cloud ecosystem's data liberation capability enabled the implementation of an intelligent algorithm for the identification of new infill wells. The approach serves as a benchmark for optimizing the FDP of any reservoir orders of magnitude faster than conventional workflows. The methodology is unique in that it uses cloud computing technology and cutting-edge AI methods to create an integrated intelligent framework for FDP that generates rapid insights and reliable results, accelerates decision making, and speeds up the entire process by orders of magnitude.
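
As a hedged illustration of the screening step described above (not the authors' actual workflow or simulator), the sketch below samples uncertainty parameters, evaluates a stand-in mismatch objective in parallel, and keeps the lowest-mismatch cases as a history-matched ensemble. The parameter names, target values, and ensemble size are assumptions for illustration only.

```python
import random
from concurrent.futures import ProcessPoolExecutor

# Illustrative uncertainty ranges (hypothetical parameters, not from the paper).
UNCERTAINTY_RANGES = {
    "perm_multiplier": (0.5, 2.0),
    "aquifer_strength": (0.1, 1.0),
    "fault_transmissibility": (0.0, 1.0),
}

def sample_case(rng):
    """Draw one uncertainty-parameter combination."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in UNCERTAINTY_RANGES.items()}

def mismatch_objective(case):
    """Stand-in for a reservoir-simulator run returning a history-match mismatch.
    In practice each call would launch a cloud simulation job."""
    target = {"perm_multiplier": 1.2, "aquifer_strength": 0.6, "fault_transmissibility": 0.4}
    return sum((case[k] - target[k]) ** 2 for k in case)

def screen(n_cases=500, ensemble_size=20, seed=0):
    rng = random.Random(seed)
    cases = [sample_case(rng) for _ in range(n_cases)]
    with ProcessPoolExecutor() as pool:          # parallel "cloud" evaluation
        scores = list(pool.map(mismatch_objective, cases))
    ranked = sorted(zip(scores, cases), key=lambda x: x[0])
    return ranked[:ensemble_size]                # lowest-mismatch ensemble

if __name__ == "__main__":
    best = screen()
    print("best mismatch:", round(best[0][0], 4), "params:", best[0][1])
```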



Author(s):  
Ziming Li ◽  
Julia Kiseleva ◽  
Maarten De Rijke

The performance of adversarial dialogue generation models relies on the quality of the reward signal produced by the discriminator. The reward signal from a poor discriminator can be very sparse and unstable, which may lead the generator to fall into a local optimum or to produce nonsensical replies. To alleviate the first problem, we first extend a recently proposed adversarial dialogue generation method to an adversarial imitation learning solution. Then, in the framework of adversarial inverse reinforcement learning, we propose a new reward model for dialogue generation that can provide a more accurate and precise reward signal for generator training. We evaluate the performance of the resulting model with automatic metrics and human evaluations in two annotation settings. Our experimental results demonstrate that our model can generate higher-quality responses and achieve better overall performance than the state-of-the-art.
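
As a hedged sketch (not the authors' exact model), an adversarial-inverse-RL style reward for a generated reply can be derived from a discriminator's probability that the reply is human, using the standard log-ratio form. The discriminator architecture and the random embeddings below are placeholders for illustration.

```python
import torch
import torch.nn as nn

class ReplyDiscriminator(nn.Module):
    """Placeholder discriminator: scores a (context, reply) embedding pair.
    A real model would encode token sequences (e.g. with an RNN or Transformer)."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, context_emb, reply_emb):
        # Returns unnormalized logits; D = sigmoid(logit)
        return self.net(torch.cat([context_emb, reply_emb], dim=-1)).squeeze(-1)

def airl_style_reward(disc, context_emb, reply_emb):
    """Reward in the log D - log(1 - D) form used by adversarial inverse RL.
    Numerically this equals the discriminator logit when D = sigmoid(logit)."""
    return disc(context_emb, reply_emb)

# Toy usage with random embeddings standing in for encoded dialogue turns.
disc = ReplyDiscriminator()
ctx, rep = torch.randn(4, 128), torch.randn(4, 128)
print(airl_style_reward(disc, ctx, rep))
```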


2015 ◽  
Vol 15 (4) ◽  
pp. 383-412 ◽  
Author(s):  
Renáta Máchová ◽  
Martin Lněnička

Abstract E-government readiness is an important indicator of the quality of a country's technological and telecommunication infrastructure and of the ability of its citizens, businesses and governments to adopt, use and benefit from modern technologies. To measure and compare selected countries, many benchmarking and ranking indices have been introduced since the beginning of the century. With the increasing importance of trends such as cloud computing, open (big) data, participation tools and social media, new indicators and approaches need to be introduced in the measurement of e-government development, and the existing indices should be updated, redefined and restructured. Therefore, this article explores the structure of the existing e-government development indices to show the main indicators and trends. Then, it proposes and implements a new framework to evaluate e-government development using these new trends in ICT. It also examines and compares the basic background on e-government development and the benefits and risks of cloud computing, open (big) data and participation tools in the public sector. Based on the newly proposed framework, the e-government development index is calculated for each EU Member State to clearly identify the indicators that influence e-government development. In the last part, these results are compared to the already existing indices to validate the conformity of the ranking methods using the Kendall rank correlation coefficient.
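
For readers unfamiliar with the last step, a minimal sketch of comparing two country rankings with the Kendall rank correlation coefficient is shown below; the ranks are invented for illustration and are not the article's results.

```python
from scipy.stats import kendalltau

# Hypothetical ranks of five EU Member States under the proposed index
# and under an existing index (1 = best). Values are illustrative only.
proposed_rank = [1, 2, 3, 4, 5]
existing_rank = [2, 1, 3, 5, 4]

tau, p_value = kendalltau(proposed_rank, existing_rank)
print(f"Kendall tau = {tau:.2f}, p = {p_value:.3f}")
# tau close to 1 means the two rankings largely agree.
```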


2020 ◽  
Vol 23 (6) ◽  
pp. 1172-1191
Author(s):  
Artem Aleksandrovich Elizarov ◽  
Evgenii Viktorovich Razinkov

Recently, reinforcement learning has been an actively developing direction of machine learning. As a consequence, attempts are being made to use reinforcement learning for solving computer vision problems, in particular the problem of image classification. Computer vision tasks are currently among the most pressing problems in artificial intelligence. The article proposes a method for image classification in the form of a deep neural network trained using reinforcement learning. The idea of the developed method comes down to solving the problem of a contextual multi-armed bandit using various strategies for achieving a compromise between exploitation and exploration, together with reinforcement learning algorithms. Strategies such as ε-greedy, ε-softmax, ε-decay-softmax and the UCB1 method, and reinforcement learning algorithms such as DQN, REINFORCE, and A2C are considered. The influence of various parameters on the efficiency of the method is analyzed, and options for further development of the method are proposed.
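
A hedged sketch of one of the exploration strategies named above, applied to a toy contextual-bandit view of classification (each class is an arm, reward 1 for a correct label): the ε-greedy rule and linear value model below are illustrative stand-ins, not the authors' deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "contextual bandit" classification: 3 classes (arms), 5-dim contexts.
N_CLASSES, DIM, EPSILON, LR = 3, 5, 0.1, 0.05
weights = np.zeros((N_CLASSES, DIM))          # linear value estimate per arm

def true_label(x):
    return int(np.argmax(x[:N_CLASSES]))      # synthetic ground truth

for step in range(5000):
    x = rng.normal(size=DIM)                  # context = image features (toy)
    scores = weights @ x
    if rng.random() < EPSILON:                # exploration
        action = int(rng.integers(N_CLASSES))
    else:                                     # exploitation
        action = int(np.argmax(scores))
    reward = 1.0 if action == true_label(x) else 0.0
    # Simple gradient step toward the observed reward for the chosen arm
    weights[action] += LR * (reward - scores[action]) * x

# Rough accuracy of the greedy policy after training
test = rng.normal(size=(1000, DIM))
pred = np.argmax(test @ weights.T, axis=1)
acc = np.mean([p == true_label(x) for p, x in zip(pred, test)])
print(f"greedy accuracy ~ {acc:.2f}")
```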


2012 ◽  
pp. 695-703
Author(s):  
George Tzanis ◽  
Christos Berberidis ◽  
Ioannis Vlahavas

Machine learning is one of the oldest subfields of artificial intelligence and is concerned with the design and development of computational systems that can adapt themselves and learn. The most common machine learning algorithms can be either supervised or unsupervised. Supervised learning algorithms generate a function that maps inputs to desired outputs, based on a set of examples with known output (labeled examples). Unsupervised learning algorithms find patterns and relationships over a given set of inputs (unlabeled examples). Other categories of machine learning are semi-supervised learning, where an algorithm uses both labeled and unlabeled examples, and reinforcement learning, where an algorithm learns a policy of how to act given an observation of the world.
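
As an illustrative sketch of the supervised/unsupervised distinction described above (using scikit-learn as an assumed tool, not one mentioned by the chapter), a classifier is fit on labeled examples while a clustering algorithm finds structure in the same inputs without labels.

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic data: X are inputs, y are the known outputs (labels).
X, y = make_blobs(n_samples=200, centers=3, random_state=0)

# Supervised learning: use labeled examples to learn a mapping from inputs to outputs.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised prediction for first point:", clf.predict(X[:1]))

# Unsupervised learning: ignore the labels and find patterns in the inputs alone.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("unsupervised cluster for first point:", km.labels_[0])
```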



2016 ◽  
pp. 2221-2238
Author(s):  
Zhefu Shi ◽  
Cory Beard

Mobile Cloud Computing (MCC) integrates cloud computing into the mobile environment and overcomes obstacles related to performance (e.g., bandwidth, throughput) and environment (e.g., heterogeneity, scalability, and availability). Quality of Service (QoS) metrics, such as end-to-end delay and packet loss ratio, are vital for MCC applications. In this chapter, several important approaches for performance evaluation in MCC are introduced. These approaches, such as Markov processes, scheduling, and game theory, are the most popular methodologies in current research on performance evaluation in MCC. QoS requirements in MCC differ from those in other environments. Important QoS problems in MCC are explained in detail, together with corresponding designs and solutions. This chapter covers the most important research problems and the current status of performance evaluation and QoS in MCC.
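
As a hedged illustration of one of the named methodologies (a Markov-process queueing view of QoS, not an example taken from the chapter), the sketch below computes the mean delay and loss probability of an M/M/1/K queue standing in for a mobile-to-cloud link; the arrival rate, service rate, and buffer size are invented.

```python
# Illustrative M/M/1/K queue model of a mobile-to-cloud link (assumed numbers).
LAM = 80.0    # packet arrival rate (packets/s)
MU = 100.0    # service rate (packets/s)
K = 20        # maximum packets in the system (buffer capacity)

rho = LAM / MU
# Steady-state probabilities of n packets in the system (rho != 1 case).
norm = (1 - rho) / (1 - rho ** (K + 1))
p = [norm * rho ** n for n in range(K + 1)]

loss_prob = p[K]                          # packet loss ratio (blocking probability)
lam_eff = LAM * (1 - loss_prob)           # accepted traffic
mean_in_system = sum(n * p[n] for n in range(K + 1))
mean_delay = mean_in_system / lam_eff     # Little's law: W = L / lambda_eff

print(f"loss ratio ~ {loss_prob:.4f}, mean delay ~ {mean_delay * 1000:.2f} ms")
```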


2015 ◽  
pp. 1784-1804
Author(s):  
Natalia Kushik ◽  
Jeevan Pokhrel ◽  
Nina Yevtushenko ◽  
Ana Cavalli ◽  
Wissam Mallouli

This paper is devoted to the problem of evaluating the quality of experience (QoE) of a given multimedia service based on the values of service parameters such as QoS indicators. The paper proposes to compare two self-learning approaches for predicting the QoE index, namely an approach based on logic circuit learning and an approach based on fuzzy logic expert systems. Experimental results comparing the two approaches with respect to prediction ability and performance are provided.
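
As a hedged sketch of the general task (mapping QoS indicators to a QoE index), the example below trains a simple decision tree as a generic stand-in for a self-learning predictor; it is neither the logic-circuit nor the fuzzy-logic approach compared in the paper, and the data and labelling rule are invented.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Synthetic QoS indicators: [delay_ms, jitter_ms, loss_pct] and a coarse QoE label
# (0 = poor, 1 = fair, 2 = good). Data and labelling rule are invented.
X = np.column_stack([
    rng.uniform(10, 300, 500),   # delay (ms)
    rng.uniform(0, 50, 500),     # jitter (ms)
    rng.uniform(0, 5, 500),      # loss (%)
])
y = np.where(X[:, 0] < 100, 2, np.where(X[:, 0] < 200, 1, 0))  # toy QoE rule

# A decision tree as a generic stand-in for a self-learning QoE predictor.
model = DecisionTreeClassifier(max_depth=3).fit(X[:400], y[:400])
print("held-out accuracy:", model.score(X[400:], y[400:]))
print("predicted QoE class for 50 ms delay, 5 ms jitter, 0.1% loss:",
      model.predict([[50, 5, 0.1]])[0])
```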


2019 ◽  
Vol 11 (8) ◽  
pp. 177
Author(s):  
Yong Fang ◽  
Cheng Huang ◽  
Yijia Xu ◽  
Yang Li

With the development of artificial intelligence, machine learning algorithms and deep learning algorithms are widely applied to attack detection models. Adversarial attacks against artificial intelligence models have become an inevitable problem, yet there is a lack of research on defending cross-site scripting (XSS) attack detection models against such attacks. It is therefore extremely important to design a method that can effectively improve a detection model's robustness to attack. In this paper, we present a method based on reinforcement learning (called RLXSS), which aims to optimize the XSS detection model to defend against adversarial attacks. First, adversarial samples of the detection model are mined by an adversarial attack model based on reinforcement learning. Second, the detection model and the adversarial model are alternately trained: after each round, the newly mined adversarial samples are marked as malicious samples and are used to retrain the detection model. Experimental results show that the proposed RLXSS model can successfully mine adversarial samples that escape black-box and white-box detection while retaining their aggressive features. Moreover, by alternately training the detection model and the adversarial attack model, the escape rate against the detection model is continuously reduced, which indicates that the approach improves the detection model's ability to defend against attacks.
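
A hedged sketch of the alternating scheme described above: a toy linear detector over character n-grams and a random payload mutator stand in for the paper's detection model and RL-based attack agent; mined escaping samples are relabeled as malicious and used to retrain. All models, payloads, and mutation rules are assumptions for illustration.

```python
import random
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Toy stand-ins (not the RLXSS models): a linear detector over character n-grams
# and a random mutator standing in for the RL-based adversarial agent.
vec = HashingVectorizer(analyzer="char", ngram_range=(2, 4), n_features=2**12)
detector = SGDClassifier(loss="log_loss")

benign = ["<p>hello world</p>", "<a href='page.html'>link</a>"]
malicious = ["<script>alert(1)</script>", "<img src=x onerror=alert(1)>"]

def fit(samples, labels):
    detector.partial_fit(vec.transform(samples), labels, classes=[0, 1])

def mutate(payload):
    """Very rough escape attempts: case changes and whitespace insertion."""
    tricks = [payload.replace("script", "ScRiPt"),
              payload.replace("=", " = "),
              payload.replace("<", "<\t")]
    return random.choice(tricks)

fit(benign + malicious, [0, 0, 1, 1])

for round_ in range(5):
    # "Attack" phase: mine adversarial samples the detector misses.
    escapes = []
    for m in malicious:
        candidate = mutate(m)
        if detector.predict(vec.transform([candidate]))[0] == 0:
            escapes.append(candidate)
    # "Defense" phase: relabel mined escapes as malicious and retrain.
    if escapes:
        fit(escapes, [1] * len(escapes))
    print(f"round {round_}: mined {len(escapes)} escaping samples")
```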

