Streaming Video Classification Using Machine Learning

2020 ◽  
Vol 17 (4A) ◽  
pp. 677-682
Author(s):  
Adnan Shaout ◽  
Brennan Crispin

This paper presents a method using neural networks and a Markov Decision Process (MDP) to identify the source and class of video streaming services. The paper presents the design and implementation of an end-to-end pipeline for training and applying a machine learning system that takes in packets collected over a network interface and classifies the data stream as belonging to one of five streaming video services: YouTube, YouTube TV, Netflix, Amazon Prime, or HBO.
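
The abstract above does not spell out the implementation, so the following is only a rough, hypothetical sketch of the kind of pipeline it describes: per-window packet statistics feeding a small neural-network classifier over the five service labels. The feature set, the window summarization, the synthetic traffic, and the use of scikit-learn's MLPClassifier are assumptions made for illustration, not the authors' design.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

SERVICES = ["YouTube", "YouTube TV", "Netflix", "Amazon Prime", "HBO"]

def window_features(pkt_sizes, pkt_times):
    """Summarize one capture window: packet-size and inter-arrival statistics."""
    iat = np.diff(pkt_times) if len(pkt_times) > 1 else np.array([0.0])
    return [np.mean(pkt_sizes), np.std(pkt_sizes), np.max(pkt_sizes),
            np.mean(iat), np.std(iat), float(len(pkt_sizes))]

# Synthetic stand-in for captured traffic: one feature row per window, one label per window.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(500):
    label = rng.integers(len(SERVICES))
    sizes = rng.normal(800 + 100 * label, 200, size=100).clip(64, 1500)
    times = np.cumsum(rng.exponential(0.01 + 0.002 * label, size=100))
    X.append(window_features(sizes, times))
    y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```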

2017 ◽  
Vol 7 (1.5) ◽  
pp. 274
Author(s):  
D. Ganesha ◽  
Vijayakumar Maragal Venkatamuni

This research work presents an analysis of a modified SARSA learning algorithm. State-Action-Reward-State-Action (SARSA) is a technique for learning a Markov decision process (MDP) policy, used for reinforcement learning in the field of artificial intelligence (AI) and machine learning (ML). The modified SARSA algorithm selects better actions in order to obtain better rewards. Experiments are conducted to evaluate the performance of each agent individually, and the same statistics are collected so that results can be compared across agents. This work considers various kinds of agents at different levels of the architecture for the experimental analysis. The Fungus world testbed, implemented in SWI-Prolog 5.4.6, is used for the experiments; fixed obstacles make the layout specific to the Fungus world environment, and various parameters are introduced into the environment to test an agent's performance. This modified SARSA learning algorithm can be more suitable for the EMCAP architecture. The experiments show that the modified SARSA learning system obtains more rewards than the existing SARSA algorithm.
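
For reference, the standard tabular SARSA update that the paper modifies looks like the sketch below. The Fungus world testbed, the EMCAP architecture, and the paper's specific modification are not reproduced here; the environment interface and hyperparameters are assumptions.

```python
# Minimal tabular SARSA sketch (standard algorithm, not the paper's modified variant).
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
ACTIONS = ["up", "down", "left", "right"]
Q = defaultdict(float)  # Q[(state, action)] -> estimated return

def epsilon_greedy(state):
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def sarsa_episode(env, max_steps=200):
    """env is assumed to expose reset() -> state and step(action) -> (state, reward, done)."""
    s = env.reset()
    a = epsilon_greedy(s)
    for _ in range(max_steps):
        s2, r, done = env.step(a)
        a2 = epsilon_greedy(s2)
        # On-policy update: bootstrap on the action actually chosen next.
        Q[(s, a)] += ALPHA * (r + GAMMA * Q[(s2, a2)] * (not done) - Q[(s, a)])
        s, a = s2, a2
        if done:
            break
```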


2021 ◽  
Vol 10 (2) ◽  
pp. 110
Author(s):  
Ruy Lopez-Rios

The paper deals with a discrete-time consumption-investment problem with an infinite horizon. This problem is formulated as a Markov decision process with expected total discounted utility as the objective function. This paper aims to present a procedure to approximate the solution via machine learning, specifically a Q-learning technique. Numerical results for the problem are provided.
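
As a hedged illustration of the Q-learning idea mentioned in the abstract (not the paper's actual procedure), a tabular variant on a discretized wealth grid might look as follows; the log utility, the lognormal return model, and the grid sizes are illustrative assumptions.

```python
# Sketch: tabular Q-learning for a consumption-investment MDP on a discretized wealth grid.
import numpy as np

rng = np.random.default_rng(1)
wealth_grid = np.linspace(0.1, 10.0, 50)         # discretized wealth states
consumption_fracs = np.linspace(0.05, 0.95, 10)  # actions: fraction of wealth consumed
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1
Q = np.zeros((len(wealth_grid), len(consumption_fracs)))

def nearest_state(w):
    """Map a continuous wealth level back onto the grid."""
    return int(np.abs(wealth_grid - np.clip(w, 0.1, 10.0)).argmin())

for episode in range(5000):
    s = rng.integers(len(wealth_grid))
    for _ in range(50):
        a = rng.integers(len(consumption_fracs)) if rng.random() < EPS else Q[s].argmax()
        w, c = wealth_grid[s], consumption_fracs[a] * wealth_grid[s]
        reward = np.log(c)                               # assumed utility of consumption
        gross_return = rng.lognormal(mean=0.02, sigma=0.1)
        s2 = nearest_state((w - c) * gross_return)       # invest what is not consumed
        Q[s, a] += ALPHA * (reward + GAMMA * Q[s2].max() - Q[s, a])
        s = s2
```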


Author(s):  
Md Mahmudul Hasan ◽  
Md Shahinur Rahman ◽  
Adrian Bell

Deep reinforcement learning (DRL) has transformed the field of artificial intelligence (AI), especially after the success of Google DeepMind. This branch of machine learning epitomizes a step toward building autonomous systems that understand the visual world. DRL is currently applied to a range of problems that were previously intractable. In this chapter, the authors start with an introduction to the general field of reinforcement learning (RL) and the Markov decision process (MDP). They then clarify the common DRL framework and the necessary components of RL settings. Moreover, they analyze stochastic gradient descent (SGD)-based optimizers such as Adam, and a non-specific multi-policy selection mechanism in a multi-objective Markov decision process. The chapter also includes a comparison of different deep Q-networks. In conclusion, the authors describe several challenges and trends in deep reinforcement learning research.
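
Since the chapter surveys deep Q-networks and SGD-based optimizers such as Adam, a minimal, generic DQN update step is sketched below. The network sizes, the 4-dimensional state, and the use of PyTorch are assumptions made for illustration rather than anything specified in the chapter.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, N_ACTIONS, GAMMA = 4, 2, 0.99
q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)  # SGD-based optimizer (Adam)

def dqn_update(states, actions, rewards, next_states, dones):
    """One gradient step on a minibatch: states (B, 4) float, actions (B,) long,
    rewards (B,) float, next_states (B, 4) float, dones (B,) float."""
    q_sa = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrap from the frozen target network (the DQN stabilization trick).
        target = rewards + GAMMA * target_net(next_states).max(1).values * (1 - dones)
    loss = F.smooth_l1_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example call with a random minibatch of 32 transitions.
B = 32
loss = dqn_update(torch.randn(B, STATE_DIM), torch.randint(0, N_ACTIONS, (B,)),
                  torch.randn(B), torch.randn(B, STATE_DIM), torch.zeros(B))
```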


2021 ◽  
Vol 10 (2) ◽  
pp. 109
Author(s):  
Ruy Lopez-Rios

The paper deals with a discrete-time consumption-investment problem with an infinite horizon. This problem is formulated as a Markov decision process with expected total discounted utility as the objective function. This paper aims to present a procedure to approximate the solution via machine learning, specifically a Q-learning technique. Numerical results for the problem are provided.


2011 ◽  
Vol 2 (4) ◽  
pp. 67-90 ◽  
Author(s):  
Marek Laskowski

Science is on the verge of practical agent-based modeling decision support systems capable of machine learning for healthcare policy decision support. The details of integrating an agent-based model of a hospital emergency department with a genetic programming (GP) machine learning system are presented in this paper. A novel GP heuristic, or extension, is introduced to better represent the Markov Decision Process that underlies agent decision making in an unknown environment. The capabilities of the resulting prototype for automated hypothesis generation within the context of healthcare policy decision support are demonstrated by automatically generating patient flow and infection spread prevention policies. Finally, some observations are made regarding moving forward from the prototype stage.
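
The paper's GP system is not described in enough detail here to reproduce, but the underlying idea of scoring candidate policies inside a simulation can be illustrated with the toy sketch below, in which a single fast-track threshold rule is tuned against a very small emergency-department queue model. A real GP would evolve expression trees rather than sample one scalar parameter; everything in this sketch is a stand-in.

```python
import random

def simulate_ed(threshold, n_patients=200, seed=0):
    """Score one candidate rule: 'route a patient to fast-track when their
    acuity score is below threshold'. Lower mean waiting time is better."""
    rng = random.Random(seed)
    fast_track_load, main_load, total_wait = 0, 0, 0.0
    for _ in range(n_patients):
        acuity = rng.uniform(0.0, 5.0)
        if acuity < threshold:
            fast_track_load += 1
            total_wait += 0.5 + 0.05 * fast_track_load   # short queue, light cases
        else:
            main_load += 1
            total_wait += 1.0 + 0.10 * main_load          # longer queue, heavy cases
    return total_wait / n_patients

# Stand-in for GP search: sample candidate thresholds and keep the best-scoring one.
rng = random.Random(1)
candidates = [rng.uniform(0.0, 5.0) for _ in range(20)]
best = min(candidates, key=simulate_ed)
print(f"best fast-track acuity threshold: {best:.2f}")
```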


Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2388 ◽  
Author(s):  
Javier J. Sánchez-Medina ◽  
Juan Antonio Guerra-Montenegro ◽  
David Sánchez-Rodríguez ◽  
Itziar G. Alonso-González ◽  
Juan L. Navarro-Mesa

The Canary Islands are a well-known tourist destination with generally stable and clement weather conditions. However, extreme weather conditions occasionally occur which, although very unusual, may cause severe damage to the local economy. The EU-funded ViMetRi-MAC project has, among its goals, managing climate-change-associated risks. The Spanish National Meteorology Agency (AEMET) operates a network of weather stations across the eight Canary Islands. Using data from those stations, we propose a novel methodology for the prediction of maximum wind speed in order to trigger an early alert for extreme weather conditions. The proposed methodology has the added value of using an innovative kind of machine learning based on the data stream mining paradigm. This type of machine learning system relies on two important features: models are learned incrementally and adaptively. That means the learner tunes the models gradually and endlessly as new observations are received, and also modifies them when there is concept drift (statistical instability) in the modeled phenomenon. The results presented seem to prove that this data stream mining approach is a good fit for this kind of problem, clearly improving on the results obtained with the cumulative, non-adaptive version of the methodology.
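
The incremental, adaptive flavor of data stream mining described above can be illustrated with the hedged sketch below, in which a regressor is updated one observation at a time via scikit-learn's partial_fit; the synthetic readings and the feature choice are placeholders, not the AEMET data or the paper's exact method.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
scaler = StandardScaler()
model = SGDRegressor(learning_rate="constant", eta0=0.01)
abs_errors = []

for t in range(10_000):
    # Stand-in for one new station reading: [pressure, temperature, humidity, gust].
    x = rng.normal(size=(1, 4))
    max_wind = 3.0 * x[0, 3] + 0.5 * x[0, 0] + rng.normal(scale=0.1)

    scaler.partial_fit(x)                     # update running feature statistics
    x_scaled = scaler.transform(x)
    if t > 0:
        # Test-then-train: predict before the model has seen this observation.
        abs_errors.append(abs(model.predict(x_scaled)[0] - max_wind))
    model.partial_fit(x_scaled, [max_wind])   # then learn from it incrementally

print("mean absolute error over the stream:", np.mean(abs_errors))
```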

