online learning algorithm
Recently Published Documents


TOTAL DOCUMENTS

113
(FIVE YEARS 35)

H-INDEX

10
(FIVE YEARS 2)

2021 ◽  
Vol 14 (2) ◽  
pp. 99-106
Author(s):  
Asima Sarkar

For about the last two years, the whole world has been suffering from a novel disease, Covid-19. When it was first diagnosed in China, even the major health agencies could not predict the severity and spread of the disease. As the novel coronavirus broke out, countries halted all kinds of travel, both interstate and international, and tourism companies began facing huge losses due to lockdowns in every country. In this paper, the stock prices of the multinational tourism companies that operate in India have been forecasted using an online learning algorithm known as the Gated Recurrent Unit (GRU). Predicting stock prices is not an easy task: it requires extensive study of the stock market and the intervention of statistical and machine learning models. We try to determine whether the forecasting before the pandemic is better than the forecasting during the pandemic for each of the six leading multinational tourism companies.
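The abstract gives no implementation details; as a rough illustration of the model class it names, a single GRU cell can be sketched in NumPy. The dimensions, initialization, and the idea of encoding a price series into a final hidden state are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state h_tilde."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        def w(rows, cols):
            return rng.normal(0.0, 0.1, (rows, cols))
        self.Wz, self.Uz = w(hidden_dim, input_dim), w(hidden_dim, hidden_dim)
        self.Wr, self.Ur = w(hidden_dim, input_dim), w(hidden_dim, hidden_dim)
        self.Wh, self.Uh = w(hidden_dim, input_dim), w(hidden_dim, hidden_dim)

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)              # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)              # reset gate
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h))  # candidate state
        return (1.0 - z) * h + z * h_tilde                  # interpolated new state

def encode_series(cell, series):
    """Run the cell over a 1-D price series and return the final hidden state,
    which a linear head could then map to a next-price forecast."""
    h = np.zeros(cell.Wz.shape[0])
    for price in series:
        h = cell.step(np.array([price]), h)
    return h
```

In a full forecaster the weights would be trained by backpropagation through time; this sketch only shows the recurrence itself.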


2021 ◽  
Vol 5 (OOPSLA) ◽  
pp. 1-27
Author(s):  
Angello Astorga ◽  
Shambwaditya Saha ◽  
Ahmad Dinkins ◽  
Felicia Wang ◽  
P. Madhusudan ◽  
...  

We present an approach to learn contracts for object-oriented programs where guarantees of correctness of the contracts are made with respect to a test generator. Our contract synthesis approach is based on a novel notion of tight contracts and an online learning algorithm that works in tandem with a test generator to synthesize tight contracts. We implement our approach in a tool called Precis and evaluate it on a suite of programs written in C#, studying the safety and strength of the synthesized contracts and comparing them to those synthesized by Daikon.
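Precis itself targets C# contracts; the toy sketch below is only a drastically simplified, hypothetical illustration of the general idea of learning a precondition from a test generator's runs. The interval-shaped candidate, the widening rule, and the `learn_precondition` helper are all assumptions, not the Precis algorithm.

```python
import math
import random

def learn_precondition(f, gen_input, rounds=1000, seed=0):
    """Toy sketch (an assumption, not the Precis algorithm): maintain an
    interval candidate precondition and widen it whenever the test generator
    produces an input on which f succeeds outside the current candidate."""
    rng = random.Random(seed)
    lo, hi = None, None
    for _ in range(rounds):
        x = gen_input(rng)
        try:
            f(x)
            succeeded = True
        except Exception:
            succeeded = False
        if succeeded:
            lo = x if lo is None else min(lo, x)
            hi = x if hi is None else max(hi, x)
    return lo, hi  # learned precondition: lo <= x <= hi

# Example: math.sqrt raises ValueError on negative inputs, so the learned
# interval should sit inside [0, 10] when inputs are drawn from [-10, 10].
lo, hi = learn_precondition(math.sqrt, lambda rng: rng.uniform(-10, 10))
```

The real system reasons about object-oriented state and tightness, which this one-dimensional sketch deliberately ignores.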


Author(s):  
Weilin Nie ◽  
Cheng Wang

Abstract
Online learning is a classical algorithm for optimization problems. Due to its low computational cost, it has been widely used in many areas of machine learning and statistical learning. Its convergence performance depends heavily on the step size. In this paper, a two-stage step size is proposed for the unregularized online learning algorithm based on reproducing kernels. Theoretically, we prove that such an algorithm can achieve a nearly minimax convergence rate, up to a logarithmic term, without any capacity condition.
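The paper's exact schedule and constants are not reproduced in the abstract; the sketch below shows a generic unregularized online gradient method in an RKHS whose step size switches from a constant stage to polynomial decay. The switch point, `eta0`, the decay exponent, and the Gaussian kernel choice are all assumptions for illustration.

```python
import math

def gaussian_kernel(x, y, gamma=1.0):
    return math.exp(-gamma * (x - y) ** 2)

def two_stage_step(t, switch=50, eta0=0.5, theta=0.6):
    """Assumed two-stage schedule: constant while t <= switch, then decaying."""
    return eta0 if t <= switch else eta0 * (t - switch) ** (-theta)

class OnlineKernelRegressor:
    """Unregularized online gradient descent in an RKHS:
    f_{t+1} = f_t + eta_t * (y_t - f_t(x_t)) * K(x_t, .)"""
    def __init__(self, step_size=two_stage_step, gamma=1.0):
        self.centers, self.coefs = [], []
        self.step_size, self.gamma, self.t = step_size, gamma, 0

    def predict(self, x):
        return sum(a * gaussian_kernel(c, x, self.gamma)
                   for c, a in zip(self.centers, self.coefs))

    def update(self, x, y):
        self.t += 1
        residual = y - self.predict(x)   # gradient of the squared loss at x
        self.centers.append(x)
        self.coefs.append(self.step_size(self.t) * residual)
        return residual
```

Each update adds one kernel expansion term, which is what keeps the per-step cost low in such unregularized schemes.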


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2308
Author(s):  
Wan Nur Suryani Firuz Wan Wan Ariffin ◽  
Xinruo Zhang ◽  
Mohammad Reza Nakhai ◽  
Hasliza A. Rahim ◽  
R. Badlishah Ahmad

Constantly changing electricity demand has made variability and uncertainty inherent characteristics of both electric generation and cellular communication systems. This paper develops an online learning algorithm as a prescheduling mechanism to manage the variability and uncertainty to maintain cost-aware and reliable operation in cloud radio access networks (Cloud-RANs). The proposed algorithm employs a combinatorial multi-armed bandit model and minimizes the long-term energy cost at remote radio heads. The algorithm preschedules a set of cost-efficient energy packages to be purchased from an ancillary energy market for the future time slots by learning both from cooperative energy trading at previous time slots and by exploring new energy scheduling strategies at the current time slot. The simulation results confirm a significant performance gain of the proposed scheme in controlling the available power budgets and minimizing the overall energy cost compared with recently proposed approaches for real-time energy resources and energy trading in Cloud-RANs.
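As a rough sketch of the bandit component only, a combinatorial UCB-style scheduler that purchases k of n energy packages per time slot could look like the following. The cost model, the exploration constant, and the lower-confidence-bound index for minimization are assumptions, not the paper's formulation.

```python
import math

class CUCBScheduler:
    """Combinatorial UCB-style scheduler for cost minimization: each slot,
    purchase the k packages with the lowest lower-confidence-bound cost."""
    def __init__(self, n_packages, k, explore=1.5):
        self.n, self.k, self.explore = n_packages, k, explore
        self.counts = [0] * n_packages   # times each package was purchased
        self.means = [0.0] * n_packages  # running mean observed cost
        self.t = 0

    def select(self):
        self.t += 1
        untried = [i for i in range(self.n) if self.counts[i] == 0]
        if untried:  # make sure every package is sampled at least once
            tried = [i for i in range(self.n) if self.counts[i] > 0]
            return (untried + tried)[:self.k]
        def lcb(i):  # optimism for minimization: subtract the confidence bonus
            return self.means[i] - math.sqrt(
                self.explore * math.log(self.t) / self.counts[i])
        return sorted(range(self.n), key=lcb)[:self.k]

    def observe(self, chosen, costs):
        for i, c in zip(chosen, costs):
            self.counts[i] += 1
            self.means[i] += (c - self.means[i]) / self.counts[i]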


2021 ◽  
Author(s):  
Giovanni Di Gennaro ◽  
Amedeo Buonanno ◽  
Francesco A. N. Palmieri

Abstract
Bayesian networks in their Factor Graph Reduced Normal Form are a powerful paradigm for implementing inference graphs. Unfortunately, the computational and memory costs of these networks may be considerable even for relatively small networks, which is one of the main reasons these structures have often been underused in practice. In this work, through a detailed algorithmic and structural analysis, various solutions for cost reduction are proposed. Moreover, an online version of the classic batch learning algorithm is also analysed, showing very similar results in an unsupervised context but with much better performance, which may be essential if multi-level structures are to be built. The proposed solutions, together with the possible online learning algorithm, are included in a C++ library that is quite efficient, especially compared to the direct use of the well-known sum-product and Maximum Likelihood algorithms. The results obtained are discussed with particular reference to a Latent Variable Model structure.
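The library itself is not shown in the abstract; as a toy illustration of the message passing such factor-graph structures implement, a single sum-product step on a two-variable chain can be written as follows (all numbers are made up for the example).

```python
import numpy as np

# Toy sum-product step on the chain X1 -- f -- X2 (numbers are made up).
prior = np.array([0.6, 0.4])        # incoming message: P(X1)
factor = np.array([[0.9, 0.1],      # f(x1, x2) = P(X2 | X1)
                   [0.2, 0.8]])
# Sum-product message from f to X2: sum over x1 of prior(x1) * f(x1, x2),
# which on this tiny chain is exactly the marginal P(X2).
message_to_x2 = prior @ factor
```

On larger graphs this matrix-vector product is repeated per edge, which is where the computational and memory costs the abstract mentions come from.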


2021 ◽  
Vol 11 (5) ◽  
pp. 2059
Author(s):  
Sungmin Hwang ◽  
Hyungjin Kim ◽  
Byung-Gook Park

A hardware-based spiking neural network (SNN) has attracted many researchers' attention due to its energy efficiency. When implementing a hardware-based SNN, offline training is most commonly used, whereby weights trained by a software-based artificial neural network (ANN) are transferred to synaptic devices. However, mapping all the synaptic weights becomes time-consuming as the scale of the neural network increases. In this paper, we propose a method for quantized weight transfer using spike-timing-dependent plasticity (STDP) for hardware-based SNNs. STDP is an online learning algorithm for SNNs, but we utilize it as the weight transfer method. First, we train the SNN using the Modified National Institute of Standards and Technology (MNIST) dataset and perform weight quantization. Next, the quantized weights are mapped to the synaptic devices using STDP, by which all the synaptic weights connected to a neuron are transferred simultaneously, reducing the number of pulse steps. The performance of the proposed method is confirmed: above a certain level of quantization there is little reduction in accuracy, while the number of pulse steps for weight transfer decreases substantially. In addition, the effect of device variation is verified.
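The abstract does not specify the quantization scheme; a common baseline is uniform quantization to a fixed number of evenly spaced levels, which can be sketched as below. The level count, the range handling, and the mapping to conductance states are assumptions, not the paper's method.

```python
def quantize_weights(weights, n_levels):
    """Uniformly quantize weights to n_levels evenly spaced values spanning
    [min(weights), max(weights)] (an assumed baseline scheme, mimicking a
    synaptic device with a limited number of conductance states)."""
    w_min, w_max = min(weights), max(weights)
    if w_max == w_min:
        return list(weights)                 # degenerate case: nothing to do
    step = (w_max - w_min) / (n_levels - 1)  # spacing between adjacent levels
    return [w_min + round((w - w_min) / step) * step for w in weights]
```

With n_levels levels, the worst-case per-weight error of this scheme is half the step size, which is the trade-off the abstract's accuracy-versus-pulse-steps result explores.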

