The Reserve Price Optimization for Publishers on Real-Time Bidding on-Line Marketplaces with Time-Series Forecasting

2020 ◽  
Vol 12 (1) ◽  
pp. 167-180
Author(s):  
Andrzej Wodecki

Abstract
Today's Internet marketing ecosystems are very complex, with many competing players, transactions concluded within milliseconds, and hundreds of different parameters to be analyzed in the decision-making process. In addition, both sellers and buyers operate under uncertainty, without full information about auction results, purchasing preferences, and the strategies of their competitors or suppliers. As a result, most market participants strive to optimize their trading strategies using advanced machine learning algorithms. In this publication, we propose a new approach to determining reserve-price strategies for publishers, focusing not only on the profits from individual ad impressions but also on maximum coverage of advertising space. This strategy combines heuristics developed by experienced RTB consultants with machine learning forecasting algorithms such as ARIMA, SARIMA, Exponential Smoothing, and Facebook Prophet. The paper analyses the effectiveness of these algorithms, recommends the best one, and presents its implementation in a real environment. As such, its results may form the basis of a competitive advantage for publishers in very demanding online advertising markets.
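A minimal sketch of the kind of forecasting comparison the abstract describes, using Python and statsmodels (Prophet is omitted here but would slot in the same way). The CSV path, column names, seasonal period, and the 5% discount heuristic are assumptions for illustration, not the authors' implementation.

```python
# Compare time-series forecasters on a publisher's price signal (illustrative).
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from sklearn.metrics import mean_absolute_error

# Hypothetical daily series of winning bid prices for one ad placement.
prices = pd.read_csv("placement_prices.csv", parse_dates=["date"],
                     index_col="date")["price"]
train, test = prices[:-14], prices[-14:]  # hold out the last two weeks

forecasts = {
    "ARIMA": ARIMA(train, order=(1, 1, 1)).fit().forecast(len(test)),
    "SARIMA": SARIMAX(train, order=(1, 1, 1),
                      seasonal_order=(1, 1, 1, 7)).fit(disp=False).forecast(len(test)),
    "Holt-Winters": ExponentialSmoothing(train, trend="add", seasonal="add",
                                         seasonal_periods=7).fit().forecast(len(test)),
}

for name, pred in forecasts.items():
    print(f"{name}: MAE = {mean_absolute_error(test, pred):.4f}")

# One possible coverage-oriented heuristic: set the reserve price slightly
# below the best forecast to favour fill rate over per-impression revenue.
best = min(forecasts, key=lambda n: mean_absolute_error(test, forecasts[n]))
reserve_price = forecasts[best] * 0.95
```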

2022 ◽  
Vol 301 ◽  
pp. 113868
Author(s):  
Xuan Cuong Nguyen ◽  
Thi Thanh Huyen Nguyen ◽  
Quyet V. Le ◽  
Phuoc Cuong Le ◽  
Arun Lal Srivastav ◽  
...  

2021 ◽  
Vol 13 (3) ◽  
pp. 23-34
Author(s):  
Chandrakant D. Patel ◽  
Jayesh M. Patel

With the large quantity of information offered online, it is equally essential to retrieve accurate information for a user query. A large amount of data is available in digital form in multiple languages. Various approaches aim to increase the effectiveness of online information retrieval, but the standard approach to answering a user query is to search the documents in the corpus word by word against the query. This approach is very time-intensive and may miss many related documents that are equally important. To avoid these issues, stemming has been used extensively in Information Retrieval Systems (IRS) to increase retrieval accuracy across languages. This paper addresses the problem of stemming in Web Page Categorization (WPC) for the Gujarati language, deriving stem words using the GUJSTER algorithm [1]. GUJSTER is based on morphological rules and derives the root or stem word from inflected words of the same class. In particular, we consider the influence of the extracted stem or root words on the quality of web page classification using supervised machine learning algorithms. This research work focuses on the analysis of WPC for the Gujarati language and verifies the influence of a stemming algorithm in a WPC application, improving accuracy from 63% to 98% with supervised machine learning models using a standard split of 80% for training and 20% for testing.
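A minimal sketch of a stemming-aware WPC pipeline under assumed details: `gujster_stem` is only a placeholder for the rule-based GUJSTER stemmer (its suffix list is illustrative), and TF-IDF with a Naive Bayes classifier stands in for whichever supervised models the paper evaluates.

```python
# Stemming-aware Gujarati web page categorization sketch (illustrative).
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score

def gujster_stem(token: str) -> str:
    # Placeholder for GUJSTER: strip a few common Gujarati inflectional suffixes.
    for suffix in ("માં", "થી", "નો", "ની", "ને", "ો"):
        if token.endswith(suffix) and len(token) > len(suffix) + 1:
            return token[:-len(suffix)]
    return token

def stem_tokenizer(text: str):
    return [gujster_stem(tok) for tok in text.split()]

def evaluate_wpc(pages, labels):
    """80/20 split, TF-IDF over stemmed tokens, Naive Bayes classifier."""
    X_train, X_test, y_train, y_test = train_test_split(
        pages, labels, train_size=0.8, random_state=42)
    vectorizer = TfidfVectorizer(tokenizer=stem_tokenizer, token_pattern=None)
    clf = MultinomialNB()
    clf.fit(vectorizer.fit_transform(X_train), y_train)
    preds = clf.predict(vectorizer.transform(X_test))
    return accuracy_score(y_test, preds)
```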


Author(s):  
Satwik P M ◽  
Meenatchi Sundram

In this research article, we present a new approach to flood prediction using an advanced machine learning algorithm from the neural network family that performs well on data-intensive predictive analytics tasks. The article discusses the flood occurrence prediction and evaluation process in detail. We review several existing algorithms and compare the proposed work with a range of related research approaches. Compared with previous research, the Neural Turing network predicts rainfall and flood-related disasters over consecutive year horizons of 10, 15, and 20 years with 93.8% accuracy. The approach is analyzed against various parameters and compared with work based on other machine learning algorithms, and its evaluation considers different parameters, including the number of iterations (epochs).
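The abstract does not detail the network architecture, so the sketch below uses a plain LSTM as a simpler stand-in for the Neural Turing network, with placeholder rainfall windows and flood labels; it only illustrates how the number of epochs enters the evaluation.

```python
# Illustrative recurrent flood-prediction baseline; data shapes are assumptions.
import numpy as np
from tensorflow import keras

# Hypothetical input: sliding windows of 12 monthly rainfall readings,
# labelled 1 if a flood occurred in the following month.
X = np.random.rand(1000, 12, 1).astype("float32")   # placeholder features
y = np.random.randint(0, 2, size=(1000,))            # placeholder labels

model = keras.Sequential([
    keras.layers.Input(shape=(12, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# The number of epochs is one of the evaluation parameters varied in the paper.
model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2)
```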


Author(s):  
Francesco Ferrati ◽  
Moreno Muffatto

In order to support equity investors in their decision-making process, researchers are exploring the potential of machine learning algorithms to predict the financial success of startup ventures. In this context, a key role is played by the significance of the data used, which should reflect most of the variables considered by investors in their screening and evaluation activity. This paper provides a detailed description of the data management process that can be followed to obtain such a dataset. Using Crunchbase as the main data source, other databases have been integrated to enrich the information content and support the feature engineering process. Specifically, the following sources have been considered: USPTO PatentsView, Kauffman Indicators of Entrepreneurship, Academic Ranking of World Universities, and the CB Insights ranking of top investors. The final dataset contains the profiles of 138,637 US-based ventures founded between 2000 and 2019. For each company, the elements assessed by equity investors have been analyzed, considering, among others, the following areas: location, industry, founding team, intellectual property, and funding round history. Data related to each area have been formalized in a series of features ready to be used in a machine learning context.
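A minimal sketch, not the authors' pipeline, of how such sources might be joined with pandas to produce per-company features; all file names and column names below are assumptions.

```python
# Join a Crunchbase export with patent and investor data into model features.
import pandas as pd

companies = pd.read_csv("crunchbase_companies.csv")      # one row per venture
patents   = pd.read_csv("patentsview_assignees.csv")     # USPTO PatentsView
investors = pd.read_csv("cbinsights_top_investors.csv")  # CB Insights ranking
rounds    = pd.read_csv("crunchbase_funding_rounds.csv") # funding round history

# Intellectual property: number of granted patents per company.
patent_counts = patents.groupby("company_id").size().rename("num_patents")

# Funding history: rounds raised and whether a top-ranked investor participated.
rounds["top_investor"] = rounds["investor_id"].isin(investors["investor_id"])
round_feats = rounds.groupby("company_id").agg(
    num_rounds=("round_id", "count"),
    total_raised_usd=("raised_amount_usd", "sum"),
    has_top_investor=("top_investor", "any"),
)

features = (companies.set_index("company_id")
            .join(patent_counts)
            .join(round_feats)
            .fillna({"num_patents": 0, "num_rounds": 0,
                     "total_raised_usd": 0, "has_top_investor": False}))
```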


Author(s):  
Emil Sauter ◽  
Erkut Sarikaya ◽  
Marius Winter ◽  
Konrad Wegener

Abstract
The improvement of industrial grinding processes is driven by the objective of reducing process time and costs while maintaining the required workpiece quality characteristics. One of several limiting factors is grinding burn. Commonly applied techniques for detecting workpiece burn are often applied only to selected parts and can be time consuming. This study presents a new approach to grinding burn detection carried out for each ground part under near-production conditions. Based on the in-process measurement of acoustic emission, spindle electric current, and power signals, time-frequency transforms are conducted to derive almost 900 statistical features as input for machine learning algorithms. Using genetic programming, an optimized combination of feature selector and classifier is determined to detect grinding burn. The application of the approach results in a high classification accuracy of about 99% for the binary problem and more than 98% for the multi-class detection case, respectively.
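A minimal sketch of the signal-to-features idea under assumed details: a short-time Fourier transform of one in-process signal, a few statistics per frequency bin, and a standard classifier. The genetic-programming search over feature selector and classifier combinations is not shown, and the sampling rate and segment data are placeholders.

```python
# Time-frequency statistical features from an in-process signal (illustrative).
import numpy as np
from scipy.signal import stft
from scipy.stats import kurtosis, skew
from sklearn.ensemble import RandomForestClassifier

def band_features(signal: np.ndarray, fs: float = 50_000.0) -> np.ndarray:
    """Statistical features of the STFT magnitude, per frequency bin."""
    _, _, Z = stft(signal, fs=fs, nperseg=1024)
    mag = np.abs(Z)
    feats = [mag.mean(axis=1), mag.std(axis=1),
             skew(mag, axis=1), kurtosis(mag, axis=1)]
    return np.concatenate(feats)

# segments: list of acoustic-emission arrays, labels: burn / no-burn (placeholders).
# X = np.vstack([band_features(seg) for seg in segments])
# clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```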


Author(s):  
Alja Videtič Paska ◽  
Katarina Kouter

In psychiatry, compared to other medical fields, there is a grave lack of biological markers that would complement the current clinical interview, enable more objective and faster clinical diagnosis, and allow accurate monitoring of treatment response and remission. Current technological development enables the analysis of various biological markers at high-throughput scale and at reasonable cost, and 'omic' studies are therefore entering psychiatry research. However, big data demands a whole new set of data-processing skills before clinically useful information can be extracted. So far, the classical approach to data analysis has not contributed much to the identification of biomarkers in psychiatry, but the extensive amounts of data might be taken to a higher level if artificial intelligence, in the form of machine learning algorithms, were applied. Not many studies on machine learning in psychiatry have been published, but we can already see from this handful of studies that the potential exists to build a screening portfolio of biomarkers for different psychopathologies, including suicide.


2020 ◽  
pp. 373-379
Author(s):  
Chris Edwards ◽  
Mark Gaved

As higher education institutions increasingly teach online and offer greater levels of choice to students (over which modules to study, in which order to study them, and how long to extend study before qualification), new challenges are introduced. One of these challenges is how to maintain an understanding of the student experience. This understanding is necessary to provide feedback to both students and faculty, and institutionally for the continued enhancement of quality. This paper is a first attempt at providing a narrative describing one approach to this challenge and the experience within a large distance learning university. It demonstrates that a new approach to data is key to enabling the analysis of student study pathways. For many years, this university has offered great flexibility of study and as wide a study choice as it is possible to offer with conventional modules. By design, the institution holds high levels of data for all student study. However, whilst it is possible to create bespoke queries, we found that this has been insufficient to readily enable analysis of the student experience. By moving from a traditional relational database structure to a multi-model database, many of the difficulties are resolved. In this paper, we report on this approach and describe next steps, including the potential to apply machine learning algorithms and to test other techniques such as Markov chains.
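As an illustration of the Markov chain idea mentioned above (not the institution's system), study pathways can be treated as sequences of modules and summarized as transition probabilities; the module codes below are invented.

```python
# Estimate a first-order Markov chain over module-to-module transitions.
from collections import Counter, defaultdict

# Each pathway is the ordered list of modules one student took (toy data).
pathways = [
    ["M101", "M102", "M201"],
    ["M101", "M150", "M201"],
    ["M101", "M102", "M202"],
]

counts = defaultdict(Counter)
for path in pathways:
    for current, nxt in zip(path, path[1:]):
        counts[current][nxt] += 1

# Transition probabilities: P(next module | current module).
transitions = {
    module: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
    for module, nexts in counts.items()
}
print(transitions["M101"])  # e.g. {'M102': 0.67, 'M150': 0.33}
```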

