Restaurant Sales Prediction Using Machine Learning

2021
Author(s):  
S M Nazmuz Sakib

In general, sales forecasts, promotional information, and weather forecasts together provide an accurate estimate of any restaurant's future revenue. Turnover depends heavily on customer demand. Forecasting practice has been transformed over the past couple of years by the availability of huge amounts of data and of the algorithms needed to gain a competitive edge. It is fundamental to understand the data that will be used in any business process, and weather forecasting can be carried out alongside business forecasting within the organization.
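To make the idea concrete, the following minimal sketch trains a gradient-boosted regressor to forecast daily revenue from past sales, promotion flags, and weather features. The file name and every column name are illustrative assumptions, not details from the study.

```python
# Hedged sketch: forecast daily restaurant revenue from sales history,
# promotions, and weather features. All names are illustrative.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("daily_sales.csv")  # hypothetical data file
features = ["prev_day_sales", "prev_week_sales", "promo_active",
            "temperature", "rainfall_mm", "day_of_week"]
X, y = df[features], df["revenue"]

# shuffle=False keeps the time order, as is usual for forecasting splits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False)

model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```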

2019
Vol 19 (1)
pp. 4-16
Author(s):
Qihui Wu
Hanzhong Ke
Dongli Li
Qi Wang
Jiansong Fang
...

Over the past decades, peptides as therapeutic candidates have received increasing attention in drug discovery, especially antimicrobial peptides (AMPs), anticancer peptides (ACPs), and anti-inflammatory peptides (AIPs). Peptides are thought to be able to modulate complex diseases that were previously untreatable. In recent years, the critical problem of antimicrobial resistance has driven the pharmaceutical industry to look for new therapeutic agents. Compared with small-molecule drugs, peptide-based therapy exhibits high specificity and minimal toxicity, so peptides are widely used in the design and discovery of new potent drugs. However, large-scale screening of peptide activity with traditional approaches is costly, time-consuming, and labor-intensive. Hence, in silico methods, mainly machine learning approaches, have been introduced to predict peptide activity because of their accuracy and efficiency. In this review, we document recent progress in machine learning-based prediction of peptide activity, which will be of great benefit to the discovery of potentially active AMPs, ACPs, and AIPs.
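As a hedged illustration of the kind of model this literature builds, the sketch below classifies peptide sequences from simple amino-acid-composition features, a common baseline featurization. The sequences and labels are placeholders; real studies draw on curated peptide databases.

```python
# Hedged sketch: classify peptides (e.g. active vs inactive) from
# amino-acid-composition features. Data here are toy placeholders.
from collections import Counter
import numpy as np
from sklearn.ensemble import RandomForestClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq: str) -> np.ndarray:
    """Fraction of each of the 20 standard amino acids in the sequence."""
    counts = Counter(seq)
    return np.array([counts[a] / len(seq) for a in AMINO_ACIDS])

# Placeholder sequences and labels; 1 = active, 0 = inactive.
sequences = ["KWKLFKKIEKVGQNIRDGIIKAGPAVAVVGQATQIAK",
             "GIGKFLHSAKKFGKAFVGEIMNS"]
labels = [1, 0]

X = np.vstack([composition(s) for s in sequences])
clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
print(clf.predict(X))
```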


Entropy
2021
Vol 23 (3)
pp. 300
Author(s):
Mark Lokanan
Susan Liu

Protecting financial consumers from investment fraud has been a recurring problem in Canada. The purpose of this paper is to predict the demographic characteristics of investors who are likely to be victims of investment fraud. Data for this paper came from the Investment Industry Regulatory Organization of Canada's (IIROC) database between January 2009 and December 2019. In total, 4575 investors were coded as victims of investment fraud. The study employed a machine-learning algorithm to predict the probability of fraud victimization. The model predicted the typical demographic profile of fraud victims to be investors who are female, have poor financial knowledge, know the advisor from the past, and are retired. Investors characterized as having limited financial literacy but a long-time relationship with their advisor have reduced probabilities of being victimized. However, male investors with low or moderate investment knowledge were more likely to be preyed upon by their investment advisors. While the effect was not statistically significant, older adults in general are at greater risk of being victimized. The findings from this paper can be used by Canadian self-regulatory organizations and securities commissions to inform their investor-protection mandates.
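A minimal sketch of such a victimization model follows, assuming a logistic regression on hypothetical demographic columns. The paper does not specify its algorithm or the IIROC variable names, so everything in the snippet is illustrative.

```python
# Hedged sketch: predict fraud-victimization probability from investor
# demographics. Column names are illustrative, not the IIROC variables.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("complaints.csv")  # hypothetical coded complaint data
X = pd.get_dummies(df[["gender", "age_band", "financial_knowledge",
                       "knows_advisor", "retired"]])
y = df["victim"]  # 1 = coded as a fraud victim

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, proba))
```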


2020
Vol 114
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, whose roots are sometimes traced to Kurt Gödel's unprovable computational statements of 1931, is now commonly discussed as deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still necessarily reflecting human biases in their models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories and can analyze historical and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum: augmented intelligence for prediction, autonomous intelligence for decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.


2020
Vol 24 (6)
pp. 1311-1328
Author(s):  
Jozsef Suto

Nowadays there are hundreds of thousands of known plant species on Earth, and many remain undiscovered. Plant classification can be performed in different ways, but the most popular approach is based on leaf characteristics: most plants have unique leaf shape, color, and texture. Since machine learning and computer vision have developed considerably in the past decade, automatic plant species (or leaf) recognition has become possible. Automated leaf classification is now a standalone research area within machine learning, and several shallow and deep methods have been proposed to recognize leaf types. From 2007 to the present, numerous research papers have been published on this topic. In older studies the classifier was a shallow method, while in current work many researchers apply deep networks for classification. In reviewing the plant leaf classification literature, we found an interesting deficiency (lack of hyper-parameter search) and a key difference between studies (different test sets). This work gives an overall review of the efficiency of shallow and deep methods under different test conditions and can serve as a basis for further research.
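The hyper-parameter search the review finds missing in many studies can be as simple as a cross-validated grid search. The sketch below runs one over an RBF-kernel SVM; synthetic features stand in for the shape, color, and texture descriptors that real leaf studies extract from images.

```python
# Hedged sketch: cross-validated hyper-parameter search for a shallow
# leaf classifier. Synthetic features stand in for leaf descriptors.
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_classification

# Placeholder feature matrix: 300 "leaves", 32 descriptors, 4 species.
X, y = make_classification(n_samples=300, n_features=32, n_classes=4,
                           n_informative=16, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)
print("best params:", search.best_params_,
      "cv accuracy:", search.best_score_)
```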


Symmetry
2021
Vol 13 (7)
pp. 1116
Author(s):
Zeba Mahmood
Vacius Jusas

This paper introduces a blockchain-based federated learning (FL) framework with incentives for participating nodes to enhance the accuracy of classification problems. Machine learning technology has developed and changed rapidly worldwide over the past few years. The FL framework is based on the Ethereum blockchain and creates an autonomous ecosystem in which nodes compete to improve the accuracy of classification problems. With privacy being one of the biggest concerns, FL uses the blockchain-based approach to ensure privacy and security. Another important technology underlying the FL framework is zero-knowledge proofs (ZKPs), which ensure that data uploaded to the network are accurate and private. In essence, ZKPs allow nodes to compete fairly by submitting only accurate models to the parameter server and being rewarded for doing so. Our analysis found that ZKPs can help improve the accuracy of models submitted to the parameter server and facilitate the honest participation of all nodes in FL.
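Setting the blockchain bookkeeping and ZKP verification aside, the aggregation step at the heart of such an FL framework is typically federated averaging. A minimal sketch with made-up node updates follows; the real framework's reward and verification logic is not shown.

```python
# Hedged sketch: the federated-averaging step a parameter server runs on
# node updates; blockchain incentives and ZKP checks are omitted here.
import numpy as np

def fed_avg(updates, sizes):
    """Average parameter vectors, weighting each node by its data size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

# Three nodes submit weights trained on local datasets of different sizes.
node_updates = [np.array([0.2, 1.1]),
                np.array([0.4, 0.9]),
                np.array([0.3, 1.0])]
node_sizes = [100, 300, 200]
print(fed_avg(node_updates, node_sizes))
```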


Risks
2020
Vol 9 (1)
pp. 4
Author(s):
Christopher Blier-Wong
Hélène Cossette
Luc Lamontagne
Etienne Marceau

In the past 25 years, computer scientists and statisticians have developed machine learning algorithms capable of modeling highly nonlinear transformations and interactions of input features. While actuaries frequently use generalized linear models (GLMs) in practice, only in the past few years have they begun studying these newer algorithms to tackle insurance-related tasks. In this work, we review the applications of machine learning in actuarial science and present the current state of the art in ratemaking and reserving. We first give an overview of neural networks, then briefly outline applications of machine learning algorithms to actuarial tasks. Finally, we summarize future trends of machine learning for the insurance industry.
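For context, the traditional baseline the newer methods compete with in ratemaking is a Poisson GLM for claim frequency. The sketch below fits one on synthetic data with statsmodels; the rating factors shown are illustrative, not from the review.

```python
# Hedged sketch: the Poisson GLM baseline for claim-frequency ratemaking,
# fit on synthetic policy data with illustrative rating factors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
X = sm.add_constant(np.column_stack([
    rng.integers(18, 80, n),   # driver age (illustrative rating factor)
    rng.integers(0, 2, n),     # urban indicator
]))
y = rng.poisson(lam=0.1, size=n)          # synthetic claim counts
exposure = rng.uniform(0.5, 1.0, n)       # policy-years in force

glm = sm.GLM(y, X, family=sm.families.Poisson(), exposure=exposure).fit()
print(glm.summary())
```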


2021
Vol 37 (3)
pp. 585-617
Author(s):
Teresa Bono
Karen Croxson
Adam Giles

The use of machine learning as an input into decision-making is on the rise, owing to its ability to uncover hidden patterns in large data and improve prediction accuracy. Questions have been raised, however, about the potential distributional impacts of these technologies, with one concern being that they may perpetuate or even amplify human biases from the past. Exploiting detailed credit file data for 800,000 UK borrowers, we simulate a switch from a traditional (logit) credit scoring model to ensemble machine-learning methods. We confirm that machine-learning models are more accurate overall. We also find that they do as well as the simpler traditional model on relevant fairness criteria, where these criteria pertain to overall accuracy and error rates for population subgroups defined along protected or sensitive lines (gender, race, health status, and deprivation). We do observe some differences in the way credit-scoring models perform for different subgroups, but these manifest under a traditional modelling approach and switching to machine learning neither exacerbates nor eliminates these issues. The paper discusses some of the mechanical and data factors that may contribute to statistical fairness issues in the context of credit scoring.
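A hedged sketch of this style of comparison appears below: fit a logit and an ensemble model on the same data, then compare error rates across a protected subgroup. The data and the subgroup indicator are synthetic stand-ins, not the UK credit files.

```python
# Hedged sketch: subgroup error rates for a logit vs an ensemble scorer,
# in the spirit of the paper's fairness checks. Data are synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
group = np.random.default_rng(0).integers(0, 2, len(y))  # protected attribute

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, random_state=0)

for name, model in [("logit", LogisticRegression(max_iter=1000)),
                    ("ensemble", RandomForestClassifier(n_estimators=200))]:
    err = model.fit(X_tr, y_tr).predict(X_te) != y_te
    print(name, "error rate by subgroup:")
    print(pd.Series(err).groupby(g_te).mean())
```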

