Detecting Anomalous Transactions via an IoT Based Application: A Machine Learning Approach for Horse Racing Betting

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2039
Author(s):  
Moohong Min ◽  
Jemin Justin Lee ◽  
Hyunbeom Park ◽  
Kyungho Lee

During the past decade, technological advances have allowed the gambling industry worldwide to deploy various platforms such as web and mobile applications. Government agencies and local authorities have placed strict regulations on the locations and amounts allowed for gambling. These efforts are made to prevent gambling addiction and to monitor fraudulent activities, and the revenue earned from gambling provides a considerable amount of tax revenue. However, the inception of internet gambling has allowed professional gamblers to partake in unlawful acts, and the lack of studies on technical inspections and systems to prohibit unlawful internet gambling has led to incidents such as the Walkerhill Hotel incident in 2016, where fraudsters placed abnormal bets by modifying an Internet of Things (IoT)-based application called “MyCard”. This paper investigates the logic used by smartphone IoT applications to validate user location and identifies the continuing threats against it. Our research analyzed transactions made on applications that operate using location authentication through IoT devices. Drawing on gambling transaction data from the Korea Racing Authority, we used time series machine learning algorithms to identify anomalous activities and transactions, and we propose a method to detect and prevent these anomalies through a comparative analysis of existing and novel anomaly detection techniques.
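As a minimal sketch of the kind of time-series anomaly detection described above, the following illustrates flagging unusual betting transactions with rolling-window features and an isolation forest; the column names, window size, and model choice are illustrative assumptions, not the authors' exact method.

```python
# Illustrative time-series anomaly detection on betting transactions.
# The per-user feature engineering and IsolationForest are assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical transaction log: timestamp, user_id, bet_amount
df = pd.DataFrame({
    "timestamp": pd.date_range("2021-01-01", periods=500, freq="min"),
    "user_id": np.random.randint(0, 20, size=500),
    "bet_amount": np.random.lognormal(mean=3.0, sigma=0.5, size=500),
})

# Rolling-window features per user capture short-term betting behaviour
df = df.sort_values("timestamp")
grp = df.groupby("user_id")["bet_amount"]
df["rolling_mean"] = grp.transform(lambda s: s.rolling(10, min_periods=1).mean())
df["rolling_std"] = grp.transform(lambda s: s.rolling(10, min_periods=1).std().fillna(0))
df["deviation"] = (df["bet_amount"] - df["rolling_mean"]) / (df["rolling_std"] + 1e-6)

features = df[["bet_amount", "rolling_mean", "deviation"]]
model = IsolationForest(contamination=0.01, random_state=0).fit(features)
df["anomaly"] = model.predict(features)  # -1 flags a suspicious transaction
print(df[df["anomaly"] == -1].head())
```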

2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
John Foley ◽  
Naghmeh Moradpoor ◽  
Henry Ochenyi

One of the important features of the routing protocol for low-power and lossy networks (RPL) is the objective function (OF). The OF influences an IoT network in terms of routing strategies and network topology. Detecting combinations of attacks against OFs is a cutting-edge challenge that will become a necessity as next-generation low-power wireless networks continue to grow rapidly and be exploited. However, the current literature lacks studies on the vulnerability analysis of OFs, particularly in terms of combined attacks. Furthermore, machine learning is a promising solution for the global networks of IoT devices in terms of analysing their ever-growing generated data and predicting cyberattacks against such devices. Therefore, in this paper, we study the vulnerability of two popular OFs of RPL and detect combined attacks against them using machine learning algorithms across different simulated scenarios. For this, we created a novel IoT dataset based on power and network metrics, which is deployed as part of an RPL IDS/IPS solution to enhance information security. Based on the captured results, our machine learning approach is successful in detecting combined attacks against two popular OFs of RPL using the power and network metrics, with MLP and RF being the most successful classifiers for the single and ensemble model deployments.
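A minimal sketch of the MLP/RF comparison on power and network metrics might look as follows; the feature and label names are hypothetical stand-ins for the authors' dataset.

```python
# Illustrative comparison of MLP and Random Forest classifiers on
# power/network metrics; column names are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

df = pd.read_csv("rpl_of_metrics.csv")  # hypothetical file of simulated scenarios
X = df[["cpu_power", "radio_power", "tx_count", "rx_count", "rank_changes"]]
y = df["attack_label"]  # e.g. benign / rank attack / version attack / combined

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_train)

models = {
    "MLP": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in models.items():
    # MLP benefits from scaling; RF is scale-invariant but scaling does no harm
    clf.fit(scaler.transform(X_train), y_train)
    print(name, classification_report(y_test, clf.predict(scaler.transform(X_test))))
```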


2021 ◽  
Vol 309 ◽  
pp. 01024
Author(s):  
M. Sri Vidya ◽  
G. R. Sakthidharan

The Internet of Things connects various physical objects into a network that senses the physical world without human intervention. These devices compute and retrieve data over network connections made through IoT components such as sensors, protocols and addresses. The Global Positioning System (GPS) is used for localization in outdoor areas such as roads and open ground, but it cannot be used in indoor environments, so locating an object indoors is not possible with GPS. However, IoT devices such as Wi-Fi routers can localize objects indoors using the Received Signal Strengths (RSSs) measured from the router. RSS measurements, though, suffer from disturbances, reflections and interference. Applying outlier detection techniques to these measurements allows objects to be localized reliably despite noise and irregular signal strengths. This paper surveys the indoor positioning environment and the various techniques already used for localization in order to identify an effective solution. The methods are compared to guide further work in indoor environments and to find the most effective and accurate machine learning algorithms for indoor localization.
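The following is a minimal sketch of Wi-Fi RSS fingerprinting with a simple outlier filter in the spirit of the techniques surveyed above; the fingerprint values, the z-score threshold, and the kNN regressor are illustrative assumptions.

```python
# Illustrative RSS fingerprinting with outlier filtering before localization.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical fingerprint database: RSS (dBm) from 4 access points at known (x, y)
rss_db = np.array([[-40, -62, -71, -80],
                   [-55, -48, -66, -77],
                   [-70, -60, -45, -68],
                   [-78, -72, -58, -47]], dtype=float)
positions = np.array([[0, 0], [5, 0], [5, 5], [0, 5]], dtype=float)

knn = KNeighborsRegressor(n_neighbors=2).fit(rss_db, positions)

def localize(samples: np.ndarray) -> np.ndarray:
    """Filter noisy RSS samples with a z-score test, then estimate position."""
    mean, std = samples.mean(axis=0), samples.std(axis=0) + 1e-6
    z = np.abs((samples - mean) / std)
    clean = samples[(z < 1.5).all(axis=1)]      # drop samples with outlier readings
    return knn.predict(clean.mean(axis=0, keepdims=True))[0]

# Repeated noisy readings at one spot, including one reflected/outlier sample
readings = np.array([[-52, -50, -67, -75],
                     [-53, -49, -65, -76],
                     [-30, -49, -66, -75],   # implausible spike on AP1
                     [-54, -50, -66, -74]], dtype=float)
print(localize(readings))
```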


2021 ◽  
Vol 9 (5) ◽  
pp. 1034
Author(s):  
Carlos Sabater ◽  
Lorena Ruiz ◽  
Abelardo Margolles

This study aimed to recover metagenome-assembled genomes (MAGs) from human fecal samples to characterize the glycosidase profiles of Bifidobacterium species exposed to different prebiotic oligosaccharides (galacto-oligosaccharides, fructo-oligosaccharides and human milk oligosaccharides, HMOs) as well as high-fiber diets. A total of 1806 MAGs were recovered from 487 infant and adult metagenomes. Unsupervised and supervised classification of glycosidases encoded in MAGs using machine-learning algorithms allowed establishing characteristic hydrolytic profiles for B. adolescentis, B. bifidum, B. breve, B. longum and B. pseudocatenulatum, yielding classification rates above 90%. Glycosidase families GH5_44, GH32, and GH110 were characteristic of B. bifidum. The presence or absence of GH1, GH2, GH5 and GH20 was characteristic of B. adolescentis, B. breve and B. pseudocatenulatum, while families GH1 and GH30 were relevant in MAGs from B. longum. These characteristic profiles allowed discriminating bifidobacteria regardless of prebiotic exposure. Correlation analysis of glycosidase activities suggests strong associations between glycosidase families comprising HMOs-degrading enzymes, which are often found in MAGs from the same species. The mathematical models proposed here may contribute to a better understanding of the carbohydrate metabolism of some common bifidobacteria species and could be extrapolated to other microorganisms of interest in future studies.
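A minimal sketch of the supervised classification step might proceed as below, predicting the species of each MAG from its glycosidase (GH) family profile; the input file, feature layout, and random-forest choice are assumptions rather than the authors' exact pipeline.

```python
# Illustrative species classification from GH-family profiles of MAGs.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical table: one row per MAG, columns = counts of GH family genes
mags = pd.read_csv("mag_gh_profiles.csv")      # hypothetical file
X = mags.filter(regex="^GH")                   # e.g. GH1, GH2, GH20, GH32, GH110 ...
y = mags["species"]                            # e.g. B. bifidum, B. breve, B. longum

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean classification rate: {scores.mean():.2%}")

# Feature importances indicate which GH families discriminate the species
clf.fit(X, y)
print(pd.Series(clf.feature_importances_, index=X.columns)
        .sort_values(ascending=False).head())
```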


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 656
Author(s):  
Xavier Larriva-Novo ◽  
Víctor A. Villagrá ◽  
Mario Vega-Barbas ◽  
Diego Rivera ◽  
Mario Sanz Rodrigo

Security in IoT networks is currently mandatory due to the large amount of data that has to be handled. These systems are vulnerable to numerous cybersecurity attacks, which are increasing in number and sophistication. For this reason, new intrusion detection techniques have to be developed that are as accurate as possible for these scenarios. Intrusion detection systems based on machine learning algorithms have already shown high performance in terms of accuracy. This research proposes the study and evaluation of several preprocessing techniques based on traffic categorization for a machine learning neural network algorithm. The evaluation uses two benchmark datasets, UGR16 and UNSW-NB15, and one of the most widely used datasets, KDD99. The preprocessing techniques were evaluated with respect to scaling and normalization functions. All of these preprocessing models were applied to different sets of characteristics based on a categorization composed of four groups of features: basic connection features, content characteristics, statistical characteristics and, finally, a group composed of traffic-based features and connection direction-based traffic characteristics. The objective of this research is to evaluate this categorization by using various data preprocessing techniques to obtain the most accurate model. Our proposal shows that, by applying the categorization of network traffic and several preprocessing techniques, accuracy can be enhanced by up to 45%. Preprocessing a specific group of characteristics allows for greater accuracy, enabling the machine learning algorithm to correctly classify the parameters related to possible attacks.
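As a minimal sketch of the preprocessing comparison described above, the following trains the same neural-network detector on one feature group under different scaling/normalization choices; the column names stand in for dataset-specific features and are not the actual KDD99, UGR16 or UNSW-NB15 schemas.

```python
# Illustrative comparison of preprocessing functions for an MLP-based IDS.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, MinMaxScaler, Normalizer
from sklearn.neural_network import MLPClassifier

df = pd.read_csv("ids_flows.csv")              # hypothetical flow records
basic_connection = ["duration", "protocol", "src_bytes", "dst_bytes"]  # one feature group
X = pd.get_dummies(df[basic_connection])
y = df["label"]                                # attack vs. normal

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

for scaler in (StandardScaler(), MinMaxScaler(), Normalizer()):
    pipe = make_pipeline(scaler, MLPClassifier(hidden_layer_sizes=(64,),
                                               max_iter=300, random_state=0))
    pipe.fit(X_train, y_train)
    print(type(scaler).__name__, f"accuracy = {pipe.score(X_test, y_test):.3f}")
```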


2021 ◽  
Vol 27 ◽  
pp. 107602962199118
Author(s):  
Logan Ryan ◽  
Samson Mataraso ◽  
Anna Siefkas ◽  
Emily Pellegrini ◽  
Gina Barnes ◽  
...  

Deep venous thrombosis (DVT) is associated with significant morbidity, mortality, and increased healthcare costs. Standard scoring systems for DVT risk stratification often provide insufficient stratification of hospitalized patients and are unable to accurately predict which inpatients are most likely to present with DVT. There is a continued need for tools which can predict DVT in hospitalized patients. We performed a retrospective study on a database collected from a large academic hospital, comprising 99,237 general ward or ICU patients, 2,378 of whom experienced a DVT during their hospital stay. Gradient boosted machine learning algorithms were developed to predict a patient’s risk of developing DVT at 12- and 24-hour windows prior to onset. The primary outcome of interest was diagnosis of in-hospital DVT. The machine learning predictors obtained AUROCs of 0.83 and 0.85 for DVT risk prediction on hospitalized patients at 12- and 24-hour windows, respectively. At both 12 and 24 hours before DVT onset, the most important features for prediction of DVT were cancer history, VTE history, and international normalized ratio (INR). Improved risk stratification may prevent unnecessary invasive testing in patients for whom DVT cannot be ruled out using existing methods. Improved risk stratification may also allow for more targeted use of prophylactic anticoagulants, as well as earlier diagnosis and treatment, preventing the development of pulmonary emboli and other sequelae of DVT.
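A minimal sketch of such a gradient-boosted risk model evaluated by AUROC is shown below; the feature names are hypothetical and do not reproduce the study's exact variables or cohort.

```python
# Illustrative gradient-boosted DVT risk model evaluated by AUROC.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

df = pd.read_csv("dvt_cohort_12h.csv")         # hypothetical pre-onset snapshot
X = df[["age", "cancer_history", "vte_history", "inr", "heart_rate", "platelets"]]
y = df["dvt_within_12h"]                       # 1 if DVT diagnosed in the window

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

proba = model.predict_proba(X_test)[:, 1]
print(f"AUROC = {roc_auc_score(y_test, proba):.2f}")

# Feature importances give a rough view of which inputs drive predictions
print(pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False))
```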


Risks ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 4 ◽  
Author(s):  
Christopher Blier-Wong ◽  
Hélène Cossette ◽  
Luc Lamontagne ◽  
Etienne Marceau

In the past 25 years, computer scientists and statisticians have developed machine learning algorithms capable of modeling highly nonlinear transformations and interactions of input features. While actuaries use generalized linear models (GLMs) frequently in practice, only in the past few years have they begun studying these newer algorithms to tackle insurance-related tasks. In this work, we aim to review the applications of machine learning to the actuarial science field and present the current state of the art in ratemaking and reserving. We first give an overview of neural networks, then briefly outline applications of machine learning algorithms in actuarial science tasks. Finally, we summarize the future trends of machine learning for the insurance industry.
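To make the GLM-versus-machine-learning contrast concrete, the following is a minimal sketch of a Poisson frequency GLM alongside a nonlinear learner fitted to the same ratemaking task; the rating factors and data file are hypothetical and the models are purely illustrative.

```python
# Illustrative claim-frequency ratemaking: Poisson GLM vs. a nonlinear learner.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from sklearn.ensemble import HistGradientBoostingRegressor

policies = pd.read_csv("policies.csv")         # hypothetical policy-level data

# Poisson GLM with log-exposure offset: the traditional frequency model
glm = smf.glm("claim_count ~ driver_age + vehicle_power + C(region)",
              data=policies,
              family=sm.families.Poisson(),
              offset=np.log(policies["exposure"])).fit()
print(glm.summary())

# A nonlinear machine learning model fitted to the same task for comparison
X = pd.get_dummies(policies[["driver_age", "vehicle_power", "region"]])
y = policies["claim_count"] / policies["exposure"]
gbm = HistGradientBoostingRegressor(loss="poisson").fit(
    X, y, sample_weight=policies["exposure"])
```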


Author(s):  
Adwait Patil

Abstract: Alzheimer’s disease is a neurodegenerative disorder. It starts with innocuous symptoms but gradually becomes severe. The disease is particularly dangerous because there is no treatment and it is typically detected only at a later stage, so early detection is important for countering the disease and giving the patient a chance of recovery. There are various approaches currently used to detect symptoms of Alzheimer’s disease (AD) at an early stage. The fuzzy system approach is not widely used, as it depends heavily on expert knowledge, but it is quite efficient in detecting AD because it provides a mathematical foundation for interpreting human cognitive processes. Another, more accurate and widely accepted approach is machine learning detection of AD stages, which uses algorithms such as Support Vector Machines (SVMs), Decision Trees and Random Forests to detect the stage from the data provided. The final approach is deep learning on multi-modal data, which combines imaging, genetic and patient data with deep models and uses the concatenated data to detect the AD stage more effectively; this method is less commonly used as it requires huge volumes of data. This paper elaborates on all three approaches and provides a comparative study of them, assessing which method is most efficient for AD detection. Keywords: Alzheimer’s Disease (AD), Fuzzy System, Machine Learning, Deep Learning, Multimodal Data
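A minimal sketch of the machine learning approach outlined above, classifying AD stages from tabular clinical features with an SVM and a random forest, might look as follows; the feature set and data file are hypothetical, not a specific study's data.

```python
# Illustrative AD stage classification (CN / MCI / AD) with SVM and Random Forest.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("ad_clinical.csv")            # hypothetical cohort table
X = df[["age", "mmse_score", "hippocampus_volume", "apoe4_alleles"]]
y = df["stage"]                                # CN, MCI, or AD

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Random Forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```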


2021 ◽  
Author(s):  
Marian Popescu ◽  
Rebecca Head ◽  
Tim Ferriday ◽  
Kate Evans ◽  
Jose Montero ◽  
...  

Abstract This paper presents advancements in machine learning and cloud deployment that enable rapid and accurate automated lithology interpretation. A supervised machine learning technique is described that enables rapid, consistent, and accurate lithology prediction alongside quantitative uncertainty from large wireline or logging-while-drilling (LWD) datasets. To leverage supervised machine learning, a team of geoscientists and petrophysicists made detailed lithology interpretations of wells to generate a comprehensive training dataset. Lithology interpretations were based on deterministic cross-plotting, utilizing and combining various raw logs. This training dataset was used to develop a model and test a machine learning pipeline. The pipeline was applied to a dataset previously unseen by the algorithm to predict lithology. A quality checking process was performed by a petrophysicist to validate new predictions delivered by the pipeline against human interpretations. Confidence in the interpretations was assessed in two ways. The prior probability was calculated, a measure of confidence in the input data being recognized by the model. The posterior probability was calculated, which quantifies the likelihood that a specified depth interval comprises a given lithology. The supervised machine learning algorithm ensured that the wells were interpreted consistently by removing interpreter biases and inconsistencies. The scalability of cloud computing enabled a large log dataset to be interpreted rapidly; >100 wells were interpreted consistently in five minutes, yielding >70% lithological match to the human petrophysical interpretation. Supervised machine learning methods have strong potential for classifying lithology from log data because: 1) they can automatically define complex, non-parametric, multi-variate relationships across several input logs; and 2) they allow classifications to be quantified confidently. Furthermore, this approach captured the knowledge and nuances of an interpreter's decisions by training the algorithm using human-interpreted labels. In the hydrocarbon industry, the quantity of generated data is predicted to increase by >300% between 2018 and 2023 (IDC, Worldwide Global DataSphere Forecast, 2019–2023). Additionally, the industry holds vast legacy data. This supervised machine learning approach can unlock the potential of some of these datasets by providing consistent lithology interpretations rapidly, allowing resources to be used more effectively.
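As a minimal sketch of supervised lithology classification with per-depth class probabilities (analogous to the posterior probabilities above), the following uses a random forest on human-labelled log data; the log mnemonics, file name, and classifier choice are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative lithology classification from wireline logs with class probabilities.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

logs = pd.read_csv("labelled_wells.csv")       # hypothetical human-interpreted training wells
X = logs[["GR", "RHOB", "NPHI", "DT", "RDEP"]] # gamma ray, density, neutron, sonic, resistivity
y = logs["lithology"]                          # interpreter-assigned labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=400, random_state=0).fit(X_train, y_train)

# Per-sample class probabilities quantify confidence in each predicted lithology
proba = pd.DataFrame(model.predict_proba(X_test), columns=model.classes_)
print(f"Match to human interpretation: {model.score(X_test, y_test):.1%}")
print(proba.head())
```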


2021 ◽  
Author(s):  
Jack Woollam ◽  
Jannes Münchmeyer ◽  
Carlo Giunchi ◽  
Dario Jozinovic ◽  
Tobias Diehl ◽  
...  

Machine learning methods have seen widespread adoption within the seismological community in recent years due to their ability to effectively process large amounts of data, while equalling or surpassing the performance of human analysts or classic algorithms. In the wider machine learning world, for example in imaging applications, the open availability of extensive high-quality datasets for training, validation, and the benchmarking of competing algorithms is seen as a vital ingredient to the rapid progress observed throughout the last decade. Within seismology, vast catalogues of labelled data are readily available, but collecting the waveform data for millions of records and assessing the quality of training examples is a time-consuming, tedious process. The natural variability in source processes and seismic wave propagation also presents a critical problem during training: the performance of models trained on different regions, distance ranges and magnitude ranges is not easily comparable. The inability to easily compare and contrast state-of-the-art machine learning-based detection techniques on varying seismic data sets is currently a barrier to further progress within this emerging field. We present SeisBench, an extensible open-source framework for training, benchmarking, and applying machine learning algorithms. SeisBench provides access to various benchmark data sets and models from the literature, along with pre-trained model weights, through a unified API. Built to be extensible and modular, SeisBench allows for the simple addition of new models and data sets, which can be easily interchanged with existing pre-trained models and benchmark data. Standardising access to data and metadata of varying quality simplifies comparison workflows, enabling the development of more robust machine learning algorithms. We initially focus on phase detection, identification and picking, but the framework is designed to be extended for other purposes, for example direct estimation of event parameters. Users will be able to contribute their own benchmarks and (trained) models. In the future, it will thus be much easier to compare both the performance of new algorithms against published machine learning models/architectures and to check the performance of established algorithms against new data sets. We hope that the ease of validation and inter-model comparison enabled by SeisBench will serve as a catalyst for the development of the next generation of machine learning techniques within the seismological community. The SeisBench source code will be published with an open license and explicitly encourages community involvement.
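The following is a hedged sketch of what using such a unified API might look like: loading a benchmark dataset and applying a pre-trained picker to waveform data. The specific class, dataset and weight names below are assumptions and should be verified against the SeisBench documentation.

```python
# Hedged sketch of a SeisBench-style workflow; names are assumptions,
# check the SeisBench documentation for the actual API.
import seisbench.data as sbd
import seisbench.models as sbm
from obspy import read

# Load a benchmark dataset of labelled waveforms through the unified API
data = sbd.ETHZ()                               # assumed benchmark dataset class
print(data.metadata.head())

# Load a published model with pre-trained weights and apply it to a stream
model = sbm.PhaseNet.from_pretrained("ethz")    # assumed model/weights identifiers
stream = read("example_waveform.mseed")         # hypothetical local waveform file
annotations = model.annotate(stream)            # continuous phase-probability traces
print(annotations)
```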

