Machine learning in men’s professional football: Current applications and future directions for improving attacking play

2019 · Vol 14 (6) · pp. 798-817
Author(s):  
Mat Herold, Floris Goes, Stephan Nopp, Pascal Bauer, Chris Thompson, ...

It is common practice amongst coaches and analysts to search for key performance indicators related to attacking play in football. Match analysis in professional football has predominantly relied on notational analysis, a statistical summary of events based on video footage, to study the sport and prepare teams for competition. Recent technological advances have facilitated the dynamic analysis of more complex process variables, giving practitioners the potential to quickly evaluate a match with consideration of contextual parameters. One such field of research, machine learning, is a form of artificial intelligence that uses algorithms to detect meaningful patterns in data such as player positions. Machine learning is a relatively new concept in football, and little is known about its usefulness in identifying the performance metrics that determine match outcome. Few studies, and no reviews, have focused on the use of machine learning to improve tactical knowledge and performance; instead they have focused on the models used, or on machine learning as a prediction method. Accordingly, this article provides a critical appraisal of the application of machine learning to attacking play in football, discussing current challenges and future directions that may provide deeper insight to practitioners.

With the rapid development of artificial intelligence, a variety of machine learning algorithms have been applied to the task of football match result prediction, with some success. However, traditional machine learning methods usually upload the results of previous matches to a cloud server in a centralized manner, which causes problems such as network congestion, server computing pressure, and computing delay. This paper proposes a football match result prediction method based on edge computing and machine learning. Specifically, we first extract match data from the results of previous games to construct common features and characteristic features, respectively. Then, the feature extraction and classification tasks are deployed to multiple edge nodes. Finally, the results from all edge nodes are uploaded to the cloud server and fused to make a decision. Experimental results demonstrate the effectiveness of the proposed method.
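The edge-then-fuse pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the node count, the toy rule-based per-node classifier, the feature names, and the majority-vote fusion rule are all assumptions standing in for the trained models and fusion strategy the paper actually uses.

```python
# Sketch: each (simulated) edge node classifies a match from its own
# feature slice; the cloud server fuses the per-node predictions.
from collections import Counter

def edge_node_predict(features: dict) -> str:
    # Toy rule standing in for a per-node ML model: favour the home side
    # when it dominates shots and possession (illustrative features only).
    score = 0
    score += 1 if features.get("home_shots", 0) > features.get("away_shots", 0) else -1
    score += 1 if features.get("home_possession", 0) > 50 else -1
    return "home_win" if score > 0 else "away_win"

def cloud_fuse(node_predictions: list) -> str:
    # Cloud-side fusion; majority vote is one simple decision rule.
    return Counter(node_predictions).most_common(1)[0][0]

# Three edge nodes, each seeing a different slice of the match data.
nodes = [
    {"home_shots": 14, "away_shots": 9,  "home_possession": 58},
    {"home_shots": 12, "away_shots": 10, "home_possession": 61},
    {"home_shots": 8,  "away_shots": 11, "home_possession": 47},
]
prediction = cloud_fuse([edge_node_predict(n) for n in nodes])
print(prediction)  # prints "home_win": two of the three nodes favour the home side
```

Pushing `edge_node_predict` to the nodes and sending only the label (not the raw match data) to the cloud is what relieves the network congestion and server load the abstract mentions.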


Geosciences · 2019 · Vol 9 (12) · pp. 504
Author(s):  
Josephine Morgenroth, Usman T. Khan, Matthew A. Perras

Machine learning methods for data processing are gaining momentum in many geoscience industries. This includes the mining industry, where machine learning is primarily applied to autonomously driven vehicles such as haul trucks, and to ore body and resource delineation. However, machine learning applications in the rock engineering literature are relatively recent, despite having been widely used and generally accepted for decades in other risk-assessment-type design areas, such as flood forecasting. Operating mines and underground infrastructure projects collect more instrumentation data than ever before; however, only a small fraction of the useful information is typically extracted for rock engineering design, and there is often insufficient time to investigate complex rock mass phenomena in detail. This paper presents a summary of current practice in rock engineering design, as well as a review of the literature and methods at the intersection of machine learning and rock engineering. It identifies gaps, such as standards for architecture, input selection, and performance metrics, as well as areas for future work. These gaps present an opportunity to define a framework for integrating machine learning into conventional rock engineering design methodologies, making them more rigorous and reliable in predicting the probable underlying physical mechanics and phenomena.


2022
Author(s):  
Zhiheng Zhong, Minxian Xu, Maria Alejandra Rodriguez, Chengzhong Xu, Rajkumar Buyya

Containerization is a lightweight application virtualization technology that provides high environmental consistency, operating system distribution portability, and resource isolation. Mainstream cloud service providers have widely adopted container technologies in their distributed system infrastructures for automated application management. To handle the automated deployment, maintenance, autoscaling, and networking of containerized applications, container orchestration has been proposed as an essential research problem. However, the highly dynamic and diverse nature of cloud workloads and environments considerably raises the complexity of orchestration mechanisms. Machine learning algorithms are accordingly employed by container orchestration systems for behavior modelling and the prediction of multi-dimensional performance metrics. Such insights can further improve the quality of resource provisioning decisions in response to changing workloads in complex environments. In this paper, we present a comprehensive literature review of existing machine learning-based container orchestration approaches. Detailed taxonomies are proposed to classify the current research by common features. Moreover, the evolution of machine learning-based container orchestration technologies from 2016 to 2021 is traced based on objectives and metrics. A comparative analysis of the reviewed techniques is conducted according to the proposed taxonomies, with emphasis on their key characteristics. Finally, various open research challenges and potential future directions are highlighted.


Author(s):  
Tarik Alafif, Abdul Muneeim Tehame, Saleh Bajaba, Ahmed Barnawi, Saad Zia

With many success stories, machine learning (ML) and deep learning (DL) have become widely used in our everyday lives in a number of ways. They have also been instrumental in tackling the outbreak of Coronavirus disease (COVID-19). The COVID-19 epidemic, induced by the SARS-CoV-2 virus, has spread rapidly, leading to international outbreaks, and the fight to curb the spread of the disease involves most states, companies, and scientific research institutions. In this research, we look at Artificial Intelligence (AI)-based ML and DL methods for COVID-19 diagnosis and treatment. Furthermore, we summarize the ML and DL methods used in the battle against COVID-19, along with the available datasets, tools, and their performance. This survey offers a detailed overview of existing state-of-the-art methodologies for ML and DL researchers and the wider health community, with descriptions of how ML, DL, and data can improve the COVID-19 situation and motivate further studies aimed at preventing future outbreaks. Details of challenges and future directions are also provided.


Author(s):  
Yousef O. Sharrab, Mohammad Alsmirat, Bilal Hawashin, Nabil Sarhan

The advancement of prediction models in a variety of fields is a result of machine learning approaches, and utilizing such modeling in feature engineering is especially valuable. In this research, we show how to use machine learning to save time in research experiments: we save more than five thousand hours of measuring the energy consumption of video encoding. Since measuring energy consumption must be performed manually, and since more than eleven thousand experiments would be required to cover all combinations of video sequences, video bit rates, and video encoding settings, we use machine learning to model energy consumption using linear regression. The VP8 codec was offered by Google as an open video encoder in an effort to replace the popular MPEG-4 Part 10 (H.264/AVC) video encoding standard. This research models energy consumption and describes the major differences between the H.264/AVC and VP8 encoders in terms of energy consumption and performance, through experiments based on machine learning modeling. Twenty-nine raw video sequences are used, offering a wide range of resolutions and contents, with frame sizes ranging from QCIF (176x144) to 2160p (3840x2160). For fairness in the comparative analysis, we use seven settings in the VP8 encoder and fifteen types of tuning in H.264/AVC; the settings cover various video qualities. The performance metrics include video quality, encoding time, and encoding energy consumption.
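The interpolation idea behind the abstract's time saving can be sketched with ordinary least squares: fit a linear model on a subset of measured encodings, then predict the energy of unmeasured configurations instead of running them. The feature set (frame pixels, bitrate, preset index) and the synthetic measurements below are illustrative assumptions, not the study's actual data.

```python
# Sketch: linear regression of encoding energy on video/encoder parameters.
import numpy as np

# Each row: [pixels (millions), bitrate (Mbps), preset index] for one run.
X = np.array([
    [0.025, 0.5, 1],   # QCIF-like sequence
    [0.9,   2.0, 3],   # 720p-like sequence
    [2.1,   4.0, 5],   # 1080p-like sequence
    [8.3,   8.0, 7],   # 2160p-like sequence
])
y = np.array([1.2, 14.5, 38.0, 160.0])  # measured energy (J), synthetic here

# Fit y ≈ X·w + b via ordinary least squares (intercept as an extra column).
A = np.hstack([X, np.ones((X.shape[0], 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_energy(pixels_m: float, bitrate_mbps: float, preset: int) -> float:
    return float(np.dot(w, [pixels_m, bitrate_mbps, preset, 1.0]))

# Predicting an already-measured configuration recovers its value; new
# configurations are interpolated instead of being encoded and measured.
print(round(predict_energy(2.1, 4.0, 5), 1))  # prints 38.0
```

In the study's setting, a model like this trained on a measured subset replaces the remaining thousands of manual energy measurements.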


2021 · Vol 54 (3) · pp. 1-47
Author(s):  
Bushra Sabir, Faheem Ullah, M. Ali Babar, Raj Gaire

Context: Research at the intersection of cybersecurity, Machine Learning (ML), and Software Engineering (SE) has recently taken significant steps in proposing countermeasures for detecting sophisticated data exfiltration attacks. It is important to systematically review and synthesize ML-based data exfiltration countermeasures to build a body of knowledge on this important topic. Objective: This article aims to systematically review ML-based data exfiltration countermeasures to identify and classify the ML approaches, feature engineering techniques, evaluation datasets, and performance metrics used for these countermeasures. This review also aims to identify gaps in research on ML-based data exfiltration countermeasures. Method: We used the Systematic Literature Review (SLR) method to select and review 92 papers. Results: The review has enabled us to: (a) classify the ML approaches used in the countermeasures into data-driven and behavior-driven approaches; (b) categorize features into six types: behavioral, content-based, statistical, syntactical, spatial, and temporal; (c) classify the evaluation datasets into simulated, synthesized, and real datasets; and (d) identify 11 performance measures used by these studies. Conclusion: We conclude that: (i) the integration of data-driven and behavior-driven approaches should be explored; (ii) there is a need to develop high-quality, large evaluation datasets; (iii) incremental ML model training should be incorporated into countermeasures; (iv) resilience to adversarial learning should be considered and explored during the development of countermeasures to avoid poisoning attacks; and (v) the use of automated feature engineering should be encouraged for efficiently detecting data exfiltration attacks.


Author(s):  
Aya F. Jabbar, Imad J. Mohammed

<p><span>A botnet is one of many attacks that can execute malicious tasks and that develops continuously. This research therefore introduces a comparison framework, called BotDetectorFW, with classification and complexity improvements for the detection of botnet attacks using the CICIDS2017 dataset, a freely available online dataset consisting of several attacks with high-dimensional features. Feature selection is a significant step towards obtaining the fewest features by eliminating irrelevant ones, consequently reducing detection time. This process is implemented inside BotDetectorFW in two steps: data clustering and five distance measure formulas (cosine, dice, driver &amp; kroeber, overlap, and pearson correlation) implemented in C#, followed by selecting the best N features, which are used as input to four classifier algorithms evaluated using machine learning (WEKA): multilayer perceptron, JRip, IBk, and random forest. In BotDetectorFW, the thoughtful and diligent cleaning of the dataset in the preprocessing stage, together with normalization, binary clustering of its features, the adaptation of feature selection based on suitable feature distance techniques, and the testing of the selected classification algorithms, all contributed to achieving high performance metrics with fewer features (as few as 8) than other methods in the literature that used 10 or more features on the same dataset. Furthermore, the results and performance evaluation of BotDetectorFW show a competitive impact in terms of classification accuracy (ACC), precision (Pr), recall (Rc), and f-measure (F1) metrics.</span></p>
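The distance-based feature selection step can be sketched as scoring each feature column by its cosine similarity to the label vector and keeping the N most label-aligned features. This is a hedged illustration of the general idea only: the toy traffic records, the choice of cosine (one of the framework's five measures), and the scoring-against-labels formulation are assumptions, not BotDetectorFW's exact clustering-based procedure.

```python
# Sketch: rank features by |cosine similarity| to the class labels,
# then keep the top N as input to downstream classifiers.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def select_top_features(samples, labels, n):
    # samples: list of feature-value rows; labels: 0/1 (benign/botnet).
    n_features = len(samples[0])
    scored = []
    for j in range(n_features):
        column = [row[j] for row in samples]
        scored.append((abs(cosine_similarity(column, labels)), j))
    scored.sort(reverse=True)          # most label-aligned features first
    return [j for _, j in scored[:n]]  # indices of the best n features

# Toy traffic records: [flow_duration, packet_count, bytes_per_packet].
samples = [[0.1, 2, 500], [0.2, 3, 480], [9.0, 90, 60], [8.5, 85, 55]]
labels = [0, 0, 1, 1]  # 1 = botnet-like flow
print(select_top_features(samples, labels, 2))  # prints [0, 1]
```

Swapping `cosine_similarity` for dice, overlap, or another of the five measures changes only the scoring function; the select-best-N structure stays the same, which is what lets the framework compare the measures fairly.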

