Machine Learning in Agriculture: A Review

Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2674 ◽  
Author(s):  
Konstantinos Liakos ◽  
Patrizia Busato ◽  
Dimitrios Moshou ◽  
Simon Pearson ◽  
Dionysis Bochtis

Machine learning has emerged alongside big data technologies and high-performance computing to create new opportunities for data-intensive science in the multi-disciplinary agri-technologies domain. In this paper, we present a comprehensive review of research dedicated to applications of machine learning in agricultural production systems. The works analyzed were categorized into (a) crop management, including applications in yield prediction, disease detection, weed detection, crop quality, and species recognition; (b) livestock management, including applications in animal welfare and livestock production; (c) water management; and (d) soil management. The filtering and classification of the presented articles demonstrate how agriculture will benefit from machine learning technologies. By applying machine learning to sensor data, farm management systems are evolving into real-time, artificial-intelligence-enabled programs that provide rich recommendations and insights for farmer decision support and action.

Author(s):  
Francis J Alexander ◽  
James Ang ◽  
Jenna A Bilbrey ◽  
Jan Balewski ◽  
Tiernan Casey ◽  
...  

Rapid growth in data, computational methods, and computing power is driving a remarkable revolution in what is variously termed machine learning (ML), statistical learning, computational learning, and artificial intelligence. In addition to highly visible successes in machine-based natural language translation, playing the game Go, and self-driving cars, these new technologies also have profound implications for computational and experimental science and engineering, as well as for the exascale computing systems that the Department of Energy (DOE) is developing to support those disciplines. Not only do these learning technologies open up exciting opportunities for scientific discovery on exascale systems, they also appear poised to have important implications for the design and use of exascale computers themselves, including high-performance computing (HPC) for ML and ML for HPC. The overarching goal of the ExaLearn co-design project is to provide exascale ML software for use by Exascale Computing Project (ECP) applications, other ECP co-design centers, and DOE experimental facilities and leadership class computing facilities.


2020 ◽  
Vol 10 (7) ◽  
pp. 2401 ◽  
Author(s):  
Ditsuhi Iskandaryan ◽  
Francisco Ramos ◽  
Sergio Trilles

The influence of machine learning technologies is rapidly increasing, penetrating almost every field, and air pollution prediction is no exception. This paper reviews studies on air pollution prediction using machine learning algorithms based on sensor data in the context of smart cities. The most relevant papers were selected by searching the most popular databases and applying the corresponding filters. After thoroughly reviewing these papers, their main features were extracted, which served as a basis for linking and comparing them to each other. As a result, we can conclude that: (1) rather than simple machine learning techniques, authors now apply advanced and sophisticated techniques; (2) China was the leading country in terms of case studies; (3) particulate matter with a diameter of 2.5 micrometers (PM2.5) was the main prediction target; (4) in 41% of the publications the authors carried out the prediction for the next day; (5) 66% of the studies used data at an hourly rate; (6) 49% of the papers used open data, a share that has tended to increase since 2016; and (7) for efficient air quality prediction it is important to consider external factors such as weather conditions, spatial characteristics, and temporal features.


2020 ◽  
Vol 50 (1) ◽  
pp. 1-25 ◽  
Author(s):  
Changwon Suh ◽  
Clyde Fare ◽  
James A. Warren ◽  
Edward O. Pyzer-Knapp

Machine learning, applied to chemical and materials data, is transforming the field of materials discovery and design, yet significant work is still required to fully take advantage of machine learning algorithms, tools, and methods. Here, we review the accomplishments to date of the community and assess the maturity of state-of-the-art, data-intensive research activities that combine perspectives from materials science and chemistry. We focus on three major themes—learning to see, learning to estimate, and learning to search materials—to show how advanced computational learning technologies are rapidly and successfully used to solve materials and chemistry problems. Additionally, we discuss a clear path toward a future where data-driven approaches to materials discovery and design are standard practice.


2020 ◽  
Vol 24 (5) ◽  
pp. 709-722
Author(s):  
Kieran Woodward ◽  
Eiman Kanjo ◽  
Andreas Oikonomou ◽  
Alan Chamberlain

In recent years, machine learning has developed rapidly, enabling applications with high levels of recognition accuracy for speech and images. However, other types of data to which these models can be applied have not yet been explored as thoroughly. Labelling is an indispensable stage of data pre-processing that can be particularly challenging, especially when applied to single- or multi-modal real-time sensor data collection. Currently, real-time sensor data labelling is an unwieldy process, with a limited range of tools available and poor performance characteristics, which can compromise the performance of the resulting machine learning models. In this paper, we introduce new techniques for labelling at the point of collection, coupled with a pilot study and a systematic performance comparison of two popular types of deep neural networks running on five custom-built devices and a comparative mobile app (68.5–89% accuracy for the within-device GRU model; 92.8% highest LSTM model accuracy). These devices are designed to enable real-time labelling with various buttons, slide potentiometers, and force sensors. This exploratory work illustrates several key features that inform the design of data collection tools and can help researchers select and apply appropriate labelling techniques to their work. We also identify common bottlenecks in each architecture and provide field-tested guidelines to assist in building adaptive, high-performance edge solutions.
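The core of labelling at the point of collection is aligning timestamped sensor readings with label events generated by physical inputs such as buttons or sliders. The sketch below pairs each sample with the nearest-in-time label event inside a tolerance window; the function name, window size, and data layout are illustrative assumptions, not the authors' implementation.

```python
from bisect import bisect_left

def label_samples(samples, label_events, max_gap=0.5):
    """Attach to each (timestamp, reading) sample the label of the nearest
    button/slider event, provided it occurred within max_gap seconds.
    Unmatched samples are labelled None for later review or discard."""
    times = [t for t, _ in label_events]
    labelled = []
    for t, reading in samples:
        i = bisect_left(times, t)
        # Candidates: the event just before and just after the sample
        candidates = [label_events[j] for j in (i - 1, i)
                      if 0 <= j < len(label_events)]
        best = min(candidates, key=lambda e: abs(e[0] - t), default=None)
        label = best[1] if best and abs(best[0] - t) <= max_gap else None
        labelled.append((t, reading, label))
    return labelled

samples = [(0.0, 1.2), (1.0, 3.4), (2.0, 0.7), (5.0, 2.2)]
events = [(0.9, "stress"), (2.1, "calm")]
print(label_samples(samples, events))
# → [(0.0, 1.2, None), (1.0, 3.4, 'stress'), (2.0, 0.7, 'calm'), (5.0, 2.2, None)]
```

The tolerance window is the key design choice: too tight and most samples stay unlabelled; too loose and a label bleeds onto unrelated readings.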


2021 ◽  
Vol 11 (24) ◽  
pp. 11910
Author(s):  
Dalia Mahmoud ◽  
Marcin Magolon ◽  
Jan Boer ◽  
M.A Elbestawi ◽  
Mohammad Ghayoomi Mohammadi

One of the main issues hindering the adoption of parts produced using laser powder bed fusion (L-PBF) in safety-critical applications is inconsistency in quality levels. Furthermore, the complicated nature of the L-PBF process makes optimizing process parameters to reduce these defects experimentally challenging and computationally expensive. To address this issue, sensor-based monitoring of the L-PBF process has gained increasing attention in recent years. Moreover, integrating machine learning (ML) techniques to analyze the collected sensor data has significantly improved defect detection, with the aim of applying online control. This article provides a comprehensive review of the latest applications of ML for in situ monitoring and control of the L-PBF process. First, the main L-PBF process signatures are described, and the suitable sensors and specifications that can monitor each signature are reviewed. Next, the most common ML approaches and algorithms employed in L-PBF are summarized. Then, an extensive comparison of the different ML algorithms used for defect detection in the L-PBF process is presented. The article then describes the ultimate goal of applying ML algorithms to in situ sensor data, which is closing the loop and taking online corrective actions. Finally, current challenges and ideas for future work are described to provide a perspective on future directions for research on ML applications for defect detection and control in L-PBF processes.


2021 ◽  
Vol 4 (3) ◽  
pp. 40
Author(s):  
Abdul Majeed

During the ongoing pandemic of the novel coronavirus disease 2019 (COVID-19), the latest technologies, such as artificial intelligence (AI), blockchain, learning paradigms (machine, deep, smart, few-shot, extreme learning, etc.), high-performance computing (HPC), the Internet of Medical Things (IoMT), and Industry 4.0, have played a vital role. These technologies helped to contain the disease's spread by predicting contaminated people and places, as well as forecasting future trends. In this article, we provide insights into the applications of machine learning (ML) and high-performance computing (HPC) in the era of COVID-19. We discuss the person-specific data that are being collected to lower the spread of COVID-19 and highlight the remarkable opportunities they provide for knowledge extraction leveraging low-cost ML and HPC techniques. We demonstrate the role of ML and HPC in the context of the COVID-19 era through successful implementations or propositions in three contexts: (i) ML and HPC use in the data life cycle, (ii) ML and HPC use in analytics on COVID-19 data, and (iii) general-purpose applications of both techniques in the COVID-19 arena. In addition, we discuss privacy and security issues and the architecture of a prototype system to demonstrate the proposed research. Finally, we discuss the challenges of the available data and highlight the issues that hinder the applicability of ML and HPC solutions to it.


Amicus Curiae ◽  
2020 ◽  
Vol 1 (3) ◽  
pp. 338-360
Author(s):  
Jamie Grace ◽  
Roxanne Bamford

Policymaking is increasingly being informed by ‘big data’ technologies of analytics, machine learning and artificial intelligence (AI). John Rawls used particular principles of reasoning in his 1971 book, A Theory of Justice, which might help explore known problems of data bias, unfairness, accountability and privacy, in relation to applications of machine learning and AI in government. This paper will investigate how the current assortment of UK governmental policy and regulatory developments around AI in the public sector could be said to meet, or not meet, these Rawlsian principles, and what we might do better by incorporating them when we respond legislatively to this ongoing challenge. This paper uses a case study of data analytics and machine-learning regulation as the central means of this exploration of Rawlsian thinking in relation to the redevelopment of algorithmic governance.


2020 ◽  
Vol 4 (4) ◽  
pp. 108
Author(s):  
Bastian Engelmann ◽  
Simon Schmitt ◽  
Eddi Miller ◽  
Volker Bräutigam ◽  
Jan Schmitt

The performance indicator Overall Equipment Effectiveness (OEE) is one of the most important indicators for production control, as it merges information on equipment usage, process yield, and product quality. The determination of the OEE is often not transparent in companies, due to heterogeneous data sources and manual intervention. Furthermore, existing guidelines differ in how the OEE is calculated. Given the large amount of sensor data in Cyber-Physical Production Systems, Machine Learning methods can be used to detect several elements of the OEE with a trained model. Changeover time is one crucial aspect influencing the OEE, as it adds no value to the product. Furthermore, changeover processes are performed manually and vary from worker to worker: each worker has their own procedure for changing over a machine to a new product or production lot. Hence, both the changeover time and the process itself vary. Thus, a new Machine Learning-based concept for the identification and characterization of machine set-up actions is presented. A central issue here is the need for human and machine interaction to complete the entire machine set-up process. The paper therefore presents a use case in a real production scenario at a small or medium-sized enterprise (SME), the derived data set, promising Machine Learning algorithms, and the results of the implemented Machine Learning model for classifying machine set-up actions.
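Once set-up actions have been classified, they can be aggregated into a changeover-time figure for the OEE. The sketch below sums the duration of all non-production intervals in a chronological action log; the action labels, log format, and function name are hypothetical illustrations, not taken from the paper.

```python
def changeover_time(actions, idle_label="production"):
    """Given a chronological list of (timestamp_seconds, predicted_action)
    pairs from a set-up-action classifier, return the total time spent in
    set-up actions, i.e. anything that is not regular production."""
    total = 0.0
    for (t0, a0), (t1, _) in zip(actions, actions[1:]):
        if a0 != idle_label:
            total += t1 - t0  # interval [t0, t1) carries action a0
    return total

log = [(0, "production"), (10, "tool_change"), (25, "cleaning"),
       (40, "calibration"), (55, "production"), (90, "production")]
print(changeover_time(log))  # → 45.0
```

Each interval is attributed to the action predicted at its start, so classifier errors translate directly into changeover-time error, which is why the paper's focus on classification quality matters for the OEE.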


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4430 ◽  
Author(s):  
Anyi Li ◽  
Xiaohui Yang ◽  
Huanyu Dong ◽  
Zihao Xie ◽  
Chunsheng Yang

Emerging prognostics and health management (PHM) technology has recently attracted a great deal of attention from academia, industry, and government. The need for higher equipment availability and lower maintenance cost is driving the development and integration of prognostics and health management systems. PHM models depend on smart sensors and the data they generate. This paper proposes machine learning-based methods for developing PHM models from sensor data to perform fault diagnostics for transformer systems in a smart grid. In particular, we apply the Cuckoo Search (CS) algorithm to optimize a Back-propagation (BP) neural network in order to build high-performance fault diagnostics models. The models were developed using sensor data on gases dissolved in the oil of power transformers. We validated the models using real sensor data collected from power transformers in China. The results demonstrate that the developed metaheuristic algorithm for optimizing the parameters of the neural network is effective and useful, and that machine learning-based models significantly improve the performance and accuracy of fault diagnosis/detection for power transformer PHM.
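The core idea of Cuckoo Search, Lévy-flight exploration around good solutions combined with abandonment of the worst nests, can be sketched in a few lines. This is a minimal toy version minimizing a stand-in loss function; the real work optimizes BP neural network parameters against dissolved-gas training error, so the loss function, dimensions, and hyperparameters below are illustrative assumptions.

```python
import math
import random

random.seed(0)

def loss(w):
    # Toy stand-in for the BP network's training error on dissolved-gas data
    return sum((x - 0.5) ** 2 for x in w)

def levy_step(scale=0.01):
    # Mantegna-style heavy-tailed step, characteristic of cuckoo search
    beta = 1.5
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return scale * u / abs(v) ** (1 / beta)

def cuckoo_search(dim=4, n_nests=15, iters=200, pa=0.25):
    nests = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_nests)]
    best = min(nests, key=loss)
    for _ in range(iters):
        # New candidate solution via a Levy flight around the current best
        new = [b + levy_step() for b in best]
        j = random.randrange(n_nests)
        if loss(new) < loss(nests[j]):
            nests[j] = new
        # Abandon a fraction pa of the worst nests, replacing them randomly
        nests.sort(key=loss)
        for k in range(int(pa * n_nests)):
            nests[-1 - k] = [random.uniform(-1, 1) for _ in range(dim)]
        best = min(nests + [best], key=loss)
    return best

solution = cuckoo_search()
print(round(loss(solution), 4))
```

In the CS-BP pairing, each nest would hold a full vector of network weights (or hyperparameters), with the network's validation error as the loss; the Lévy flights help escape the local minima that plain back-propagation is prone to.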


2021 ◽  
Vol 12 ◽  
Author(s):  
Lei Xu ◽  
Xiaoqing Ru ◽  
Rong Song

Exploring drug–target interactions through biomedical experiments requires substantial human, financial, and material resources. To save time and cost, machine learning methods have been introduced into the prediction of drug–target interactions. The large amount of drug and target data available in existing databases, evolving and innovative computer technologies, and the inherent characteristics of various types of machine learning have made machine learning techniques the mainstream method for drug–target interaction prediction research. In this review, specific applications of machine learning in drug–target interaction prediction are summarized in detail, the characteristics of each algorithm are analyzed, and issues that need to be further addressed and explored in future research are discussed. The aim of this review is to provide a sound basis for the construction of high-performance models.

