A review on a machine learning approach of an intelligent irrigation monitoring system with edge computing and the internet of things

2021, Vol. 896 (1), pp. 012029
Author(s): L R Loua, M A Budihardjo, S Sudarno

Abstract: Water consumption during irrigation has been a much-researched area in agricultural activities, and due to the inefficient nature of many practiced irrigation systems, a substantial amount of water is wasted. As a result, intelligent systems have been designed to integrate water-saving techniques and climatic data collection to improve irrigation. An innovative decision-making system was developed in which an ontology contributes 50% of the decision and sensor values contribute the remaining 50%. Collectively, the system bases its decision on a KNN machine learning algorithm for irrigation scheduling. It also uses two different database servers, an edge server and an IoT server, along with a GSM module, to reduce the burden of data transmission while also reducing latency. With this method, the sensors could trace and analyze the data within the network using the edge server before transferring it to the IoT server for future watering requirements. The water-saving technique ensured that the crops obtained the amount of water required for growth and prevented the soil from reaching its wilting point. Furthermore, the reduced irrigation water also limits potential runoff events. The results were displayed using an Android application.
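
As a rough sketch of the kind of KNN-based scheduling decision described above, the toy example below classifies hypothetical sensor readings (soil moisture, air temperature, humidity) into an irrigate/skip decision; the feature set, values, and labels are invented for illustration, not taken from the paper.

```python
# Minimal sketch of KNN-based irrigation scheduling on hypothetical
# sensor features; the paper's exact features and data are not given.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical historical readings: [soil_moisture_%, air_temp_C, humidity_%]
X = np.array([[12, 34, 40], [45, 22, 70], [18, 30, 35],
              [50, 20, 80], [15, 33, 30], [48, 25, 75]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = irrigate, 0 = skip

model = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# A new edge-server reading is classified locally before syncing to the IoT server.
reading = np.array([[20, 31, 38]])
print("irrigate" if model.predict(reading)[0] == 1 else "skip")
```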

2019, Vol. 109 (05), pp. 352-357
Author(s): C. Brecher, L. Gründel, L. Lienenlüke, S. Storms

The position control of conventional industrial robots is not designed for the dynamic milling process. One possibility for optimizing the behavior of the control loops is model-based feed-forward torque control, which in this work is extended by a machine learning approach owing to its many advantages. The implementation in Matlab and the simulative evaluation are explained, which subsequently confirm the potential of this concept.
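
A minimal sketch of the idea, not the authors' Matlab implementation: a regressor is fitted to map joint states to torques on synthetic stand-in dynamics, and its prediction would serve as the feed-forward term added to the position controller's output.

```python
# Sketch of learning a feed-forward torque model from joint states on
# synthetic data; the robot model and dynamics below are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical samples: [joint angle, velocity, acceleration] -> torque
q, qd, qdd = rng.uniform(-1, 1, (3, 2000))
tau = 5.0 * qdd + 0.8 * qd + 2.0 * np.sin(q)  # stand-in rigid-body dynamics
X = np.column_stack([q, qd, qdd])

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X, tau)

# The learned torque would be added as a feed-forward term to the
# position controller's output for a commanded trajectory point.
print(model.predict([[0.2, 0.5, 1.0]]))
```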


Author(s): X.-F. Xing, M. A. Mostafavi, G. Edwards, N. Sabo

Abstract: Automatic semantic segmentation of point clouds observed in a complex 3D urban scene is a challenging issue. Semantic segmentation of urban scenes based on machine learning algorithms requires appropriate features to distinguish objects in mobile terrestrial and airborne LiDAR point clouds at the point level. In this paper, we propose a pointwise semantic segmentation method based on features derived from the Difference of Normals, "directional height above" features that compare the height difference between a given point and its neighbors in eight directions, and features based on normal estimation. A random forest classifier is chosen to classify points in mobile terrestrial and airborne LiDAR point clouds. The results of our experiments show that the proposed features are effective for semantic segmentation of mobile terrestrial and airborne LiDAR point clouds, especially for the vegetation, building, and ground classes in airborne LiDAR point clouds of urban areas.
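
A rough sketch of the "directional height above" feature as described, assuming a point cloud given as an (N, 3) NumPy array; the sector binning, search radius, and toy labels are assumptions, and the paper's Difference of Normals features are omitted.

```python
# Sketch of a "directional height above" feature plus random forest,
# on a toy point cloud; details differ from the paper's implementation.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def directional_height(points, radius=1.0):
    """For each point, height difference to the highest neighbor found
    in each of eight horizontal direction sectors."""
    tree = cKDTree(points[:, :2])
    feats = np.zeros((len(points), 8))
    for i, p in enumerate(points):
        for j in tree.query_ball_point(p[:2], radius):
            if j == i:
                continue
            d = points[j, :2] - p[:2]
            sector = int(((np.arctan2(d[1], d[0]) + np.pi) / (2 * np.pi)) * 8) % 8
            feats[i, sector] = max(feats[i, sector], points[j, 2] - p[2])
    return feats

pts = np.random.rand(500, 3) * [10, 10, 3]   # toy cloud, 10 m x 10 m x 3 m
X = directional_height(pts)
y = (pts[:, 2] > 1.5).astype(int)            # toy "elevated object" labels
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
```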


2017
Author(s): Aymen A. Elfiky, Maximilian J. Pany, Ravi B. Parikh, Ziad Obermeyer

Abstract:
Background: Cancer patients who die soon after starting chemotherapy incur costs of treatment without benefits. Accurately predicting mortality risk from chemotherapy is important, but few patient data-driven tools exist. We sought to create and validate a machine learning model predicting mortality for patients starting new chemotherapy.
Methods: We obtained electronic health records for patients treated at a large cancer center (26,946 patients; 51,774 new regimens) over 2004-14, linked to Social Security data for date of death. The model was derived using 2004-11 data, and performance measured on non-overlapping 2012-14 data.
Findings: 30-day mortality from chemotherapy start was 2.1%. Common cancers included breast (21.1%), colorectal (19.3%), and lung (18.0%). Model predictions were accurate for all patients (AUC 0.94). Predictions for patients starting palliative chemotherapy (46.6% of regimens), for whom prognosis is particularly important, remained highly accurate (AUC 0.92). To illustrate model discrimination, we ranked patients initiating palliative chemotherapy by model-predicted mortality risk and calculated observed mortality by risk decile. 30-day mortality in the highest-risk decile was 22.6%; in the lowest-risk decile, no patients died. Predictions remained accurate across all primary cancers, stages, and chemotherapies, even for clinical trial regimens that first appeared in years after the model was trained (AUC 0.94). The model also performed well for prediction of 180-day mortality (AUC 0.87; mortality 74.8% in the highest-risk decile vs. 0.2% in the lowest). Predictions were more accurate than data from randomized trials of individual chemotherapies or SEER estimates.
Interpretation: A machine learning algorithm accurately predicted short-term mortality in patients starting chemotherapy using EHR data. Further research is necessary to determine generalizability and the feasibility of applying this algorithm in clinical settings.
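
The evaluation reported above (AUC plus observed mortality by model-predicted risk decile) can be illustrated on synthetic data; the study's EHR features and fitted model are not public, so the risk scores and outcomes below are simulated.

```python
# Sketch of AUC and risk-decile evaluation on simulated predictions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
risk = rng.uniform(0, 1, 10000)          # simulated model-predicted risk
died = rng.binomial(1, risk * 0.04)      # simulated 30-day outcomes

print("AUC:", round(roc_auc_score(died, risk), 3))

# Observed mortality within each model-predicted risk decile.
deciles = np.digitize(risk, np.quantile(risk, np.linspace(0.1, 0.9, 9)))
for d in range(10):
    print(f"decile {d}: observed mortality {died[deciles == d].mean():.3%}")
```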


2021
Author(s): Marian Popescu, Rebecca Head, Tim Ferriday, Kate Evans, Jose Montero, ...

Abstract: This paper presents advancements in machine learning and cloud deployment that enable rapid and accurate automated lithology interpretation. A supervised machine learning technique is described that enables rapid, consistent, and accurate lithology prediction alongside quantitative uncertainty from large wireline or logging-while-drilling (LWD) datasets. To leverage supervised machine learning, a team of geoscientists and petrophysicists made detailed lithology interpretations of wells to generate a comprehensive training dataset. Lithology interpretations were based on deterministic cross-plotting, utilizing and combining various raw logs. This training dataset was used to develop a model and test a machine learning pipeline. The pipeline was applied to a dataset previously unseen by the algorithm to predict lithology. A quality-checking process was performed by a petrophysicist to validate new predictions delivered by the pipeline against human interpretations. Confidence in the interpretations was assessed in two ways: the prior probability, a measure of confidence in the input data being recognized by the model, and the posterior probability, which quantifies the likelihood that a specified depth interval comprises a given lithology. The supervised machine learning algorithm ensured that the wells were interpreted consistently by removing interpreter biases and inconsistencies. The scalability of cloud computing enabled a large log dataset to be interpreted rapidly; >100 wells were interpreted consistently in five minutes, yielding a >70% lithological match to the human petrophysical interpretation. Supervised machine learning methods have strong potential for classifying lithology from log data because: 1) they can automatically define complex, non-parametric, multi-variate relationships across several input logs; and 2) they allow classifications to be quantified confidently. Furthermore, this approach captured the knowledge and nuances of an interpreter's decisions by training the algorithm on human-interpreted labels. In the hydrocarbon industry, the quantity of generated data is predicted to increase by >300% between 2018 and 2023 (IDC, Worldwide Global DataSphere Forecast, 2019–2023). Additionally, the industry holds vast legacy data. This supervised machine learning approach can unlock the potential of some of these datasets by providing consistent lithology interpretations rapidly, allowing resources to be used more effectively.
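
A minimal sketch of such a supervised pipeline with per-interval posterior probabilities, assuming hypothetical log curves (GR, RHOB, NPHI, DT) and synthetic labels; the authors' cloud pipeline and model are not reproduced.

```python
# Sketch of a supervised lithology classifier that also reports the
# posterior probability of each lithology per depth interval.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 4))   # hypothetical [GR, RHOB, NPHI, DT]
y_train = rng.integers(0, 3, 5000)     # e.g. sand / shale / carbonate labels

pipe = make_pipeline(StandardScaler(),
                     RandomForestClassifier(n_estimators=200, random_state=0))
pipe.fit(X_train, y_train)

# Posterior probability per depth interval: likelihood of each lithology.
X_new = rng.normal(size=(10, 4))
posterior = pipe.predict_proba(X_new)
print(posterior[0])                    # class probabilities summing to 1
```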


Author(s): Namrata Dhanda, Stuti Shukla Datta, Mudrika Dhanda

Human intelligence is deeply involved in creating efficient and faster systems that can work independently. The creation of such smart systems requires efficient training algorithms. Thus, the aim of this chapter is to introduce readers to the concept of machine learning and the commonly employed learning algorithms for developing efficient and intelligent systems. The chapter draws a clear distinction between supervised and unsupervised learning methods. Each algorithm is explained with the help of a suitable example to give insight into the learning process.
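
The supervised/unsupervised distinction the chapter draws can be made concrete with a small sketch: the same toy data are fitted once with labels (classification) and once without (clustering). Dataset and parameters are purely illustrative.

```python
# Supervised vs. unsupervised learning on the same toy data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=300, centers=2, random_state=0)

supervised = LogisticRegression().fit(X, y)        # learns from labels y
unsupervised = KMeans(n_clusters=2, random_state=0, n_init=10).fit(X)  # no labels

print(supervised.predict(X[:5]))    # predicted classes
print(unsupervised.labels_[:5])     # discovered cluster assignments
```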


Entropy, 2019, Vol. 21 (10), pp. 1015
Author(s): Carles Bretó, Priscila Espinosa, Penélope Hernández, Jose M. Pavía

This paper applies a machine learning approach with the aim of providing a single aggregated prediction from a set of individual predictions. Starting from the well-known maximum-entropy inference methodology, a new factor capturing the distance between the true and the estimated aggregated predictions introduces a new problem. Algorithms such as ridge, lasso, or elastic net help in finding a new methodology to tackle this issue. We carry out a simulation study to evaluate the performance of such a procedure and apply it in order to forecast and measure predictive ability using a dataset of predictions of Spanish gross domestic product.
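
A simplified sketch of aggregating individual forecasts with an elastic-net combination, on simulated data; the paper's maximum-entropy refinement and its distance factor are not reproduced here.

```python
# Sketch: combine several individual forecasts into one aggregated
# prediction by fitting an elastic-net regression to the true values.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
truth = rng.normal(size=200)                           # e.g. GDP growth series
preds = truth[:, None] + rng.normal(0, 0.5, (200, 5))  # 5 noisy forecasters

agg = ElasticNet(alpha=0.01, l1_ratio=0.5).fit(preds, truth)
aggregated = agg.predict(preds)                        # single combined forecast
print("combination weights:", agg.coef_.round(2))
```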


Author(s): B.D. Britt, T. Glagowski

Abstract: This paper describes current research toward automating the redesign process. In redesign, a working design is altered to meet new problem specifications. This process is complicated by interactions between different parts of the design, and many researchers have addressed these issues. An overview is given of a large design tool under development, the Circuit Designer's Apprentice. This tool integrates various techniques for reengineering existing circuits so that they meet new circuit requirements. The primary focus of the paper is one particular technique used to reengineer circuits when they cannot be transformed to meet the new problem requirements. In these cases, a design plan is automatically generated for the circuit and then replayed to solve all or part of the new problem. This technique is based upon the derivational analogy approach to design reuse. Derivational analogy is a machine learning algorithm in which a design plan is saved at the time of design so that it can be replayed on a new design problem. Because design plans were not saved for the circuits available to the Circuit Designer's Apprentice, an algorithm was developed that automatically reconstructs a design plan for any circuit. This algorithm, Reconstructive Derivational Analogy, is described in detail, including a quantitative analysis of its implementation.
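
As a loose illustration of the record-and-replay idea behind derivational analogy (not the Circuit Designer's Apprentice itself), a design plan can be stored as a sequence of named steps and replayed on a new problem specification; the step names and circuit quantities below are purely hypothetical.

```python
# Sketch of a design plan as a replayable sequence of design decisions.
from dataclasses import dataclass, field

@dataclass
class DesignPlan:
    steps: list = field(default_factory=list)   # (name, function) pairs

    def record(self, name, fn):
        self.steps.append((name, fn))           # save a decision at design time

    def replay(self, problem):
        design = problem
        for name, fn in self.steps:
            design = fn(design)                 # reapply each decision in order
        return design

plan = DesignPlan()
plan.record("choose_topology", lambda spec: {**spec, "topology": "common-emitter"})
plan.record("size_resistors", lambda d: {**d, "Rc": d["gain"] * 100})

print(plan.replay({"gain": 20}))    # replay the plan on a new specification
```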


Tehnika, 2020, Vol. 75 (4), pp. 279-283
Author(s): Dragutin Šević, Ana Vlašić, Maja Rabasović, Svetlana Savić-Šević, Mihailo Rabasović, ...

In this paper we analyze the possibilities of applying Sr2CeO4:Eu3+ nanopowder for temperature sensing using machine learning. The material was prepared by simple solution combustion synthesis. The photoluminescence technique was used to measure the temperature dependence of the optical emission of the prepared material. Principal Component Analysis, a basic machine learning algorithm, provided insight into the temperature-dependent spectral data from a different point of view than the usual approach.
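
A small sketch of PCA applied to temperature-dependent emission spectra, using synthetic two-band spectra in place of the Sr2CeO4:Eu3+ measurements; the first principal component score then serves as a stand-in temperature calibration curve.

```python
# Sketch: PCA on synthetic temperature-dependent emission spectra.
import numpy as np
from sklearn.decomposition import PCA

temps = np.linspace(300, 600, 30)            # hypothetical temperatures, K
wavelengths = np.linspace(550, 650, 200)     # hypothetical emission range, nm

# Toy spectra: two Gaussian bands whose intensity ratio shifts with temperature.
spectra = np.array([
    (t / 600) * np.exp(-(wavelengths - 590) ** 2 / 20)
    + (1 - t / 600) * np.exp(-(wavelengths - 615) ** 2 / 20)
    for t in temps
])

scores = PCA(n_components=2).fit_transform(spectra)
# The first component score orders the spectra by temperature, which is
# what makes a PCA-based temperature calibration possible.
print(np.corrcoef(scores[:, 0], temps)[0, 1])
```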


Author(s): Tan Hui Xin, Ismahani Ismail, Ban Mohammed Khammas

Nowadays, computer virus attacks are getting very advanced. New obfuscated computer viruses created by virus writers automatically generate a new variant for every single iteration and download. These constantly evolving computer viruses pose a significant threat to the information security of computer users, organizations, and even governments. However, the signature-based detection technique used by conventional anti-virus software on the market fails to identify them, as signatures are unavailable. This research proposed an alternative to the traditional signature-based detection method and investigated the use of a machine learning technique for obfuscated computer virus detection. In this work, text strings extracted from virus program codes are used as the features to generate a suitable classifier model that can correctly classify obfuscated virus files. The text string feature is used because it is informative and potentially requires only a small amount of memory space. Results show that unknown files can be correctly classified with 99.5% accuracy using the SMO classifier model. Thus, it is believed that current computer virus defenses can be strengthened through a machine learning approach.
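
The SMO classifier named above is commonly Weka's SVM implementation (trained by sequential minimal optimization); the sketch below substitutes scikit-learn's LinearSVC over bag-of-strings features, with invented sample strings and labels, to show the shape of the approach rather than the paper's exact setup.

```python
# Sketch: classify files from extracted text strings with a linear SVM.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical printable strings extracted from binaries.
samples = ["GetProcAddress LoadLibrary VirtualAlloc",
           "Hello world print document save",
           "CreateRemoteThread WriteProcessMemory",
           "open file read config display"]
labels = [1, 0, 1, 0]                     # 1 = virus, 0 = benign

clf = make_pipeline(CountVectorizer(), LinearSVC())
clf.fit(samples, labels)
print(clf.predict(["VirtualAlloc WriteProcessMemory stub"]))
```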


2021
Author(s): Diti Roy, Md. Ashiq Mahmood, Tamal Joyti Roy

Heart disease is the most dominant disease, causing a large number of deaths every year. A report from the WHO in 2016 stated that every year at least 17 million people die of heart disease. This number is gradually increasing day by day, and the WHO estimates that this death toll will reach 75 million by 2030. Despite modern technology and health care systems, predicting heart disease is still beyond our limits. As machine learning algorithms are a vital means of prediction from available datasets, we have used a machine learning approach to predict heart disease. We collected data from the UCI repository. In our study, we used the Random Forest, ZeroR, Voted Perceptron, and K* classifiers. We obtained the best result with the Random Forest classifier, with an accuracy of 97.69%.
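
A minimal sketch of the study's setup, assuming a local CSV export of the UCI heart-disease data with a "target" column (both the file name and column name are assumptions); the split and hyperparameters are illustrative, and no particular accuracy figure is implied.

```python
# Sketch: Random Forest on a local copy of the UCI heart-disease data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("heart.csv")                 # hypothetical local export
X, y = df.drop(columns="target"), df["target"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```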

