An intelligent irrigation system based on internet of things (IoT) to minimize water loss

Author(s):  
Samar Amassmir ◽  
Said Tkatek ◽  
Otman Abdoun ◽  
Jaafar Abouchabaka

This paper compares three machine learning algorithms for an improved intelligent irrigation system based on the internet of things (IoT) for different products. The work's major contribution is to identify the most accurate of the three algorithms: k-nearest neighbors (KNN), support vector machine (SVM), and artificial neural network (ANN). This is achieved by collecting irrigation data for specific products, splitting it into training and test sets, and comparing the accuracy of the three algorithms. To evaluate performance, we built a system of IoT devices: temperature and humidity sensors installed in the field interact with an Arduino microcontroller, and the Arduino is connected to a Raspberry Pi 3, which hosts the machine learning algorithm. The ANN proved to be the most accurate algorithm for such an irrigation system and is therefore the best choice for an intelligent system that minimizes water loss for these products.
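
As an illustration of the comparison described above, the sketch below trains KNN, SVM, and ANN classifiers on a hypothetical sensor log and reports test accuracy; the file name, column names, and hyperparameters are assumptions, not details from the paper.

```python
# Illustrative sketch: comparing KNN, SVM, and ANN on irrigation sensor data.
# File name, column names, and hyperparameters are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

data = pd.read_csv("irrigation_data.csv")       # hypothetical sensor log
X = data[["temperature", "humidity"]]           # assumed feature columns
y = data["irrigate"]                            # assumed binary label (irrigate or not)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "ANN": MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```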

Diagnostics ◽  
2019 ◽  
Vol 9 (3) ◽  
pp. 104 ◽  
Author(s):  
Ahmed ◽  
Yigit ◽  
Isik ◽  
Alpkocak

Leukemia is a fatal cancer with two main types, acute and chronic, each of which has two subtypes, lymphoid and myeloid; in total, there are therefore four subtypes of leukemia. This study proposes a new approach for diagnosing all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which require a large training data set. Therefore, we also investigated the effects of data augmentation for synthetically increasing the number of training samples. We used two publicly available leukemia data sources: ALL-IDB and ASH Image Bank. Next, we applied seven different image transformation techniques as data augmentation. We designed a CNN architecture capable of recognizing all subtypes of leukemia and also explored other well-known machine learning algorithms such as naive Bayes, support vector machine, k-nearest neighbor, and decision tree. To evaluate our approach, we set up a series of experiments and used 5-fold cross-validation. The results showed that our CNN model achieves 88.25% and 81.74% accuracy in leukemia-versus-healthy and multiclass classification of all subtypes, respectively. Finally, we also showed that the CNN model performs better than the other well-known machine learning algorithms.
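
The augmentation step described above could be sketched as follows with Keras; the paper lists seven image transformation techniques, and the specific transforms, parameters, and directory layout shown here are assumptions for illustration only.

```python
# Illustrative sketch of image augmentation for blood-cell images.
# The specific transforms and parameters are assumptions, not the paper's exact choices.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=30,        # random rotation
    width_shift_range=0.1,    # horizontal translation
    height_shift_range=0.1,   # vertical translation
    shear_range=0.1,          # shearing
    zoom_range=0.2,           # zoom in/out
    horizontal_flip=True,     # horizontal mirror
    vertical_flip=True,       # vertical mirror
)

# Generate augmented batches from a directory of labelled cell images
# (the directory layout is hypothetical).
train_flow = augmenter.flow_from_directory(
    "leukemia_images/train", target_size=(224, 224), batch_size=32)
```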


2021 ◽  
Vol 30 (04) ◽  
pp. 2150020
Author(s):  
Luke Holbrook ◽  
Miltiadis Alamaniotis

With the increase in cyber-attacks on millions of Internet of Things (IoT) devices, the poor network security measures on those devices are the main source of the problem. This article studies several machine learning algorithms for their effectiveness in detecting malware in consumer IoT devices. In particular, support vector machine (SVM), random forest, and deep neural network (DNN) algorithms are benchmarked on a set of test data and compared as tools for safeguarding IoT deployments. Test results on a set of four IoT devices showed that all three tested algorithms detect network anomalies with high accuracy. However, the deep neural network provides the highest coefficient of determination R2 and is therefore identified as the most precise of the tested algorithms for IoT security on the data sets we examined.
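
A minimal sketch of the benchmark described above, assuming a tabular traffic log with a continuous anomaly target so that the coefficient of determination R2 can be computed; the file name, feature columns, and model settings are illustrative, not from the article.

```python
# Illustrative sketch: comparing SVM, Random Forest, and a DNN by R^2 on
# IoT network-traffic features. Data source and target column are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

traffic = pd.read_csv("iot_traffic.csv")        # hypothetical per-device traffic log
X = traffic.drop(columns=["anomaly_score"])
y = traffic["anomaly_score"]                    # assumed continuous anomaly target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": SVR(),
    "Random Forest": RandomForestRegressor(n_estimators=200),
    "DNN": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "R^2 =", round(r2_score(y_test, model.predict(X_test)), 3))
```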


2020 ◽  
Vol 9 (1) ◽  
Author(s):  
E. Popoff ◽  
M. Besada ◽  
J. P. Jansen ◽  
S. Cope ◽  
S. Kanters

Background: Despite existing research on text mining and machine learning for title and abstract screening, the role of machine learning within systematic literature reviews (SLRs) for health technology assessment (HTA) remains unclear, given the lack of extensive testing and of guidance from HTA agencies. We sought to address two knowledge gaps: to extend ML algorithms to provide a reason for exclusion, in line with current practices, and to determine optimal parameter settings for feature-set generation and ML algorithms.

Methods: We used abstract and full-text selection data from five large SLRs (n = 3089 to 12,769 abstracts) across a variety of disease areas. Each SLR was split into training and test sets. We developed a multi-step algorithm to categorize each citation into one of the following categories: included; excluded for each PICOS criterion; or unclassified. We used a bag-of-words approach for feature-set generation and compared machine learning algorithms using support vector machines (SVMs), naïve Bayes (NB), and bagged classification and regression trees (CART) for classification. We also compared alternative training set strategies: using full data versus downsampling (i.e., reducing excludes to balance includes/excludes, because machine learning algorithms perform better with balanced data), and using inclusion/exclusion decisions from abstract versus full-text screening. Performance comparisons were in terms of specificity, sensitivity, accuracy, and matching the reason for exclusion.

Results: The best-fitting model (optimized for sensitivity and specificity) was based on the SVM algorithm using training data based on full-text decisions, downsampling, and excluding words occurring fewer than five times. The sensitivity and specificity of this model ranged from 94 to 100% and 54 to 89%, respectively, across the five SLRs. On average, 75% of excluded citations were excluded with a reason, and 83% of these citations matched the reviewers’ original reason for exclusion. Sensitivity significantly improved when both downsampling and abstract decisions were used.

Conclusions: ML algorithms can improve the efficiency of the SLR process, and the proposed algorithms could reduce the workload of a second reviewer by identifying exclusions with a relevant PICOS reason, thus aligning with HTA guidance. Downsampling can be used to improve study selection, and improvements using full-text exclusions have implications for a learn-as-you-go approach.
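
A hedged sketch of the core pipeline (bag-of-words features discarding words seen fewer than five times, downsampled training data, and an SVM classifier); the file name, column names, and exact downsampling ratio are assumptions.

```python
# Illustrative sketch of the screening pipeline: bag-of-words with min_df=5,
# downsampled training data, and a linear SVM. All names are assumptions.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

citations = pd.read_csv("slr_citations.csv")    # hypothetical: abstract text + label + split
train = citations[citations["split"] == "train"]
test = citations[citations["split"] == "test"]

# Downsample the majority class (excludes) to balance includes/excludes.
includes = train[train["label"] == "include"]
excludes = train[train["label"] != "include"].sample(n=len(includes), random_state=1)
train_bal = pd.concat([includes, excludes])

vectorizer = CountVectorizer(min_df=5)          # ignore words occurring fewer than 5 times
X_train = vectorizer.fit_transform(train_bal["abstract"])
X_test = vectorizer.transform(test["abstract"])

clf = LinearSVC().fit(X_train, train_bal["label"])
print(classification_report(test["label"], clf.predict(X_test)))
```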


SPE Journal ◽  
2020 ◽  
Vol 25 (03) ◽  
pp. 1241-1258 ◽  
Author(s):  
Ruizhi Zhong ◽  
Raymond L. Johnson ◽  
Zhongwei Chen

Accurate coal identification is critical in coal seam gas (CSG) (also known as coalbed methane or CBM) developments because it determines well completion design and directly affects gas production. Density logging using radioactive source tools is the primary method for coal identification, adding well trips to condition the hole and additional well costs for logging runs. In this paper, machine learning methods are applied to identify coals from drilling and logging-while-drilling (LWD) data to reduce overall well costs. Machine learning algorithms include logistic regression (LR), support vector machine (SVM), artificial neural network (ANN), random forest (RF), and extreme gradient boosting (XGBoost). Precision, recall, and F1 score are used as evaluation metrics. Because coal identification is an imbalanced data problem, performance on the minority class (i.e., coals) is limited. To enhance performance on coal prediction, two data manipulation techniques [the naive random oversampling (NROS) technique and the synthetic minority oversampling technique (SMOTE)] are separately coupled with the machine learning algorithms. Case studies are performed with data from six wells in the Surat Basin, Australia. For the first set of experiments (single-well experiments), both the training data and test data come from the same well. The machine learning methods can identify coal pay zones for sections with poor or missing logs. It is found that rate of penetration (ROP) is the most important feature. The second set of experiments (multiple-well experiments) uses training data from multiple nearby wells to predict coal pay zones in a new well; here, the most important feature is gamma ray. After placing slotted casings, all wells have coal identification rates greater than 90%, and three wells have coal identification rates greater than 99%. This indicates that machine learning methods (either XGBoost or ANN/RF with NROS/SMOTE) can be an effective way to identify coal pay zones and reduce coring or logging costs in CSG developments.
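
A minimal sketch of one of the oversampling-plus-classifier combinations described above (SMOTE coupled with XGBoost); the log file, feature names, and hyperparameters are assumptions drawn loosely from the features mentioned in the summary (e.g., ROP and gamma ray).

```python
# Illustrative sketch: oversampling the minority (coal) class with SMOTE and
# classifying with XGBoost. Feature names and file are assumptions.
import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score
from xgboost import XGBClassifier

logs = pd.read_csv("lwd_logs.csv")              # hypothetical per-depth log samples
X = logs[["rop", "gamma_ray", "weight_on_bit", "torque"]]   # assumed features
y = logs["is_coal"]                             # 1 = coal, 0 = non-coal

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=7)
X_res, y_res = SMOTE(random_state=7).fit_resample(X_train, y_train)   # balance classes

model = XGBClassifier(n_estimators=300, max_depth=4).fit(X_res, y_res)
pred = model.predict(X_test)
print("precision", precision_score(y_test, pred),
      "recall", recall_score(y_test, pred),
      "F1", f1_score(y_test, pred))
```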


Author(s):  
D. Vito

Natural disasters such as floods are regarded as being caused by extreme weather conditions as well as changes in global and regional climate. Predicting an incoming flood is a key factor in ensuring civil protection in case of emergency and in providing an effective early warning system. The risk of flooding is affected by several factors such as land use, meteorological events, hydrology, and the topology of the land. Predicting such a risk implies the use of data coming from different sources, such as satellite images, water basin levels, and meteorological and GIS data, which nowadays are easily produced thanks to the availability of new satellite portals such as SENTINEL and distributed sensor networks in the field. In order to have a comprehensive and accurate prediction of flood risk, it is essential to perform selective and multivariate analyses across the different types of inputs. Multivariate analysis refers to all statistical techniques that simultaneously analyse multiple variables. Among multivariate analyses, machine learning provides increasing levels of accuracy, precision, and efficiency by discovering patterns in large and heterogeneous input datasets. Basically, machine learning algorithms automatically acquire experience from data through the process of learning, by which an algorithm can generalize beyond the examples given in the training data. Machine learning is interesting for prediction because it adapts its resolution strategies to the features of the data. This peculiarity can be used to predict extremes from highly variable data, as in the case of floods. This work proposes strategies and case studies on the application of machine learning algorithms to flood event prediction. In particular, the study focuses on the application of support vector machines and artificial neural networks to a multivariate set of data related to the river Seveso, in order to propose a more general framework from the case study.
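
A possible sketch of the proposed comparison, training an SVM and an ANN regressor on multivariate hydro-meteorological inputs to predict river level; the dataset file, columns, and error metric are assumptions and do not represent the actual Seveso data.

```python
# Illustrative sketch: SVM and ANN regressors on multivariate flood-related inputs.
# File name, feature columns, and target are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("seveso_multivariate.csv")     # hypothetical merged dataset
X = df[["rainfall_mm", "upstream_level", "soil_moisture", "temperature"]]
y = df["river_level"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=3)
scaler = StandardScaler().fit(X_train)

models = {
    "SVM": SVR(),
    "ANN": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=3000),
}
for name, model in models.items():
    model.fit(scaler.transform(X_train), y_train)
    pred = model.predict(scaler.transform(X_test))
    print(name, "MAE =", mean_absolute_error(y_test, pred))
```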


2021 ◽  
Vol 7 ◽  
pp. e578
Author(s):  
Ashutosh Bhoi ◽  
Rajendra Prasad Nayak ◽  
Sourav Kumar Bhoi ◽  
Srinivas Sethi ◽  
Sanjaya Kumar Panda ◽  
...  

In the traditional irrigation process, a huge amount of water is consumed, which leads to water wastage. To reduce the water wasted on this tedious task, an intelligent irrigation system is urgently needed. The era of machine learning (ML) and the Internet of Things (IoT) brings the great advantage of building an intelligent system that performs this task automatically with minimal human effort. In this study, an IoT-enabled, ML-trained recommendation system is proposed for efficient water usage with nominal intervention by farmers. IoT devices are deployed in the crop field to precisely collect ground and environmental details. The gathered data are forwarded to and stored in a cloud-based server, which applies ML approaches to analyze the data and suggest irrigation to the farmer. To make the system robust and adaptive, an inbuilt feedback mechanism is added to the recommendation system. The experiments reveal that the proposed system performs quite well on both our own collected dataset and the National Institute of Technology (NIT) Raipur crop dataset.
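
One way the recommend-and-feedback loop could look in code is sketched below, using an incrementally trainable classifier so that farmer feedback can be folded back into the model; all feature names, sample values, and the choice of classifier are assumptions, not details from the study.

```python
# Illustrative sketch of a recommend-and-feedback loop for irrigation suggestions.
# Feature layout ([soil_moisture, temperature, humidity]) and values are assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")

# Initial training on a few historical records (hypothetical data).
X_hist = np.array([[0.12, 34.0, 40.0], [0.45, 22.0, 70.0], [0.20, 30.0, 50.0]])
y_hist = np.array([1, 0, 1])                    # 1 = irrigate, 0 = do not irrigate
model.partial_fit(X_hist, y_hist, classes=[0, 1])

def recommend(reading):
    """Return an irrigation suggestion for one sensor reading."""
    return "irrigate" if model.predict([reading])[0] == 1 else "skip"

def feedback(reading, farmer_decision):
    """Feedback mechanism: update the model with the farmer's actual decision."""
    model.partial_fit([reading], [farmer_decision])

print(recommend([0.15, 33.0, 45.0]))
feedback([0.15, 33.0, 45.0], 1)
```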


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Hang Chen ◽  
Sulaiman Khan ◽  
Bo Kou ◽  
Shah Nazir ◽  
Wei Liu ◽  
...  

The emergence of Internet of Things (IoT) enabled applications has inspired the world during the last few years, providing state-of-the-art and novel solutions for different problems. This evolutionary field is mainly led by wireless sensor networks, radio frequency identification, and smart mobile technologies. Among others, the IoT plays a key role in the form of smart medical devices and wearables, with the ability to collect varied and longitudinal patient-generated health data while also offering preliminary diagnosis options. In efforts to help patients using IoT-based solutions, experts exploit the capabilities of machine learning algorithms to provide efficient solutions for hemorrhage diagnosis. To reduce death rates and propose accurate treatment, this paper presents a smart IoT-based application using machine learning algorithms for human brain hemorrhage diagnosis. Based on computerized tomography scan images from an intracranial dataset, a support vector machine and a feedforward neural network have been applied for classification. Overall, classification accuracies of 80.67% and 86.7% are obtained for the support vector machine and feedforward neural network, respectively. It is concluded from the analysis that the feedforward neural network outperforms the support vector machine in classifying intracranial images. The output generated by the classification tool gives information about the type of brain hemorrhage, which ultimately helps validate the expert’s diagnosis and serves as a learning tool for trainee radiologists to minimize errors in the available systems.
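
A minimal sketch of the comparison described above, training an SVM and a feedforward neural network on flattened CT slices; the dataset folder, image size, class names, and network shape are assumptions, and the actual study may have used different preprocessing.

```python
# Illustrative sketch: SVM vs. feedforward neural network on flattened CT slices.
# Directory layout, image size, and labels are assumptions for demonstration only.
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def load_slices(root):
    X, y = [], []
    for label, cls in enumerate(["no_hemorrhage", "hemorrhage"]):   # assumed folders
        for path in Path(root, cls).glob("*.png"):
            img = Image.open(path).convert("L").resize((64, 64))
            X.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
            y.append(label)
    return np.array(X), np.array(y)

X, y = load_slices("ct_dataset")                # hypothetical dataset folder
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=5)

models = {
    "SVM": SVC(kernel="rbf"),
    "Feedforward NN": MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```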


2021 ◽  
Vol 13 (2) ◽  
pp. 1-15
Author(s):  
Sameena Naaz

Phishing attacks are growing at the same pace as the e-commerce industry. Prediction and prevention of phishing attacks is a very critical step towards safeguarding online transactions. Data mining tools can be applied in this regard, as the technique is straightforward, can mine millions of records within seconds, and delivers accurate results. With the help of machine learning algorithms such as random forest, decision tree, neural network, and linear models, we can classify data as phishing, suspicious, or legitimate. The devices that are connected over the internet, known as the internet of things (IoT), are also at very high risk of phishing attacks. In this work, the machine learning algorithms random forest classifier, support vector machine, and logistic regression have been applied to an IoT dataset for detection of phishing attacks, and the results have been compared with previous work carried out on the same dataset as well as on a different dataset. The results of these algorithms have then been compared in terms of accuracy, error rate, precision, and recall.
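
The comparison described above might be sketched as follows; the dataset file and label column are assumptions, while the reported metrics (accuracy, error rate, precision, recall) mirror those listed in the abstract.

```python
# Illustrative sketch: random forest, SVM, and logistic regression on an IoT
# phishing dataset. Dataset file and label column are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score

data = pd.read_csv("iot_phishing.csv")          # hypothetical feature table
X, y = data.drop(columns=["label"]), data["label"]   # label: 1 = phishing, 0 = legitimate
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=11)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=200),
    "SVM": SVC(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    pred = model.fit(X_train, y_train).predict(X_test)
    acc = accuracy_score(y_test, pred)
    print(name, f"accuracy={acc:.3f}", f"error={1 - acc:.3f}",
          f"precision={precision_score(y_test, pred):.3f}",
          f"recall={recall_score(y_test, pred):.3f}")
```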


2020 ◽  
pp. 1-11
Author(s):  
Jie Liu ◽  
Lin Lin ◽  
Xiufang Liang

The online English teaching system has certain requirements for an intelligent scoring system, and the most difficult stage of intelligent scoring in English tests is scoring English compositions through an intelligent model. In order to improve the intelligence of English composition scoring, this study builds on machine learning algorithms combined with intelligent image recognition technology and proposes an improved MSER-based character candidate region extraction algorithm and a convolutional neural network-based pseudo-character region filtering algorithm. In addition, in order to verify whether the algorithm model proposed in this paper meets the requirements of the group text, that is, to verify the feasibility of the algorithm, the performance of the proposed model is analyzed through designed experiments. Moreover, the basic conditions for composition scoring are input into the model as constraints. The research results show that the proposed algorithm has a practical effect and can be applied to English assessment systems and online homework evaluation systems.
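
A hedged sketch of the MSER-based character candidate extraction step using OpenCV; the geometric filtering thresholds are assumptions, and the CNN-based pseudo-character filter described above is only stubbed as a placeholder.

```python
# Illustrative sketch of MSER-based character candidate extraction with OpenCV.
# Input file and geometric thresholds are assumptions.
import cv2

image = cv2.imread("composition_scan.png")      # hypothetical scanned composition page
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)

candidates = []
for (x, y, w, h) in bboxes:
    aspect = w / float(h)
    if 0.1 < aspect < 10 and 8 < h < 200:       # crude geometric filtering (assumed)
        candidates.append(gray[y:y + h, x:x + w])

# A trained CNN would score each candidate and discard pseudo-character regions;
# that filtering step is left as a placeholder here.
print(f"{len(candidates)} character candidates extracted")
```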


2018 ◽  
Vol 6 (2) ◽  
pp. 283-286
Author(s):  
M. Samba Siva Rao ◽  
M. Yaswanth ◽  
K. Raghavendra Swamy ◽  
...  
