Neural Network-based Pest Detection with K-Means Segmentation: Impact of Improved Dragonfly Algorithm

Author(s):  
Madhuri Devi Chodey ◽  
C Noorullah Shariff

Detecting pests and identifying diseases in agricultural crops is essential for ensuring good yields, and it remains one of the major challenges in agriculture. Effective measures are therefore needed to fight infestation while minimising the use of pesticides. Image-analysis techniques are widely applied in agricultural science to protect crops, which can lead to better crop management and production. However, automatic pest detection based on machine learning is still in its infancy. Hence, this work constructs a video-processing-based pest detection framework with six major phases, viz. (a) Video Frame Acquisition, (b) Pre-processing, (c) Object Tracking, (d) Foreground and Background Segmentation, (e) Feature Extraction, and (f) Classification. First, the video frames are pre-processed, and object movement is tracked using a foreground and background segmentation approach based on K-Means clustering. From the segmented image, a new feature, termed Distributed Intensity-based LBP (DI-LBP), is extracted together with edge and colour features. The features are then passed to a classifier built on an optimised Neural Network (NN). As a novelty, the NN is trained by updating its weights with a new Dragonfly with New Levy Update (D-NU) algorithm. Finally, the performance of the proposed model is compared with conventional models on several performance measures for both video and image datasets.
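
The K-Means-based foreground/background segmentation step can be illustrated with a short sketch. This is a minimal, generic example of separating a single pre-processed frame into two colour clusters using OpenCV and scikit-learn; the frame path, the number of clusters, and the heuristic that the pest foreground is the minority cluster are illustrative assumptions, not details from the paper.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def segment_frame(frame_bgr, n_clusters=2):
    """Split a video frame into foreground/background via K-Means on pixel colour."""
    h, w, _ = frame_bgr.shape
    pixels = frame_bgr.reshape(-1, 3).astype(np.float32)

    kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    labels = kmeans.labels_.reshape(h, w)

    # Illustrative heuristic: assume the smaller cluster is the moving pest.
    counts = np.bincount(labels.ravel(), minlength=n_clusters)
    fg_cluster = int(np.argmin(counts))
    return (labels == fg_cluster).astype(np.uint8) * 255

frame = cv2.imread("frame_0001.png")   # hypothetical pre-processed frame
if frame is not None:
    fg_mask = segment_frame(frame)
    cv2.imwrite("fg_mask.png", fg_mask)
```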

2020 ◽  
Vol 13 (5) ◽  
pp. 224
Author(s):  
Dimas Okky Anggriawan ◽  
Rauf Hanrif Mubarok ◽  
Eka Prasetyono ◽  
Endro Wahjono ◽  
M. Iqbal Fitrianto ◽  
...  

2020 ◽  
Vol 11 (1) ◽  
pp. 10
Author(s):  
Muchun Su ◽  
Diana Wahyu Hayati ◽  
Shaowu Tseng ◽  
Jiehhaur Chen ◽  
Hsihsien Wei

Health care for independently living elders is more important than ever. Automatic recognition of their Activities of Daily Living (ADL) is the first step towards addressing seniors' health care needs efficiently. This paper describes a Deep Neural Network (DNN)-based recognition system aimed at facilitating smart care, combining ADL recognition, image/video processing, movement calculation, and a DNN. An algorithm is developed for processing skeletal data, filtering noise, and recognising patterns to identify the 10 most common ADL: standing, bending, squatting, sitting, eating, hand holding, hand raising, sitting plus drinking, standing plus drinking, and falling. The evaluation results show that the DNN-based system handles ADL recognition with an accuracy rate of over 95%. The findings support the feasibility of the system and suggest it is efficient enough for both practical and academic applications.
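
A classifier of this kind can be sketched as a small feed-forward network over flattened skeleton joint coordinates. The sketch below is a generic illustration, not the paper's architecture: the number of joints (25, Kinect-style), the layer sizes, and the random placeholder input are all assumptions.

```python
import torch
import torch.nn as nn

N_JOINTS, N_CLASSES = 25, 10   # assumed skeleton size; 10 ADL classes from the paper

class ADLNet(nn.Module):
    """Feed-forward classifier over flattened (x, y, z) joint coordinates."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_JOINTS * 3, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, N_CLASSES),
        )

    def forward(self, x):
        return self.net(x)

model = ADLNet()
skeleton = torch.randn(1, N_JOINTS * 3)   # one denoised skeleton sample (placeholder)
logits = model(skeleton)
predicted_adl = logits.argmax(dim=1)
```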


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 4953
Author(s):  
Sara Al-Emadi ◽  
Abdulla Al-Ali ◽  
Abdulaziz Al-Ali

Drones are becoming increasingly popular not only for recreational purposes but also in day-to-day applications in engineering, medicine, logistics, security and others. Alongside these useful applications, their potential use in malicious activities has raised alarming concerns about physical infrastructure security, safety and privacy. To address this problem, we propose a novel solution that automates drone detection and identification using a drone's acoustic features with different deep learning algorithms. However, the lack of acoustic drone datasets hinders the ability to implement an effective solution. In this paper, we aim to fill this gap by introducing a hybrid drone acoustic dataset composed of recorded drone audio clips and drone audio samples artificially generated with a state-of-the-art deep learning technique, the Generative Adversarial Network. Furthermore, we examine the effectiveness of drone audio with different deep learning algorithms, namely the Convolutional Neural Network, the Recurrent Neural Network and the Convolutional Recurrent Neural Network, in drone detection and identification, and we investigate the impact of the proposed hybrid dataset on drone detection. Our findings demonstrate the advantage of using deep learning techniques for drone detection and identification while confirming our hypothesis on the benefits of using Generative Adversarial Networks to generate realistic drone audio clips with the aim of enhancing the detection of new and unfamiliar drones.
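
The CNN branch of such a system typically operates on spectrogram representations of the audio clips. The sketch below is a minimal illustration of that idea, assuming log-mel spectrogram inputs; the filter counts, spectrogram dimensions, and binary "drone / no drone" labelling are assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class DroneAudioCNN(nn.Module):
    """Small CNN over log-mel spectrograms: 'drone present' vs. 'no drone'."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, spec):   # spec: (batch, 1, n_mels, n_frames)
        return self.classifier(self.features(spec))

model = DroneAudioCNN()
spec = torch.randn(4, 1, 64, 128)   # four placeholder spectrograms (64 mel bands)
logits = model(spec)
```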


2021 ◽  
Vol 11 (15) ◽  
pp. 7104
Author(s):  
Xu Yang ◽  
Ziyi Huan ◽  
Yisong Zhai ◽  
Ting Lin

Personalized recommendation based on knowledge graphs has become a research hot spot because of its good recommendation performance, and it is the subject of this paper. First, we study the construction of knowledge graphs and build a movie knowledge graph, using the Neo4j graph database to store and visualise the movie data. We then study the classical translation model TransE for knowledge graph representation learning and improve it through a cross-training method that exploits the neighbouring feature structures of entities in the knowledge graph; the negative sampling process of TransE is also improved. The experimental results show that the improved TransE model vectorises entities and relations more accurately. Finally, we construct recommendation models that combine knowledge graphs with ranking learning and a neural network: the Bayesian personalized recommendation model based on knowledge graphs (KG-BPR) and the neural network recommendation model based on knowledge graphs (KG-NN). The semantic information of entities and relations in the knowledge graph is embedded into vector space with the improved TransE method, and the item entity vectors containing external knowledge are integrated into the BPR model and the neural network, respectively, compensating for the lack of knowledge about the items themselves. Experimental analysis on the MovieLens-1M dataset shows that the two proposed recommendation models effectively improve the accuracy, recall, F1 and MAP of the recommendations.
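
For reference, the standard TransE model scores a triple (h, r, t) by the distance ||h + r − t|| and is trained with a margin ranking loss against corrupted triples. The sketch below shows that baseline formulation only; the paper's cross-training and improved negative sampling are not reproduced, and the entity/relation counts, embedding dimension, and margin are placeholder values.

```python
import torch
import torch.nn as nn

class TransE(nn.Module):
    """Standard TransE: plausible triples satisfy head + relation ≈ tail."""
    def __init__(self, n_entities, n_relations, dim=100):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def score(self, h, r, t):
        # Lower score = more plausible triple.
        return torch.norm(self.ent(h) + self.rel(r) - self.ent(t), p=2, dim=-1)

model = TransE(n_entities=10000, n_relations=20)
h, r, t = torch.tensor([1]), torch.tensor([3]), torch.tensor([42])
t_neg = torch.randint(0, 10000, (1,))   # uniformly sampled negative tail (simplest scheme)

# Margin ranking loss with margin 1.0 between the true and corrupted triple.
loss = torch.clamp(1.0 + model.score(h, r, t) - model.score(h, r, t_neg), min=0).mean()
loss.backward()
```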


2021 ◽  
pp. 60-70
Author(s):  
Piyush Kumar Shukla ◽  
Prashant Kumar Shukla

Interpreting large data streams requires high-performance, repeated transfers that can overload a microprocessor System-on-Chip (SoC). A direct memory access (DMA) controller (DMAC) addresses this by performing bulk data transfers without involving the CPU. In this work, we design an intelligent DMAC (I-DMAC) for accessing video-processing data without CPU intervention. The model includes a bus selection module, user control signals, a status register, DMA-supported addressing, and AXI-PCI subsystems for improved video frame analysis. These modules are experimentally verified on a Xilinx FPGA SoC architecture using VHDL code simulation, and the results are compared with the E-DMAC model.
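
To make the role of these modules concrete, the sketch below models how a host might program one bulk frame transfer on such a controller. It is purely illustrative: the register names, offsets, and bit fields are hypothetical and do not come from the paper, and the in-memory dictionary merely stands in for memory-mapped I/O (the real design is described in VHDL on the FPGA side).

```python
# Hypothetical register map for an I-DMAC-style transfer (names/offsets are illustrative).
REGS = {
    "BUS_SELECT":   0x00,   # choose the AXI or PCI subsystem for the transfer
    "USER_CONTROL": 0x04,   # start/stop bits, interrupt enable
    "DMA_ADDR":     0x08,   # physical address of the video frame buffer
    "LENGTH":       0x0C,   # transfer length in bytes
    "STATUS":       0x10,   # busy/done/error flags
}

mmio = {offset: 0 for offset in REGS.values()}   # stand-in for memory-mapped I/O

def write_reg(name, value):
    mmio[REGS[name]] = value

def start_frame_transfer(frame_addr, frame_bytes):
    """Program one bulk frame transfer without the CPU copying the payload."""
    write_reg("BUS_SELECT", 0x1)     # e.g. select the AXI subsystem
    write_reg("DMA_ADDR", frame_addr)
    write_reg("LENGTH", frame_bytes)
    write_reg("USER_CONTROL", 0x1)   # set the start bit

start_frame_transfer(frame_addr=0x8000_0000, frame_bytes=1920 * 1080 * 2)
# On real hardware, the STATUS register would now be polled for a 'done' flag.
```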


2021 ◽  
Author(s):  
Phan Anh Cang ◽  
To Huynh Thien Truong ◽  
Cao Hùng Phi ◽  
Phan Thuong Cang

Author(s):  
Gauri Jain ◽  
Manisha Sharma ◽  
Basant Agarwal

This article addresses spam detection in social media text, which is becoming increasingly important because of the exponential growth of spam volume over the network. The task is challenging, especially for texts limited to a small number of characters, and effective spam detection requires a large number of informative features to be learned. In this article, a deep learning technique known as the convolutional neural network (CNN) is proposed for spam detection, with a semantic layer added on top of it; the resulting model is called a semantic convolutional neural network (SCNN). The semantic layer is built by training random word vectors with Word2vec to obtain semantically enriched word embeddings, and WordNet and ConceptNet are used to find a word similar to a given word when it is missing from the Word2vec vocabulary. The architecture is evaluated on two corpora: the SMS Spam dataset (UCI repository) and a Twitter dataset (tweets scraped from public live tweets). The authors' approach outperforms state-of-the-art results with 98.65% accuracy on the SMS Spam dataset and 94.40% accuracy on the Twitter dataset.
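
The out-of-vocabulary fallback idea can be sketched briefly: if a token has no Word2vec vector, substitute the vector of a lexically related word. The example below uses gensim and NLTK's WordNet as a generic illustration, assuming a pre-trained Word2vec file at a placeholder path and a 300-dimensional embedding; the paper's exact pipeline (including ConceptNet and the SCNN layers) is not reproduced.

```python
import numpy as np
from gensim.models import KeyedVectors
from nltk.corpus import wordnet   # requires: nltk.download("wordnet")

# Hypothetical pre-trained Word2vec vectors (path is a placeholder).
w2v = KeyedVectors.load_word2vec_format("word2vec.bin", binary=True)

def embed(token, dim=300):
    """Return a word vector; fall back to a WordNet synonym if the token is OOV."""
    if token in w2v:
        return w2v[token]
    for syn in wordnet.synsets(token):
        for lemma in syn.lemma_names():
            if lemma in w2v:
                return w2v[lemma]   # semantically close substitute
    return np.zeros(dim)            # last resort: zero vector

sentence = "free entry 2 win a prize".split()
matrix = np.stack([embed(tok) for tok in sentence])   # input matrix for the (S)CNN
```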


2020 ◽  
Vol 10 (18) ◽  
pp. 6417 ◽  
Author(s):  
Emanuele Lattanzi ◽  
Giacomo Castellucci ◽  
Valerio Freschi

Most road accidents occur due to human fatigue, inattention, or drowsiness. Recently, machine learning has been successfully applied to identifying driving styles and recognizing unsafe behaviors from in-vehicle sensor signals such as vehicle and engine speed, throttle position, and engine load. In this work, we investigated the fusion of external sensors, such as a gyroscope and a magnetometer, with in-vehicle sensors to improve machine learning identification of unsafe driver behavior. From those signals, we computed a set of features capable of accurately describing the driver's behavior. A support vector machine (SVM) and an artificial neural network were then trained and tested on features calculated over more than 200 km of travel. The ground truth used to evaluate classification performance was obtained with an objective methodology based on the relationship between speed and the lateral and longitudinal acceleration of the vehicle. The classification results showed an average accuracy of about 88% with the SVM classifier and about 90% with the neural network, demonstrating the potential of the proposed methodology to identify unsafe driver behaviors.
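
The SVM stage of such a pipeline can be sketched with scikit-learn: windowed statistics are computed from the fused signals and fed to a classifier. The feature choices, window size, random placeholder signals, and binary safe/unsafe labels below are illustrative assumptions, not the paper's exact feature set or ground-truth procedure.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def window_features(speed, lat_acc, lon_acc, gyro_z, win=100):
    """Simple mean/std statistics over fixed-size windows of fused signals."""
    feats = []
    for i in range(0, len(speed) - win, win):
        sl = slice(i, i + win)
        feats.append([
            speed[sl].mean(), speed[sl].std(),
            lat_acc[sl].std(), lon_acc[sl].std(),
            np.abs(gyro_z[sl]).max(),
        ])
    return np.array(feats)

# Placeholder signals standing in for real CAN-bus and IMU recordings.
n = 10_000
speed = np.random.rand(n) * 30
lat_acc, lon_acc, gyro_z = (np.random.randn(n) for _ in range(3))

X = window_features(speed, lat_acc, lon_acc, gyro_z)
y = np.random.randint(0, 2, len(X))   # 0 = safe, 1 = unsafe (placeholder labels)
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```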

