Remote Insects Trap Monitoring System Using Deep Learning Framework and IoT

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5280
Author(s):  
Balakrishnan Ramalingam ◽  
Rajesh Elara Mohan ◽  
Sathian Pookkuttath ◽  
Braulio Félix Gómez ◽  
Charan Satya Chandra Sairam Borusu ◽  
...  

Insect detection and control at an early stage are essential to the built environment (human-made physical spaces such as homes, hotels, camps, hospitals, parks, pavements, and food industries) and to agricultural fields. Currently, such insect control measures are manual, tedious, unsafe, and time-consuming labor-dependent tasks. With recent advancements in Artificial Intelligence (AI) and the Internet of Things (IoT), several maintenance tasks can be automated, which significantly improves productivity and safety. This work proposes a real-time remote insect trap monitoring system and insect detection method using IoT and Deep Learning (DL) frameworks. The remote trap monitoring framework is constructed using IoT and the Faster RCNN (Region-based Convolutional Neural Networks) ResNet50 (Residual Neural Network 50) unified object detection framework. The Faster RCNN ResNet50 detector was trained on images of built-environment and farm-field insects and deployed on the IoT framework. The proposed system was tested in real time on a four-layer IoT framework with built-environment insect images captured through sticky trap sheets; farm-field insects were further tested through a separate insect image database. The experimental results proved that the proposed system can automatically identify built-environment and farm-field insects with an average accuracy of 94%.
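The abstract does not include code; as a rough illustration of the post-processing step such a trap monitor needs, the sketch below filters a trained detector's raw outputs by a confidence threshold before a remote alert would be raised. The detection tuples, class names, and the 0.5 threshold are assumptions for illustration, not values from the paper.

```python
# Hypothetical post-processing for a trained Faster RCNN ResNet50 detector:
# keep only detections whose confidence exceeds a threshold before an
# IoT alert is sent. Each detection is a (class_name, score, bbox) tuple.

SCORE_THRESHOLD = 0.5  # assumed cut-off; the paper does not state one

def filter_detections(detections, threshold=SCORE_THRESHOLD):
    """Return detections confident enough to trigger a remote alert."""
    return [d for d in detections if d[1] >= threshold]

raw = [
    ("cockroach", 0.92, (10, 12, 40, 48)),
    ("ant",       0.31, (55, 60, 70, 75)),   # below threshold, dropped
    ("beetle",    0.77, (80, 20, 120, 66)),
]
kept = filter_detections(raw)
```

In a deployed system the surviving detections would be serialised and pushed upward through the IoT layers to the monitoring dashboard.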

Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8381
Author(s):  
Duarte Fernandes ◽  
Tiago Afonso ◽  
Pedro Girão ◽  
Dibet Gonzalez ◽  
António Silva ◽  
...  

Recently released research on deep learning for autonomous driving perception focuses heavily on LiDAR point cloud data as input for the neural networks, highlighting the importance of LiDAR technology in the field of Autonomous Driving (AD). A large share of the vehicle platforms used to create the datasets released for developing these networks, as well as some commercial AD solutions on the market, rely on extensive sensor arrays spanning several sensor modalities. However, the associated costs create a barrier to entry for low-cost solutions to critical perception tasks such as Object Detection and SLAM. This paper surveys current vehicle platforms and proposes a low-cost, LiDAR-based test vehicle platform capable of running critical perception tasks (Object Detection and SLAM) in real time. Additionally, we propose a deep learning-based inference model for Object Detection deployed on a resource-constrained device, as well as a graph-based SLAM implementation. We discuss the considerations imposed by the real-time processing requirement and present results demonstrating the usability of the developed work in the context of the proposed low-cost platform.
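The paper's graph-based SLAM is not specified in the abstract; as a minimal toy of the idea, the sketch below optimises a one-dimensional pose graph (odometry edges plus one slightly inconsistent loop closure) by gradient descent on the sum of squared residuals. The edge values and learning rate are invented for illustration.

```python
# Toy 1D pose graph: nodes are robot positions, edges are relative
# measurements. Minimise the sum of squared residuals, anchoring the
# first pose at 0 to fix the gauge freedom.

edges = [            # (i, j, measured x_j - x_i)
    (0, 1, 1.0),     # odometry
    (1, 2, 1.0),     # odometry
    (0, 2, 2.1),     # loop closure, slightly inconsistent
]

x = [0.0, 0.0, 0.0]  # initial pose estimates
lr = 0.1
for _ in range(500):
    grad = [0.0] * len(x)
    for i, j, z in edges:
        r = (x[j] - x[i]) - z        # residual of this constraint
        grad[j] += 2 * r
        grad[i] -= 2 * r
    for k in range(1, len(x)):       # pose 0 stays fixed
        x[k] -= lr * grad[k]
```

The optimiser spreads the 0.1 loop-closure disagreement across both odometry edges, which is exactly the error-redistribution behaviour a real pose-graph back end (over 2D/3D poses) provides.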


Information ◽  
2019 ◽  
Vol 10 (5) ◽  
pp. 157 ◽  
Author(s):  
Daniel S. Berman

Domain generation algorithms (DGAs) are a class of algorithms used by malware to generate large numbers of new domain names, enabling command-and-control (C2) communication between the malware program and its C2 server while evading detection by cybersecurity measures. Deep learning has proven successful as a mechanism for real-time DGA detection, specifically through recurrent neural networks (RNNs) and convolutional neural networks (CNNs). This paper compares several state-of-the-art deep-learning implementations of DGA detection found in the literature with two novel models: a deeper CNN model and a one-dimensional (1D) Capsule Network (CapsNet) model. The comparison shows that the 1D CapsNet model performs as well as the best-performing model from the literature.
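Character-level models like the RNNs, CNNs, and CapsNet discussed here all consume domains as fixed-length integer sequences. As a sketch of that common preprocessing step (the exact vocabulary and padding scheme are assumptions, not taken from the paper):

```python
import string

# Character vocabulary often used for DGA models: lowercase letters,
# digits, hyphen and dot; index 0 is reserved for padding.
VOCAB = {c: i + 1 for i, c in enumerate(string.ascii_lowercase + string.digits + "-.")}
MAX_LEN = 63  # maximum length of a DNS label

def encode_domain(domain, max_len=MAX_LEN):
    """Map a domain string to a fixed-length sequence of integer ids,
    right-padded with 0, suitable as input to an embedding layer."""
    ids = [VOCAB.get(c, 0) for c in domain.lower()[:max_len]]
    return ids + [0] * (max_len - len(ids))

seq = encode_domain("example.com")
```

The resulting sequence would feed an embedding layer followed by 1D convolutions or recurrent units, depending on the architecture under comparison.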


2021 ◽  
Vol 2 (2) ◽  
Author(s):  
Kate Highnam ◽  
Domenic Puzio ◽  
Song Luo ◽  
Nicholas R. Jennings

Botnets and malware continue to avoid detection by static rule engines when using domain generation algorithms (DGAs) for callouts to unique, dynamically generated web addresses. Common DGA detection techniques fail to reliably detect DGA variants that combine random dictionary words to create domain names that closely mirror legitimate domains. To combat this, we created a novel hybrid neural network, Bilbo the "bagging" model, that analyses domains and scores the likelihood that they are generated by such algorithms and are therefore potentially malicious. Bilbo is the first parallel usage of a convolutional neural network (CNN) and a long short-term memory (LSTM) network for DGA detection. Our architecture is the most consistent in performance in terms of AUC, F1 score, and accuracy when generalising across different dictionary DGA classification tasks, compared to current state-of-the-art deep learning architectures. We validate using reverse-engineered dictionary DGA domains and detail our real-time implementation strategy for scoring real-world network logs within a large enterprise. In four hours of actual network traffic, the model discovered at least five potential command-and-control networks that commercial vendor tools did not flag.
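The abstract does not describe how the parallel CNN and LSTM branch scores are combined; a minimal sketch, assuming a simple soft vote over branch probabilities (the hypothetical scores and 0.5 threshold are illustrative only, not Bilbo's actual aggregation layer):

```python
def soft_vote(scores, threshold=0.5):
    """Average per-branch probabilities (e.g. a CNN head and an LSTM head)
    and flag the domain as DGA-generated if the mean crosses threshold."""
    mean = sum(scores) / len(scores)
    return mean, mean >= threshold

cnn_score, lstm_score = 0.82, 0.64   # hypothetical branch outputs
mean, is_dga = soft_vote([cnn_score, lstm_score])
```

An ensemble of this shape lets the convolutional branch capture local character n-grams while the recurrent branch models longer word-level structure, which is why hybrids suit dictionary DGAs.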


2019 ◽  
Author(s):  
Jimut Bahan Pal

Detecting objects in real time has been a real challenge for computers with low computing power and memory. Since the advent of Convolutional Neural Networks (CNNs), it has become far easier for computers to detect and recognize objects in images. Several technologies and models can detect objects in real time, but most of them require high-end hardware such as GPUs and TPUs. Recently, however, many new algorithms and models have been proposed that run on limited resources. In this paper we study MobileNets to detect objects through a webcam and successfully build a real-time object detection system. We use a model pre-trained on the well-known MS COCO dataset, with Google's open-source TensorFlow as the back end. This real-time object detection system may help to solve various complex vision problems in the future.
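Detectors of this kind emit many overlapping candidate boxes per object, which are pruned by non-maximum suppression before display. A self-contained sketch of that standard step (the boxes and thresholds below are invented for illustration):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
kept = nms(boxes, scores=[0.9, 0.8, 0.7])
```

Here the second box overlaps the first heavily and is suppressed, leaving one box per object, which is what keeps a webcam overlay readable at interactive frame rates.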


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6422
Author(s):  
Grega Morano ◽  
Andrej Hrovat ◽  
Matevž Vučnik ◽  
Janez Puhan ◽  
Gordana Gardašević ◽  
...  

The LOG-a-TEC testbed is a combined outdoor and indoor heterogeneous wireless testbed for experimentation with sensor networks and machine-type communications, included within the Fed4FIRE+ federation. It supports continuous deployment principles; however, it has lacked an option to monitor and control experiments in real time, which is required for experiment execution under comparable conditions. This paper describes the implementation of an experiment control and monitoring system (EC and MS) as an upgrade of the LOG-a-TEC testbed. The EC and MS is implemented as a new service within the existing infrastructure management and build systems, and is accessible as a new tab in the sensor management system portal. It supports several commands, including starting, stopping, and restarting the application, exiting the experiment, and flashing or resetting the target device, and it displays the real-time status of the experiment application. When nodes run Contiki-NG as their operating system, the Contiki-NG shell can be reached through a newly developed tool, giving the user further control over experiment execution. By using the ZeroMQ concurrency framework as a message exchange system, information can be sent asynchronously to one or many devices at the same time, providing a real-time data exchange mechanism. The proposed upgrade does not disrupt any continuous deployment functionality and enables remote control and monitoring of the experiment. To evaluate the EC and MS functionality, two experiments were conducted: the first demonstrated Bluetooth Low Energy (BLE) localization, while the second analysed interference avoidance in the 6TiSCH (IPv6 over the TSCH mode of IEEE 802.15.4e) wireless technology for the Industrial Internet of Things (IIoT).
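The command set listed above can be modelled as small JSON control messages. The sketch below is a transport-agnostic, standard-library illustration of that message format and its handling; the field names and node identifiers are assumptions, and the actual system carries such messages over ZeroMQ (e.g. a publisher broadcasting to one or many nodes).

```python
import json

# EC and MS control verbs, per the command list above.
COMMANDS = {"start", "stop", "restart", "exit", "flash", "reset", "status"}

def make_command(node_id, command):
    """Serialise a control message for one target node."""
    if command not in COMMANDS:
        raise ValueError(f"unknown command: {command}")
    return json.dumps({"node": node_id, "cmd": command})

def handle_command(message, state):
    """Apply a received control message to a node-state table."""
    msg = json.loads(message)
    state[msg["node"]] = msg["cmd"]
    return state

state = handle_command(make_command("node-07", "restart"), {})
```

With a PUB/SUB transport underneath, the same `handle_command` routine could run on every subscribed node, which is what makes asynchronous one-to-many control possible.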


2021 ◽  
Vol 13 (3) ◽  
pp. 809-820
Author(s):  
V. Sowmya ◽  
R. Radha

Vehicle detection and recognition demand advanced computational intelligence and resources in a real-time traffic surveillance system for effective traffic management of all possible contingencies. One focus area of deep intelligent systems is vehicle detection and recognition techniques for robust traffic management of heavy vehicles. Sophisticated mechanisms include the Support Vector Machine (SVM), Convolutional Neural Networks (CNN), Region-based Convolutional Neural Networks (R-CNN), and the You Only Look Once (YOLO) model, among others. Accordingly, it is pivotal to choose a precise algorithm for vehicle detection and recognition that also suits the real-time environment. In this study, deep learning algorithms such as Faster R-CNN, YOLOv2, YOLOv3, and YOLOv4 are compared across diverse aspects of their features. Two classes of heavy transport vehicles, buses and trucks, constitute the detection and recognition targets in this work. Data augmentation and transfer learning are implemented in the model, which is built, trained, tested, and executed for detection and recognition, to avoid over-fitting and to improve speed and accuracy. Extensive empirical evaluation is conducted on two standard datasets, COCO and PASCAL VOC 2007. Finally, comparative results and analyses are presented based on real-time performance.
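One routine detail of the data augmentation mentioned above: when an image is flipped horizontally, its bounding-box annotations must be mirrored too. A minimal sketch of that coordinate transform (the box values and image width are invented for illustration):

```python
def hflip_boxes(boxes, img_width):
    """Mirror (x1, y1, x2, y2) boxes for a horizontally flipped image,
    one of the most common augmentations when fine-tuning detectors."""
    return [(img_width - x2, y1, img_width - x1, y2)
            for (x1, y1, x2, y2) in boxes]

flipped = hflip_boxes([(10, 20, 60, 80)], img_width=100)
# the box becomes (40, 20, 90, 80)
```

Swapping and reflecting the x-coordinates keeps x1 < x2 after the flip, so the augmented labels remain valid training targets.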


2021 ◽  
Author(s):  
Adrian Ciobanu ◽  
Mihaela Luca ◽  
Tudor Barbu ◽  
Vasile Drug ◽  
Andrei Olteanu ◽  
...  

Author(s):  
Vibhavari B Rao

Crime rates today can inevitably put a civilian's life in danger. While consistent efforts are being made to reduce crime, there is also a dire need for a smart and proactive surveillance system. Our project implements a smart surveillance system that alerts the authorities in real time when a crime is being committed. During armed robberies and hostage situations, the police most often cannot reach the scene in time to prevent the crime, owing to the lag in communication between informants at the crime scene and the police. We propose an object detection model that applies deep learning algorithms to detect weapons such as pistols, knives, and rifles in video surveillance footage, and in turn sends real-time alerts to the authorities. A number of object detection algorithms are in development, each evaluated under the mAP performance metric. On implementing Faster R-CNN with a ResNet 101 backbone, we found the mAP score to be about 91%; the downside is the excessive training and inference time it incurs. On the other hand, the YOLOv5 architecture produced a model that performed very well in terms of speed: its training speed was found to be 0.012 s/image, though, naturally, its accuracy was not as high as Faster R-CNN's. On suitable hardware it can run at about 40 fps. There is thus a trade-off between speed and accuracy, and it is important to strike a balance. We use transfer learning to improve accuracy by training the model on our custom dataset. This system can be deployed on any generic CCTV camera by setting up a live RTSP (Real-Time Streaming Protocol) stream and running the deep learning model against the footage on a laptop or desktop.
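Since both models above are compared by mAP, a sketch of how average precision for one class can be computed from a score-ranked detection list may help. This simplified version assumes every ground-truth object appears exactly once in the list (real mAP evaluation also handles unmatched ground truths and IoU matching); the example data is invented.

```python
def average_precision(is_tp):
    """AP for one class from a score-ranked list of detections, where
    is_tp[i] says whether detection i matched a ground-truth object.
    Averages the precision measured at each true-positive hit."""
    total_gt = sum(is_tp)   # simplification: all ground truths detected
    if total_gt == 0:
        return 0.0
    tp = 0
    precisions = []
    for rank, hit in enumerate(is_tp, start=1):
        if hit:
            tp += 1
            precisions.append(tp / rank)   # precision at this recall level
    return sum(precisions) / total_gt

ap = average_precision([True, True, False, True])
```

mAP is then the mean of these per-class AP values, which is the single number used above to compare Faster R-CNN against YOLOv5.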


2019 ◽  
Author(s):  
Maria Galkin ◽  
Kashmala Rehman ◽  
Benjamin Schornstein ◽  
Warren Sunada-Wong ◽  
Harvey Wang
