Edge-Based Detection of Varroosis in Beehives with IoT Devices with Embedded and TPU-Accelerated Machine Learning

2021, Vol 11 (22), pp. 11078
Author(s): Dariusz Mrozek, Rafał Górny, Anna Wachowicz, Bożena Małysiak-Mrozek

One of the causes of mortality in bees is varroosis, a bee disease caused by the Varroa destructor mite. Varroa destructor mites may appear suddenly in beehives, spread across them, and weaken bee colonies, which finally die. Edge IoT (Internet of Things) devices capable of processing video streams in real time, such as the one we propose, allow beehives to be monitored for the presence of Varroa destructor. Additionally, centralizing the monitoring of entire apiaries in a Cloud data center helps prevent the spread of this disease and reduces bee mortality. Although there are various IoT and non-IoT systems for bee-related issues, comprehensive and technically advanced solutions for beekeeping and Varroa detection barely exist, or they perform mite detection only after sending the data to the data center; the latter increases communication and storage needs, which we aim to limit in our approach. In this paper, we present an innovative Edge-based IoT solution for Varroa destructor detection. The solution relies on Tensor Processing Unit (TPU) acceleration of machine learning models pre-trained in a hybrid Cloud environment for bee identification and Varroa destructor infection detection. Our experiments, which investigated the effectiveness and time performance of both steps together with the impact of image resolution on the quality of the detection and classification processes, show that the presence of varroosis in beehives can be detected effectively in real time with Edge artificial intelligence applied to the analysis of video streams.
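The abstract above does not include code; as a rough, hedged sketch of how TPU-accelerated detection on video frames could be wired up on an Edge device, the Python fragment below runs a quantized TFLite detection model through the tflite_runtime Edge TPU delegate. The model file, label mapping, camera index, and output-tensor ordering are assumptions, not details taken from the paper.

```python
# Sketch: TPU-accelerated detection of bees / Varroa mites on an Edge device.
# Assumes a Coral Edge TPU, a quantized TFLite detection model, and OpenCV.
import cv2
import numpy as np
import tflite_runtime.interpreter as tflite

MODEL_PATH = "varroa_detector_edgetpu.tflite"   # hypothetical model file
LABELS = {0: "bee", 1: "bee_with_varroa"}       # hypothetical class mapping

interpreter = tflite.Interpreter(
    model_path=MODEL_PATH,
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()
_, height, width, _ = input_details["shape"]

cap = cv2.VideoCapture(0)                        # hive-entrance camera (assumed index)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    resized = cv2.resize(frame, (width, height))
    interpreter.set_tensor(input_details["index"],
                           np.expand_dims(resized, axis=0).astype(np.uint8))
    interpreter.invoke()
    # Typical post-processed SSD outputs: boxes, classes, scores, count.
    # The exact tensor ordering depends on the exported model.
    classes = interpreter.get_tensor(output_details[1]["index"])[0]
    scores = interpreter.get_tensor(output_details[2]["index"])[0]
    for cls, score in zip(classes, scores):
        if score > 0.5 and LABELS.get(int(cls)) == "bee_with_varroa":
            print("Possible Varroa infestation detected, score=%.2f" % score)
cap.release()
```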

Sensors, 2021, Vol 21 (19), pp. 6349
Author(s): Jawad Ahmad, Johan Sidén, Henrik Andersson

This paper presents a posture recognition system aimed at detecting the sitting postures of a wheelchair user. The main goals of the proposed system are to identify irregular and improper postures and to inform the user, in order to prevent sitting-related health issues such as pressure ulcers; the system could potentially also be used by individuals without mobility issues. In the proposed monitoring system, an array of 16 screen-printed pressure sensor units is employed to obtain pressure data, which are sampled and processed in real time using read-out electronics. Posture recognition was performed for four sitting positions: right-, left-, forward-, and backward-leaning, using k-nearest neighbors (k-NN), support vector machine (SVM), random forest (RF), decision tree (DT), and LightGBM machine learning algorithms. As a result, a posture classification accuracy of up to 99.03 percent can be achieved. Experimental studies illustrate that the system can provide real-time pressure distribution values in the form of a pressure map on a standard PC and also on a Raspberry Pi equipped with a touchscreen monitor. The stored pressure distribution data can later be shared with healthcare professionals so that abnormalities in sitting patterns can be identified by a post-processing unit. The proposed system could be used for risk assessments related to pressure ulcers, may serve as a benchmark by recording and identifying individuals' sitting patterns, and has the potential to be realized as a lightweight portable health monitoring device.
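As an illustration of the classification step described above, the following sketch trains one of the named algorithms (a random forest from scikit-learn) on 16-element pressure vectors; the CSV file name, column layout, and split ratio are assumptions rather than the authors' exact pipeline.

```python
# Sketch: classifying 16-sensor pressure frames into four sitting postures.
# Assumes a CSV with columns s0..s15 (sensor readings) and a 'posture' label.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

data = pd.read_csv("pressure_frames.csv")        # hypothetical data file
X = data[[f"s{i}" for i in range(16)]].values    # 4x4 sensor array, flattened
y = data["posture"].values                       # right/left/forward/backward leaning

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```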


Author(s): Nicholas Westing, Brett Borghetti, Kevin Gross

The increasing spatial and spectral resolution of hyperspectral imagers yields detailed spectroscopy measurements from both space-based and airborne platforms. Machine learning algorithms have achieved state-of-the-art material classification performance on benchmark hyperspectral data sets; however, these techniques often do not consider varying atmospheric conditions experienced in a real-world detection scenario. To reduce the impact of atmospheric effects in the at-sensor signal, atmospheric compensation must be performed. Radiative Transfer (RT) modeling can generate high-fidelity atmospheric estimates at detailed spectral resolutions, but is often too time-consuming for real-time detection scenarios. This research utilizes machine learning methods to perform dimension reduction on the transmittance, upwelling radiance, and downwelling radiance (TUD) data to create high-accuracy atmospheric estimates with lower computational cost than RT modeling. The utility of this approach is investigated using the instrument line shape for the Mako long-wave infrared hyperspectral sensor. This study employs physics-based metrics and loss functions to identify promising dimension reduction techniques. As a result, TUD vectors can be produced in real time, allowing for atmospheric compensation across diverse remote sensing scenarios.
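The paper's learned dimension-reduction models and physics-based losses are not reproduced here; purely as a minimal stand-in for the general idea, the sketch below compresses placeholder TUD vectors with PCA and reports the reconstruction error. The spectral grid, sample count, and component count are assumptions.

```python
# Sketch: compressing transmittance/upwelling/downwelling (TUD) vectors with PCA
# and checking reconstruction error, as a stand-in for the paper's learned
# dimension-reduction models. Data here are random placeholders.
import numpy as np
from sklearn.decomposition import PCA

n_samples, n_channels = 5000, 128                # hypothetical spectral grid
rng = np.random.default_rng(0)
tud = rng.random((n_samples, 3 * n_channels))    # [T | Lu | Ld] concatenated

pca = PCA(n_components=10)                       # low-dimensional atmospheric state
codes = pca.fit_transform(tud)
reconstructed = pca.inverse_transform(codes)

rmse = np.sqrt(np.mean((tud - reconstructed) ** 2))
print(f"10-component reconstruction RMSE: {rmse:.4f}")
```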


Author(s): Ming-Chuan Chiu, Chien-De Tsai, Tung-Lung Li

A cyber-physical system (CPS) is one of the key technologies of Industry 4.0. It is an integrated system that merges computing, sensors, and actuators, controlled by computer-based algorithms that integrate people and cyberspace. However, CPS performance is limited by its computational complexity. Finding a way to implement a CPS with reduced complexity while incorporating more efficient diagnostics, forecasting, and equipment health management in real time remains a challenge. Therefore, this study proposes an integrative machine-learning method to reduce the computational complexity and to improve applicability as a virtual subsystem in the CPS environment. The study utilizes random forest (RF) and a time-series deep-learning model based on the long short-term memory (LSTM) network to achieve real-time monitoring and to enable faster corrective adjustment of machines. We propose a method in which a fault detection alarm is triggered well before a machine fails, enabling shop-floor engineers to adjust its parameters or perform maintenance to mitigate the impact of its shutdown. As demonstrated in two empirical studies, the proposed method outperforms other time-series techniques. Accuracy reaches 80% or higher 3 h prior to the actual shutdown in the first case, and a significant improvement in product life (281%) during a particular process appears in the second case. The proposed method can be applied to other complex systems to boost the efficiency of machine utilization and productivity.
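As a hedged illustration of the LSTM part of the approach, the sketch below defines a small Keras LSTM that maps a sliding window of sensor readings to a failure probability and raises an alarm above a threshold; the window length, feature count, labels, and threshold are assumptions, and the RF stage is omitted.

```python
# Sketch: an LSTM that flags a likely machine failure from a sliding window of
# sensor readings, in the spirit of the RF + LSTM approach described above.
import numpy as np
import tensorflow as tf

WINDOW, N_FEATURES = 60, 8                  # e.g., 60 time steps of 8 sensors (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(failure within horizon)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder training data: real use would slide windows over machine logs,
# labelled 1 if a shutdown occurred within the next few hours.
X = np.random.rand(1000, WINDOW, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(1000, 1))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

if model.predict(X[:1], verbose=0)[0, 0] > 0.5:   # hypothetical alarm threshold
    print("Fault alarm: schedule maintenance or adjust machine parameters")
```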


2019, Vol 11 (14), pp. 3822
Author(s): Fahad Alrukaibi, Rushdi Alsaleh, Tarek Sayed

The objective of this study is to estimate real-time travel times on urban networks that are partially covered by moving sensors. The study proposes two machine learning approaches, the random forest (RF) model and the multi-layer feed-forward neural network (MFFN), to estimate travel times on such partially covered networks. An MFFN with three hidden layers was developed and trained using the back-propagation learning algorithm, and the neural weights were optimized using the Levenberg–Marquardt optimization technique. A case study of an urban network with 100 links is considered. The performance of the proposed models was compared with that of a statistical model, which uses the empirical Bayes (EB) method and the spatial correlation between travel times. The models' performance was evaluated using data generated from a VISSIM microsimulation model. Results show that the machine learning algorithms, i.e., RF and MFFN, achieve average improvements of about 4.1% and 2.9%, respectively, compared with the statistical approach. The RF, MFFN, and statistical models correctly predict real-time travel times with estimation accuracies reaching 90.7%, 89.5%, and 86.6%, respectively. Moreover, results show that at low moving-sensor penetration rates, the RF and MFFN achieve higher estimation accuracy than the statistical approach. At a probe penetration rate of 1%, the RF, MFFN, and statistical models correctly predict real-time travel times with estimation accuracies of 85.6%, 84.4%, and 80.9%, respectively. Furthermore, the study investigated the impact of the probe penetration rate on real-time neighbor-link coverage. Results show that at probe penetration rates of 1%, 3%, and 5%, the models cover the estimation of real-time travel times on 73.8%, 94.8%, and 97.2% of the estimation intervals, respectively.
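As a rough sketch of the two learning approaches, the fragment below fits a random forest and a three-hidden-layer feed-forward network to synthetic link travel times; scikit-learn's MLPRegressor does not offer Levenberg–Marquardt optimization, so its default Adam solver is used as a stand-in, and the feature layout and accuracy metric are assumptions.

```python
# Sketch: estimating link travel times with an RF model and a three-hidden-layer
# feed-forward network, echoing the paper's setup on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(1)
X = rng.random((5000, 12))                 # e.g., travel times on probe-covered neighbour links
y = 30 + 60 * X.mean(axis=1) + rng.normal(0, 2, 5000)   # synthetic target travel times (s)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

rf = RandomForestRegressor(n_estimators=300, random_state=1).fit(X_tr, y_tr)
mlp = MLPRegressor(hidden_layer_sizes=(64, 32, 16),   # three hidden layers
                   max_iter=2000, random_state=1).fit(X_tr, y_tr)

for name, m in [("RF", rf), ("MFFN", mlp)]:
    acc = 100 * (1 - mean_absolute_percentage_error(y_te, m.predict(X_te)))
    print(f"{name} estimation accuracy: {acc:.1f}%")
```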


Author(s): Hamid Reza Faragardi, Saeid Dehnavi, Thomas Nolte, Mehdi Kargahi, Thomas Fahringer

Designs, 2020, Vol 4 (2), pp. 9
Author(s): Michael M. Gichane, Jean B. Byiringiro, Andrew K. Chesang, Peterson M. Nyaga, Rogers K. Langat, ...

As Digital Twins gain more traction and their adoption in industry increases, there is a need to integrate this technology with machine learning features to enhance functionality and enable decision-making tasks. This has led to the emergence of a concept known as the Digital Triplet: an enhancement of Digital Twin technology through the addition of an 'intelligent activity layer'. This is a relatively new technology in Industrie 4.0, and research efforts are geared towards exploring its applicability and towards developing and testing means for its implementation and quick adoption. This paper presents the design and implementation of a Digital Triplet for a three-floor elevator system. It demonstrates the integration of a machine learning (ML) object detection model with the system's Digital Twin. This was done to introduce an additional security feature that enables the system to make a decision based on the objects detected and take preliminary security measures. The virtual model was designed in Siemens NX and programmed via the Totally Integrated Automation (TIA) Portal software. The corresponding physical model was fabricated and controlled using a Programmable Logic Controller (PLC), an S7-1200. A control program was developed to mimic the general operations of a typical elevator system used in a commercial building setting. Communication between the physical and virtual models was enabled using the OPC Unified Architecture (OPC UA) protocol. Object recognition based on the "You Only Look Once" (YOLOv3) machine learning algorithm was incorporated. The Digital Triplet's functionality was tested, ensuring that the virtual system duplicated the actual operations of its physical counterpart through the use of sensor data. Performance testing was done to determine the impact of the ML module on the real-time functionality of the system. Experimental results showed that object recognition contributed an average of 1.083 s to an overall signal travel time of 1.338 s.
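As a hedged sketch of how the intelligent layer could couple object detection with the PLC over OPC UA, the fragment below reads a node value and writes a door command using the python-opcua client; the endpoint URL, node identifiers, and the stubbed detector are assumptions, whereas the actual system uses a YOLOv3 model for the detection step.

```python
# Sketch: reading elevator state from the PLC over OPC UA and gating operation
# on an object-detection result. Endpoint, node ids, and detector are assumed.
from opcua import Client                     # python-opcua package

def detect_objects(frame_path):
    """Placeholder for the YOLOv3 detector; returns a list of class labels."""
    return ["person"]                        # hypothetical result

client = Client("opc.tcp://192.168.0.10:4840")   # hypothetical PLC endpoint
client.connect()
try:
    floor_node = client.get_node("ns=3;s=CurrentFloor")     # hypothetical node id
    door_cmd_node = client.get_node("ns=3;s=DoorCommand")   # hypothetical node id

    detections = detect_objects("cabin_camera.jpg")
    if "restricted_item" in detections:
        door_cmd_node.set_value(False)       # hold doors closed as a security measure
    print("current floor:", floor_node.get_value())
finally:
    client.disconnect()
```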


Author(s): Athanasios Theofilatos, Cong Chen, Constantinos Antoniou

Although there are numerous studies examining the impact of real-time traffic and weather parameters on crash occurrence on freeways, to the best of the authors' knowledge there are no studies that have compared the prediction performance of machine learning (ML) and deep learning (DL) models. The present study adds to current knowledge by comparing and validating ML and DL methods for predicting real-time crash occurrence. To achieve this, real-time traffic and weather data from the Attica Tollway in Greece were linked with historical crash data. The total data set was split into training/estimation (75%) and validation (25%) subsets, which were then standardized. First, the ML and DL prediction models were trained/estimated using the training data set. Afterwards, the models were compared on the basis of their performance metrics (accuracy, sensitivity, specificity, and area under the curve, or AUC) on the validation set. The models considered were k-nearest neighbor, Naïve Bayes, decision tree, random forest, support vector machine, shallow neural network, and, lastly, deep neural network. Overall, the DL model seems to be the most appropriate, because it outperformed all other candidate models. More specifically, the DL model achieved a balanced performance across all metrics compared with the other models (total accuracy = 68.95%, sensitivity = 0.521, specificity = 0.77, AUC = 0.641). It is surprising, though, that the Naïve Bayes model achieved good performance despite being far less complex than the other models. The study findings are particularly useful because they provide a first insight into the performance of ML and DL models.
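To make the comparison procedure concrete, the sketch below scores a few of the listed classifiers on accuracy, sensitivity, specificity, and AUC using synthetic data; the feature matrix, labels, and 75/25 split mirror the described setup only loosely and are not the study's data.

```python
# Sketch: comparing several classifiers on accuracy, sensitivity, specificity,
# and AUC for a binary crash/no-crash label. Inputs here are synthetic; real
# features would be traffic and weather measurements linked to crash records.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score, accuracy_score

rng = np.random.default_rng(7)
X = rng.random((2000, 10))                                    # traffic/weather features
y = (X[:, 0] + rng.normal(0, 0.3, 2000) > 0.7).astype(int)    # synthetic crash flag

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

for name, clf in [("k-NN", KNeighborsClassifier()),
                  ("Naive Bayes", GaussianNB()),
                  ("Random forest", RandomForestClassifier(random_state=7))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} "
          f"sens={tp / (tp + fn):.3f} spec={tn / (tn + fp):.3f} auc={auc:.3f}")
```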

