Measurements for non-intrusive load monitoring through machine learning approaches

ACTA IMEKO, 2021, Vol 10 (4), pp. 90
Author(s):  
Giovanni Bucci ◽  
Fabrizio Ciancetta ◽  
Edoardo Fiorucci ◽  
Simone Mari ◽  
Andrea Fioravanti

Non-intrusive load monitoring (NILM) has seen a significant increase in research interest over the past decade, which has led to a corresponding improvement in the performance of these systems. NILM systems are now used in numerous applications, in particular by energy companies that offer users advanced management of their consumption. These systems are mainly based on artificial intelligence algorithms that disaggregate energy consumption by processing the absorbed power signal over relatively long time intervals (generally from fractions of an hour up to 24 h). Less attention has been paid to solutions that monitor the load non-intrusively in (almost) real time, i.e., systems that can detect load variations within extremely short times (seconds or fractions of a second). This paper proposes possible approaches for non-intrusive load monitoring systems operating in real time and analyses them from a measurement perspective. The measurement and post-processing techniques used are illustrated and the results discussed. In addition, the work discusses the use of the obtained results to train machine learning algorithms that convert the measurement results into information useful to the user.
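The abstract does not detail its measurement pipeline, so the following is only a minimal sketch of a generic real-time, event-based NILM flow: short windows of the absorbed-power signal are converted into statistical features and classified. The sampling rate, window length, feature set and the synthetic signal are all assumptions introduced for illustration.

```python
# Illustrative sketch only: a generic sliding-window NILM classifier with
# hypothetical parameters and a synthetic aggregate-power signal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FS = 50            # assumed sampling rate of the power signal [samples/s]
WINDOW = 25        # assumed analysis window of 0.5 s, i.e. "real-time" scale

def window_features(p):
    """Simple statistical features of one power window (assumed feature set)."""
    return [p.mean(), p.std(), p.max() - p.min(), np.abs(np.diff(p)).mean()]

def extract(signal):
    return np.array([window_features(signal[i:i + WINDOW])
                     for i in range(0, len(signal) - WINDOW, WINDOW)])

# Synthetic example: a 100 W appliance switching on halfway through the record.
t = np.arange(0, 20 * FS)
power = 200 + 5 * np.random.randn(t.size)
power[t.size // 2:] += 100

X = extract(power)
y = (X[:, 0] > 250).astype(int)   # label windows where the appliance is on

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(extract(power)[-3:]))   # classify the most recent windows
```

In a deployed system the same windowed features would be computed on the live measurement stream, so a new load event can be reported within one window length rather than after hours of accumulated data.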

Author(s):  
E. B. Priyanka ◽  
S. Thangavel ◽  
D. Venkatesa Prabu

Big data and analytics may be new to some industries, but the oil and gas industry has long dealt with large quantities of data to make technical decisions. Oil producers can capture more detailed data in real time, at lower cost and from previously inaccessible areas, to improve oilfield and plant performance. Stream computing is a new way of analyzing high-frequency data for real-time complex-event processing and for scoring data against a physics-based or empirical model for predictive analytics, without having to store the data. Hadoop Map/Reduce and other NoSQL approaches are a new way of analyzing massive volumes of data used to support reservoir, production, and facilities engineering. Hence, this chapter describes the routing organization of an IoT architecture with smart applications that aggregate real-time oil pipeline sensor data as big data and subject it to machine learning algorithms on the Hadoop platform.
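The chapter itself targets Hadoop and stream platforms; as a language-agnostic illustration of the stream-computing idea (score each reading against an empirical model without storing the raw stream), here is a small sketch. The rolling-window baseline, the z-score threshold, and the pressure values are hypothetical stand-ins.

```python
# Minimal sketch of stream-style scoring against a simple empirical baseline;
# window size and threshold are assumptions, not values from the chapter.
from collections import deque

class StreamScorer:
    """Score each incoming pipeline reading against a rolling empirical baseline."""
    def __init__(self, window=100, z_threshold=4.0):
        self.buf = deque(maxlen=window)   # only a bounded window is kept, not the stream
        self.z_threshold = z_threshold

    def score(self, value):
        if len(self.buf) < 10:            # warm-up before scoring
            self.buf.append(value)
            return None
        mean = sum(self.buf) / len(self.buf)
        var = sum((x - mean) ** 2 for x in self.buf) / len(self.buf)
        z = abs(value - mean) / (var ** 0.5 + 1e-9)
        self.buf.append(value)
        return z > self.z_threshold       # True flags a complex event for downstream handling

scorer = StreamScorer()
for reading in [10.1, 10.0, 10.2, 10.1, 10.0, 9.9, 10.1, 10.2, 10.0, 10.1, 15.7]:
    print(scorer.score(reading))
```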


2020, Vol 25 (40), pp. 4296-4302
Author(s):  
Yuan Zhang ◽  
Zhenyan Han ◽  
Qian Gao ◽  
Xiaoyi Bai ◽  
Chi Zhang ◽  
...  

Background: β thalassemia is a common monogenic genetic disease that is very harmful to human health. The disease arises due to deletion of or defects in the β-globin gene, which reduces synthesis of the β-globin chain and results in a relative excess of α-chains. The excess α-chains form inclusion bodies that deposit on the cell membrane, decreasing the deformability of red blood cells and leading to a group of hereditary haemolytic diseases caused by massive destruction of red blood cells in the spleen. Methods: In this work, machine learning algorithms were employed to build a prediction model for inhibitors against K562 cells based on 117 inhibitors and 190 non-inhibitors. Results: The overall accuracy (ACC) of a 10-fold cross-validation test and an independent set test using Adaboost were 83.1% and 78.0%, respectively, surpassing Bayes Net, Random Forest, Random Tree, C4.5, SVM, KNN and Bagging. Conclusion: This study indicated that Adaboost could be applied to build a learning model for the prediction of inhibitors against K562 cells.
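The evaluation protocol named in the abstract (AdaBoost, 10-fold cross-validation plus an independent test set) can be sketched as follows. Only the class counts (117 inhibitors, 190 non-inhibitors) come from the abstract; the molecular descriptors below are random placeholders, not the study's actual features.

```python
# Sketch of the AdaBoost + 10-fold CV + independent test protocol with placeholder data.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(307, 50))          # 117 + 190 compounds, 50 dummy descriptors
y = np.array([1] * 117 + [0] * 190)     # 1 = inhibitor, 0 = non-inhibitor

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
cv_acc = cross_val_score(clf, X_train, y_train, cv=10, scoring="accuracy").mean()
test_acc = clf.fit(X_train, y_train).score(X_test, y_test)
print(f"10-fold CV accuracy: {cv_acc:.3f}, independent test accuracy: {test_acc:.3f}")
```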


Author(s):  
Magdalena Kukla-Bartoszek ◽  
Paweł Teisseyre ◽  
Ewelina Pośpiech ◽  
Joanna Karłowska-Pik ◽  
Piotr Zieliński ◽  
...  

Increasing understanding of human genome variability allows for better use of the predictive potential of DNA. An obvious direct application is the prediction of physical phenotypes. Significant success has been achieved, especially in predicting pigmentation characteristics, but the inference of some phenotypes is still challenging. In search of further improvements in predicting human eye colour, we conducted whole-exome (enriched in regulome) sequencing of 150 Polish samples to discover new markers. For this, we adopted quantitative characterization of eye colour phenotypes using high-resolution photographic images of the iris in combination with DIAT software analysis. An independent set of 849 samples was used for subsequent predictive modelling. Newly identified candidates, 114 additional literature-based SNPs previously associated with pigmentation, and advanced machine learning algorithms were used. Whole-exome sequencing analysis found 27 previously unreported candidate SNP markers for eye colour. The highest overall prediction accuracies were achieved with LASSO-regularized and BIC-based selected regression models. A new candidate variant, rs2253104, located in the ARFIP2 gene and identified with the HyperLasso method, revealed predictive potential and was included in the best-performing regression models. Advanced machine learning approaches showed a significant increase in sensitivity of intermediate eye colour prediction (up to 39%) compared to 0% obtained for the original IrisPlex model. We identified a new potential predictor of eye colour and evaluated several widely used advanced machine learning algorithms in predictive analysis of this trait. Our results provide useful hints for developing future predictive models for eye colour in forensic and anthropological studies.
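As a hedged illustration of the LASSO-regularized modelling mentioned in the abstract, the sketch below fits an L1-penalized multinomial logistic regression on SNP genotypes coded as allele counts. The genotype matrix, phenotype labels, SNP count and penalty strength are all placeholders; only the sample size (849) and the idea of L1-driven marker selection come from the abstract.

```python
# Sketch of LASSO-style (L1) selection for eye-colour classes on placeholder genotypes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_samples, n_snps = 849, 120
X = rng.integers(0, 3, size=(n_samples, n_snps)).astype(float)    # allele counts per SNP
y = rng.choice(["blue", "intermediate", "brown"], size=n_samples)  # phenotype labels

# The L1 penalty shrinks uninformative SNP coefficients to exactly zero.
model = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

model.fit(X, y)
selected = (np.abs(model.coef_) > 0).any(axis=0)
print("SNPs retained by the L1 penalty:", selected.sum())
```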


2019, Vol 9 (6), pp. 1154
Author(s):  
Ganjar Alfian ◽  
Muhammad Syafrudin ◽  
Bohan Yoon ◽  
Jongtae Rhee

Radio frequency identification (RFID) is an automated identification technology that can be utilized to monitor product movements within a supply chain in real time. However, one problem that occurs during RFID data capture is false positives (i.e., tags that are accidentally detected by the reader but are not of interest to the business process). This paper investigates the use of machine learning algorithms to filter false positives. Raw RFID data were collected from various tagged product movements, and statistical features were extracted from the received signal strength derived from the raw RFID data. Abnormal RFID data or outliers may arise in real cases; therefore, we utilized outlier detection models to remove such data. The experimental results showed that the machine learning-based models classified RFID readings with high accuracy, and that integrating outlier detection with the machine learning models improved classification accuracy. We demonstrated that the proposed classification model could be applied to real-time monitoring, ensuring that false positives are filtered out and hence not stored in the database. The proposed model is expected to improve warehouse management systems by monitoring products delivered to other supply chain partners.
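A minimal sketch of this kind of pipeline is shown below: statistical features are computed from each tag's received-signal-strength (RSSI) readings, outlying feature vectors are removed, and a classifier separates products of interest from false positives. The feature set, the synthetic RSSI distributions and the choice of IsolationForest and RandomForest are assumptions for illustration, not the paper's exact models.

```python
# Illustrative RFID false-positive filtering pipeline on synthetic RSSI data.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

def rssi_features(rssi):
    """Statistical features of one tag's received signal strength readings."""
    return [np.mean(rssi), np.std(rssi), np.min(rssi), np.max(rssi), len(rssi)]

rng = np.random.default_rng(2)
# Synthetic data: tags of interest are read strongly and often, stray tags weakly and rarely.
true_reads  = [rssi_features(rng.normal(-55, 2, size=rng.integers(20, 40))) for _ in range(200)]
false_reads = [rssi_features(rng.normal(-70, 4, size=rng.integers(2, 8)))   for _ in range(200)]
X = np.array(true_reads + false_reads)
y = np.array([1] * 200 + [0] * 200)          # 1 = product of interest, 0 = false positive

# Remove outlying feature vectors before training, mirroring the paper's workflow.
mask = IsolationForest(random_state=0).fit_predict(X) == 1
clf = RandomForestClassifier(random_state=0).fit(X[mask], y[mask])
print("training accuracy:", clf.score(X[mask], y[mask]))
```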


2021, Vol 35 (1), pp. 11-21
Author(s):  
Himani Tyagi ◽  
Rajendra Kumar

IoT is characterized by communication between things (devices) that constantly share data, analyze it, and make decisions while connected to the internet. This interconnected architecture attracts cyber criminals who seek to expose the IoT system to failure. It therefore becomes imperative to develop a system that can accurately and automatically detect anomalies and attacks occurring in IoT networks. In this paper, an Intrusion Detection System (IDS) based on a novel feature set extracted from the BoT-IoT dataset is developed that can swiftly, accurately, and automatically differentiate benign from malicious traffic. Instead of using available feature reduction techniques such as PCA, which can change the core meaning of variables, a unique feature set consisting of only seven lightweight features is developed that is also IoT specific and independent of the attack traffic. The results of the study demonstrate the effectiveness of these seven features in detecting four broad categories of attacks, namely DDoS, DoS, Reconnaissance, and Information Theft. Furthermore, the study also shows the applicability and efficiency of supervised machine learning algorithms (KNN, LR, SVM, MLP, DT, RF) in IoT security. The performance of the proposed system is validated using performance metrics such as accuracy, precision, recall, F-score and ROC. Although the accuracies of the Decision Tree (99.9%) and Random Forest (99.9%) classifiers are the same, other metrics such as training and testing time show that Random Forest is comparatively better.
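The comparison described in the abstract (six supervised classifiers evaluated on a seven-feature representation, with accuracy, precision, recall, F-score and timing) can be sketched as follows. The feature matrix and labels are random placeholders, not the actual BoT-IoT-derived features.

```python
# Sketch of the multi-classifier comparison on a seven-feature placeholder dataset.
import time
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 7))        # seven lightweight features (placeholder)
y = rng.integers(0, 2, size=2000)     # 0 = benign, 1 = malicious (placeholder)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "KNN": KNeighborsClassifier(), "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(), "MLP": MLPClassifier(max_iter=500),
    "DT": DecisionTreeClassifier(), "RF": RandomForestClassifier(),
}
for name, model in models.items():
    t0 = time.time()
    pred = model.fit(X_tr, y_tr).predict(X_te)
    p, r, f, _ = precision_recall_fscore_support(y_te, pred, average="binary")
    print(f"{name}: acc={accuracy_score(y_te, pred):.3f} P={p:.3f} R={r:.3f} "
          f"F={f:.3f} time={time.time() - t0:.2f}s")
```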

