A Review of Big Data Digital Forensic Analysis in Advanced Metering Infrastructure

2018 ◽  
Vol 24 (3) ◽  
pp. 1603-1607
Author(s):  
Zul-Azri Ibrahim ◽  
Fiza Abdul Rahim ◽  
Roslan Ismail ◽  
Asmidar Abu Bakar

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5650
Author(s):  
Jenniffer S. Guerrero-Prado ◽  
Wilfredo Alfonso-Morales ◽  
Eduardo F. Caicedo-Bravo

Advanced Metering Infrastructure (AMI) data are a real-time source of information not only about electricity consumption but also about other social, demographic, and economic dynamics within a city. This paper presents a Data Analytics/Big Data framework applied to AMI data as a tool to leverage the potential of these data for Smart City applications. The framework includes three fundamental aspects. First, the architectural view places AMI within the Smart Grid Architecture Model (SGAM). Second, the methodological view describes the transformation of raw data into knowledge, represented by the DIKW hierarchy and the NIST Big Data interoperability model. Finally, a binding element between the two views is represented by human expertise and skills, which provide a deeper understanding of the results and transform knowledge into wisdom. Our view addresses emerging challenges in energy markets by adding a binding element that supports optimal and efficient decision-making. To show how the framework works, we developed a case study that implements each component of the framework for a load-forecasting application in a Colombian Retail Electricity Provider (REP). The MAPE for some of the REP’s markets was below 5%. In addition, the case shows the effect of the binding element, as it raises new development alternatives and becomes a feedback mechanism for more assertive decision-making.
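
Since the case study's forecasting accuracy is reported as MAPE, a minimal sketch of how that metric is typically computed for a load forecast follows; the function name and the illustrative load values are assumptions, not the paper's implementation or data.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, the accuracy metric reported in the abstract.

    Both arguments are array-like load series; zero actuals are excluded,
    since they make MAPE undefined.
    """
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    mask = actual != 0  # avoid division by zero
    return 100.0 * np.mean(np.abs((actual[mask] - forecast[mask]) / actual[mask]))

# Illustrative use with made-up load values (MWh); not data from the case study.
observed  = [120.0, 135.5, 128.2, 150.1]
predicted = [118.3, 140.0, 125.0, 152.7]
print(f"MAPE = {mape(observed, predicted):.2f}%")  # a value below 5% would match the reported result
```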


Smart Cities ◽  
2021 ◽  
Vol 4 (1) ◽  
pp. 195-203
Author(s):  
Eric Garrison ◽  
Joshua New

While urban-scale building energy modeling is becoming increasingly common, it currently lacks standards, guidelines, and empirical validation against measured data. The empirical validation needed to establish best practices is becoming increasingly tractable: the growing prevalence of advanced metering infrastructure provides significant data on the energy consumption of individual buildings, yet utilities and countries are still struggling to analyze and use these data wisely. In partnership with the Electric Power Board of Chattanooga, Tennessee, a crude OpenStudio/EnergyPlus model of over 178,000 buildings was created and used to compare simulated energy against actual 15-min whole-building electrical consumption of each building. In this study, classifying building type is treated as a use case for quantifying performance with smart meter data. This article attempts to provide guidance for working with advanced metering infrastructure for buildings, covering quality control, pathological data classifications, statistical performance metrics, a methodology for classifying building types, and accuracy assessment. Advanced metering infrastructure was used to collect whole-building electricity consumption for 178,333 buildings; equations were defined for common data issues (missing values, zeros, and spiking); a new method for assigning building type was proposed; and gaps between real buildings and existing prototypes were empirically validated using industry-standard accuracy metrics.
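
The article defines equations for common data issues (missing values, zeros, and spiking); the sketch below shows one plausible way to flag those three pathologies in a 15-min meter series. The spike threshold and the example series are illustrative assumptions, not the definitions or data used in the study.

```python
import numpy as np
import pandas as pd

def classify_meter_series(kwh: pd.Series, spike_factor: float = 10.0) -> dict:
    """Flag common AMI data pathologies in a 15-min whole-building series.

    `kwh` is indexed by timestamp; `spike_factor` is an illustrative threshold
    (a reading more than spike_factor times the series median is treated as a
    spike), not the thresholds defined in the article.
    """
    n = len(kwh)
    missing = kwh.isna().sum()                     # missing intervals
    zeros = (kwh == 0).sum()                       # exact-zero readings
    median = kwh.median(skipna=True)
    spikes = (kwh > spike_factor * median).sum() if median and median > 0 else 0
    return {
        "fraction_missing": missing / n,
        "fraction_zero": zeros / n,
        "fraction_spiking": spikes / n,
    }

# Illustrative 15-min series with injected issues (not EPB data).
idx = pd.date_range("2021-01-01", periods=96, freq="15min")
values = np.full(96, 2.5)
values[10] = np.nan            # a missing interval
values[20:24] = 0.0            # a run of zeros
values[50] = 60.0              # a spike
print(classify_meter_series(pd.Series(values, index=idx)))
```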


Data ◽  
2021 ◽  
Vol 6 (8) ◽  
pp. 87
Author(s):  
Sara Ferreira ◽  
Mário Antunes ◽  
Manuel E. Correia

Deepfake and manipulated digital photos and videos are increasingly used in a myriad of cybercrimes. Ransomware, the dissemination of fake news, and digital kidnapping-related crimes are the most recurrent, with tampered multimedia content as the primary dissemination vehicle. Digital forensic analysis tools are widely used in criminal investigations to automate the identification of digital evidence in seized electronic equipment. The number of files to be processed and the complexity of the crimes under analysis have highlighted the need for efficient digital forensics techniques grounded on state-of-the-art technologies. Machine Learning (ML) researchers have been challenged to apply techniques and methods to improve the automatic detection of manipulated multimedia content. However, such methods have not yet been widely incorporated into digital forensic tools, mostly due to the lack of realistic and well-structured datasets of photos and videos. The diversity and richness of the datasets are crucial for benchmarking ML models and evaluating their suitability for real-world digital forensics applications; an example is the development of third-party modules for the widely used Autopsy digital forensic application. This paper presents a dataset obtained by extracting a set of simple features from genuine and manipulated photos and videos that are part of existing state-of-the-art datasets. The resulting dataset is balanced, and each entry comprises a label and a vector of numeric values corresponding to the features extracted through a Discrete Fourier Transform (DFT). The dataset is available in a GitHub repository and contains 40,588 photos and 12,400 video frames. The dataset was validated and benchmarked with deep learning Convolutional Neural Network (CNN) and Support Vector Machine (SVM) methods, although many other methods can be applied. Overall, the results show a better F1-score for CNN than for SVM, for both photo and video processing: CNN achieved an F1-score of 0.9968 for photos and 0.8415 for videos, while SVM, with 5-fold cross-validation, achieved 0.9953 and 0.7955, respectively. A set of methods written in Python is available to researchers, namely to preprocess the original photo and video files, extract the features, and build the training and testing sets. Additional methods are also available to convert the original PKL files into CSV and TXT formats, which gives ML researchers more flexibility to use the dataset with existing ML frameworks and tools.
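
As a rough illustration of how a DFT-based feature vector for such a dataset might be built, the sketch below computes a radially averaged log-magnitude spectrum of a grayscale image. This is a common construction in deepfake detection work, not necessarily the exact feature definition or the Python code released with the dataset.

```python
import numpy as np

def dft_feature_vector(image: np.ndarray, n_bins: int = 50) -> np.ndarray:
    """Illustrative DFT feature extraction for a grayscale image.

    Computes the 2D FFT, shifts the zero frequency to the centre, and reduces
    the log-magnitude spectrum to a radially averaged 1D profile of length
    `n_bins`. The binning scheme is an assumption for illustration only.
    """
    f = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    magnitude = np.log1p(np.abs(f))

    # Radial average: mean magnitude at each distance from the spectrum centre.
    h, w = magnitude.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
    r_norm = (r / r.max() * (n_bins - 1)).astype(int)
    profile = np.bincount(r_norm.ravel(), weights=magnitude.ravel(), minlength=n_bins)
    counts = np.bincount(r_norm.ravel(), minlength=n_bins)
    return profile / np.maximum(counts, 1)

# Illustrative use on random pixel data; in the dataset, each such vector is
# paired with a genuine/manipulated label for CNN or SVM training.
rng = np.random.default_rng(0)
features = dft_feature_vector(rng.integers(0, 256, size=(128, 128)))
print(features.shape)  # (50,)
```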


2021 ◽  
pp. 1-1
Author(s):  
Wen Tian ◽  
Miao Du ◽  
Xiaopeng Ji ◽  
Guangjie Liu ◽  
Yuewei Dai ◽  
...  
