A Novel Electricity Theft Detection Scheme Based on Text Convolutional Neural Networks

Energies ◽  
2020 ◽  
Vol 13 (21) ◽  
pp. 5758
Author(s):  
Xiaofeng Feng ◽  
Hengyu Hui ◽  
Ziyang Liang ◽  
Wenchong Guo ◽  
Huakun Que ◽  
...  

Electricity theft reduces electricity revenue and poses risks to the safety of power usage, and it has become increasingly challenging to detect. As the mainstream in the relevant studies, state-of-the-art data-driven approaches mainly detect electricity theft events from the perspective of correlations between different daily or weekly loads, which is relatively inadequate for extracting features from hourly or more fine-grained temporal data. In view of these deficiencies, we propose a novel electricity theft detection scheme based on text convolutional neural networks (TextCNN). Specifically, we convert electricity consumption measurements over a horizon of interest into a two-dimensional time series containing intraday electricity features. Based on this data structure, the proposed method can accurately capture various periodic features of electricity consumption. Moreover, a data augmentation method is proposed to cope with the imbalance of electricity theft data. Extensive experimental results on realistic Chinese and Irish datasets indicate that the proposed model achieves better performance than other existing methods.
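As a rough illustration of the kind of model the abstract describes, the sketch below reshapes a horizon of half-hourly meter readings into a days-by-intervals grid and applies TextCNN-style parallel convolutions with different kernel heights; all layer sizes, kernel choices, and names are assumptions, not the authors' configuration.

```python
# Minimal TextCNN-style sketch (PyTorch), assuming 48 half-hourly
# readings per day reshaped into a (days x intervals) grid.
import torch
import torch.nn as nn

class TextCNNDetector(nn.Module):
    def __init__(self, intervals=48, kernel_days=(2, 3, 4), channels=32):
        super().__init__()
        # One conv per kernel height, spanning the full intraday axis,
        # mirroring TextCNN filters that span the embedding dimension.
        self.convs = nn.ModuleList(
            nn.Conv2d(1, channels, kernel_size=(k, intervals))
            for k in kernel_days
        )
        self.fc = nn.Linear(channels * len(kernel_days), 2)  # theft / normal

    def forward(self, x):            # x: (batch, days, intervals)
        x = x.unsqueeze(1)           # add a channel dimension
        pooled = [torch.relu(c(x)).squeeze(3).max(dim=2).values
                  for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))

logits = TextCNNDetector()(torch.randn(8, 30, 48))  # 8 users, 30 days
```

Kernels of different heights let the model pick up daily, two-day, and longer periodic patterns from the same grid, which is the intuition behind borrowing TextCNN for load data.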

Mathematics ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 624
Author(s):  
Stefan Rohrmanstorfer ◽  
Mikhail Komarov ◽  
Felix Mödritscher

With the ever-increasing amount of image data, it has become a necessity to automatically locate and process the information in these images. As fashion is captured in images, the fashion sector provides the perfect foundation for a service or application built on an image classification model. In this article, the state of the art for image classification is analyzed and discussed. Based on the elaborated knowledge, four different approaches were implemented to successfully extract features from fashion data. For this purpose, a human-worn fashion dataset with 2567 images was created and then significantly enlarged through image augmentation operations. The results show that convolutional neural networks are the undisputed standard for classifying images, and that TensorFlow is the best library to build them. Moreover, through the introduction of dropout layers, data augmentation, and transfer learning, model overfitting was successfully prevented, and it was possible to incrementally improve the validation accuracy on the created dataset from an initial 69% to a final 84%. More distinctive apparel, such as trousers, shoes, and hats, was better classified than other upper-body clothes.
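A minimal tf.keras sketch of the combination the abstract credits with preventing overfitting (augmentation layers, a frozen pretrained base, and dropout); the MobileNetV2 base, layer sizes, and ten-class head are assumptions rather than the article's actual setup.

```python
# Hedged transfer-learning sketch in TensorFlow/Keras.
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # transfer learning: freeze the pretrained features

model = tf.keras.Sequential([
    augment,
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 range
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),                       # curb overfitting
    tf.keras.layers.Dense(10, activation="softmax"),    # e.g. 10 apparel classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```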


2022 ◽  
pp. 1-10
Author(s):  
Daniel Trevino-Sanchez ◽  
Vicente Alarcon-Aquino

The need to detect and classify objects correctly is a constant challenge: recognizing them at different scales and in different scenarios, sometimes cropped or badly lit, is not an easy task. Convolutional neural networks (CNNs) have become a widely applied technique since they are completely trainable and well suited to feature extraction. However, the growing number of CNN applications constantly pushes for accuracy improvements. Initially, those improvements involved the use of large datasets, augmentation techniques, and complex algorithms, which may carry a high computational cost. Nevertheless, feature extraction is known to be the heart of the problem. As a result, other approaches combine different technologies to extract better features and improve accuracy without requiring more powerful hardware. In this paper, we propose a hybrid pooling method that incorporates multiresolution analysis within the CNN layers to reduce the feature-map size without losing details. To prevent relevant information from being lost during downsampling, an existing pooling method is combined with the wavelet transform, keeping those details "alive" and enriching later stages of the CNN. Obtaining better-quality features improves CNN accuracy. To validate this study, ten pooling methods, including the proposed one, are tested on four benchmark datasets. The results are compared with four of the evaluated methods, which are also considered the state of the art.
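To make the idea of mixing a conventional pooling operator with multiresolution analysis concrete, here is a small sketch that blends 2x2 max pooling with the approximation subband of a 2D Haar wavelet transform; the Haar basis, the blending weight, and the function name are assumptions, not the paper's method.

```python
# Illustrative hybrid pooling sketch using PyWavelets.
import numpy as np
import pywt

def hybrid_pool(feature_map, alpha=0.5):
    """Blend 2x2 max pooling with the DWT approximation subband."""
    h, w = feature_map.shape
    blocks = feature_map[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    max_pooled = blocks.max(axis=(1, 3))
    approx, _ = pywt.dwt2(feature_map, "haar")  # low-frequency content
    approx = approx[:h // 2, :w // 2] / 2.0     # rescale to the 2x2 average
    return alpha * max_pooled + (1 - alpha) * approx

out = hybrid_pool(np.random.rand(8, 8))  # -> (4, 4) downsampled map
```

The wavelet branch preserves a smoothed version of the whole block rather than only its maximum, which is one plausible way "details stay alive" through the downsampling step.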


2017 ◽  
Vol 2017 ◽  
pp. 1-13 ◽  
Author(s):  
Shiyang Yan ◽  
Yizhang Xia ◽  
Jeremy S. Smith ◽  
Wenjin Lu ◽  
Bailing Zhang

Unconstrained hand detection in still images plays an important role in many hand-related vision problems, for example, hand tracking, gesture analysis, human action recognition, human-machine interaction, and sign language recognition. Although hand detection has been extensively studied for decades, it is still a challenging task with many problems to be tackled. The contributing factors to this complexity include heavy occlusion, low resolution, varying illumination conditions, different hand gestures, and the complex interactions between hands and objects or other hands. In this paper, we propose a multiscale deep learning model for unconstrained hand detection in still images. Deep learning models, and deep convolutional neural networks (CNNs) in particular, have achieved state-of-the-art performance on many vision benchmarks. Building on the region-based CNN (R-CNN) model, we propose a hand detection scheme based on candidate regions generated by a generic region-proposal algorithm, followed by multiscale information fusion from the popular VGG16 model. Two benchmark datasets were used to validate the proposed method, namely the Oxford Hand Detection Dataset and the VIVA Hand Detection Challenge. We achieved state-of-the-art results on the Oxford Hand Detection Dataset and satisfactory performance in the VIVA Hand Detection Challenge.
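The following sketch shows one generic way to fuse multiscale information from VGG16, tapping the ends of three convolutional stages and concatenating pooled features; the tap points, pooling size, and fusion by concatenation are assumptions for illustration, not the paper's exact scheme.

```python
# Multiscale VGG16 feature fusion sketch (PyTorch / torchvision).
import torch
import torchvision.models as models

vgg = models.vgg16(weights="IMAGENET1K_V1").features.eval()
taps = {16, 23, 30}  # max-pool layers ending the conv3/conv4/conv5 blocks

def multiscale_features(img):        # img: (1, 3, 224, 224)
    feats, x = [], img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in taps:                # pool each scale to a fixed 7x7 grid
            feats.append(torch.nn.functional.adaptive_avg_pool2d(x, 7))
    return torch.cat([f.flatten(1) for f in feats], dim=1)

with torch.no_grad():
    fused = multiscale_features(torch.randn(1, 3, 224, 224))
```

In an R-CNN-style pipeline, a descriptor like this would be computed per candidate region before classification.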


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Wenting Qiao ◽  
Hongwei Zhang ◽  
Fei Zhu ◽  
Qiande Wu

The traditional method for detecting cracks in concrete bridges has the disadvantages of low accuracy and weak robustness. In this article, drawing on crack image data obtained from bending tests of reinforced concrete beams, a crack identification method for concrete structures based on an improved U-net convolutional neural network is proposed to improve the accuracy of crack identification. Firstly, a bending test of concrete beams is conducted to collect crack images. Secondly, datasets of crack images are built using data augmentation, and selected cracks are annotated. Thirdly, an improved inception module and an Atrous Spatial Pyramid Pooling module are added to the U-net model. Finally, crack widths are identified from the binary crack images produced by the improved U-net model. The average precision on the test set of the proposed model is 11.7% higher than that of the U-net segmentation model. The average relative error of the crack width from the proposed model is 13.2%, which is 18.6% less than that measured with the ACTIS system. The results indicate that the proposed method is accurate, robust, and suitable for crack identification in concrete structures.
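For reference, a minimal sketch of a standard Atrous Spatial Pyramid Pooling (ASPP) block of the kind added to the U-net; the dilation rates and channel widths here are conventional defaults, not the article's settings.

```python
# Hedged ASPP sketch (PyTorch): parallel dilated convolutions whose
# outputs are concatenated and projected back to one channel width.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
            for r in rates)
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Each branch sees a different receptive field at the same resolution.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

y = ASPP(64, 64)(torch.randn(1, 64, 32, 32))  # spatial size is preserved
```

Mixing receptive fields this way helps a segmentation network follow cracks whose width varies along their length.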


Author(s):  
Chen Xin ◽  
Minh Nguyen ◽  
Wei Qi Yan

Identifying fire flames is an object recognition problem with valuable applications in intelligent surveillance. This chapter focuses on flame recognition using deep learning and its evaluation. To achieve this goal, the authors design a Multi-Flame Detection (MFD) scheme that utilises convolutional neural networks (CNNs). The authors make use of TensorFlow for deep learning with an NVIDIA GPU to train on an image dataset and construct a model for flame recognition. The contributions of this book chapter are: (1) data augmentation for flame recognition, (2) model construction for deep learning, and (3) result evaluations for flame recognition using deep learning.
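A minimal sketch of TensorFlow-based image augmentation of the kind contribution (1) refers to; the specific transforms and parameter values are assumptions, not the chapter's exact recipe.

```python
# Flame-image augmentation sketch with tf.image.
import tensorflow as tf

def augment(image, label):
    image = tf.image.random_flip_left_right(image)
    image = tf.image.random_brightness(image, max_delta=0.2)
    image = tf.image.random_saturation(image, 0.8, 1.2)  # flames vary in hue
    return image, label

# Applied lazily to a dataset before CNN training, e.g.:
# ds = ds.map(augment).batch(32)
```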


2016 ◽  
Vol 10 (03) ◽  
pp. 379-397 ◽  
Author(s):  
Hilal Ergun ◽  
Yusuf Caglar Akyuz ◽  
Mustafa Sert ◽  
Jianquan Liu

Visual concept recognition has been an active research field over the last decade. Reflecting this attention, deep learning architectures are showing great promise in various computer vision domains, including image classification, object detection, event detection, and action recognition in videos. In this study, we investigate various aspects of convolutional neural networks for visual concept recognition. We analyze recent studies and different network architectures in terms of both running time and accuracy. In our proposed visual concept recognition system, we first discuss various important properties of the popular convolutional network architectures under consideration. Then we describe our method for feature extraction at different levels of abstraction. We present extensive empirical information along with best practices for big-data practitioners. Using these best practices, we propose efficient fusion mechanisms for both single and multiple network models. We present state-of-the-art results on benchmark datasets while keeping computational costs at a low level. Our results show that these state-of-the-art results can be reached without extensive data augmentation techniques.
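In the spirit of the multiple-network fusion mechanisms mentioned, here is a simple late-fusion sketch that averages class posteriors from two pretrained models; the model choices and uniform averaging are assumptions made for illustration.

```python
# Late-fusion sketch over two pretrained networks (PyTorch / torchvision).
import torch
import torchvision.models as models

nets = [models.resnet18(weights="IMAGENET1K_V1").eval(),
        models.vgg16(weights="IMAGENET1K_V1").eval()]

def fused_scores(img):               # img: (1, 3, 224, 224), preprocessed
    with torch.no_grad():
        probs = [torch.softmax(n(img), dim=1) for n in nets]
    return torch.stack(probs).mean(dim=0)   # average the class posteriors

top5 = fused_scores(torch.randn(1, 3, 224, 224)).topk(5)
```

Averaging probabilities keeps inference cheap relative to training a joint model, which matches the abstract's emphasis on low computational cost.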


2021 ◽  
Vol 24 (1) ◽  
Author(s):  
Facundo Manuel Quiroga

Neural networks are currently the state of the art for many tasks. Invariance and same-equivariance are two fundamental properties that characterize how a model reacts to transformations: equivariance is the generalization of both. Equivariance to transformations of the inputs can be a necessary property of the network for certain tasks. Data augmentation and specially designed layers provide a way for networks to learn these properties. However, the mechanisms by which networks encode them are not well understood. We propose several transformational measures to quantify the invariance and same-equivariance of individual activations of a network. Analysis of these results can yield insights into the encoding and distribution of invariance across all layers of a network. The measures are simple to understand and efficient to run, and have been implemented in an open-source library. We perform experiments to validate the measures and understand their properties, showing their stability and effectiveness. Afterwards, we use the measures to characterize common network architectures in terms of these properties, using affine transformations. Our results show, for example, that the distribution of invariance across the layers of a network has a well-defined structure that depends only on the network design and not on the training process.
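To fix intuition for what a per-activation invariance measure can look like, here is a variance-ratio sketch: an activation that barely changes across transformed copies of the same input, relative to how much it changes across inputs, scores as invariant. This construction is a generic illustration, not the normalization used in the paper's library.

```python
# Variance-ratio invariance sketch: lower scores mean more invariant units.
import numpy as np

def invariance_measure(acts):
    """acts: (n_inputs, n_transforms, n_activations) activation tensor."""
    var_over_transforms = acts.var(axis=1).mean(axis=0)  # within-sample
    var_over_inputs = acts.mean(axis=1).var(axis=0)      # between-sample
    return var_over_transforms / (var_over_inputs + 1e-8)

scores = invariance_measure(np.random.rand(100, 8, 64))  # one score per unit
```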


Information ◽  
2021 ◽  
Vol 12 (12) ◽  
pp. 495
Author(s):  
Yan Zhang ◽  
Shupeng He ◽  
Shiyun Wa ◽  
Zhiqi Zong ◽  
Yunling Liu

Apple flower detection is an important task in the apple planting stage. This paper proposes an optimized detection network model based on a generative module and pruning inference. Because convolutional neural networks suffer from instability, non-convergence, and overfitting when samples are insufficient, this paper uses a generative module and various image pre-processing methods, including the Cutout, CutMix, Mixup, SnapMix, and Mosaic algorithms, for data augmentation. To counter the slowdown of training and inference caused by the increasing complexity of detection networks, the pruning inference proposed in this paper automatically deactivates part of the network structure according to the conditions at hand, reducing the network parameters and operations and significantly improving network speed. The proposed model achieves 90.01%, 98.79%, and 97.43% in precision, recall, and mAP, respectively, in detecting apple flowers, and its inference speed reaches 29 FPS. On the YOLO-v5 model, which performs slightly worse, pruning inference raises the inference speed to 71 FPS. These experimental results demonstrate that the model proposed in this paper can meet the needs of agricultural production.
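As a sample of the augmentation family cited above, here is a minimal Mixup sketch; Cutout, CutMix, SnapMix, and Mosaic follow a similar drop-in pattern. The Beta parameter is a conventional default, not the paper's setting.

```python
# Mixup sketch: blend two images and their labels with a Beta-sampled weight.
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    lam = np.random.beta(alpha, alpha)  # mixing coefficient in (0, 1)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

img, label = mixup(np.random.rand(224, 224, 3), np.array([1.0, 0.0]),
                   np.random.rand(224, 224, 3), np.array([0.0, 1.0]))
```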


Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1343 ◽  
Author(s):  
Akmaljon Palvanov ◽  
Young Cho

Visibility is a complex phenomenon influenced by emissions and air pollutants, as well as by factors including sunlight, humidity, temperature, and time of day, which decrease the clarity of what is visible through the atmosphere. This paper provides a detailed overview of the state-of-the-art contributions to visibility estimation under various foggy weather conditions. We propose VisNet, a new approach based on deep integrated convolutional neural networks for estimating visibility distances from camera imagery. The implemented network uses three streams of deep integrated convolutional neural networks connected in parallel. In addition, we have collected the largest dataset for this study, with three million outdoor images and exact visibility values. To evaluate the model's performance fairly and objectively, the model is trained on three image datasets with different visibility ranges, each with a different number of classes. Moreover, our proposed model, VisNet, is evaluated under dissimilar fog-density scenarios using a diverse set of images. Prior to feeding the network, each input image is filtered in the frequency domain to remove low-level features, and a spectral filter is applied to each input to extract low-contrast regions. Compared with previous methods, our approach achieves the highest classification performance on three different datasets. Furthermore, VisNet considerably outperforms not only the classical methods but also state-of-the-art visibility estimation models.
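A rough sketch of the kind of frequency-domain preprocessing the abstract mentions: suppressing low-frequency content with an FFT mask before the image reaches the network. The circular mask and cutoff radius are assumptions, not VisNet's actual filter.

```python
# FFT high-pass filtering sketch for a single-channel image.
import numpy as np

def highpass(image, cutoff=10):
    f = np.fft.fftshift(np.fft.fft2(image))      # move low freqs to center
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 > cutoff ** 2
    return np.abs(np.fft.ifft2(np.fft.ifftshift(f * mask)))

filtered = highpass(np.random.rand(128, 128))  # edges/texture emphasized
```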


Author(s):  
Zhenguo Yan ◽  
Yue Wu

Convolutional Neural Networks (CNNs) effectively extract local features from input data. However, CNNs based on word embeddings and convolution layers perform poorly on text classification tasks compared with traditional baseline methods. We address this problem and propose a model named NNGN that simplifies the convolution layer in the CNN by replacing it with a pooling layer that extracts n-gram embeddings in a simpler way and obtains document representations via linear computation. We implement two settings in our model to extract n-gram features. In the first setting, which we refer to as seq-NNGN, we consider word order within each n-gram. In the second setting, BoW-NNGN, we do not consider word order. We compare the performance of these settings on different classification tasks with those of other models. The experimental results show that our proposed model achieves better performance than state-of-the-art models.
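An illustrative sketch of the n-gram pooling idea: form n-gram embeddings without convolution filters, then pool them into a document vector. Order sensitivity in the seq-NNGN style is approximated here by concatenating the n word embeddings (a BoW-NNGN-style variant would sum them instead); the exact construction is an assumption, not the paper's formulation.

```python
# N-gram embedding pooling sketch, no learned convolution filters.
import numpy as np

def ngram_doc_vector(word_embs, n=2):
    """word_embs: (seq_len, dim). Returns a pooled document vector."""
    grams = [word_embs[i:i + n].reshape(-1)   # concat keeps word order
             for i in range(len(word_embs) - n + 1)]
    return np.max(np.stack(grams), axis=0)    # max-pool over all n-grams

doc = ngram_doc_vector(np.random.rand(20, 50), n=2)  # -> (100,) vector
```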

