Deep Learning Based User Safety Profiling Using User Feature Information Modeling

2021 ◽  
Vol 17 (2) ◽  
pp. 143-150
Author(s):  
Kye-Kyung Kim
2018 ◽  
Vol 2018 ◽  
pp. 1-9 ◽  
Author(s):  
Yinghao Chu ◽  
Chen Huang ◽  
Xiaodan Xie ◽  
Bohai Tan ◽  
Shyam Kamal ◽  
...  

This study proposes a multilayer hybrid deep-learning system (MHS) to automatically sort waste disposed of by individuals in urban public areas. The system deploys a high-resolution camera to capture waste images and sensors to detect other useful feature information. The MHS uses a CNN-based algorithm to extract image features and a multilayer perceptron (MLP) to combine the image features with the other feature information and classify waste as recyclable or other. Trained and validated against manually labelled items, the MHS achieves an overall classification accuracy higher than 90% under two different testing scenarios, significantly outperforming a reference CNN-based method that relies on image-only inputs.
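The fusion step described above can be sketched in a few lines: concatenate the CNN image embedding with the sensor readings and pass the result through a small MLP with a sigmoid output. This is a minimal numpy sketch under assumed dimensions (an 8-d image embedding, 3 sensor readings), not the authors' implementation; the weights here are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_fuse(image_feat, sensor_feat, w1, w2):
    """Concatenate CNN image features with sensor features and
    classify with a tiny two-layer MLP; the sigmoid output is the
    probability that the item is recyclable."""
    x = np.concatenate([image_feat, sensor_feat])
    h = np.maximum(w1 @ x, 0.0)          # ReLU hidden layer
    logit = float(w2 @ h)
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid

# Hypothetical dimensions: 8-d image embedding + 3 sensor readings = 11 inputs.
w1 = rng.standard_normal((16, 11))
w2 = rng.standard_normal(16)
p = mlp_fuse(rng.standard_normal(8), rng.standard_normal(3), w1, w2)
assert 0.0 <= p <= 1.0
```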


Entropy ◽  
2022 ◽  
Vol 24 (1) ◽  
pp. 128
Author(s):  
Zhenwei Guan ◽  
Feng Min ◽  
Wei He ◽  
Wenhua Fang ◽  
Tao Lu

Forest fire detection from videos or images is vital to forest firefighting. Most deep-learning-based approaches rely on converging an image loss, which ignores the content of different fire scenes. In fact, images with complex content always have higher entropy. From this perspective, we propose a novel feature-entropy-guided neural network for forest fire detection, which balances the content complexity of different training samples. Specifically, a larger weight is given to the features of a sample from a high-entropy source when calculating the classification loss. In addition, we propose a color attention neural network, which mainly consists of several repeated multi-block color-attention modules (MCM). Each MCM module adequately extracts the color feature information of fire. Experimental results show that our proposed method outperforms state-of-the-art methods.
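The entropy-weighting idea can be illustrated concretely: compute each training image's histogram entropy, then scale its per-sample cross-entropy loss by the normalized entropy so complex scenes contribute more. This is a hedged numpy sketch of the general technique, not the paper's exact loss; bin count and normalization are assumptions.

```python
import numpy as np

def image_entropy(img, bins=32):
    """Shannon entropy (bits) of an image's grayscale histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_weighted_ce(probs, labels, entropies):
    """Cross-entropy where each sample's loss is scaled by its
    (mean-normalized) source-image entropy, so high-entropy
    (complex) scenes weigh more in training."""
    w = np.asarray(entropies) / np.mean(entropies)
    ce = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(w * ce))

rng = np.random.default_rng(1)
imgs = [rng.random((16, 16)), np.full((16, 16), 0.5)]  # textured vs flat
ents = [image_entropy(im) for im in imgs]
probs = np.array([[0.7, 0.3], [0.2, 0.8]])             # toy fire/no-fire scores
loss = entropy_weighted_ce(probs, np.array([0, 1]), ents)
assert ents[0] > ents[1]   # the textured image has higher entropy
assert loss > 0.0
```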


Author(s):  
V. Palma

Abstract. In recent years, the diffusion of large image datasets and unprecedented computational power have boosted the development of a class of artificial intelligence (AI) algorithms referred to as deep learning (DL). Among DL methods, convolutional neural networks (CNNs) have proven particularly effective in computer vision, finding applications in many disciplines. This paper introduces a project aimed at studying CNN techniques in the field of architectural heritage, a research stream still to be developed. The first steps and results in the development of a mobile app to recognize monuments are discussed. While AI is just beginning to interact with the built environment through mobile devices, heritage technologies have long been producing and exploring digital models and spatial archives. The interaction between DL algorithms and state-of-the-art information modeling is addressed as an opportunity both to exploit heritage collections and to optimize new object recognition techniques.


2020 ◽  
Vol 12 (6) ◽  
pp. 1005 ◽  
Author(s):  
Roberto Pierdicca ◽  
Marina Paolanti ◽  
Francesca Matrone ◽  
Massimo Martini ◽  
Christian Morbidoni ◽  
...  

In the Digital Cultural Heritage (DCH) domain, the semantic segmentation of 3D point clouds with Deep Learning (DL) techniques can help recognize historical architectural elements at an adequate level of detail, and thus speed up the modeling of historical buildings for developing BIM models from survey data, referred to as HBIM (Historical Building Information Modeling). In this paper, we propose a DL framework for point cloud segmentation, which employs an improved DGCNN (Dynamic Graph Convolutional Neural Network) enriched with meaningful features such as normals and colour. The approach has been applied to a newly collected, publicly available DCH dataset: the ArCH (Architectural Cultural Heritage) Dataset. This dataset comprises 11 labeled point clouds, derived from the union of several single scans or from their integration with photogrammetric surveys. The scenes are both indoor and outdoor, with churches, chapels, cloisters, porticoes and loggias covered by a variety of vaults and borne by many different types of columns. They belong to different historical periods and styles, so as to make the dataset as varied as possible (in the repetition of architectural elements) and the results as general as possible. The experiments yield high accuracy, demonstrating the effectiveness and suitability of the proposed approach.
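The feature enrichment described above amounts to stacking extra per-point attributes alongside the coordinates before they enter the network. A minimal numpy sketch, assuming XYZ + normals + RGB yield a 9-d per-point vector (the exact feature layout in the paper may differ):

```python
import numpy as np

def augment_points(xyz, normals, rgb):
    """Stack XYZ coordinates, surface normals and colour into a
    9-d per-point feature vector for a DGCNN-style segmenter."""
    assert xyz.shape == normals.shape and xyz.shape[0] == rgb.shape[0]
    rgb01 = rgb.astype(np.float64) / 255.0   # normalise 8-bit colour to [0, 1]
    return np.hstack([xyz, normals, rgb01])

pts = augment_points(np.zeros((4, 3)), np.ones((4, 3)), np.full((4, 3), 255))
assert pts.shape == (4, 9)
```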


2021 ◽  
Author(s):  
Wei Li ◽  
Georg Rümpker ◽  
Horst Stöcker ◽  
Megha Chakraborty ◽  
Darius Fener ◽  
...  

This study presents a deep-learning-based algorithm for seismic event detection and simultaneous phase picking in seismic waveforms. U-net-based solutions, which consist of a contracting path (encoder) to capture feature information and a symmetric expanding path (decoder) that enables precise localization, have proven effective in phase picking. The network architectures of these U-net models mainly comprise 1D CNNs, bi- and uni-directional LSTMs, transformers and self-attentive layers. Although these networks have proven to be a good solution, they may not fully harness the information extracted at multiple scales.

In this study, we propose a simple yet powerful deep learning architecture, named MCA-Unet, which combines multi-class output with an attention mechanism for phase picking. Specifically, we treat phase picking as an image segmentation problem and incorporate the attention mechanism into the U-net structure to efficiently deal with the features extracted at different levels, with the goal of improving performance on seismic phase picking. Our neural network is based on an encoder-decoder architecture composed of 1D convolutions, pooling layers, deconvolutions and multi-attention layers. This architecture is applied to and tested on a field seismic dataset (the Wenchuan Earthquake Aftershocks Classification Dataset) to check its performance.
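The core idea of attention on U-net skip connections can be sketched without a framework: a gating signal from the decoder produces a per-position score in (0, 1) that rescales the encoder features before they are merged. This is a generic illustrative sketch, not the MCA-Unet layer itself; shapes and the scoring function are assumptions.

```python
import numpy as np

def attention_gate(skip, gate):
    """Multiplicative attention on a U-net skip connection: a
    per-position sigmoid score computed from the decoder 'gate'
    signal rescales the encoder features before concatenation."""
    score = 1.0 / (1.0 + np.exp(-(skip * gate).sum(axis=-1, keepdims=True)))
    return skip * score  # attended encoder features, same shape as `skip`

rng = np.random.default_rng(2)
skip = rng.standard_normal((128, 16))   # 128 samples along the trace, 16 channels
gate = rng.standard_normal((128, 16))
out = attention_gate(skip, gate)
assert out.shape == skip.shape
```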


Author(s):  
Chunyan Zeng ◽  
Dongliang Zhu ◽  
Zhifeng Wang ◽  
Minghu Wu ◽  
Wei Xiong ◽  
...  

Abstract. Deep learning techniques have achieved promising results in recording-device source identification. Recording-device source features include spatial information and certain temporal information. However, most deep-learning-based recording-device source identification methods use only spatial representation learning from the source features, and thus cannot make full use of the recording-device source information. Therefore, in this paper, to fully explore both the spatial and the temporal information of the recording-device source, we propose a new identification method based on the fusion of spatial and temporal feature information in an end-to-end framework. From a feature perspective, we design two networks to extract the spatial and temporal information of the recording-device source. We then use an attention mechanism to adaptively weight the spatial and temporal information and obtain fused features. From a model perspective, our model uses an end-to-end framework to learn deep representations from the spatial and temporal features, and is trained with joint deep and shallow losses to optimize the network. The method is compared with our previous work and a baseline system. The results show that the proposed method outperforms both under general conditions.
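The adaptive weighting of the two branches can be illustrated with a softmax over learned branch scores: the fused vector is a convex combination of the spatial and temporal features. A minimal numpy sketch under assumed 8-d branch features and a hypothetical scoring vector, not the authors' attention module:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse(spatial, temporal, score_w):
    """Attention fusion: score each branch with a (hypothetical)
    learned vector, softmax the scores, and return the weighted
    sum of the two feature vectors plus the weights."""
    scores = np.array([score_w @ spatial, score_w @ temporal])
    a = softmax(scores)                 # a[0] + a[1] == 1
    return a[0] * spatial + a[1] * temporal, a

rng = np.random.default_rng(3)
s, t, w = rng.standard_normal(8), rng.standard_normal(8), rng.standard_normal(8)
fused, a = fuse(s, t, w)
assert fused.shape == (8,)
```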


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 666
Author(s):  
Wenju Wang ◽  
Jiangwei Wang

Current research on reconstructing hyperspectral images from RGB images using deep learning mainly focuses on learning complex mappings through deeper and wider convolutional neural networks (CNNs). However, the reconstruction accuracy of the hyperspectral image is not high and, among other issues, the model for generating these images takes up too much storage space. In this study, we propose the double ghost convolution attention mechanism network (DGCAMN) framework for reconstruction from a single RGB image, to improve the accuracy of spectral reconstruction and reduce the storage occupied by the model. The proposed DGCAMN consists of a double ghost residual attention block (DGRAB) module and an optimal nonlocal block (ONB). The DGRAB module uses GhostNet and PReLU activation functions to reduce the number of parameters and the storage size of the generative model. At the same time, the proposed double output feature Convolutional Block Attention Module (DOFCBAM) is used to capture texture details on the feature map to maximize the content of the reconstructed hyperspectral image. In the proposed ONB, the Argmax activation function is used to locate the region with the most abundant feature information and retain the most useful feature parameters. This helps to improve the accuracy of spectral reconstruction. These contributions enable the DGCAMN framework to achieve high spectral accuracy with minimal storage consumption. The proposed method has been applied to the NTIRE 2020 dataset. Experimental results show that the proposed DGCAMN outperforms advanced deep learning methods in spectral reconstruction accuracy and greatly reduces storage consumption.
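The argmax-based region selection can be illustrated in isolation: aggregate channel activations at each spatial position and pick the position with the strongest response. This is a crude numpy stand-in for the ONB's Argmax step, with assumed shapes, not the block itself:

```python
import numpy as np

def most_informative_region(fmap):
    """Return the spatial index with the strongest aggregate
    activation in a (channels, H, W) feature map."""
    energy = np.abs(fmap).sum(axis=0)       # sum magnitudes over channels
    return np.unravel_index(np.argmax(energy), energy.shape)

fmap = np.zeros((4, 8, 8))
fmap[:, 2, 5] = 3.0                         # plant a synthetic hot spot
idx = most_informative_region(fmap)
assert idx == (2, 5)
```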


2020 ◽  
Author(s):  
Yu Zhao ◽  
Yue Yin ◽  
Guan Gui

Decentralized edge computing techniques have attracted strong attention in many applications of the intelligent internet of things (IIoT). Among these applications, intelligent edge surveillance (LEDS) techniques play a very important role in automatically recognizing object feature information from surveillance video, by virtue of edge computing together with image processing and computer vision. Traditional centralized surveillance techniques recognize objects at the cost of high latency and high cost, and also require large storage. In this paper, we propose a deep-learning-based LEDS technique for a specific IIoT application. First, we introduce depthwise separable convolutions to build a lightweight neural network and reduce its computational cost. Second, we combine edge computing with cloud computing to reduce network traffic. Third, we apply the proposed LEDS technique to a practical construction site to validate the specific IIoT application. The detection speed of our proposed lightweight neural network reaches 16 frames per second on edge devices. After fine detection on the cloud server, the detection precision reaches 89%. In addition, the operating cost at the edge device is only one-tenth that of the centralized server.
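The parameter saving from depthwise separable convolutions is easy to verify with arithmetic: a standard k×k convolution costs k·k·Cin·Cout parameters, while the depthwise + pointwise pair costs k·k·Cin + Cin·Cout. A short sketch with illustrative channel counts (64 in, 128 out; these numbers are assumptions, not the paper's network):

```python
def conv_params(k, cin, cout):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return k * k * cin * cout

def dw_separable_params(k, cin, cout):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1 x 1 pointwise conv."""
    return k * k * cin + cin * cout

std = conv_params(3, 64, 128)           # 3*3*64*128 = 73,728
sep = dw_separable_params(3, 64, 128)   # 576 + 8,192 = 8,768
assert sep < std / 8                    # roughly an 8-9x reduction
```

This factor-of-eight-plus reduction in parameters is what makes the 16 fps edge-device throughput plausible for a lightweight detector.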

