Automatic speaker verification system using three dimensional static and contextual variation-based features with two dimensional convolutional neural network

2021 ◽  
Vol 6 (2) ◽  
pp. 143
Author(s):  
Aakshi Mittal ◽  
Mohit Dua

2020 ◽  
Vol 19 (6) ◽  
pp. 1884-1893
Author(s):  
Shekhroz Khudoyarov ◽  
Namgyu Kim ◽  
Jong-Jae Lee

Ground-penetrating radar (GPR) is a typical sensor system for analyzing underground facilities such as pipelines and rebars. The technique can also be used to detect underground cavities, which are a potential sign of urban sinkholes. Multichannel GPR devices are widely used to detect underground cavities because they can acquire informative three-dimensional data. Nevertheless, interpreting three-dimensional GPR data to recognize underground cavities is ambiguous and complicated, because similar GPR signatures reflected from different underground objects are often mixed with those of the cavities. Since deep learning algorithms are known to be powerful at image classification, deep learning-based techniques for underground object detection using two-dimensional GPR radargrams have been researched in recent years. However, the spatial information of underground objects can be characterized better in three-dimensional GPR voxel data than in two-dimensional GPR images. Therefore, in this study, a novel underground object classification technique is proposed by applying a deep three-dimensional convolutional neural network to three-dimensional GPR data. First, a deep convolutional neural network architecture was developed using three-dimensional convolutional layers for recognizing spatial underground objects such as pipes, cavities, manholes, and subsoil. The framework for applying the three-dimensional convolutional neural network to three-dimensional GPR data was then proposed and experimentally validated using real three-dimensional GPR data. 
To do so, three-dimensional GPR block data were used to train the developed three-dimensional convolutional neural network and to classify unclassified three-dimensional GPR data collected from urban roads in Seoul, South Korea. The validation results revealed that the four underground objects (pipe, cavity, manhole, and subsoil) were successfully classified, with an average classification accuracy of 97%. In addition, false alarms were rarely indicated.
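The core operation this abstract describes is a three-dimensional convolution sliding through a GPR voxel block, so that the kernel aggregates reflections across depth as well as the two surface axes. A minimal pure-Python sketch of that single operation follows; the function name, the valid (no-padding) scheme, and the toy sizes are illustrative assumptions, not the paper's actual deep network, which stacks many such layers with learned filters.

```python
# Sketch of one 3-D convolution step over a GPR voxel block.
# volume: depth x height x width voxel data; kernel: a single 3-D filter.
# Values and sizes here are made up for illustration.

def conv3d_valid(volume, kernel):
    """Valid (no-padding) 3-D convolution of a voxel block with one kernel."""
    d, h, w = len(volume), len(volume[0]), len(volume[0][0])
    kd, kh, kw = len(kernel), len(kernel[0]), len(kernel[0][0])
    out = []
    for z in range(d - kd + 1):          # slide along depth
        plane = []
        for y in range(h - kh + 1):      # slide along height
            row = []
            for x in range(w - kw + 1):  # slide along width
                s = 0.0
                for dz in range(kd):
                    for dy in range(kh):
                        for dx in range(kw):
                            s += volume[z + dz][y + dy][x + dx] * kernel[dz][dy][dx]
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out
```

Unlike a 2-D convolution over a single radargram slice, the kernel here spans neighboring depth slices, which is how spatial context distinguishes, say, a cavity from a pipe with a similar 2-D cross-section.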


2021 ◽  
pp. 147592172198940
Author(s):  
Hyung Jin Lim ◽  
Soonkyu Hwang ◽  
Hyeonjin Kim ◽  
Hoon Sohn

In this study, a faster region-based convolutional neural network is constructed and applied to combined vision and thermographic images for automated detection and classification of surface and subsurface corrosion in steel bridges. First, a hybrid imaging system is developed for the seamless integration of vision and infrared images. Herein, a three-dimensional red/green/blue vision image is obtained with a vision camera, and a one-dimensional active infrared (IR) amplitude image is obtained from the infrared camera for temperature measurements, with halogen lamps as the heat source. Subsequently, the three-dimensional red/green/blue vision image is converted to a two-dimensional chroma blue- and red-difference (CbCr) image, because the CbCr image is known to be more sensitive to surface corrosion than the red/green/blue image. A combined three-dimensional (CbCr-IR) image is then constructed by fusing the two-dimensional CbCr image and the one-dimensional infrared image. For automated corrosion detection and classification, a faster region-based convolutional neural network is constructed and trained using the combined three-dimensional CbCr-IR images of surface and subsurface corrosion on steel bridge structures. Finally, the performance of the trained faster region-based convolutional neural network is evaluated using images acquired from real bridges and compared with faster region-based convolutional neural networks trained on other vision- and IR-based images. The uniqueness of this study is attributed to the (1) corrosion detection reliability improvements based on the fusion of vision and infrared images, (2) automated corrosion detection and classification with a faster region-based convolutional neural network, (3) detection of subsurface corrosion that is not detectable using vision images only, and (4) application to field bridge inspection.
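The fusion step above reduces the RGB image to its two chroma channels and stacks them with the IR amplitude channel. A sketch of that preprocessing is below; it uses the standard BT.601 (JPEG/JFIF) RGB-to-CbCr coefficients as a stand-in, since the abstract does not specify which chroma transform the authors used, and the function names and pixel layout are assumptions for illustration.

```python
# Sketch of CbCr-IR fusion: drop luma, keep chroma, stack with IR amplitude.
# Coefficients are the standard BT.601 full-range (JPEG/JFIF) values; the
# paper's exact transform may differ.

def rgb_to_cbcr(r, g, b):
    """Convert one RGB pixel (0-255) to its (Cb, Cr) chroma pair."""
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Clamp to the usual 8-bit range, as image pipelines do.
    return min(255.0, max(0.0, cb)), min(255.0, max(0.0, cr))

def fuse_cbcr_ir(rgb_image, ir_image):
    """Fuse an RGB image (rows of (r, g, b) pixels) with a same-sized IR
    amplitude image into a combined 3-channel (Cb, Cr, IR) image."""
    fused = []
    for rgb_row, ir_row in zip(rgb_image, ir_image):
        fused.append([rgb_to_cbcr(r, g, b) + (ir,)
                      for (r, g, b), ir in zip(rgb_row, ir_row)])
    return fused
```

The resulting three-channel image has the same shape as an ordinary RGB input, so a standard detector such as a faster region-based CNN can consume it without architectural changes.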


2021 ◽  
pp. 1-10
Author(s):  
Chien-Cheng Lee ◽  
Zhongjian Gao ◽  
Xiu-Chi Huang

This paper proposes a Wi-Fi-based indoor human detection system using a deep convolutional neural network. The system detects different human states in various situations, including different environments and propagation paths. Its main advantage is that no overhead cameras and no body-mounted sensors are required. The system captures useful amplitude information from the channel state information (CSI) and converts this information into an image-like two-dimensional matrix. Next, the two-dimensional matrix is used as the input to a deep convolutional neural network (CNN) to distinguish human states. In this work, a deep residual network (ResNet) architecture is used to perform human state classification with hierarchical topological feature extraction. Several combinations of datasets for different environments and propagation paths are used in this study. ResNet's powerful inference simplifies feature extraction and improves the accuracy of human state classification. The experimental results show that the fine-tuned ResNet-18 model performs well in indoor human detection, covering the cases of no person present, a person standing still, and a person moving. Compared with traditional machine learning using handcrafted features, this method is simple and effective.
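The preprocessing the abstract describes — extracting amplitudes from complex CSI samples and arranging them as an image-like matrix — can be sketched as follows. The packet/subcarrier layout, the min-max scaling, and all names are assumptions for illustration; the paper's exact pipeline may differ.

```python
# Sketch of CSI-to-image preprocessing: complex channel samples become a
# 2-D amplitude matrix (rows = packets over time, columns = subcarriers),
# then are scaled to a grayscale-like 0-255 range for CNN input.

def csi_to_amplitude_matrix(csi_packets):
    """csi_packets: list of packets, each a list of (re, im) subcarrier samples.
    Returns a 2-D matrix of amplitudes |re + j*im|."""
    return [[abs(complex(re, im)) for (re, im) in packet]
            for packet in csi_packets]

def normalize_to_image(matrix):
    """Min-max scale a 2-D matrix to the 0-255 range of a grayscale image."""
    flat = [v for row in matrix for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0  # avoid division by zero on constant input
    return [[255.0 * (v - lo) / span for v in row] for row in matrix]
```

Human movement perturbs the multipath channel, so the temporal texture of this matrix differs between the empty-room, still-person, and moving-person states, which is what the ResNet classifier picks up.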


2020 ◽  
pp. 1-1
Author(s):  
Jinlu Shen ◽  
Benjamin J. Belzer ◽  
Krishnamoorthy Sivakumar ◽  
Kheong Sann Chan ◽  
Ashish James
