Footstep detection in urban seismic data with a convolutional neural network

2020 ◽  
Vol 39 (9) ◽  
pp. 654-660 ◽  
Author(s):  
Srikanth Jakkampudi ◽  
Junzhu Shen ◽  
Weichen Li ◽  
Ayush Dev ◽  
Tieyuan Zhu ◽  
...  

Seismic data for studying the near surface have historically been extremely sparse in cities, limiting our ability to understand small-scale processes, locate small-scale geohazards, and develop earthquake hazard microzonation at the scale of buildings. In recent years, distributed acoustic sensing (DAS) technology has enabled the use of existing underground telecommunications fibers as dense seismic arrays, requiring little manual labor or energy to maintain. At the Fiber-Optic foR Environmental SEnsEing array under Pennsylvania State University, we detected weak slow-moving signals in pedestrian-only areas of campus. These signals were clear in the 1 to 5 Hz range, and we verified that they were caused by footsteps. As part of a broader scheme to remove and obscure these footsteps in the data, we developed a convolutional neural network to detect them automatically. For this development we created a data set of more than 4000 windows of data labeled with or without footsteps. We describe improvements to the data input and architecture, leading to approximately 84% accuracy on the test data. Performance of the network was better for individual walkers and worse when there were multiple walkers. We believe the privacy concerns of individual walkers are likely to be the highest priority. Community buy-in will be required for these technologies to be deployed at a larger scale; hence, we should continue to proactively develop the tools to ensure city residents are comfortable with all geophysical data that may be acquired.
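A minimal sketch of the kind of binary window classifier described above is shown below (PyTorch); the window shape, layer sizes, and two-class head are illustrative assumptions rather than the published architecture.

```python
# Minimal sketch: a small CNN that labels fixed-size DAS windows as
# "footsteps" / "no footsteps". Window shape (fiber channels x time samples)
# and layer sizes are assumptions, not the published architecture.
import torch
import torch.nn as nn

class FootstepCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(32), nn.ReLU(),
            nn.Linear(32, 2),              # footsteps vs. no footsteps
        )

    def forward(self, x):                  # x: (batch, 1, fiber_channels, time)
        return self.classifier(self.features(x))

model = FootstepCNN()
window = torch.randn(4, 1, 64, 256)        # hypothetical 64-channel, 256-sample windows
logits = model(window)                     # (4, 2) class scores
```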

Geophysics ◽  
2020 ◽  
Vol 85 (4) ◽  
pp. WA13-WA26 ◽  
Author(s):  
Jing Sun ◽  
Sigmund Slang ◽  
Thomas Elboth ◽  
Thomas Larsen Greiner ◽  
Steven McDonald ◽  
...  

For economic and efficiency reasons, blended acquisition of seismic data is becoming increasingly commonplace. Seismic deblending methods are computationally demanding and normally consist of multiple processing steps. Furthermore, the process of selecting parameters is not always trivial. Machine-learning-based processing has the potential to significantly reduce processing time and to change the way seismic deblending is carried out. We have developed a data-driven deep-learning-based method for fast and efficient seismic deblending. The blended data are sorted from the common-source to the common-channel domain to transform the character of the blending noise from coherent events to incoherent contributions. A convolutional neural network is designed according to the special characteristics of seismic data and performs deblending with results comparable to those obtained with conventional industry deblending algorithms. To ensure authenticity, the blending was performed numerically and only field seismic data were used, including more than 20,000 training examples. After training and validating the network, seismic deblending can be performed in near real time. Experiments also indicate that the initial signal-to-noise ratio is the major factor controlling the quality of the final deblended result. The network is also demonstrated to be robust and adaptive by using the trained model to first deblend a new data set from a different geologic area with a slightly different delay time setting and second to deblend shots with blending noise in the top part of the record.
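The sketch below illustrates the domain re-sort described above, assuming the blended records are held as a (source, channel, time) array; in the common-channel domain the interfering sources appear as incoherent noise, which is what a denoising-style network can then target.

```python
# Sketch of the common-source -> common-channel re-sort, assuming the blended
# data are held as a 3-D array indexed (source, channel, time). In the
# common-channel domain the blending noise from other sources appears
# incoherent, which makes a denoising-style CNN applicable.
import numpy as np

n_src, n_chan, n_t = 100, 120, 1001
blended = np.random.randn(n_src, n_chan, n_t)   # stand-in for blended field data

# common-source gathers:  blended[i]            -> shape (n_chan, n_t)
# common-channel gathers: swap the first axes   -> shape (n_src, n_t) per channel
common_channel = np.transpose(blended, (1, 0, 2))
gather = common_channel[7]                      # all sources recorded on channel 7
print(gather.shape)                             # (100, 1001)
```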


2019 ◽  
Vol 7 (3) ◽  
pp. SE161-SE174 ◽  
Author(s):  
Reetam Biswas ◽  
Mrinal K. Sen ◽  
Vishal Das ◽  
Tapan Mukerji

An inversion algorithm is commonly used to estimate elastic properties, such as P-wave velocity (Vp), S-wave velocity (Vs), and density (ρ), of the earth’s subsurface. Generally, the seismic inversion problem is solved using one of the traditional optimization algorithms. These algorithms start with a given model, update it at each iteration following a physics-based rule, and are applied at each common depth point (CDP) independently to estimate the elastic parameters. Here, we have developed a technique using a convolutional neural network (CNN) to solve the same problem. We perform two critical steps to take advantage of the generalization capability of the CNN and of the physics used to generate synthetic data for a meaningful representation of the subsurface. First, rather than using the CNN for a classification-type problem, which is the standard approach, we modify it to solve a regression problem and estimate the elastic properties. Second, again unlike a conventional CNN, which is trained by supervised learning with predetermined label (elastic parameter) values, we use the physics of our forward problem to train the weights. The network has two parts: the first is the convolutional network, which takes seismic data as input and predicts the elastic parameters, the desired intermediate result. In the second part, we apply wave-propagation physics to the output of the CNN to generate predicted seismic data for comparison with the actual data and calculation of the error. This error between the true and predicted seismograms is then used to calculate gradients and update the weights in the CNN. After the network is trained, only the first part is needed to estimate the elastic properties at the remaining CDPs directly. We demonstrate the application of the physics-guided CNN on prestack and poststack inversion problems and, for comparison, examine a conventional CNN workflow without any physics guidance. We first implement the algorithm on a synthetic data set for prestack and poststack data and then apply it to a real data set from the Cana field. In all of the examples, we use at most 20% of the data for training. Our approach offers a distinct advantage over a conventional machine-learning approach in that we circumvent the need for labeled data sets for training.
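The sketch below (PyTorch) illustrates the two-part training loop described above; a simple convolutional synthetic (reflectivity convolved with a wavelet) stands in for the authors' wave-propagation physics, and all sizes are illustrative.

```python
# Sketch of a physics-guided training loop: a 1-D CNN maps a seismic trace to an
# elastic-parameter profile (here: an impedance-like quantity); a simple
# differentiable forward model stands in for the wave-propagation physics, so no
# labeled elastic parameters are needed.
import torch
import torch.nn as nn

n_t = 128
wavelet = torch.exp(-0.5 * torch.linspace(-2, 2, 21) ** 2).view(1, 1, -1)

cnn = nn.Sequential(                       # part 1: seismic data -> elastic parameter
    nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 16, 5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 1, 5, padding=2),
)

def forward_model(impedance):              # part 2: physics, kept differentiable
    refl = (impedance[:, :, 1:] - impedance[:, :, :-1]) / (
           impedance[:, :, 1:] + impedance[:, :, :-1] + 1e-8)
    return nn.functional.conv1d(refl, wavelet, padding=10)

observed = torch.randn(8, 1, n_t)          # stand-in for observed traces at training CDPs
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    predicted = forward_model(cnn(observed).abs() + 1.0)
    loss = ((predicted - observed[:, :, :predicted.shape[-1]]) ** 2).mean()
    loss.backward()                        # gradients flow through the physics into the CNN
    opt.step()
```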


2021 ◽  
Vol 7 (2) ◽  
pp. 356-362
Author(s):  
Harry Coppock ◽  
Alex Gaskell ◽  
Panagiotis Tzirakis ◽  
Alice Baird ◽  
Lyn Jones ◽  
...  

Background: Since the emergence of COVID-19 in December 2019, multidisciplinary research teams have wrestled with how best to control the pandemic in light of its considerable physical, psychological and economic damage. Mass testing has been advocated as a potential remedy; however, mass testing using physical tests is a costly and hard-to-scale solution. Methods: This study demonstrates the feasibility of an alternative form of COVID-19 detection, harnessing digital technology through the use of audio biomarkers and deep learning. Specifically, we show that a deep-neural-network-based model can be trained to detect symptomatic and asymptomatic COVID-19 cases using breath and cough audio recordings. Results: Our model, a custom convolutional neural network, demonstrates strong empirical performance on a data set of 355 crowdsourced participants, achieving an area under the receiver operating characteristic curve of 0.846 on the task of COVID-19 classification. Conclusion: This study offers a proof of concept for diagnosing COVID-19 using cough and breath audio signals and motivates a comprehensive follow-up study on a wider data sample, given the evident advantages of a low-cost, highly scalable digital COVID-19 diagnostic tool.
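A minimal sketch of this kind of audio classifier is given below, assuming each recording has been converted to a fixed-size log-mel spectrogram; the network is illustrative, not the authors' custom architecture, and scikit-learn is used only for the ROC-AUC metric.

```python
# Minimal sketch of the classification set-up, assuming each breath/cough
# recording is already a fixed-size log-mel spectrogram. The network and sizes
# are illustrative assumptions, not the authors' custom architecture.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 1),            # single logit: P(COVID-positive)
)

spectrograms = torch.randn(355, 1, 64, 128)    # stand-in for the crowdsourced cohort
labels = torch.randint(0, 2, (355,)).float()   # stand-in positive/negative labels
with torch.no_grad():
    scores = torch.sigmoid(net(spectrograms)).squeeze(1)
print("ROC AUC:", roc_auc_score(labels.numpy(), scores.numpy()))  # ~0.5 untrained
```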


2021 ◽  
Vol 11 (9) ◽  
pp. 4292
Author(s):  
Mónica Y. Moreno-Revelo ◽  
Lorena Guachi-Guachi ◽  
Juan Bernardo Gómez-Mendoza ◽  
Javier Revelo-Fuelagán ◽  
Diego H. Peluffo-Ordóñez

Automatic crop identification and monitoring is a key element in enhancing food production processes as well as diminishing the related environmental impact. Although several efficient deep learning techniques have emerged in the field of multispectral imagery analysis, the crop classification problem still needs more accurate solutions. This work introduces a competitive methodology for crop classification from multispectral satellite imagery, mainly using an enhanced 2D convolutional neural network (2D-CNN) designed as a smaller-scale architecture, together with a novel post-processing step. The proposed methodology contains four steps: image stacking, patch extraction, classification model design (based on a 2D-CNN architecture), and post-processing. First, the images are stacked to increase the number of features. Second, the input images are split into patches and fed into the 2D-CNN model. Then, the 2D-CNN model is constructed within a small-scale framework and trained to recognize 10 different types of crops. Finally, a post-processing step is performed to reduce the classification error caused by lower-spatial-resolution images. Experiments were carried out on the Campo Verde database, a set of satellite images captured by the Landsat and Sentinel satellites over the municipality of Campo Verde, Brazil. Compared with the maximum accuracy values reported in the literature (an overall accuracy of about 81%, an F1 score of 75.89%, and an average accuracy of 73.35%), the proposed methodology achieves a competitive overall accuracy of 81.20%, an F1 score of 75.89%, and an average accuracy of 88.72% when classifying 10 different crops, while ensuring an adequate trade-off between the number of multiply-accumulate operations (MACs) and accuracy. Furthermore, given its ability to effectively classify patches from two image sequences, this methodology may prove appealing for other real-world applications, such as the classification of urban materials.
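The sketch below illustrates the first two steps (image stacking and patch extraction), assuming co-registered bands are available as 2-D arrays; the band count and patch size are illustrative.

```python
# Sketch of steps 1-2 (image stacking and patch extraction), assuming
# co-registered multispectral bands are available as 2-D numpy arrays. Band
# count and patch size are illustrative; the 2D-CNN then classifies each patch.
import numpy as np

h, w, patch = 512, 512, 32
bands = [np.random.rand(h, w) for _ in range(10)]   # stand-in for Landsat/Sentinel bands
stacked = np.stack(bands, axis=-1)                  # step 1: (h, w, n_bands) feature stack

patches = np.array([
    stacked[i:i + patch, j:j + patch, :]            # step 2: non-overlapping patches
    for i in range(0, h - patch + 1, patch)
    for j in range(0, w - patch + 1, patch)
])
print(patches.shape)                                # (n_patches, 32, 32, 10) -> 2D-CNN input
```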


2020 ◽  
Vol 2020 ◽  
pp. 1-6
Author(s):  
Jian-ye Yuan ◽  
Xin-yuan Nan ◽  
Cheng-rong Li ◽  
Le-le Sun

Given that garbage classification is urgent, a 23-layer convolutional neural network (CNN) model is designed in this paper, with an emphasis on real-time garbage classification, to address the low accuracy of garbage classification and recycling and the difficulty of manual sorting. Firstly, depthwise separable convolution was used to reduce the number of parameters (Params) of the model. Then, an attention mechanism was used to improve the accuracy of the garbage classification model. Finally, model fine-tuning was used to further improve its performance. We also compared the model with classic image classification models, including AlexNet, VGG16, and ResNet18, and with lightweight classification models, including MobileNetV2 and ShuffleNetV2, and found that the proposed model, GAF_dense, has a higher accuracy rate and fewer Params and FLOPs. To further check the performance of the model, we tested it on the CIFAR-10 data set and found that the accuracy rates of GAF_dense are 0.018 and 0.03 higher than those of ResNet18 and ShuffleNetV2, respectively. On the ImageNet data set, the accuracy rates of GAF_dense are 0.225 and 0.146 higher than those of ResNet18 and ShuffleNetV2, respectively. Therefore, the garbage classification model proposed in this paper is suitable for garbage classification and other classification tasks that help protect the ecological environment, and can be applied to tasks in areas such as environmental science, children’s education, and environmental protection.
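The sketch below shows a depthwise separable convolution block of the kind used to cut parameter counts; it is not the published GAF_dense model, and the channel sizes are illustrative.

```python
# Sketch of a depthwise separable convolution block (PyTorch), the kind of
# building block used to reduce Params; sizes are illustrative and this is not
# the published GAF_dense architecture.
import torch.nn as nn

def depthwise_separable(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),  # depthwise
        nn.Conv2d(in_ch, out_ch, kernel_size=1),                          # pointwise
        nn.BatchNorm2d(out_ch), nn.ReLU(),
    )

standard = nn.Conv2d(64, 128, 3, padding=1)
separable = depthwise_separable(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), count(separable))   # the separable block uses far fewer parameters
```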


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of computing the Hessian, so an efficient approximation is introduced. Approximation is achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield for approximately two orders of magnitude less in cost; but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common receiver data that are superior in appearance compared to conventional datuming.
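A generic numpy sketch of weighted, damped least squares with a crude banded approximation of the Hessian is shown below; the operator, weights, damping value, and band width are all hypothetical stand-ins, not the paper's operators.

```python
# Generic weighted, damped least-squares sketch of the kind of estimate
# described above: m = (A^T W A + eps I)^(-1) A^T W d. The operator A, weights
# W and damping are hypothetical; keeping only a band of diagonals of the
# Hessian A^T W A crudely mimics the paper's efficiency approximation.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 120))        # stand-in extrapolation operator
d = rng.standard_normal(200)               # recorded (irregularly sampled) wavefield
W = np.diag(rng.uniform(0.5, 1.0, 200))    # data weights
eps = 1e-2                                 # damping

H = A.T @ W @ A                            # full Hessian (the costly part in practice)
m_full = np.linalg.solve(H + eps * np.eye(120), A.T @ W @ d)

k = 5                                      # keep only a band of 2k+1 diagonals
mask = np.abs(np.subtract.outer(np.arange(120), np.arange(120))) <= k
m_band = np.linalg.solve(H * mask + eps * np.eye(120), A.T @ W @ d)
```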


2021 ◽  
Vol 9 (7) ◽  
pp. 755
Author(s):  
Kangkang Jin ◽  
Jian Xu ◽  
Zichen Wang ◽  
Can Lu ◽  
Long Fan ◽  
...  

Warm currents have a strong impact on the melting of sea ice, so clarifying current features plays a very important role in Arctic sea-ice coverage forecasting. Currently, Arctic acoustic tomography is the only feasible method for large-range current measurement under the Arctic sea ice. Furthermore, because of the strong Coriolis force at high latitudes, small-scale variability greatly affects the accuracy of Arctic acoustic tomography; such variability cannot be captured by empirical parameters or resolved by regularized least squares (RLS) in the inverse problem of Arctic acoustic tomography. In this paper, a convolutional neural network (CNN) is proposed to enhance the prediction accuracy in the Arctic and, in particular, Gaussian noise is added to reflect the disturbance of the Arctic environment. First, we use the finite element method to build the background ocean model. Then, the deep-learning CNN method constructs the non-linear mapping relationship between the acoustic data and the corresponding flow velocity. Finally, the simulation results show that applying the deep-learning convolutional neural network method to Arctic acoustic tomography achieves a 45.87% accuracy improvement over the common RLS method in the current inversion.
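The sketch below (PyTorch) illustrates the kind of non-linear regression described, mapping acoustic observables to cell velocities with Gaussian noise added to the inputs; the ray and cell counts, noise level, training targets, and network are illustrative assumptions.

```python
# Sketch of the regression set-up: a small convolutional network maps acoustic
# observables (e.g., travel-time differences along ray paths) to current
# velocities in model cells, with Gaussian noise added to mimic Arctic
# environmental disturbance. Sizes, noise level, and targets are assumptions.
import torch
import torch.nn as nn

n_rays, n_cells = 36, 50
net = nn.Sequential(
    nn.Conv1d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv1d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * n_rays, n_cells),   # velocity in each model cell
)

travel_times = torch.randn(256, 1, n_rays)           # stand-in simulated acoustic data
velocities = torch.randn(256, n_cells)               # stand-in training targets
noisy_inputs = travel_times + 0.05 * torch.randn_like(travel_times)  # Gaussian noise

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(noisy_inputs), velocities)
    loss.backward()
    opt.step()
```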


2020 ◽  
pp. 1-11
Author(s):  
Jie Liu ◽  
Hongbo Zhao

BACKGROUND: Convolutional neural networks are often superior to other similar algorithms in image classification. The convolution and sub-sampling layers extract sample features, and weight sharing greatly reduces the number of trainable parameters of the network. OBJECTIVE: This paper describes the improved convolutional neural network structure, including the convolution layer, sub-sampling layer and fully connected layer. It also introduces the “yan.mat” data set of eye images, reflecting five kinds of diseases and normal eyes through the blood filaments of the eyeball, stored in a form convenient for calculation in MATLAB. METHODS: In this paper, we improve the structure of the classical LeNet-5 convolutional neural network, design network structures with different convolution kernels, different sub-sampling methods and different classifiers, and use these structures to solve the problem of ocular bloodstream disease recognition. RESULTS: The experimental results show that the improved convolutional neural network structure performs well on the eye blood-filament data set, which shows that the convolutional neural network has strong classification ability and robustness. The improved structure can classify the diseases reflected by the eyeball blood filaments well.
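The sketch below mirrors the kind of LeNet-5 variations described (different convolution kernels and sub-sampling methods); the paper's experiments are in MATLAB, so this Python sketch only illustrates the structure, with a six-class head (five diseases plus normal) assumed from the data set description.

```python
# LeNet-5-style sketch illustrating the variations described: different
# convolution kernels and sub-sampling (pooling) choices. Not the paper's exact
# configuration; the 6-class output (5 diseases + normal) follows the data set
# description above.
import torch.nn as nn

def lenet_variant(pool=nn.MaxPool2d, kernel=5, n_classes=6):
    return nn.Sequential(
        nn.Conv2d(1, 6, kernel, padding=kernel // 2), nn.ReLU(), pool(2),
        nn.Conv2d(6, 16, kernel, padding=kernel // 2), nn.ReLU(), pool(2),
        nn.Flatten(), nn.LazyLinear(120), nn.ReLU(),
        nn.Linear(120, 84), nn.ReLU(), nn.Linear(84, n_classes),
    )

max_pool_model = lenet_variant(pool=nn.MaxPool2d, kernel=5)
avg_pool_model = lenet_variant(pool=nn.AvgPool2d, kernel=3)   # alternative sub-sampling
```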


Author(s):  
Fei Rong ◽  
Li Shasha ◽  
Xu Qingzheng ◽  
Liu Kun

A station logo is a way for a TV station to claim copyright; identifying the station logo enables analysis and understanding of the video and helps ensure that the broadcast TV signal is not illegally interfered with. In this paper, we design a station logo detection method based on a convolutional neural network, exploiting characteristics of station logos such as small variation in scale-to-height ratio and a relatively fixed position. Firstly, to realize the preprocessing and feature extraction of the station data, video samples are collected, filtered, framed, labeled and processed. Then, the training and test sample data are divided proportionally to train the station logo detection model. Finally, the model is tested on the samples to evaluate its effect in practice. Simulation experiments prove its validity.
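The sketch below illustrates one way to exploit the relatively fixed logo position described above: crop the corner region where the logo normally appears and classify it with a small CNN; the frame source, crop size, and number of candidate logos are illustrative assumptions.

```python
# Sketch exploiting the "relatively fixed position" property: crop the corner
# region where a logo normally sits and classify it with a small CNN. Frame
# source, crop size and station count are illustrative assumptions.
import torch
import torch.nn as nn

frames = torch.rand(8, 3, 360, 640)            # stand-in for decoded video frames
logo_region = frames[:, :, :72, :128]          # top-left corner crop, assumed logo area

classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 10),           # e.g., 10 candidate station logos
)
station_scores = classifier(logo_region)       # (8, 10) per-frame logo scores
```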

