Leveraging coherent wave field analysis and deep learning in fiber-optic seismology

Author(s):  
Benjamin Schwarz ◽  
Korbinian Sager ◽  
Philippe Jousset ◽  
Gilda Currenti ◽  
Charlotte Krawczyk ◽  
...  

<p><span>Fiber-optic cables form an integral part of modern telecommunications infrastructure and are ubiquitous, particularly in regions where dedicated seismic instrumentation is traditionally sparse or lacking entirely. Fiber-optic seismology promises to enable affordable and time-extended observations of earth and environmental processes at an unprecedented temporal and spatial resolution. The method’s unique potential for combined large-N and large-T observations implies intriguing opportunities but also significant challenges in terms of data storage, data handling and computation.</span></p><p><span>Our goal is to enable real-time data enhancement, rapid signal detection and wave field characterization without the need for time-consuming user interaction. We therefore combine coherent wave field analysis, an optics-inspired processing framework developed in controlled-source seismology, with state-of-the-art deep convolutional neural network (CNN) architectures commonly used in visual perception. While conventional deep learning strategies have to rely on manually labeled or purely synthetic training datasets, coherent wave field analysis labels field data based on physical principles and enables large-scale and purely data-driven training of the CNN models. 
The sheer amount of data already recorded in various settings makes artificial data generation by numerical modeling superfluous – a task that is often constrained by incomplete knowledge of the embedding medium and an insufficient description of processes at or close to the surface, which are challenging to capture in integrated simulations.</span></p><p><span>Applications to extensive field datasets acquired with dark-fiber infrastructure at a geothermal field in SW Iceland and in a town at the flank of Mt Etna, Italy, reveal that the suggested framework generalizes well across different observational scales and environments, and sheds new light on the origin of a broad range of physically distinct wave fields that can be sensed with fiber-optic technology. Owing to the real-time applicability with affordable computing infrastructure, our analysis lends itself well to rapid on-the-fly data enhancement, wave field separation and compression strategies, thereby promising to have a positive impact on the full processing chain currently in use in fiber-optic seismology.</span></p>
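The physics-based labeling idea can be illustrated with a toy example. The sketch below (hypothetical helper names, not the authors' code) computes a windowed semblance — a standard coherence attribute from controlled-source seismology — across neighboring channels of a channel-by-time gather, then thresholds it into a binary "coherent signal" mask of the kind that could serve as a CNN training label:

```python
import numpy as np

def semblance_labels(gather, win=5, thresh=0.5):
    """gather: (n_channels, n_samples) array; returns a boolean label mask.

    Semblance is stacked energy over total energy across `win` channels:
    ~1 where the wave field is coherent, ~1/win for incoherent noise.
    """
    n_ch, n_t = gather.shape
    labels = np.zeros((n_ch, n_t), dtype=bool)
    half = win // 2
    for i in range(half, n_ch - half):
        block = gather[i - half:i + half + 1]          # neighboring channels
        num = block.sum(axis=0) ** 2                   # energy of the stack
        den = win * (block ** 2).sum(axis=0) + 1e-12   # total energy (guarded)
        labels[i] = (num / den) > thresh               # coherent where ratio is high
    return labels
```

A perfectly flat event yields semblance near 1 and is labeled coherent, while random noise hovers near 1/win and falls below the threshold — which is what allows field data to label itself without manual picking.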

Processes ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. 649
Author(s):  
Yifeng Liu ◽  
Wei Zhang ◽  
Wenhao Du

Deep learning based on large volumes of high-quality data plays an important role in many industries. However, deep learning is hard to embed directly in a real-time system: such a system accumulates data only as it acquires it, yet its analysis tasks must be carried out in real time, so the analysis cannot wait for data to accumulate over a long period. To address the difficulties of accumulating high-quality data, meeting the tight timeliness requirements of the analysis, and embedding deep-learning algorithms directly in real-time systems, this paper proposes a new progressive deep-learning framework and evaluates it on image recognition. The experimental results show that the proposed framework is effective, performs well, and reaches conclusions similar to those of a deep-learning framework trained on large-scale data.
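The progressive idea — refine the model each time a new batch arrives, so a usable result exists at every step — can be sketched with a deliberately simple incremental classifier (a toy stand-in, not the paper's network):

```python
class ProgressiveMean:
    """Nearest-centroid classifier whose per-class centroids are updated
    incrementally as labeled samples stream in, instead of waiting for a
    full dataset to accumulate."""

    def __init__(self):
        self.sums, self.counts = {}, {}

    def update(self, x, y):
        # Incorporate one newly acquired sample without revisiting old data.
        self.sums[y] = self.sums.get(y, 0.0) + x
        self.counts[y] = self.counts.get(y, 0) + 1

    def predict(self, x):
        # Assign x to the class with the nearest running centroid.
        return min(self.sums, key=lambda y: abs(x - self.sums[y] / self.counts[y]))
```

The running centroids converge to the same values a batch computation over all accumulated data would give, which mirrors the paper's claim that progressive training approaches the large-scale result.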


Sensors ◽  
2019 ◽  
Vol 19 (5) ◽  
pp. 982 ◽  
Author(s):  
Hyo Lee ◽  
Ihsan Ullah ◽  
Weiguo Wan ◽  
Yongbin Gao ◽  
Zhijun Fang

Make and model recognition (MMR) of vehicles plays an important role in automatic vision-based systems. This paper proposes a novel deep-learning approach to MMR based on the SqueezeNet architecture. Frontal views of vehicle images are first extracted and fed into a deep network for training and testing. A variant of the vanilla SqueezeNet with bypass connections between the Fire modules is employed in this study, which makes the MMR system more efficient. Experimental results on our collected large-scale vehicle dataset indicate that the proposed model achieves a 96.3% rank-1 recognition rate with an economical inference time of 108.8 ms. The deployed model requires less than 5 MB of storage and is therefore highly viable for real-time applications.
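SqueezeNet's efficiency comes from its Fire modules, which squeeze the channel count with 1x1 filters before expanding with a mix of 1x1 and 3x3 filters. A back-of-envelope parameter count (using the fire2 configuration from the original SqueezeNet paper; the MMR variant here may differ) shows the saving over a plain 3x3 convolution with the same number of output channels:

```python
def fire_params(c_in, squeeze, e1x1, e3x3):
    # 1x1 squeeze layer, then parallel 1x1 and 3x3 expand layers (biases omitted).
    return c_in * squeeze + squeeze * e1x1 + 9 * squeeze * e3x3

def plain3x3_params(c_in, c_out):
    # Ordinary 3x3 convolution over all input channels.
    return 9 * c_in * c_out

fire = fire_params(96, squeeze=16, e1x1=64, e3x3=64)   # SqueezeNet fire2
plain = plain3x3_params(96, 128)                        # same 128 output channels
print(fire, plain, round(plain / fire, 1))              # ~9x fewer weights
```

This order-of-magnitude weight reduction is what keeps the deployed model under 5 MB; the bypass connections add no parameters, since they are identity shortcuts between Fire modules.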


Electronics ◽  
2021 ◽  
Vol 10 (16) ◽  
pp. 1932
Author(s):  
Malik Haris ◽  
Adam Glowacz

Automated driving and vehicle safety systems need object detection that is accurate overall, robust to weather and environmental conditions, and able to run in real time. They therefore rely on image-processing algorithms to inspect the contents of images. This article compares the accuracy of five major object-detection algorithms: Region-based Fully Convolutional Network (R-FCN), Mask Region-based Convolutional Neural Network (Mask R-CNN), Single Shot Multi-Box Detector (SSD), RetinaNet, and You Only Look Once v4 (YOLOv4). For this comparative analysis we used the large-scale Berkeley Deep Drive (BDD100K) dataset. The strengths and limitations of each algorithm are analyzed in terms of accuracy (with and without occlusion and truncation), computation time, and the precision-recall curve. The comparison given in this article is helpful for understanding the pros and cons of standard deep-learning-based detectors operating under real-time deployment restrictions. We conclude that, in an identical testing environment, YOLOv4 detects difficult road targets most accurately under complex road scenarios and weather conditions.
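Behind any such precision-recall comparison sits the same matching step: a detection counts as a true positive when its intersection-over-union (IoU) with an unmatched ground-truth box exceeds a threshold, commonly 0.5. A minimal sketch of that standard bookkeeping (illustrative code, not any of the compared frameworks):

```python
def iou(a, b):
    """Intersection over union of boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def precision_recall(dets, gts, thr=0.5):
    """dets should be sorted by descending confidence; greedy 1-to-1 matching."""
    matched, tp = set(), 0
    for d in dets:
        hit = next((i for i, g in enumerate(gts)
                    if i not in matched and iou(d, g) >= thr), None)
        if hit is not None:
            matched.add(hit)
            tp += 1
    return tp / len(dets), tp / len(gts)   # (precision, recall)
```

Sweeping the confidence cutoff over the sorted detections traces out the precision-recall curve used to rank the five detectors.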


Mathematics ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 298 ◽  
Author(s):  
Shenshen Gu ◽  
Yue Yang

The Max-cut problem is a well-known combinatorial optimization problem with many real-world applications. However, the problem has been proven to be non-deterministic polynomial-hard (NP-hard), which means that exact solution algorithms are not suitable for large-scale instances, as obtaining a solution is too time-consuming. Designing heuristic algorithms is therefore a promising but challenging direction for effectively solving large-scale Max-cut problems. For this reason, we propose a method that combines a pointer network with two deep-learning strategies (supervised learning and reinforcement learning) to address this challenge. A pointer network is a sequence-to-sequence deep neural network that extracts data features in a purely data-driven way, discovering the hidden patterns behind the data. Taking the characteristics of the Max-cut problem into account, we designed the input and output mechanisms of the pointer network model and trained it with both supervised learning and reinforcement learning to evaluate its performance. Our experiments illustrate that the model applies well to large-scale Max-cut problems, and the results suggest that the new method will encourage broader exploration of deep neural networks for large-scale combinatorial optimization problems.
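The objective such a model is trained to maximize is easy to state in code. The toy evaluation below (not the paper's model) computes the cut value of a 0/1 node assignment and includes a one-flip local search as the kind of simple heuristic baseline a learned solver is measured against:

```python
def cut_value(edges, side):
    """edges: list of (u, v, w); side: dict node -> 0/1 partition label.
    The cut value sums the weights of edges crossing the partition."""
    return sum(w for u, v, w in edges if side[u] != side[v])

def one_flip_local_search(edges, side):
    """Greedy baseline: flip any node whose move strictly increases the cut,
    repeating until no single flip helps (a local optimum)."""
    improved = True
    while improved:
        improved = False
        for node in list(side):
            cur = cut_value(edges, side)
            side[node] ^= 1                 # tentatively flip
            if cut_value(edges, side) > cur:
                improved = True             # keep the flip
            else:
                side[node] ^= 1             # revert
    return side
```

On a triangle with unit weights, the search reaches the optimum cut of 2, the kind of reference value a pointer-network solution would be compared to on small instances.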


Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1664
Author(s):  
Yoon-Ki Kim ◽  
Yongsung Kim

Recently, as the amount of real-time video streaming data has increased, distributed parallel processing systems have rapidly evolved to process large-scale data. In addition, with the increase in the scale of computing resources constituting such systems, orchestration technology has become crucial for the proper management of computing resources, in terms of allocating resources, setting up the programming environment, and deploying user applications. In this paper, we present a new distributed parallel processing platform for real-time large-scale image processing based on deep-learning model inference, called DiPLIP. It provides a scheme for large-scale real-time image inference using a buffer layer and a parallel processing environment that scales with the size of the image stream. It allows users to easily deploy trained deep-learning models for processing real-time images in a distributed parallel environment at high speed, through the distribution of virtual machine containers.
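The role of a buffer layer between stream ingestion and parallel inference can be sketched with standard library primitives (a rough analogy to DiPLIP's design, with hypothetical names; the real platform distributes work across containers rather than threads):

```python
import queue
import threading

def run_pipeline(frames, infer, n_workers=4, buffer_size=8):
    """Buffer incoming frames in a bounded queue so a pool of inference
    workers can drain them in parallel without dropping bursts."""
    buf = queue.Queue(maxsize=buffer_size)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            frame = buf.get()
            if frame is None:          # poison pill: shut this worker down
                break
            out = infer(frame)         # stand-in for deep-learning inference
            with lock:
                results.append(out)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for f in frames:
        buf.put(f)                     # blocks when the buffer is full (backpressure)
    for _ in threads:
        buf.put(None)                  # one pill per worker
    for t in threads:
        t.join()
    return results
```

The bounded queue provides backpressure when frames arrive faster than inference can run, which is the condition under which a platform like DiPLIP would instead scale out by adding containers.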


2021 ◽  
Author(s):  
James Ramsay ◽  
Lilia Noble ◽  
Glynn Lockyer ◽  
Mohand Alyan ◽  
Ahmed Al Shmakhy

Abstract This paper outlines how the problem of previously unmanageable data volumes produced by distributed fiber-optic well monitoring systems is solved through the use of the latest sensing and analytics platform. The platform significantly reduces fiber-optic data volumes, enabling data to be streamed, processed, stored and visualized, all in real time. The platform was effectively utilized for real-time data processing and visualization of well injection profiles of fields in the Middle East. The platform addresses the big data challenge associated with streaming distributed fiber-optic data in three key areas: edge processing reduces Distributed Fiber Optic (DFO) data rates by orders of magnitude so data can be streamed from the edge to the end user in real time; tiled data storage utilizes an innovative storage strategy to enable fast query responses whether visualizing years or just seconds of DFO data; and elastic processing and storage infrastructure enables the platform to scale seamlessly and handle variable data rates. Raw Distributed Acoustic Sensing (DAS) data can be generated at rates of 100 MB per second and cannot feasibly be transferred over a standard internet connection. The sensing and analytics platform's algorithms extract features at the edge which reduce data rates by three orders of magnitude whilst still preserving all key information in the data. Processed DFO data is aggregated and tiled in real time at tens of different resolutions with respect to both time and fiber length. This enables sub-second query response times even when requesting DFO data across years of history. All platform processing logic is designed to run asynchronously on serverless infrastructure, which enables the platform's infrastructure to rapidly scale up or down in response to variable data rates. The result is a cloud-based visualization dashboard capable of displaying DFO data in near real time across any time range and fiber length. 
Use of this sensing and analytics platform allowed seamless streaming of fiber-optic data on the Middle East field for injection monitoring, allowing the operator to visualize injection profiles and optimize the injection program in real time. This sensing and analytics fiber management platform enables the user to reliably stream and visualize DFO data in real time. It provides visibility into the subsurface for production and injection wells, enabling field-wide efficiencies and optimization.
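The multi-resolution tiling idea is essentially a pyramid of progressively coarser aggregates. A toy sketch (illustrative only, not the vendor's platform) builds such a pyramid along one axis by averaging pairs of samples; a query over years of data can then be answered from a coarse level while a zoomed-in query reads a fine one:

```python
def build_pyramid(samples, levels=4):
    """Return a list of resolution levels: level 0 is the raw series and each
    subsequent level halves the previous one by averaging adjacent pairs."""
    pyramid = [list(samples)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        pyramid.append([(prev[i] + prev[i + 1]) / 2
                        for i in range(0, len(prev) - 1, 2)])
    return pyramid
```

Because every level is precomputed at write time, answering a visualization query reduces to picking the level whose sample count fits the screen and slicing it, which is what makes sub-second responses over years of DFO data feasible.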


Author(s):  
Dazhong Wu ◽  
Janis Terpenny ◽  
Li Zhang ◽  
Robert Gao ◽  
Thomas Kurfess

Over the past few decades, small- and medium-sized manufacturers as well as large original equipment manufacturers (OEMs) have faced an increasing need for low-cost and scalable intelligent manufacturing machines. Capabilities are needed for collecting and processing large volumes of real-time data generated by manufacturing machines and processes, as well as for diagnosing the root cause of identified defects, predicting their progression, and forecasting maintenance actions proactively to minimize unexpected machine downtime. Although cloud computing enables ubiquitous and instant remote access to scalable information and communication technology (ICT) infrastructures and high-volume data storage, it has limitations in latency-sensitive applications such as high-performance computing and real-time stream analytics. The emergence of fog computing, the Internet of Things (IoT), and cyber-physical systems (CPS) represents a radical change in the way sensing systems, along with ICT infrastructures, collect and analyze large volumes of real-time data streams in geographically distributed environments. Ultimately, such technological approaches enable machines to function as agents capable of intelligent behaviors such as automatic fault and failure detection, self-diagnosis, and preventative maintenance scheduling. The objective of this research is to introduce a fog-enabled architecture that consists of smart sensor networks, communication protocols, parallel machine-learning software, and private and public clouds. The fog-enabled architecture has the potential to enable large-scale, geographically distributed online machine and process monitoring, diagnosis, and prognosis that require low latency and high bandwidth in the context of data-driven cyber-manufacturing systems.


Author(s):  
Nikhil S. Rajguru, et al.

Traffic boards and traffic signals are used to maintain proper traffic flow on busy roads. They convey the rules to follow when driving, warning distracted drivers and preventing actions that could lead to an accident. We propose a system that recognizes these boards and signals in real time, helping to avoid major mishaps: real-time automatic sign detection and recognition can assist the driver and significantly increase his or her safety. Traffic sign recognition has lately attracted immense interest from large companies such as Google, Apple, Volkswagen and Mobileye, driven by the market need for intelligent applications such as autonomous driving, advanced driver assistance systems (ADAS) and mobile mapping. Here, we implement such a system in a cost-efficient manner using a Raspberry Pi. The proposed system detects a traffic board or signal and captures its image, which a deep-learning approach then recognizes, displaying the result on the dashboard; it also measures the distance to the obstacle ahead, which allows the brake system to engage when the obstacle is near. A PiCam connected to the Raspberry Pi captures the images of traffic signs, and a monitor displays the required output, showing the type of sign and the distance to collision. Through the automated braking system and simultaneous sign recognition, this proposal aims to avoid a large number of accidents at bridges and work-in-progress areas and thereby reduce the death ratio.

