blended data
Recently Published Documents

TOTAL DOCUMENTS: 49 (five years: 17)
H-INDEX: 10 (five years: 1)

Geophysics · 2021 · pp. 1-76
Author(s): Siyuan Chen, Siyuan Cao, Yaoguang Sun

In separating blended data, conventional methods based on sparse inversion assume that the primary source is coherent and the secondary source is randomized. The L1-norm, the most commonly used regularization term, applies a global threshold to the sparse spectrum in the transform domain; when this threshold is relatively high, however, more high-frequency information from the primary source is lost. For this reason, we analyze how blended data are generated based on convolution theory and conclude that the blended data are randomly distributed only in the spatial direction. Taking the slope-constrained frequency-wavenumber (f-k) transform as an example, we propose a frequency-dependent threshold that reduces high-frequency loss during deblending. We further propose a structure-weighted threshold that exploits the concentration of primary-source energy along the wavenumber direction. The combination of frequency- and structure-weighted thresholds effectively improves deblending performance. Model and field data show that the proposed frequency-structure weighted threshold preserves frequency content better than a global threshold: it retains more of the primary source's high-frequency information, and the similarity between the other frequency bands and the unblended data is also improved.
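To make the thresholding idea concrete, the sketch below applies one pass of soft thresholding in the f-k domain with a per-frequency threshold (scaled to each frequency slice's peak amplitude) and a simple structure weight along wavenumber. It is a minimal illustration of the concept, not the authors' slope-constrained transform; the parameter lam and the exact weighting form are assumptions.

```python
import numpy as np

def fk_weighted_threshold(d, lam=0.3):
    """One soft-thresholding pass in the f-k domain with a frequency-
    dependent, structure-weighted threshold (illustrative sketch)."""
    D = np.fft.fft2(d)                               # (t, x) -> (f, k)
    A = np.abs(D)
    peak = A.max(axis=1, keepdims=True) + 1e-12      # per-frequency peak amplitude
    tau = lam * peak                                 # frequency-dependent threshold
    w = A / peak                                     # structure weight along wavenumber
    tau = tau * (1.0 - 0.5 * w)                      # lower threshold where primary energy concentrates
    shrink = np.maximum(A - tau, 0.0) / (A + 1e-12)  # complex soft thresholding
    return np.real(np.fft.ifft2(D * shrink))
```

Because tau is set per frequency slice rather than from the global spectral maximum, weak high-frequency primary energy survives thresholds that would otherwise wipe it out.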


Author(s): Woodon Jeong, Constantinos Tsingas, Mohammed S. Almubarak

Geophysics · 2021 · pp. 1-56
Author(s): Breno Bahia, Rongzhi Lin, Mauricio Sacchi

Denoisers can help solve inverse problems via a recently proposed framework known as regularization by denoising (RED), which defines the regularization term of the inverse problem through an explicit denoising engine. Simultaneous-source separation techniques, being themselves a combination of inversion and denoising methods, are a natural field in which to explore RED. We investigate the applicability of RED to simultaneous-source data processing and introduce a deblending algorithm named REDeblending (RDB). The formulation permits deblending algorithms in which the user can select any denoising engine that satisfies the RED conditions. We test two popular denoisers, frequency-wavenumber thresholding and singular spectrum analysis, although the method is not limited to them. Numerical experiments on blended data showcase the performance of RDB.
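A minimal sketch of such a RED-style deblending loop is shown below, written as plain gradient descent on a data-misfit term plus the RED regularizer. The callables blend, adjoint, and denoise, and the step and weight parameters, are placeholders for whatever blending operator and RED-compatible denoising engine the user supplies; this is not the authors' RDB implementation.

```python
import numpy as np

def red_deblend(b, blend, adjoint, denoise, mu=0.5, lam=1.0, niter=50):
    """Gradient descent on 0.5*||B x - b||^2 + lam * 0.5 * x^T (x - f(x)).
    Under the RED conditions the regularizer's gradient is x - f(x),
    so the full gradient is adjoint(B x - b) + lam * (x - f(x))."""
    x = adjoint(b)                  # pseudo-deblended starting model
    for _ in range(niter):
        grad = adjoint(blend(x) - b) + lam * (x - denoise(x))
        x = x - mu * grad
    return x
```

The plug-in construction works because, for denoisers satisfying the RED conditions (local homogeneity, symmetric Jacobian), the residual x - f(x) is exactly the gradient of the regularization term.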


2020 · Vol 12 (17) · pp. 2861
Author(s): Jifu Yin, Xiwu Zhan, Jicheng Liu

Soil moisture plays a vital role in understanding hydrological, meteorological, and climatological land surface processes. To meet the need for real-time global soil moisture datasets, the Soil Moisture Operational Product System (SMOPS) was developed at the National Oceanic and Atmospheric Administration as a one-stop shop for soil moisture observations from all available satellite sensors. What makes SMOPS unique is its near-real-time global blended soil moisture product. Since the first version was publicly released in 2010, SMOPS has been updated twice in response to user feedback, with improved retrieval algorithms and observations from new satellite sensors. Version 3.0 has been in operational release since 2017. Significant differences in climatological averages lead to marked differences in data quality between the newest and older versions of the SMOPS blended soil moisture product. This study shows that SMOPS version 3.0 has clearly reduced data uncertainties and increased correlations with respect to quality-controlled in situ measurements. The new version also agrees more robustly with the European Space Agency's Climate Change Initiative (ESA_CCI) soil moisture datasets. With its higher accuracy, the blended data product from the new version of SMOPS is expected to benefit hydrological, meteorological, and climatological research, as well as numerical weather, climate, and water prediction operations.
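As an illustration of the kind of comparison the study reports, the hypothetical helper below computes three skill metrics commonly used to score a satellite soil moisture product against quality-controlled in situ measurements: Pearson correlation, bias, and unbiased RMSD. It is not part of SMOPS; the function and variable names are ours.

```python
import numpy as np

def skill_vs_insitu(sat, insitu):
    """Correlation, bias, and unbiased RMSD of a satellite soil
    moisture series against collocated in situ measurements."""
    m = np.isfinite(sat) & np.isfinite(insitu)        # co-valid samples only
    s, o = sat[m], insitu[m]
    r = np.corrcoef(s, o)[0, 1]                       # Pearson correlation
    bias = np.mean(s - o)                             # mean difference
    ubrmsd = np.sqrt(np.mean(((s - s.mean()) - (o - o.mean())) ** 2))
    return r, bias, ubrmsd
```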


Atmosphere · 2020 · Vol 11 (9) · pp. 939
Author(s): Chung-Chieh Wang, Sahana Paul, Dong-In Lee

This study describes a recently developed object-oriented method, suited to Taiwan, for verifying quantitative precipitation forecasts (QPFs) produced by mesoscale models, as a complement to traditional approaches. Using blended data from the rain-gauge network in Taiwan and the Tropical Rainfall Measuring Mission (TRMM) as the observation, the method is applied to twice-daily 0–48 h QPFs produced by the Cloud-Resolving Storm Simulator (CReSS) during the South-West Monsoon Experiment (SoWMEX) in May–June 2008. Rainfall objects are identified through a procedure that includes smoothing and thresholding, and various attribute parameters and characteristics of observed and forecast rain-area objects are then compared and discussed. Both the observed and the QPF frequency distributions of rain-area objects with respect to total water production, object size, and rainfall resemble a chi distribution, with the highest frequency at smaller values and decreasing frequency toward greater values. The model tends to produce heavier rainfall than observed, whereas the observations exhibit a higher percentage of larger objects with weaker rainfall intensity. The distributions of shape-related attributes are similar between QPF and observed rainfall objects, with more northeast–southwest oriented and fewer northwest–southeast oriented objects. Both observed and modeled object centroid locations have relative maxima over the terrain of Taiwan, indicating a reasonable response to the topography. These results are consistent with previous studies.
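The identification step described above (smooth the rain field, threshold it, label connected rain areas, then measure per-object attributes) can be sketched as follows; the smoothing scale and rain threshold are illustrative placeholders, not the paper's settings.

```python
import numpy as np
from scipy import ndimage

def identify_rain_objects(rain, sigma=2.0, thresh=5.0):
    """Smooth a 2-D rainfall field, threshold it, and label connected
    regions as rain-area objects; return per-object size (grid points)
    and total water production."""
    smooth = ndimage.gaussian_filter(rain, sigma=sigma)   # smoothing
    labels, nobj = ndimage.label(smooth >= thresh)        # thresholding + labeling
    idx = range(1, nobj + 1)
    sizes = ndimage.sum(np.ones_like(rain), labels, idx)  # object size per label
    totals = ndimage.sum(rain, labels, idx)               # total water per object
    return labels, sizes, totals
```

Attributes such as orientation or centroid location could then be derived from each labeled region in the same way, which is what allows the frequency distributions of object properties to be compared between forecasts and observations.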


2020 · Vol 39 (3) · pp. 188-194
Author(s): Rolf H. Baardman, Rob F. Hegge

Machine learning has grown into a topic of much interest in the seismic industry and has recently been introduced into seismic processing for applications such as demultiple, regularization, and tomography. Here, two novel machine learning algorithms are introduced that perform deblending and automated blending-noise classification. Conventional deblending algorithms require a priori information and user expertise to properly select and parameterize a specific algorithm. The potential benefits of machine learning methods include their hands-off implementation and their ability to learn an efficient deblending algorithm directly from data. Both introduced methods are supervised: their tasks (deblending and noise classification) are learned from training data consisting of example pairs of input and labeled output. Training a deblending algorithm, for instance, requires pairs of blended data and their unblended counterparts. The availability of training data, or the possibility of creating it, is key to the success of these supervised methods. Another aspect is how well the algorithms generalize: can we expect good performance on unseen data that vary from the training data? We address these aspects and illustrate them with synthetic and field data examples. The classification and deblending examples show promising results, indicating that these machine learning algorithms can support and/or replace existing deblending approaches.
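The training-pair construction mentioned above can be sketched as follows: clean shot gathers are numerically blended into a continuous record, and the (blended, unblended) pair becomes one supervised example. The array layout and delay handling are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def make_training_pair(shots, delays, nt_total):
    """Numerically blend clean (nt, nx) shot gathers, fired at the given
    delays (in samples), into one continuous record; return the
    (input, label) pair for supervised deblending."""
    blended = np.zeros((nt_total, shots[0].shape[1]))
    for shot, t0 in zip(shots, delays):
        blended[t0:t0 + shot.shape[0], :] += shot   # overlapping sources interfere
    return blended, shots
```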


Geophysics · 2020 · Vol 85 (4) · pp. WA13-WA26
Author(s): Jing Sun, Sigmund Slang, Thomas Elboth, Thomas Larsen Greiner, Steven McDonald, ...

For economic and efficiency reasons, blended acquisition of seismic data is becoming increasingly commonplace. Seismic deblending methods are computationally demanding and normally consist of multiple processing steps; moreover, selecting their parameters is not always trivial. Machine-learning-based processing has the potential to significantly reduce processing time and to change the way seismic deblending is carried out. We have developed a data-driven, deep-learning-based method for fast and efficient seismic deblending. The blended data are sorted from the common-source to the common-channel domain to transform the character of the blending noise from coherent events to incoherent contributions. A convolutional neural network designed around the characteristics of seismic data performs the deblending, with results comparable to those obtained with conventional industry algorithms. To ensure authenticity, the blending was performed numerically and only field seismic data were used, including more than 20,000 training examples. After training and validation, the network performs seismic deblending in near real time. Experiments also indicate that the initial signal-to-noise ratio is the major factor controlling the quality of the final deblended result. The network is further shown to be robust and adaptive: the trained model is used first to deblend a new data set from a different geologic area with a slightly different delay-time setting, and second to deblend shots with blending noise in the top part of the record.
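Two ingredients of this workflow lend themselves to a short sketch: the domain sort that makes the blending noise incoherent, and a small denoising-style convolutional network. The toy architecture below is ours, for illustration only; the paper's network is designed around the characteristics of seismic data and is not reproduced here.

```python
import numpy as np
import torch.nn as nn

def to_common_channel(cube):
    """Resort a (n_shots, n_time, n_channels) cube so each gather holds
    one channel across all shots; blending noise that is coherent in the
    shot domain becomes incoherent spikes here."""
    return np.transpose(cube, (2, 1, 0))

class DeblendCNN(nn.Module):
    """Toy stand-in for a deblending CNN: maps a noisy common-channel
    gather of shape (batch, 1, n_time, n_shots) to its deblended
    counterpart."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)
```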


AERA Open · 2020 · Vol 6 (1) · pp. 233285841989906
Author(s): Rebecca L. Brower, Christine G. Mokher, Tamara Bertrand Jones, Bradley E. Cox, Shouping Hu

This multiple case study examines the extent and ways in which leaders and administrators in Florida College System (FCS) institutions engage in distributed leadership through data sharing with frontline staff. Based on focus groups and individual interviews with administrators, faculty, and staff (659 participants) from 21 state colleges, we found a continuum of three data cultures, ranging from democratic data cultures through blended data cultures to "need to know" data cultures. We triangulate these results with survey data from FCS institutional leaders and find considerable variation in the extent of data sharing and in perceptions of the effectiveness of institutional data use. Institutions with democratic data cultures tended to have distributed leadership that encouraged information sharing and collaboration among staff in using data to inform change. Need-to-know institutions faced challenges including weak data quality, concerns about whether staff had adequate time and resources to review data, and perceptions that staff lacked data literacy skills.


2019
Author(s): Catherine T. Lawson, Alex Muro, Eric Krans

Abstract: As sources of "Big Data" continue to grow, transportation planners and researchers seek to utilize these new resources. Given the current dependency on traditional transportation data sources and conventional tools (e.g., spreadsheets and proprietary models), how can these new resources be used? This research examines a "blended data" approach, using a web-based, open-source platform to help transit agencies forecast bus ridership. The platform can incorporate both new Big Data sources and traditional data sources, using modern processing techniques and tools, particularly Application Programming Interfaces (APIs). This research demonstrates the use of APIs in a transit demand methodology that yields a robust model of bus ridership. The approach uses Census Transportation Planning Products data, modified with American Community Survey data, to generate origin–destination tables for bus trips in a designated market area. Microsimulation models use a transit scheduling specification (the General Transit Feed Specification) and an open-source routing engine (OpenTripPlanner). Local farebox data validate the microsimulation models. Analyses of model output and farebox data for the Atlantic City transit market area, and a scenario analysis of service reduction in the Princeton/Trenton transit market area, illustrate the use of this "blended data" approach to bus ridership forecasting.
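As a small, concrete example of working with the GTFS feeds such a platform consumes, the hypothetical helper below reads two standard GTFS files with pandas and counts scheduled trips per route, one building block of a schedule-driven ridership model; it is not the authors' platform.

```python
import pandas as pd

def scheduled_trips_per_route(gtfs_dir):
    """Count scheduled trips per route from a GTFS feed directory;
    trips.txt and routes.txt are standard GTFS files."""
    trips = pd.read_csv(f"{gtfs_dir}/trips.txt")    # trip_id, route_id, service_id, ...
    routes = pd.read_csv(f"{gtfs_dir}/routes.txt")  # route_id, route_short_name, ...
    counts = trips.groupby("route_id").size().rename("n_trips").reset_index()
    return routes.merge(counts, on="route_id")
```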

