data interpolation
Recently Published Documents

TOTAL DOCUMENTS: 433 (FIVE YEARS: 99)
H-INDEX: 30 (FIVE YEARS: 3)

MAUSAM ◽  
2021 ◽  
Vol 43 (3) ◽  
pp. 269-272
Author(s):  
J. N. KANAUJIA ◽  
SURINDER KAUR ◽  
D. S. Upadhyay

The correlation between two series of rainfall recorded at two stations separated by a short distance is usually found to be significant. This information has important applications in data interpolation, network design, transfer of information for missing data, and deriving areal rainfall from point values. In this paper, 70 years (1901-1970) of annual rainfall data for about 1500 stations in India have been analysed. The distribution of the correlation coefficient (r) was obtained for pairs of stations located within 40 km of each other. An attempt has been made to derive a theoretical model for r. For this purpose two distributions, (1) a two-parameter β-distribution and (2) a two-parameter bounded distribution, have been chosen, since in both cases the variable ranges from 0 to 1.
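As a rough illustration of the modelling step described above, the sketch below fits a two-parameter beta distribution (with support fixed to [0, 1]) to a sample of correlation coefficients. The r values are synthetic stand-ins; the paper's actual station-pair correlations are not reproduced here.

```python
# Sketch: fit a two-parameter beta distribution to correlation
# coefficients r in (0, 1). The sample below is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
r_values = rng.beta(5.0, 2.0, size=2000)  # synthetic correlations in (0, 1)

# Fixing loc=0 and scale=1 leaves only the two shape parameters
# to estimate, matching a distribution bounded on [0, 1].
a, b, loc, scale = stats.beta.fit(r_values, floc=0, fscale=1)
print(f"shape parameters: a = {a:.2f}, b = {b:.2f}")
```

With the location and scale pinned, `scipy.stats.beta.fit` performs maximum-likelihood estimation of the two shape parameters only.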


Geophysics ◽  
2021 ◽  
pp. 1-46
Author(s):  
Tao Chen ◽  
Dikun Yang

Data interpolation is critical in the analysis of geophysical data when some data are missing or inaccessible. We propose to interpolate irregular or missing potential field data using the relation between adjacent data points, inspired by the Taylor series expansion (TSE). The TSE method first finds the derivatives at a given point near the query point using data from neighboring points, and then uses the Taylor series to obtain the value at the query point. The TSE method works by extracting local features, represented as derivatives, from the original data for interpolation in the area of data vacancy. Compared with other interpolation methods, the TSE provides a complete description of the potential field data. Specifically, the remainder in the TSE can measure local fitting errors and help obtain accurate results. Implementation of the TSE method involves two critical parameters: the order of the Taylor series and the number of neighbors used in the calculation of derivatives. We have found that the first parameter must be chosen carefully to balance accuracy and numerical stability when the data contain noise. The second parameter can help us build an over-determined system for improved robustness against noise. Methods of selecting neighbors around the given point using an azimuthally uniform distribution or the nearest-distance principle are also presented. The proposed approach is first illustrated on a synthetic gravity dataset from a single survey line, then generalized to a survey grid. In both numerical experiments, the TSE method demonstrates improved interpolation accuracy in comparison with the minimum-curvature method. Finally, we apply the TSE method to a ground gravity dataset from the Abitibi Greenstone Belt, Canada, and an airborne gravity dataset from the Vinton Dome, Louisiana, USA.
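The two-parameter scheme described above can be sketched in one dimension: fit the Taylor coefficients at a query point by least squares over nearby samples, then read the interpolated value off the zeroth coefficient. This is a minimal illustration, not the paper's 2-D implementation; the order and neighbor count below are illustrative choices.

```python
# 1-D sketch of Taylor-series-expansion (TSE) interpolation:
# solve an overdetermined system for the local Taylor coefficients
# at the query point, using more neighbors than unknowns.
import math
import numpy as np

def tse_interpolate(x, y, x_q, order=3, n_neighbors=8):
    # pick the nearest neighbors of the query point
    idx = np.argsort(np.abs(x - x_q))[:n_neighbors]
    dx = x[idx] - x_q
    # design matrix: columns dx^k / k! for k = 0..order
    A = np.column_stack([dx**k / math.factorial(k) for k in range(order + 1)])
    coeffs, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return coeffs[0]  # zeroth Taylor coefficient = value at x_q

# demo: recover a deleted sample of sin(x)
x = np.delete(np.linspace(0.0, 2.0, 21), 5)  # grid with the sample at 0.5 removed
y = np.sin(x)
print(tse_interpolate(x, y, 0.5))  # close to sin(0.5)
```

Using 8 neighbors for a 4-coefficient (cubic) fit makes the system overdetermined, which is the robustness-against-noise mechanism the abstract describes.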


Author(s):  
Albert Asratyan ◽  
Sina Sheikholeslami ◽  
Vladimir Vlassov

2021 ◽  
Vol 26 (5) ◽  
pp. 23-32
Author(s):  
Jehan Mohammed Al-Ameri

In this paper, we use an empirical equation and cubic spline interpolation to fit the Covid-19 data available for accumulated infections and deaths in Iraq. Interpolation methods are useful for fitting such data and for scientific visualization of its interpretation. The data cover the period from 3 January 2020 to 21 January 2021; from them we obtain graphs for analysing the growth rate of the pandemic, and then predicted values for infections and deaths over that period. A stochastic fit to the daily infection and death data of Covid-19 is also discussed and shown in figures. The results of the cubic splines and of the empirical equation are compared numerically, using the principle of least-squares error for both fits. The numerical results indicate that the cubic spline gives an accurate fit to the data.
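A cubic-spline fit of the kind used above can be sketched as follows: fit the spline to cumulative totals, then take its derivative for a smoothed daily rate. The counts here are a synthetic S-curve, not the Iraqi dataset the paper analyses.

```python
# Sketch: cubic spline through cumulative case counts; the spline's
# first derivative gives a smoothed new-cases-per-day curve.
import numpy as np
from scipy.interpolate import CubicSpline

days = np.arange(0, 100, 7, dtype=float)                  # weekly sample points
cumulative = 1000.0 / (1.0 + np.exp(-(days - 50) / 10))   # synthetic S-curve

spline = CubicSpline(days, cumulative)
fine_days = np.arange(days[0], days[-1] + 1)
fitted = spline(fine_days)         # interpolated cumulative totals
daily_rate = spline(fine_days, 1)  # first derivative: new cases per day
```

Because the spline interpolates the weekly totals exactly, its derivative provides a rate estimate without differencing noisy daily counts directly.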


Geophysics ◽  
2021 ◽  
pp. 1-57
Author(s):  
Yang Liu ◽  
Geng WU ◽  
Zhisheng Zheng

Although the amount of seismic data acquired with wide-azimuth geometry is increasing, it is difficult to achieve regular data distributions in the spatial directions owing to limitations imposed by the surface environment and economic factors. Interpolation is an economical solution to this issue. The current state-of-the-art methods for seismic data interpolation are iterative; however, iterative methods tend to incur a high computational cost, which restricts their application to large, high-dimensional datasets. Hence, we developed a two-step non-iterative method to interpolate nonstationary seismic data based on streaming prediction filters (SPFs) with varying smoothness in the time-space domain, and we extended these filters to two spatial dimensions. Streaming computation, the kernel of the method, directly calculates the coefficients of the nonstationary SPF from an overdetermined equation with local smoothness constraints. In addition to the traditional streaming prediction-error filter (PEF), we propose a similarity matrix to improve the constraint condition, in which the smoothness characteristics of adjacent filter coefficients change with the varying data. We also designed filters that are non-causal in space, which use several neighboring traces around the target trace to predict the signal and thus obtain more accurate interpolated results than the causal-in-space version. Compared with the Fourier projection-onto-convex-sets (POCS) interpolation method, the proposed method has advantages such as fast computation and nonstationary event reconstruction. Applications to synthetic and nonstationary field data show that the method can successfully interpolate high-dimensional data with low computational cost and reasonable accuracy, even in the presence of aliased and conflicting events.
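The streaming idea above — updating filter coefficients sample by sample against a penalty that ties them to their previous values — can be sketched in one dimension. This is a simplified causal illustration under my own parameter choices (`n_coef`, `eps`), not the paper's 2-D non-causal filters.

```python
# Sketch of a streaming prediction filter: each new sample updates the
# coefficients in closed form, minimizing the prediction residual plus
# eps^2 * |a - a_prev|^2 (the local-smoothness constraint).
import numpy as np

def streaming_pef(data, n_coef=4, eps=0.1):
    a = np.zeros(n_coef)                     # running filter coefficients
    pred = np.full_like(data, np.nan)
    for t in range(n_coef, len(data)):
        x = data[t - n_coef:t][::-1]         # most recent samples first
        pred[t] = a @ x                      # predict with current coefficients
        r = data[t] - pred[t]                # prediction residual
        a = a + r * x / (eps**2 + x @ x)     # closed-form streaming update
    return pred

# demo: the filter adapts to a sinusoid without any iteration
data = np.sin(2 * np.pi * np.arange(400) / 20.0)
pred = streaming_pef(data)
```

Each update is a small rank-one correction, which is why the method is non-iterative: one pass over the data produces the coefficients.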


2021 ◽  
Vol 2068 (1) ◽  
pp. 012010
Author(s):  
Bolun Wang ◽  
Xin Jiang ◽  
Guanying Huo ◽  
Cheng Su ◽  
Dongming Yan ◽  
...  

B-splines are widely used in the fields of reverse engineering and computer-aided design owing to their superior properties. Traditional B-spline surface interpolation algorithms usually assume a regular data distribution. In this paper, we introduce a novel B-spline surface interpolation algorithm, KPI, which can interpolate sparsely and non-uniformly distributed data points. As a two-stage algorithm, our method first densifies the sparse data using Kriging, and then uses the proposed KPI (Key-Point Interpolation) method to generate the control points. The algorithm can be extended to higher-dimensional data interpolation, such as reconstructing dynamic surfaces. We apply the method to interpolating temperature data for Shanxi Province. The generated dynamic surface accurately interpolates the temperature data provided by the weather stations, and the preserved dynamic characteristics can be useful for meteorological studies.
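For a feel of what a B-spline surface over scattered samples looks like, the sketch below fits one directly with SciPy's `SmoothBivariateSpline`. This stands in for, and is much simpler than, the paper's two-stage Kriging + key-point scheme; the "temperature" field is synthetic.

```python
# Sketch: cubic B-spline surface fitted to scattered (non-gridded)
# samples; evaluable anywhere in the domain afterwards.
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = rng.uniform(0, 1, 200)
z = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)  # synthetic "temperature"

# kx=ky=3 gives a bicubic B-spline; s controls the smoothing trade-off
surf = SmoothBivariateSpline(x, y, z, kx=3, ky=3, s=0.01)
z_q = surf.ev(0.3, 0.7)  # evaluate at an arbitrary query point
```

Unlike interpolation on a regular grid, the scattered-data fit chooses its own knot placement, which is the regularity assumption the KPI paper is designed to relax.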


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-16 ◽  
Author(s):  
Shi-di Miao ◽  
Si-qi Li ◽  
Xu-yang Zheng ◽  
Rui-tao Wang ◽  
Jing Li ◽  
...  

Research on clinical data sets for Alzheimer's disease can support prediction and the development of early intervention treatment. Missing data is a common problem in medical research; failing to handle it properly reduces the power of a study and leads to information loss and biased results. To address these issues, this paper designs and implements a column-wise mixed interpolation method for missing data, combining four methods: mean interpolation, regression interpolation, support vector machine (SVM) interpolation, and multiple interpolation. Comparing the mixed interpolation method with each of the four individual methods shows that, across different data-missing rates, the mixed method performs better in terms of root mean square error (RMSE), mean absolute error (MAE), and error rate, which demonstrates the effectiveness of the interpolation mechanism. Because the characteristics of different variables may call for different interpolation strategies, column-by-column mixed interpolation can dynamically select the best method according to the features' differences. To a certain extent, it selects the method best suited to each feature and improves the interpolation quality of the data set as a whole, which benefits clinical studies of Alzheimer's disease. In addition, missing data are processed with a combination of deletion and interpolation, guided by expert knowledge; compared with direct interpolation, the data set obtained this way is more accurate.
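The per-column selection idea can be sketched with just two candidate methods: for each incomplete column, score mean imputation against a simple linear regression on the fully observed columns using temporarily held-out values, and keep the winner. Mean and regression stand in for the paper's four candidates (which also include SVM and multiple imputation); the data and the 20% hold-out rule are my own illustrative choices.

```python
# Sketch of column-wise mixed imputation: pick the better of two
# candidate methods per column, judged on held-out observed entries.
import numpy as np

def mixed_impute(X):
    X = X.copy()
    complete = ~np.isnan(X).any(axis=0)   # fully observed columns
    B = X[:, complete]                    # regression predictors
    for j in np.where(~complete)[0]:
        obs = np.where(~np.isnan(X[:, j]))[0]
        held = obs[::5]                   # hold out ~20% of observed entries
        train = np.setdiff1d(obs, held)
        # candidate 1: column mean
        mean_err = np.mean((X[train, j].mean() - X[held, j]) ** 2)
        # candidate 2: least-squares regression on complete columns
        A = np.column_stack([np.ones(train.size), B[train]])
        w, *_ = np.linalg.lstsq(A, X[train, j], rcond=None)
        reg = lambda rows: np.column_stack([np.ones(rows.size), B[rows]]) @ w
        reg_err = np.mean((reg(held) - X[held, j]) ** 2)
        # fill the gaps with whichever method scored better
        miss = np.where(np.isnan(X[:, j]))[0]
        X[miss, j] = reg(miss) if reg_err < mean_err else X[train, j].mean()
    return X

# demo: one column depends linearly on another, so regression should win
rng = np.random.default_rng(2)
c0, c1 = rng.normal(size=120), rng.normal(size=120)
c2 = 2 * c0 + 0.1 * rng.normal(size=120)
X = np.column_stack([c0, c1, c2])
X[:15, 2] = np.nan
out = mixed_impute(X)
```

Selecting the method per column, rather than globally, is what lets columns with very different characteristics each get a suitable strategy.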


2021 ◽  
Author(s):  
Francesco Picetti ◽  
Vincenzo Lipari ◽  
Paolo Bestagini ◽  
Stefano Tubaro

2021 ◽  
Author(s):  
Pengyu Yuan ◽  
Shirui Wang ◽  
Wenyi Hu ◽  
Prashanth Nadukandi ◽  
German Ocampo Botero ◽  
...  

2021 ◽  
Author(s):  
Danhui Wang ◽  
Feipeng Li ◽  
Yijie Zhang ◽  
Jinghuai Gao
