Data Smoothing: Recently Published Documents

Total documents: 236 (last five years: 35)
H-index: 23 (last five years: 3)
Sensors, 2021, Vol. 21 (23), pp. 7955
Author(s): Daniel Jie Yuan Chin, Ahmad Sufril Azlan Mohamed, Khairul Anuar Shariff, Mohd Nadhir Ab Wahab, Kunio Ishikawa

Three-dimensional reconstruction plays a vital role in assisting doctors and surgeons in diagnosing the healing progress of bone defects. Common three-dimensional reconstruction methods include surface and volume rendering. As the focus is on the shape of the bone, this study omits volume rendering. Many improvements have been made to surface rendering methods such as Marching Cubes and Marching Tetrahedra, but few work towards real-time or near real-time surface rendering for large medical images or study the effects of different parameter settings on those improvements. Hence, this study attempts near real-time surface rendering for large medical images. Different parameter values are tested to study their effect on reconstruction accuracy, reconstruction and rendering time, and the number of vertices and faces. The proposed improvement, involving three-dimensional data smoothing with a Gaussian convolution kernel of size 5 and a mesh simplification reduction factor of 0.1, is the best parameter combination for achieving a good balance between high reconstruction accuracy, low total execution time, and a low number of vertices and faces. It successfully increased reconstruction accuracy by 0.0235%, decreased the total execution time by 69.81%, and decreased the number of vertices and faces by 86.57% and 86.61%, respectively.
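
As a rough illustration of this kind of pipeline (not the authors' implementation), the sketch below smooths a 3D volume with a roughly five-voxel Gaussian kernel and extracts a surface with Marching Cubes using standard Python libraries; the input file, the mapping of "kernel size 5" to sigma/truncate, and the deferred decimation step are assumptions.

```python
# Sketch only: Gaussian smoothing of a 3D medical volume followed by Marching
# Cubes surface extraction, loosely following the pipeline in the abstract.
# "volume.npy" is a hypothetical pre-segmented scan; mapping "kernel size 5"
# to sigma=1.0/truncate=2.0 (a 5-voxel window) is an assumption.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import marching_cubes

volume = np.load("volume.npy")  # hypothetical binary bone segmentation

# Three-dimensional data smoothing with a ~5-voxel Gaussian kernel.
smoothed = gaussian_filter(volume.astype(float), sigma=1.0, truncate=2.0)

# Surface rendering step: Marching Cubes on the smoothed volume.
verts, faces, normals, values = marching_cubes(smoothed, level=0.5)
print(f"{len(verts)} vertices, {len(faces)} faces before simplification")

# A mesh simplification pass with the quoted reduction factor of 0.1 (e.g.
# quadric decimation in a dedicated mesh library) would follow here.
```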


Water, 2021, Vol. 13 (20), pp. 2817
Author(s): Epaminondas Sidiropoulos, Konstantinos Vantas, Vlassios Hrissanthou, Thomas Papalaskaris

The present paper deals with the applicability of the Meyer–Peter and Müller (MPM) bed load transport formula. The performance of the formula is examined on data collected at a particular location of the Nestos River in Thrace, Greece, in comparison to a proposed Enhanced MPM (EMPM) formula and to two typical machine learning methods, namely Random Forests (RF) and Gaussian Process Regression (GPR). The EMPM contains new adjustment parameters allowing calibration. The EMPM clearly outperforms MPM and also proves quite competitive with the machine learning schemes. Calibrations are repeated with suitably smoothed measurement data and, in this case, EMPM outperforms MPM, RF and GPR. Data smoothing for the present problem is discussed in view of a special nearest neighbor smoothing process, which is introduced in combination with nonlinear regression.
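
The paper's special nearest neighbor smoothing process is not specified here; purely as a hedged illustration of nearest-neighbour smoothing of noisy measurements before formula calibration, a k-nearest-neighbours average can be used. The synthetic data, k = 5, and variable names below are assumptions.

```python
# Sketch only: k-nearest-neighbour smoothing of noisy bed load measurements
# before calibrating a transport formula. The synthetic data and k=5 are
# illustrative; the paper's own smoothing procedure may differ.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
discharge = np.sort(rng.uniform(1.0, 50.0, 200))              # hypothetical flow data
bed_load = 0.05 * discharge**1.5 + rng.normal(0.0, 2.0, 200)  # noisy "measurements"

# Each smoothed value is the average of its k nearest neighbours in discharge.
knn = KNeighborsRegressor(n_neighbors=5, weights="uniform")
knn.fit(discharge.reshape(-1, 1), bed_load)
bed_load_smooth = knn.predict(discharge.reshape(-1, 1))

# The smoothed series would then feed the calibration (nonlinear regression
# of the EMPM adjustment parameters against bed_load_smooth).
```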


SPE Journal, 2021, pp. 1-15
Author(s): Tran Ngoc Trung, Trieu Hung Truong, Tran Vu Tung, Ngo Huu Hai, Dao Quang Khoa, ...

Summary For any oil and gas company, well-testing and performance-monitoring programs are expensive because of the cost of equipment and personnel. In addition, it may not be possible to obtain all of the necessary data for a reservoir for a period of time because of production demand constraints or changes in surface process conditions. To overcome these challenges, there are many studies on the implementation and value of virtual flowmetering (VFM) for real-time well performance prediction without any need for a comprehensive well-testing program. This paper presents a VFM model using an adaptive neuro-fuzzy inference system (ANFIS) at the Hai Thach-Moc Tinh (HT-MT) gas-condensate field, offshore Vietnam. The ANFIS prediction model can tune all its membership functions (MFs) and consequent parameters to map the given inputs to the desired output with minimum error. In addition, ANFIS is a successful technique for processing large amounts of complex time series data and multiple nonlinear inputs and outputs (Salleh et al. 2017), thereby enhancing predictability. The authors have built ANFIS models combined with large data sets, data smoothing, and k-fold cross-validation methods based on actual historical surface parameters such as choke valve opening, surface pressure, temperature, the inlet pressure of the gas processing system, etc. The prediction results indicate that the local regression “loess” data smoothing method reduces the processing time and gives both clustering algorithms the best results among the different data preprocessing techniques [highest value of R and lowest values of mean squared error (MSE), error mean, and error standard deviation]. The k-fold cross-validation technique demonstrates the capability to avoid overfitting and to enhance prediction accuracy for the ANFIS subtractive clustering model. The fuzzy C-means (FCM) model in the present study can predict the gas-condensate production with the smallest root MSE (RMSE) of 0.0645 and 0.0733, the highest coefficient of determination (R2) of 0.9482 and 0.9337, and the highest variance accounted for of 0.9482 and 0.9334 for training and testing data, respectively. Applied at the HT-MT field, the model allows estimation of the gas and condensate production rates and facilitates the virtual flowmeter workflow using the ANFIS model.
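
The “loess” smoothing and k-fold cross-validation steps can be illustrated with generic Python tools (statsmodels' LOWESS and scikit-learn's KFold); the sketch below is not the authors' workflow, the ANFIS models are not reproduced, and the synthetic sensor trace, frac value and fold count are assumptions.

```python
# Sketch only: "loess"-style smoothing of a surface parameter and a k-fold
# split, as generic stand-ins for the preprocessing described above.
# The synthetic trace, frac=0.1 and 5 folds are illustrative placeholders.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess
from sklearn.model_selection import KFold

rng = np.random.default_rng(1)
t = np.arange(1000.0)                                                 # time index
wellhead_pressure = 120.0 + 0.01 * t + rng.normal(0.0, 2.0, t.size)   # hypothetical sensor data

# LOWESS: locally weighted regression; frac controls the smoothing window.
smoothed = lowess(wellhead_pressure, t, frac=0.1, return_sorted=False)

# k-fold cross-validation indices for fitting/validating the rate model
# (an ANFIS model in the paper; any regressor could be slotted in here).
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(smoothed):
    x_train, x_test = smoothed[train_idx], smoothed[test_idx]
    # ... fit the flow-rate model on x_train, evaluate on x_test ...
```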


Information, 2021, Vol. 12 (9), pp. 354
Author(s): Antonios Andreatos, Apostolos Leros

A common problem in underwater side-scan sonar images is the acoustic shadow generated by the beam. Apart from that, a number of other factors impair image quality. In this paper, an innovative algorithm with two alternative histogram approximation methods is presented. Histogram approximation is based on automatically estimating the optimal threshold for converting the original grayscale images into binary images. The proposed algorithm clears the shadows and masks most of the impairments in side-scan sonar images. The idea is to select a proper threshold towards the rightmost local minimum of the histogram, i.e., the one closest to the white values. For this purpose, the histogram envelope is approximated by two alternative contour extraction methods: polynomial curve fitting and data smoothing. Experimental results indicate that the proposed algorithm produces superior results to popular thresholding methods and common edge detection filters, even after corrosion expansion. The algorithm is simple, robust and adaptive and can be used in automatic target recognition, classification and storage in large-scale multimedia databases.
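
A minimal sketch of the thresholding idea follows, assuming a Savitzky-Golay smoother stands in for the paper's histogram-envelope approximation: smooth the grey-level histogram, locate its local minima, and take the rightmost one as the binarisation threshold. The window length and minimum-detection order are assumptions.

```python
# Sketch only: smooth the grey-level histogram and take the rightmost local
# minimum as the binarisation threshold. The Savitzky-Golay smoother and its
# window length stand in for the paper's histogram-envelope approximation.
import numpy as np
from scipy.signal import savgol_filter, argrelmin

def shadow_threshold(gray_image: np.ndarray) -> int:
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    # Approximate the histogram envelope by data smoothing.
    envelope = savgol_filter(hist.astype(float), window_length=21, polyorder=3)
    minima = argrelmin(envelope, order=5)[0]
    # Rightmost local minimum, i.e. the one closest to the white values.
    return int(minima[-1]) if minima.size else 128

# Usage with a hypothetical side-scan sonar image loaded as an 8-bit array:
# binary = (sonar_image > shadow_threshold(sonar_image)).astype(np.uint8) * 255
```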


2021
Author(s): Saeed Aftab, Rasoul Hamidzadeh Moghadam

Abstract Well logging is an essential approach to geophysical surveying and petrophysical measurement and plays a key role in interpreting downhole conditions. However, well logging signals usually contain noise that distorts results and causes ambiguous interpretations. In this paper, the wavelet filter and robust data smoothing algorithms are tested for denoising synthetic and field sonic log data. The robust data smoothing algorithms include the Gaussian, RLOESS (robust locally estimated scatterplot smoothing), and RLOWESS (robust locally weighted scatterplot smoothing) methods. Uniformly and normally distributed noise was applied to the synthetic model, and the results revealed that the wavelet filter performs better than the data smoothing algorithms for denoising uniformly distributed noise, although RLOESS also removed the uniform noise acceptably. For normally distributed noise, however, the wavelet filter breaks down, while the data smoothing algorithms, particularly RLOESS, attenuate the noise very well. Owing to the nature of the noise in the field sonic log data, the wavelet filter fails there completely, whereas the data smoothing algorithms, particularly RLOESS, remove the noise more efficiently. We can therefore conclude that RLOESS is a well-suited algorithm for denoising sonic log signals, regardless of the nature of the noise.
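
The comparison can be sketched with common Python tools: PyWavelets for the wavelet filter and statsmodels' robust LOWESS as a stand-in for RLOWESS/RLOESS (statsmodels fits local linear rather than quadratic models, so it is not an exact RLOESS). The synthetic trace, wavelet family, threshold rule and frac value are assumptions.

```python
# Sketch only: wavelet soft-threshold denoising vs. a robust LOWESS smoother
# on a synthetic "sonic log" trace. The wavelet family, threshold rule and
# frac value are assumptions, not the paper's settings.
import numpy as np
import pywt
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(2)
depth = np.linspace(0.0, 500.0, 2048)
clean = 80.0 + 15.0 * np.sin(depth / 25.0)              # idealised sonic slowness
noisy = clean + rng.normal(0.0, 3.0, depth.size)        # normally distributed noise

# Wavelet filter: decompose, soft-threshold the detail coefficients, rebuild.
coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest level
thr = sigma * np.sqrt(2.0 * np.log(noisy.size))         # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
wavelet_denoised = pywt.waverec(coeffs, "db4")[: noisy.size]

# Robust LOWESS: it=3 robustifying iterations downweight outliers.
rlowess_denoised = lowess(noisy, depth, frac=0.05, it=3, return_sorted=False)
```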


2021, Vol. 0 (0)
Author(s): Ala Bazyleva, William E. Acree, Robert D. Chirico, Vladimir Diky, Glenn T. Hefter, ...

Abstract This article is the first of three projected IUPAC Technical Reports resulting from IUPAC Project 2011-037-2-100 (Reference Materials for Phase Equilibrium Studies). The goal of this project is to select reference systems with critically evaluated property values for the validation of instruments and techniques used in phase equilibrium studies of mixtures. This report proposes seven systems for liquid–liquid equilibrium studies, covering the four most common categories of binary mixtures: aqueous systems of moderate solubility, non-aqueous systems, systems with low solubility, and systems with ionic liquids. For each system, the available literature sources, accepted data, smoothing equations, and estimated uncertainties are given.


2021
Author(s): David Lopez-Garia, Jose M.G. Penalver, Juan M. Gorriz, Maria Ruz

MVPAlab is a MATLAB-based and very flexible decoding toolbox for multidimensional electroencephalography and magnetoencephalography data. The MVPAlab Toolbox implements several machine learning algorithms to compute multivariate pattern analyses, cross-classification, temporal generalization matrices, and feature and frequency contribution analyses. It also provides access to an extensive set of preprocessing routines for, among others, data normalization, data smoothing, dimensionality reduction and supertrials generation. To draw statistical inferences at the group level, MVPAlab includes a non-parametric cluster-based permutation approach. This toolbox has been designed to include an easy-to-use and very intuitive graphical user interface and data representation software, which makes MVPAlab a very convenient tool for users with little or no previous coding experience. However, MVPAlab is not for beginners only, as it implements several high- and low-level routines allowing more experienced users to design their own projects in a highly flexible manner.
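
MVPAlab itself is MATLAB-based and its functions are not reproduced here; purely to illustrate what a data smoothing preprocessing step does to M/EEG epochs, the NumPy sketch below applies a moving average along the time axis of a hypothetical trials × channels × time array. The window length and array shape are assumptions.

```python
# Illustration only (not MVPAlab's API): moving-average smoothing along the
# time axis of an epochs array shaped (trials, channels, time points), the
# kind of preprocessing step listed above. The window length is assumed.
import numpy as np

def smooth_epochs(epochs: np.ndarray, window: int = 5) -> np.ndarray:
    """Boxcar-smooth each trial/channel time course with a centred window."""
    kernel = np.ones(window) / window
    # mode="same" keeps the number of time points unchanged.
    return np.apply_along_axis(lambda ts: np.convolve(ts, kernel, mode="same"),
                               axis=-1, arr=epochs)

# Hypothetical MEG data: 100 trials, 64 channels, 300 time points.
epochs = np.random.default_rng(3).normal(size=(100, 64, 300))
smoothed_epochs = smooth_epochs(epochs, window=7)
```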


Author(s): Justice Kwame Appati, Prince Kofi Nartey, Ebenezer Owusu, Ismail Wafaa Denwar

Biometrics consists of scientific methods of using a person’s unique physiological or behavioral traits for electronic identification and verification. Common biometric traits include fingerprints, voice, face, and palm prints. This study considers fingerprint recognition for person identification, since fingerprints are distinctive, reliable, and relatively easy to acquire. Despite the many works done, the problem of accuracy still persists, which can perhaps be attributed to the varying characteristics of acquisition devices. This study seeks to improve recognition accuracy by proposing a fusion of two transform models with a minutiae model. The first transform, the wave atom transform, is used for data smoothing, while the second, the wavelet transform, is used for feature extraction. These features are added to the minutiae features for person recognition. Evaluating the proposed design on the FVC 2002 dataset showed better performance than existing methods, with an accuracy of 100% compared with 96.67% and 98.55% for the existing methods.
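
There is no widely available Python implementation of the wave atom transform, so the smoothing step is omitted; the sketch below only illustrates the feature-level fusion idea, concatenating wavelet subband energies with a minutiae feature vector from a placeholder extractor. All function names and parameters are hypothetical, not the authors' design.

```python
# Illustration of feature-level fusion only: wavelet subband energies
# concatenated with minutiae-based features. extract_minutiae_features is a
# placeholder, and the wave atom smoothing stage is omitted entirely.
import numpy as np
import pywt

def wavelet_features(fingerprint: np.ndarray, wavelet: str = "db2", level: int = 2) -> np.ndarray:
    coeffs = pywt.wavedec2(fingerprint.astype(float), wavelet, level=level)
    # Energy of each detail subband (horizontal, vertical, diagonal per level).
    energies = [np.mean(np.square(band)) for details in coeffs[1:] for band in details]
    return np.asarray(energies)

def extract_minutiae_features(fingerprint: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would return ridge-ending/bifurcation descriptors.
    return np.zeros(16)

def fused_feature_vector(fingerprint: np.ndarray) -> np.ndarray:
    return np.concatenate([wavelet_features(fingerprint),
                           extract_minutiae_features(fingerprint)])
```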


2021, Vol. 15, pp. 174830262110084
Author(s): Bishnu P Lamichhane, Elizabeth Harris, Quoc Thong Le Gia

We compare a recently proposed multivariate spline based on mixed partial derivatives with two other standard splines for the scattered data smoothing problem. The splines are defined as the minimisers of a penalised least squares functional. The penalties are based on partial differential operators and are integrated using the finite element method. We apply the three methods to two problems: removing a mixture of Gaussian and impulsive noise from an image, and recovering a continuous function from a set of noisy observations.
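
As a minimal one-dimensional analogue of a spline defined as the minimiser of a penalised least squares functional (not the paper's finite-element multivariate spline), the sketch below minimises ||y - z||^2 + lam * ||D2 z||^2 with a second-difference penalty matrix D2; the test signal and lam are assumptions.

```python
# Minimal 1-D analogue of penalised least squares smoothing: minimise
# ||y - z||^2 + lam * ||D2 z||^2, where D2 takes second differences
# (a discrete Whittaker-type smoother, not the paper's multivariate spline).
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def penalised_ls_smooth(y: np.ndarray, lam: float = 100.0) -> np.ndarray:
    n = y.size
    D2 = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))  # second-difference operator
    A = sparse.identity(n) + lam * (D2.T @ D2)
    return spsolve(A.tocsc(), y)

# Noisy observations of a smooth function.
x = np.linspace(0.0, 1.0, 400)
y = np.sin(2.0 * np.pi * x) + np.random.default_rng(4).normal(0.0, 0.2, x.size)
z = penalised_ls_smooth(y, lam=50.0)
```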

