Statistical Analysis of Image with Various Noises and Filters

2020 ◽  
Vol 1 (1) ◽  
pp. 41-51
Author(s):  
Sajid Khan ◽  
Uzma Sadiq ◽  
Sayyad Khurshid

This research presents an extensive investigation of various statistical estimates and their practical implementation in image processing with different noise types and filtering techniques. Noise is very challenging to remove from digital images. The purpose of image filtering is to eliminate the noise in such a way that the resulting image remains discernible. We describe different algorithms and techniques for filtering images and identify which algorithm is best suited to the task. Signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR) are used as statistical performance measures; by these measures, the Wiener filter performs better at removing noise than the other filters and works well for a wide range of noise types. According to the MSE results, however, the performance of the Gaussian filter is superior to that of the Mean filter, the Mask filter and the Wiener filter. In image processing, a Gaussian blur, also called Gaussian smoothing, is the result of blurring an image with a Gaussian function. We conclude that Gaussian filtering is the best approach and can be efficiently implemented, as judged by the MSE of the image; the Gaussian filter is demonstrably better than the other algorithms at removing noise. The results for the filters are compared using SNR, PSNR and mean square error values.
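As a rough illustration of the metrics and the Gaussian filtering step described above, the following is a minimal sketch assuming 8-bit grayscale images held as NumPy arrays; the function names, sigma value and synthetic data are illustrative, not the study's actual pipeline.

```python
# Minimal sketch: MSE and PSNR between two images, plus Gaussian smoothing.
import numpy as np
from scipy.ndimage import gaussian_filter

def mse(original: np.ndarray, filtered: np.ndarray) -> float:
    """Mean square error between two images of equal shape."""
    diff = original.astype(np.float64) - filtered.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, filtered: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in decibels."""
    err = mse(original, filtered)
    return float("inf") if err == 0 else 10.0 * np.log10(max_val ** 2 / err)

# Example: corrupt an image with Gaussian noise, then denoise it.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(128, 128)).astype(np.float64)
noisy = clean + rng.normal(0, 10, size=clean.shape)
denoised = gaussian_filter(noisy, sigma=1.0)  # Gaussian smoothing

print(f"MSE:  {mse(clean, denoised):.2f}")
print(f"PSNR: {psnr(clean, denoised):.2f} dB")
```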

1964 ◽  
Vol 18 (1) ◽  
pp. 117-143 ◽  
Author(s):  
D. B. Spalding ◽  
S. W. Chi

The theoretical treatments given by earlier authors are classified, reviewed and where necessary extended; then the predictions of twenty of these theories are evaluated and compared with all available experimental data, the root-mean-square error being computed for each theory. The theory of van Driest-II gives the lowest root-mean-square error (11.0%). A new calculation procedure is developed from the postulate that a unique relation exists between c_f·F_c and R·F_R, where c_f is the drag coefficient, R is the Reynolds number, and F_c and F_R are functions of Mach number and temperature ratio alone. The experimental data are found to be too scanty for both F_c and F_R to be deduced empirically, so F_c is calculated by means of mixing-length theory and F_R is found semi-empirically. Tables and charts of values of F_c and F_R are presented for a wide range of M_G and T_S/T_G. When compared with all experimental data, the predictions of the new procedure give a root-mean-square error of 9.9%.
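The root-mean-square error figure quoted for each theory can be reproduced with a short calculation; the sketch below assumes the error is the RMS of the percentage deviations between predicted and measured drag coefficients, and the data arrays are purely illustrative.

```python
# Minimal sketch: RMS percentage error used to rank the theories.
import numpy as np

def rms_percent_error(predicted: np.ndarray, measured: np.ndarray) -> float:
    """RMS of the percentage deviations between prediction and experiment."""
    pct_dev = 100.0 * (predicted - measured) / measured
    return float(np.sqrt(np.mean(pct_dev ** 2)))

# Hypothetical drag-coefficient data for one theory vs. experiment.
cf_pred = np.array([2.1e-3, 1.8e-3, 1.5e-3, 1.2e-3])
cf_meas = np.array([2.0e-3, 1.9e-3, 1.4e-3, 1.3e-3])
print(f"RMS error: {rms_percent_error(cf_pred, cf_meas):.1f}%")
```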


2018 ◽  
Vol 40 ◽  
pp. 112
Author(s):  
Adriana Aparecida Moreira ◽  
Daniela Santini Adamatti ◽  
Anderson Luis Ruhoff

This study aims to evaluate the performance of the MOD16 and GLEAM evapotranspiration (ET) datasets at nine eddy covariance monitoring sites. Data from both ET products were downloaded and their daily means calculated. The ET estimates were then compared to the observed ET at the eddy covariance monitoring sites from the Large-Scale Biosphere-Atmosphere Experiment in the Amazon (LBA). We performed a statistical analysis using the correlation coefficient (R), the root mean square error (RMSE) and the bias. Results indicate that, in general, both products can represent the observed ET at the eddy covariance flux towers. MOD16 and GLEAM showed similar values for the calculated statistics when their ET estimates were compared to the observed ET. Both the model estimates and the eddy covariance flux towers are subject to uncertainties that influence the analysis of remotely sensed ET products.
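The three statistics used in the comparison are straightforward to compute; a minimal sketch follows, assuming paired daily ET series as NumPy arrays, with hypothetical values standing in for the product and tower data.

```python
# Minimal sketch: R, RMSE and bias between estimated and observed ET.
import numpy as np

def evaluate(estimated: np.ndarray, observed: np.ndarray) -> dict:
    r = float(np.corrcoef(estimated, observed)[0, 1])            # correlation coefficient
    rmse = float(np.sqrt(np.mean((estimated - observed) ** 2)))  # root mean square error
    bias = float(np.mean(estimated - observed))                  # mean bias
    return {"R": r, "RMSE": rmse, "BIAS": bias}

# Hypothetical daily ET (mm/day) from a product vs. a flux tower.
et_product = np.array([3.1, 3.4, 2.9, 3.8, 4.0])
et_tower = np.array([3.0, 3.6, 3.1, 3.5, 4.2])
print(evaluate(et_product, et_tower))
```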


Author(s):  
Beatriz García Castellanos ◽  
Osney Pérez Ones ◽  
Lourdes Zumalacárregui de Cárdenas ◽  
Idania Blanco Carvajal ◽  
Luis Eduardo López de la Maza

The rum aging process shows volume losses, called wastage. The operating variables recorded in the databases (product, boardwalk, horizontal and vertical positions, date, volume, alcoholic degree, temperature, humidity and aging time) contain valuable information for studying the process. The qualitative variables were processed using Weka 3.8.0 software, while the quantitative variables underwent a statistical analysis using Statgraphics Centurion XVII.2. The biggest reductions correspond to barrels located in areas exposed to solar irradiation, which favors the evaporation of the product. The temperature and humidity variables present very high coefficients of variation; these factors are uncontrolled, so a regulation process is suggested. A regression model was obtained that predicts the losses from the variables numerical month, volume and aging time, with a mean square error (ECM) of 0.115 and an R² of 95.88%.
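The regression step described above could look roughly like the sketch below; the column layout and synthetic values are hypothetical stand-ins for the study's database, which is not reproduced here.

```python
# Minimal sketch: least-squares regression of wastage on month, volume
# and aging time, with MSE and R-squared reported for the fit.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
X = np.column_stack([
    rng.integers(1, 13, 50),      # numerical month (1-12)
    rng.uniform(180, 220, 50),    # barrel volume (L), assumed range
    rng.uniform(1, 60, 50),       # aging time (months), assumed range
])
y = 0.02 * X[:, 2] + 0.001 * X[:, 1] + rng.normal(0, 0.1, 50)  # synthetic losses

model = LinearRegression().fit(X, y)
pred = model.predict(X)
print(f"MSE: {mean_squared_error(y, pred):.3f}, R2: {r2_score(y, pred):.3f}")
```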


2019 ◽  
Vol 7 (3) ◽  
pp. SE151-SE159 ◽  
Author(s):  
Kachalla Aliyuda ◽  
John Howell

The methods used to estimate recovery factor change through the life cycle of a field. During appraisal, prior to development when there are no production data, we typically rely on analog fields and empirical methods. Given the absence of a perfect analog, these methods are typically associated with a wide range of uncertainty. During plateau, recovery factors are typically associated with simulation and dynamic modeling, whereas in later field life, once the field drops off the plateau, a decline curve analysis is also used. The use of different methods during different stages of the field life leads to uncertainty and potential inconsistencies in recovery estimates. A wide range of interacting, partially related, reservoir and production variables controls the production and recovery factor. Machine learning allows more complex multivariate analysis that can be used to investigate the roles of these variables using a training data set and then to ultimately predict future performance in fields. To investigate this approach, we used a data set consisting of producing reservoirs, all of which are at plateau or in decline, to train a series of machine-learning algorithms that can potentially predict the recovery factor with minimal percentage error. The database for this study consists of categorical and numerical properties for 93 reservoirs from the Norwegian Continental Shelf. Of these, 75 are from the Norwegian Sea, the Norwegian North Sea, and the Barents Sea, whereas the remaining 18 reservoirs are from the Viking Graben in the UK sector of the North Sea. The data set was divided into training and testing sets: the training set comprised approximately 80% of the total data, and the remaining 20% was the testing set. Linear regression and support vector machine (SVM) models were trained first with all 30 parameters in the data set and then with the 16 most influential parameters, and the performance of these models was compared using the results of fivefold cross-validation. The SVM model trained on the 16 geologic/engineering parameters with a Gaussian kernel function has a root-mean-square error of 0.12, a mean square error of 0.01, and an R-squared of 0.76. This model was tested on the 18 reservoirs of the testing set; the test results are very similar to the cross-validation results obtained during the model training phase, suggesting that this method can potentially be used to predict the future recovery factor.
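The modelling step described above, an SVM regressor with a Gaussian (RBF) kernel scored by fivefold cross-validation, can be sketched as follows; the feature matrix is synthetic, since the study's 16 parameters are not listed in the abstract.

```python
# Minimal sketch: RBF-kernel SVM regression with fivefold cross-validation.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(93, 16))    # 93 reservoirs, 16 parameters (synthetic)
y = np.clip(0.4 + 0.1 * X[:, 0] + rng.normal(0, 0.05, 93), 0, 1)  # recovery factor

svm = make_pipeline(StandardScaler(), SVR(kernel="rbf"))  # Gaussian kernel
scores = cross_val_score(svm, X, y, cv=5, scoring="neg_root_mean_squared_error")
print(f"Cross-validated RMSE: {-scores.mean():.3f}")
```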


Energies ◽  
2019 ◽  
Vol 12 (6) ◽  
pp. 1094 ◽  
Author(s):  
Moting Su ◽  
Zongyi Zhang ◽  
Ye Zhu ◽  
Donglan Zha

Natural gas is often described as the cleanest fossil fuel, and its consumption is increasing rapidly. Accurate prediction of natural gas spot prices would significantly benefit energy management, economic development, and environmental conservation. In this study, the least squares regression boosting (LSBoost) algorithm was used to forecast natural gas spot prices. LSBoost can fit regression ensembles well by minimizing the mean squared error. Henry Hub natural gas spot prices were investigated, and a wide range of time series from January 2001 to December 2017 was selected. The LSBoost method was adopted to analyze the data series at daily, weekly and monthly frequencies. An empirical study verified that the proposed prediction model achieves a high goodness of fit. Compared with some existing approaches such as linear regression, linear support vector machine (SVM), quadratic SVM, and cubic SVM, the proposed LSBoost-based model showed better performance, with a higher R-squared and lower mean absolute error, mean square error, and root-mean-square error.
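LSBoost is gradient boosting with a least-squares loss (the name used, for example, by MATLAB's ensemble tools). A rough Python analogue is sketched below with a synthetic price series and lagged-price features; it is not the paper's actual implementation or data.

```python
# Minimal sketch: least-squares gradient boosting on lagged spot prices.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(3)
prices = np.cumsum(rng.normal(0, 0.1, 500)) + 3.0  # synthetic spot-price series

# Use the previous 5 prices as features to predict the next one.
lag = 5
X = np.column_stack([prices[i:len(prices) - lag + i] for i in range(lag)])
y = prices[lag:]

# GradientBoostingRegressor with squared error is the least-squares analogue.
model = GradientBoostingRegressor(loss="squared_error").fit(X[:-50], y[:-50])
pred = model.predict(X[-50:])
print(f"MSE: {mean_squared_error(y[-50:], pred):.4f}, R2: {r2_score(y[-50:], pred):.3f}")
```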


1985 ◽  
pp. 14-25
Author(s):  
Mohamad Ashraf

A design program, with a detailed illustrative example, for the linear equalizer that minimizes the mean square error due to intersymbol interference in its output signal is presented. The results are evaluated for many types of distortion channels selected from a wide range of common signal distortions, including various combinations of amplitude and phase distortion. A synchronous serial digital baseband signal is assumed throughout. The digital signal is transmitted over a linear time-invariant baseband channel whose impulse response is known. The practical implementation of the filters and the techniques for automatic or adaptive adjustment of the equalizer taps are not considered. The aim of the paper is to show the basic principles of the linear equalizer that minimizes the mean square error, with the aid of the design program and the example. The design of the linear equalizer is based on a statistical criterion in the time domain, and the study is confined to simple transversal equalizers whose tap gains do not vary except in response to a change in the channel.
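For a known channel impulse response, the MMSE tap gains of a fixed transversal equalizer have the classic Wiener closed form; a minimal sketch follows, where the channel taps, equalizer length, decision delay and noise variance are illustrative assumptions rather than the paper's worked example.

```python
# Minimal sketch: MMSE linear transversal equalizer for a known channel.
import numpy as np

h = np.array([0.3, 1.0, 0.3])    # channel impulse response (assumed known)
N = 11                           # number of equalizer taps (assumed)
sigma2 = 0.01                    # additive noise variance (assumed)
delay = (N + len(h) - 1) // 2    # decision delay at centre of combined response

# Convolution matrix H: (N + len(h) - 1) x N, so that H @ w == conv(h, w).
H = np.zeros((N + len(h) - 1, N))
for i in range(N):
    H[i:i + len(h), i] = h

# MMSE (Wiener) solution for unit-variance symbols and white noise:
# w = (H^T H + sigma2 I)^{-1} H^T e_d, with e_d a unit impulse at the delay.
e_d = np.zeros(N + len(h) - 1)
e_d[delay] = 1.0
w = np.linalg.solve(H.T @ H + sigma2 * np.eye(N), H.T @ e_d)

combined = np.convolve(h, w)     # overall channel + equalizer response
# Residual MSE = residual intersymbol interference + noise enhancement.
mse_resid = np.sum((combined - e_d) ** 2) + sigma2 * np.sum(w ** 2)
print(f"Residual MSE: {mse_resid:.4f}")
```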


Mathematics ◽  
2020 ◽  
Vol 8 (11) ◽  
pp. 1913
Author(s):  
Sofia Palionnaya ◽  
Oleg Shestakov

Problems of analyzing and processing high-dimensional random vectors arise in a wide variety of areas. Important practical tasks are economical representation, the search for significant features, and the removal of insignificant (noise) features. These tasks are fundamentally important for a wide class of practical applications, such as genetic chain analysis, encephalography, spectrography, video and audio processing, and a number of others. Current research in this area includes a wide range of papers devoted to various filtering methods based on the sparse representation of the obtained experimental data and statistical procedures for their processing. One of the most popular approaches to constructing statistical estimates of regularities in experimental data is the procedure of multiple testing of hypotheses about the significance of observations. In this paper, we consider a procedure based on the false discovery rate (FDR) measure, which controls the expected proportion of false rejections among the rejected null hypotheses. We analyze the asymptotic properties of the mean-square error estimate for this procedure and prove statements about the asymptotic normality of this estimate. The obtained results make it possible to construct asymptotic confidence intervals for the mean-square error of the FDR method using only the observed data.
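The standard way to control the FDR in multiple testing is the Benjamini-Hochberg step-up procedure; a minimal sketch is given below with illustrative p-values (the paper analyzes the asymptotics of such a procedure rather than prescribing this exact code).

```python
# Minimal sketch: Benjamini-Hochberg step-up procedure at FDR level q.
import numpy as np

def benjamini_hochberg(p_values: np.ndarray, q: float = 0.05) -> np.ndarray:
    """Return a boolean mask of rejected null hypotheses at FDR level q."""
    m = len(p_values)
    order = np.argsort(p_values)
    sorted_p = p_values[order]
    # Find the largest k with p_(k) <= (k/m) * q; reject hypotheses 1..k.
    below = sorted_p <= (np.arange(1, m + 1) / m) * q
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # last sorted index meeting the bound
        rejected[order[:k + 1]] = True
    return rejected

p = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205])
print(benjamini_hochberg(p, q=0.05))  # rejects the two smallest p-values
```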


2012 ◽  
Vol 67 (6-7) ◽  
pp. 327-332 ◽  
Author(s):  
Iqtadar Hussain ◽  
Tariq Shah ◽  
Muhammad Asif Gondal ◽  
Hasan Mahmood

The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component for creating confusion and randomness. S-boxes continue to evolve, and many variants appear in the literature, including the advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties that distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results that do not provide clear evidence for choosing an S-box for an application to a particular set of data. In image encryption applications, the use of S-boxes needs special care, because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to the existing algebraic and statistical analyses already used for image encryption applications, we propose an application of the root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of root mean square error analysis in statistics has proven effective in determining the difference between original and processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of S-boxes.
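The proposed analysis amounts to substituting image bytes through an S-box and measuring the RMSE between the plain and encrypted images; in the sketch below, a random 8-bit permutation stands in for the AES, APA, Gray and other S-boxes evaluated in the paper, and the sample image is synthetic.

```python
# Minimal sketch: RMSE between a plain image and its S-box substitution.
import numpy as np

rng = np.random.default_rng(4)
sbox = rng.permutation(256).astype(np.uint8)   # stand-in 8-bit S-box

image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # sample image
encrypted = sbox[image]                        # byte-wise substitution

diff = image.astype(np.float64) - encrypted.astype(np.float64)
rmse = np.sqrt(np.mean(diff ** 2))
print(f"RMSE between plain and encrypted image: {rmse:.2f}")
```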


Author(s):  
Serhiy Kryachok

Urgency of the research. Manufacturers of geodetic instruments now supply a variety of geodetic equipment to the market. An important place in this assortment belongs to electronic total stations, which combine in their design a goniometer, an electronic rangefinder and a processor that makes it possible to solve various geodetic tasks based on the measurements. Target setting. To ensure reliable data on the measured distance, it is necessary to periodically determine the constant correction of the rangefinder of the electronic total station. This is especially true when measuring distance using a reflector from an electronic total station of a different brand. Actual scientific researches and issues analysis. A method for determining the constant correction of the rangefinder of an electronic total station based on the results of binding to a double geodetic wall sign is described in [1], where the measurement technology and the formula for determining the constant are given. An unexplored part of the common problem. In continuation of the topics given in [1], it is advisable to carry out a practical implementation of the method for determining the constant correction and to establish its accuracy. The research objective. The main goal of this article is to determine the constant correction of the rangefinder of an electronic total station from the data obtained by binding to a double geodetic wall sign, and also to calculate its accuracy. The statement of basic materials. The approbation of the technology for determining the constant correction of the rangefinder of an electronic total station is presented, using measurements made during binding to a double wall sign of the city polygonometry of Chernihiv. A Trimble 3305 DR total station and a reflector for binding to wall signs, developed at the Department of Geodesy, Cartography and Land Management of the Chernihiv Polytechnic National University, were used. As a result, a constant correction value of +22.6 mm was obtained. Formulas for calculating the mean square error of the constant correction for various options of binding to a double wall sign are derived, and calculations are performed using these formulas. Conclusions. The developed technology for determining the constant correction of the rangefinder of an electronic total station was tested on the measurement results obtained during binding to a double wall sign. Formulas are obtained and calculations are performed to determine the mean square error of the constant correction of the rangefinder of an electronic total station.


2015 ◽  
Vol 23 (3) ◽  
pp. 313-335 ◽  
Author(s):  
Luke Keele

Many areas of political science focus on causal questions. Evidence from statistical analyses is often used to make the case for causal relationships. While statistical analyses can help establish causal relationships, they can also provide strong evidence of causality where none exists. In this essay, I provide an overview of the statistics of causal inference. Instead of focusing on specific statistical methods, such as matching, I focus on the assumptions needed to give statistical estimates a causal interpretation. Such assumptions are often referred to as identification assumptions, and they are critical to any statistical analysis of causal effects. I outline a wide range of identification assumptions and highlight the design-based approach to causal inference. I conclude with an overview of statistical methods that are frequently used for causal inference.

