squared error loss function
Recently Published Documents


TOTAL DOCUMENTS: 78 (FIVE YEARS: 46)

H-INDEX: 6 (FIVE YEARS: 3)

2022 ◽  
Vol 4 ◽  
Author(s):  
Ying-Ying Zhang ◽  
Teng-Zhong Rong ◽  
Man-Man Li

For the normal model with a known mean, Bayes estimation of the variance parameter under the conjugate prior is studied in Lehmann and Casella (1998) and Mao and Tang (2012); however, those works calculate the Bayes estimator only with respect to the conjugate prior under the squared error loss function. Zhang (2017) calculates the Bayes estimator of the variance parameter of the normal model with a known mean with respect to the conjugate prior under Stein's loss function, which penalizes gross overestimation and gross underestimation equally, together with the corresponding Posterior Expected Stein's Loss (PESL). Motivated by these works, we calculate the Bayes estimators of the variance parameter with respect to the noninformative (Jeffreys's, reference, and matching) priors under Stein's loss function, and the corresponding PESLs. Moreover, we calculate the Bayes estimators of the scale parameter with respect to the conjugate and noninformative priors under Stein's loss function, and the corresponding PESLs. The relevant quantities (prior, posterior, three posterior expectations, two Bayes estimators, and two PESLs) and their expressions for the variance and scale parameters under the conjugate and noninformative priors are summarized in two tables. Numerical simulations are then carried out to exemplify the theoretical findings. Finally, we calculate the Bayes estimators and the PESLs of the variance and scale parameters of the S&P 500 monthly simple returns under the conjugate and noninformative priors.
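To make the contrast between the two loss functions concrete in the conjugate case, the sketch below (an illustration under stated assumptions, not the authors' code) places an inverse-gamma IG(alpha, beta) prior on the variance of a normal model with known mean and uses the standard Stein loss L(d, sigma^2) = d/sigma^2 - log(d/sigma^2) - 1: under squared error loss the Bayes estimator is the posterior mean, while under Stein's loss it is the reciprocal of the posterior mean of 1/sigma^2, which is always the smaller of the two.

```python
import numpy as np
from scipy.special import digamma

def bayes_variance_estimates(x, mu, alpha, beta):
    """Bayes estimators of the normal variance sigma^2 with known mean mu,
    under an assumed inverse-gamma IG(alpha, beta) conjugate prior.
    Returns the estimators under squared error loss and under Stein's loss."""
    n = len(x)
    alpha_post = alpha + n / 2.0                      # posterior shape
    beta_post = beta + np.sum((x - mu) ** 2) / 2.0    # posterior scale
    est_squared_error = beta_post / (alpha_post - 1)  # posterior mean (needs alpha_post > 1)
    est_stein = beta_post / alpha_post                # 1 / E[1/sigma^2 | x]
    return est_squared_error, est_stein, alpha_post, beta_post

def posterior_expected_stein_loss(d, alpha_post, beta_post):
    """PESL of a decision d: E[d/sigma^2 - log(d/sigma^2) - 1 | x],
    using the IG(alpha_post, beta_post) posterior of sigma^2."""
    e_inv = alpha_post / beta_post                    # E[1/sigma^2 | x]
    e_log = np.log(beta_post) - digamma(alpha_post)   # E[log sigma^2 | x]
    return d * e_inv - np.log(d) + e_log - 1.0

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=2.0, size=50)
d_se, d_stein, a_post, b_post = bayes_variance_estimates(x, mu=0.0, alpha=3.0, beta=4.0)
print(d_se, d_stein)                                           # Stein's-loss estimator is the smaller one
print(posterior_expected_stein_loss(d_stein, a_post, b_post))  # PESL is minimized at d_stein
```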


Author(s):  
Aijaz Ahmad ◽  
Rajnee Tripathi

In this study, the shape parameter of the weighted Inverse Maxwell distribution is estimated using Bayesian techniques. The extended Jeffreys' prior and the Erlang prior are used to obtain the posterior distributions. The estimators are derived under the squared error loss function, the entropy loss function, the precautionary loss function, and the LINEX loss function. Furthermore, a real data set is analysed to compare the effectiveness of the estimators under the different loss functions.
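The four loss functions above lead to different closed-form Bayes rules. The sketch below is a generic illustration (standard textbook forms under assumed loss conventions, not the paper's derivations) of how each estimator of a positive shape parameter would be computed from posterior draws.

```python
import numpy as np

def bayes_estimates_from_posterior(theta, a=0.5):
    """Bayes estimators of a positive parameter from posterior draws `theta`,
    under common loss functions (assumed conventions noted per line)."""
    theta = np.asarray(theta, dtype=float)
    return {
        # squared error loss (d - theta)^2              ->  posterior mean
        "squared_error": theta.mean(),
        # entropy loss d/theta - log(d/theta) - 1       ->  1 / E[1/theta]
        "entropy": 1.0 / np.mean(1.0 / theta),
        # precautionary loss (d - theta)^2 / d          ->  sqrt(E[theta^2])
        "precautionary": np.sqrt(np.mean(theta ** 2)),
        # LINEX loss exp(a(d - theta)) - a(d - theta) - 1  ->  -(1/a) log E[exp(-a theta)]
        "linex": -np.log(np.mean(np.exp(-a * theta))) / a,
    }

# Example with hypothetical posterior draws (e.g. from an MCMC sampler).
rng = np.random.default_rng(1)
draws = rng.gamma(shape=4.0, scale=0.5, size=10_000)
print(bayes_estimates_from_posterior(draws, a=0.5))
```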


Author(s):  
R. M. Refaey ◽  
G. R. AL-Dayian ◽  
A. A. EL-Helbawy ◽  
A. A. EL-Helbawy

In this paper, a bivariate compound exponentiated survival function of the Lomax distribution is constructed based on the technique considered by AL-Hussaini (2011). Some properties of the distribution are derived. Maximum likelihood estimation and prediction of future observations are considered. Bayesian estimation and prediction are also studied under the squared error loss function. The performance of the proposed bivariate distribution is examined using a simulation study. Finally, a real data set is analyzed under the proposed distribution to illustrate its flexibility for real-life application.
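For the Bayesian prediction part, the point predictor of a future observation under squared error loss is the posterior predictive mean. The sketch below is a generic Monte Carlo illustration with a one-parameter Lomax stand-in model and hypothetical posterior draws; the paper's bivariate compound model and actual posterior would replace these pieces.

```python
import numpy as np

def predictive_mean(posterior_draws, sample_future, n_rep=200, seed=0):
    """Bayes point predictor under squared error loss: the posterior predictive
    mean, approximated by averaging simulated future observations.
    `sample_future(theta, size, rng)` is a user-supplied (hypothetical) sampler
    for the fitted model given one posterior parameter draw."""
    rng = np.random.default_rng(seed)
    sims = [sample_future(theta, n_rep, rng) for theta in posterior_draws]
    return float(np.mean(np.concatenate(sims)))

# Toy stand-in: Lomax(shape=alpha, scale=1) futures; numpy's pareto draws Pareto II (Lomax).
def sample_lomax(alpha, size, rng):
    return rng.pareto(alpha, size=size)

rng = np.random.default_rng(2)
alpha_draws = rng.gamma(shape=40.0, scale=0.1, size=500)   # hypothetical posterior draws
print(predictive_mean(alpha_draws, sample_lomax))
```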


Author(s):  
Madhumitha J. ◽  
G. Vijayalakshmi

Redundancy plays a key role in the efficient design and functioning of complex systems. The consecutive-k-out-of-n:F structure has broad applications, including street light arrangements, vacuum systems in an accelerator, sliding window detection, and relay stations in communication systems. Availability is a significant measure for a maintained device because it accounts for the repair capability, and the steady-state availability of a repairable device is a particularly important feature. For the repairable consecutive-k-out-of-n:F system with independent and identically distributed components, the Bayesian point estimate (B.P.E.) of the steady-state availability under the squared error loss function (SELF) and the corresponding confidence interval are obtained.
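As a rough illustration of the quantity being estimated (not the authors' derivation), the sketch below assumes exponential component failure and repair times with gamma posteriors on the failure rate lambda and repair rate mu; the component steady-state availability is then mu/(lambda + mu), the system availability follows from the standard linear consecutive-k-out-of-n:F recursion, and the Bayes point estimate under SELF is the posterior mean of that system availability.

```python
import numpy as np

def consec_k_out_of_n_F_reliability(p, k, n):
    """Reliability of a linear consecutive-k-out-of-n:F system with i.i.d.
    component reliability p, via the standard recursion
    R(m) = R(m-1) - p * q**k * R(m-k-1),  R(m) = 1 for m < k,  R(k) = 1 - q**k."""
    q = 1.0 - p
    R = np.ones(n + 1)
    if n >= k:
        R[k] = 1.0 - q ** k
    for m in range(k + 1, n + 1):
        R[m] = R[m - 1] - p * q ** k * R[m - k - 1]
    return R[n]

def bpe_steady_state_availability(k, n, a_lam, b_lam, a_mu, b_mu, draws=10_000, seed=0):
    """Bayes point estimate under SELF (= posterior mean) of system steady-state
    availability, assuming gamma posteriors Gamma(a, rate b) on the component
    failure rate lambda and repair rate mu (illustrative assumption)."""
    rng = np.random.default_rng(seed)
    lam = rng.gamma(a_lam, 1.0 / b_lam, size=draws)
    mu = rng.gamma(a_mu, 1.0 / b_mu, size=draws)
    p = mu / (lam + mu)                       # component steady-state availability
    A = np.array([consec_k_out_of_n_F_reliability(pi, k, n) for pi in p])
    # SELF estimate and an equal-tailed 95% posterior interval.
    return A.mean(), np.percentile(A, [2.5, 97.5])

print(bpe_steady_state_availability(k=2, n=10, a_lam=5, b_lam=50, a_mu=5, b_mu=2))
```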


2021 ◽  
Vol 21 (No.1) ◽  
pp. 1-25
Author(s):  
Amal Soliman Hassan ◽  
Elsayed Ahmed Elsherpieny ◽  
Rokaya Elmorsy Mohamed

Entropy plays a pivotal role in information theory. This article estimates the Rényi and q-entropies of the power function distribution in the presence of s outliers. The maximum likelihood estimators, as well as Bayesian estimators under uniform and gamma priors, are derived. Bayesian estimators of both entropies are obtained under symmetric and asymmetric loss functions and are computed empirically via Monte Carlo simulation based on Gibbs sampling. The results show that the precision of the maximum likelihood and Bayesian estimates of both entropy measures improves with the sample size, and that both entropy estimates increase with the number of outliers. Further, the Bayesian estimates of the Rényi and q-entropies under the squared error loss function are preferable to those under the other loss functions in most cases. Finally, real data examples are analyzed to illustrate the theoretical results.
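For reference, the two measures being estimated are the Rényi entropy H_alpha = (1/(1 - alpha)) log ∫ f^alpha dx and, in one common convention, the q-entropy S_q = (1/(q - 1))(1 - ∫ f^q dx). The plug-in sketch below evaluates both numerically for a power function density at hypothetical parameter values (the parameterization and entropy conventions are assumptions, not taken from the article).

```python
import numpy as np
from scipy.integrate import quad

def power_function_pdf(x, beta, theta):
    """Power function density (assumed parameterization):
    f(x) = (beta / theta) * (x / theta)**(beta - 1), for 0 < x < theta."""
    return (beta / theta) * (x / theta) ** (beta - 1.0)

def renyi_entropy(pdf, support, alpha):
    """Rényi entropy of order alpha != 1: (1 / (1 - alpha)) * log( integral of f^alpha )."""
    integral, _ = quad(lambda x: pdf(x) ** alpha, *support)
    return np.log(integral) / (1.0 - alpha)

def q_entropy(pdf, support, q):
    """q-entropy (Tsallis form) of order q != 1: (1 / (q - 1)) * (1 - integral of f^q)."""
    integral, _ = quad(lambda x: pdf(x) ** q, *support)
    return (1.0 - integral) / (q - 1.0)

# Plug-in estimates at hypothetical parameter values (e.g. ML or posterior means).
beta_hat, theta_hat = 2.5, 4.0
pdf = lambda x: power_function_pdf(x, beta_hat, theta_hat)
print(renyi_entropy(pdf, (0.0, theta_hat), alpha=0.5))
print(q_entropy(pdf, (0.0, theta_hat), q=0.5))
```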


2021 ◽  
Author(s):  
Komuravelli Prashanth ◽  
Kalidas Yeturu

There are millions of scanned documents worldwide in around 4 thousand languages. Searching for information in a scanned document requires a text layer to be available and indexed, and preparing a text layer requires recognizing character and sub-region patterns and associating them with a human interpretation. Developing an optical character recognition (OCR) system for each and every language is very difficult, if not impossible. There is a strong need for systems that build on top of existing OCR technologies by learning from them and unifying a disparate multitude of systems. In this regard, we propose an algorithm that leverages the fact that we are dealing with scanned documents of handwritten text regions from diverse domains and language settings. We observe that the text regions have consistent bounding box sizes, and any large-font or tiny-font scenarios can be handled in preprocessing or postprocessing phases. Image subregions are smaller in scanned text documents than the subregions formed by common objects in general-purpose images. We propose and validate the hypothesis that a much simpler convolutional neural network (CNN), having very few layers and few filters, can be used for detecting individual subregion classes. For detection of several hundreds of classes, multiple such simple models can be pooled to operate simultaneously on a document. The advantage of pools of subregion-specific models is the ability to handle incremental addition of hundreds of new classes over time without disturbing the previous models in a continual learning scenario. Such an approach has a distinctive advantage over a single monolithic model in which subregion classes share, and interfere through, a bulky common neural network. We report here an efficient algorithm for building subregion-specific lightweight CNN models. The training data for the proposed CNN requires engineering synthetic data points that include both the pattern of interest and non-patterns. We propose and validate the hypothesis that an image canvas containing an optimal amount of pattern and non-pattern, combined with a mean squared error loss function, can steer the filters learned during training. The CNN thus trained can identify the character object in the presence of several other objects on a generalized test image of a scanned document. A key observation in this setting is that learning a filter in a CNN depends not only on the abundance of the pattern of interest but also on the presence of a non-pattern context. Our experiments have led to the following observations: (i) a pattern cannot be over-expressed in isolation, (ii) a pattern cannot be under-expressed either, (iii) a non-pattern can be salt-and-pepper type noise, and (iv) it is sufficient to provide a non-pattern context around a modest representation of a pattern to obtain strong individual sub-region class models. We have carried out studies and report mean average precision scores on various data sets, including (1) MNIST digits (95.77), (2) EMNIST capital letters (81.26), (3) EMNIST small letters (73.32), (4) Kannada digits (95.77), (5) Kannada letters (90.34), (6) Devanagari letters (100), (7) Telugu words (93.20), and (8) Devanagari words (93.20), as well as on medical prescriptions, where we observed mean average precision over 90%. The algorithm serves as a kernel in the automatic annotation of digital documents in diverse scenarios such as annotation of ancient manuscripts and handwritten health records.
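A minimal sketch of the kind of lightweight per-class detector described above (an illustration with assumed sizes and synthetic data, not the authors' architecture): a tiny CNN with two convolution layers and very few filters, trained with a mean squared error loss on canvases that mix a stand-in pattern with non-pattern context.

```python
import torch
import torch.nn as nn

class TinySubregionCNN(nn.Module):
    """Lightweight per-class detector: very few layers and filters (illustrative sizes)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 4, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(4, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(8 * 8 * 8, 1), nn.Sigmoid())

    def forward(self, x):                                    # x: (batch, 1, 32, 32) canvases
        return self.head(self.features(x))

def make_batch(n=64):
    """Hypothetical synthetic canvases: target pattern vs. non-pattern (noise) context."""
    x = torch.rand(n, 1, 32, 32) * 0.2                       # salt-and-pepper-like background
    y = (torch.rand(n) < 0.5).float()
    for i in range(n):
        if y[i] == 1:
            x[i, 0, 12:20, 12:20] = 1.0                      # stand-in "pattern" blob
    return x, y.unsqueeze(1)

model = TinySubregionCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                                       # mean squared error loss
for step in range(200):
    x, y = make_batch()
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```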


2021 ◽  
Vol 11 (2) ◽  
pp. 1489-1496
Author(s):  
Divya S

Medical image reconstruction improves image quality by manipulating image features and artefacts, for example with filtered back-projection for X-ray Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET). This project focuses on the detection of tumour cells using a radiomics application that aims to extract extensive quantitative features from magnetic resonance images. In this paper, image discretization models and image interpolation techniques are used to segment the MR images and train them for image reconstruction. Image-based gray-level segmentation is carried out for the required feature extraction, to improve the clustering analysis used for segmentation. A Convolutional Neural Network (CNN) is used for image classification and recognition because of its high accuracy. The CNN follows a hierarchical model that builds up a network and ends in a fully connected layer, where all the neurons are connected to each other and the output is processed. JPEG is a commonly used lossy image compression approach that centres on the Discrete Cosine Transform (DCT); the DCT works by splitting images into components of varying frequencies. Finally, the output of the radiomics application is compared with the existing methodology in terms of the mean squared error loss function to assess the image compression quality.
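A small sketch of the mean-squared-error style quality check the abstract ends with (illustrative function names and a crude DCT round trip, not the project's pipeline): compute the MSE, and the companion PSNR, between a reference image and its reconstruction after discarding small DCT coefficients.

```python
import numpy as np
from scipy.fft import dctn, idctn

def mse(reference, reconstructed):
    """Mean squared error between two images of the same shape."""
    reference = np.asarray(reference, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return float(np.mean((reference - reconstructed) ** 2))

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio, a common companion to MSE for compression quality."""
    return float(10.0 * np.log10(peak ** 2 / mse(reference, reconstructed)))

def dct_compress(image, keep=0.1):
    """Crude DCT-based compression round trip (illustrative, not JPEG):
    keep only the largest `keep` fraction of DCT coefficients and invert."""
    coeffs = dctn(image.astype(float), norm="ortho")
    threshold = np.quantile(np.abs(coeffs), 1.0 - keep)
    coeffs[np.abs(coeffs) < threshold] = 0.0
    return idctn(coeffs, norm="ortho")

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(128, 128)).astype(float)   # stand-in for an MR slice
rec = dct_compress(img, keep=0.2)
print(mse(img, rec), psnr(img, rec))
```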


Author(s):  
Bashiru Omeiza Sule ◽  
Taiwo Mobolaji Adegoke ◽  
Kafayat Tolani Uthman

In this paper, estimators of the unknown shape and scale parameters of the Exponentiated Inverse Rayleigh Distribution (EIRD) are derived using both frequentist and Bayesian methods. Bayes' theorem is used to obtain the posterior distributions of the shape and scale parameters under both conjugate and non-conjugate prior distributions and under different loss functions (the entropy loss function, the LINEX loss function and the scale-invariant squared error loss function). The posterior distributions derived for the shape and scale parameters are intractable, so Lindley's approximation is adopted to obtain the estimates of interest. The loss functions are employed to obtain estimates of both the scale and shape parameters under the assumption that the two parameters are unknown and independent. Bayes estimates are also obtained for simulated and real-life datasets. The Bayes estimates obtained under the different loss functions are close to the true values of the shape and scale parameters. The estimators are then compared in terms of their Mean Square Error (MSE) using the R programming language, and we find that the MSE decreases as the sample size (n) increases.
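Of the loss functions listed, the scale-invariant squared error loss L(d, theta) = ((d - theta)/theta)^2 has Bayes rule d = E[1/theta]/E[1/theta^2]. The sketch below approximates it from hypothetical posterior draws and compares it with the plain squared error estimate (the posterior mean); it illustrates the effect of the loss function only, not the paper's Lindley-approximation computation.

```python
import numpy as np

def bayes_estimate_scale_invariant_se(theta_draws):
    """Bayes estimator under scale-invariant squared error loss
    L(d, theta) = ((d - theta) / theta)**2:
    the posterior-expected-loss minimizer is E[1/theta] / E[1/theta^2],
    approximated here from posterior draws."""
    t = np.asarray(theta_draws, dtype=float)
    return float(np.mean(1.0 / t) / np.mean(1.0 / t ** 2))

# Hypothetical posterior draws for a positive parameter; with Lindley's
# approximation one would work with approximate posterior moments instead.
rng = np.random.default_rng(4)
draws = rng.gamma(shape=6.0, scale=0.4, size=20_000)
print(draws.mean(), bayes_estimate_scale_invariant_se(draws))
```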

