shrinkage method
Recently Published Documents

TOTAL DOCUMENTS: 64 (five years: 14)
H-INDEX: 9 (five years: 2)

2022 ◽  
Vol 75 ◽  
pp. 102482
Author(s):  
Jiqian Wang ◽  
Xiaofeng He ◽  
Feng Ma ◽  
Pan Li
Keyword(s):  

2021 ◽  
Vol 13 (21) ◽  
pp. 12191
Author(s):  
Michael Binns ◽  
Hafiz Muhammad Uzair Ayub

Various approaches have been suggested for the modeling and simulation of gasification processes. These models allow gasifier performance to be predicted under different conditions and with different feedstocks, from which the system parameters can be optimized to design efficient gasifiers. Complex models require significant time and effort to develop, and they might only be accurate for a specific catalyst. Hence, various simpler models have also been developed, including thermodynamic equilibrium models and empirical models, which can be developed and solved more quickly and are therefore better suited to optimization. In this study, linear and quadratic expressions in terms of the gasifier input parameters are developed based on linear regression. To identify significant parameters and reduce the complexity of these expressions, a LASSO (least absolute shrinkage and selection operator) shrinkage method is applied together with cross-validation. In this way, the significant parameters are revealed and simple models with reasonable accuracy are obtained.
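As a rough illustration of this kind of workflow (not the authors' implementation, and using simulated stand-in data rather than real gasifier measurements), the sketch below builds linear and quadratic features from a few input parameters and lets cross-validated LASSO zero out the insignificant terms; the array shapes, polynomial degree, and LassoCV settings are all assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# stand-in data: 80 gasifier runs with 4 input parameters and one measured output
rng = np.random.default_rng(1)
X = rng.uniform(size=(80, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=80)

# quadratic expression in the inputs; cross-validated LASSO shrinks unneeded terms to zero
model = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),  # linear, quadratic, interaction terms
    StandardScaler(),
    LassoCV(cv=5),
)
model.fit(X, y)
print(model.named_steps["lassocv"].coef_)  # zero coefficients mark insignificant terms
```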


2021 ◽  
Vol 13 (20) ◽  
pp. 4115
Author(s):  
Ke Tan ◽  
Xingyu Lu ◽  
Jianchao Yang ◽  
Weimin Su ◽  
Hong Gu

Super-resolution is considered an efficient approach for improving the image quality of forward-looking imaging radar. However, super-resolution is an inherently ill-conditioned problem whose solution is highly susceptible to noise. Bayesian methods can alleviate this problem by exploiting prior knowledge of the imaging process, in which scene prior information plays a significant role in ensuring imaging accuracy. In this paper, we propose a novel Bayesian super-resolution method based on a Markov random field (MRF) model. Compared with traditional super-resolution methods, which focus on one-dimensional (1-D) echo processing, the MRF model adopted in this study exploits the two-dimensional (2-D) prior information of the scene. With the MRF model, the 2-D spatial structural characteristics of the imaging scene can be described and utilized through an nth-order neighborhood system. The imaging objective function is then constructed within the maximum a posteriori (MAP) framework. Finally, an accelerated iterative shrinkage/thresholding method is used to solve the objective function. Validation experiments using both synthetic echoes and measured data were designed, and the results demonstrate that the new MAP-MRF method outperforms benchmark approaches in terms of artifact suppression and contour recovery.
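The abstract does not spell out the solver, so the following is only a generic sketch of an accelerated iterative shrinkage/thresholding (FISTA-style) scheme for a sparse least-squares objective; the measurement operator A, the echo vector y, and the regularization weight lam are placeholders, not quantities from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """Element-wise soft-thresholding (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, y, lam, n_iter=200):
    """Accelerated iterative shrinkage/thresholding for 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# placeholder problem: A stands in for the radar measurement operator, y for the received echo
rng = np.random.default_rng(0)
A = rng.normal(size=(128, 256))
x_true = np.zeros(256)
x_true[[40, 120, 200]] = [1.0, -0.5, 2.0]      # a few strong scatterers
y = A @ x_true + 0.01 * rng.normal(size=128)
x_hat = fista(A, y, lam=0.1)
```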


2021 ◽  
pp. 1-20
Author(s):  
Xun Pang ◽  
Licheng Liu ◽  
Yiqing Xu

This paper proposes a Bayesian alternative to the synthetic control method for comparative case studies with a single treated unit or multiple treated units. We adopt a Bayesian posterior predictive approach to Rubin's causal model, which allows researchers to make inferences about both individual and average treatment effects on treated observations based on the empirical posterior distributions of their counterfactuals. The prediction model we develop is a dynamic multilevel model with a latent factor term to correct biases induced by unit-specific time trends. It also accommodates heterogeneous and dynamic relationships between covariates and the outcome, thus improving the precision of the causal estimates. To reduce model dependency, we adopt a Bayesian shrinkage method for model search and factor selection. Monte Carlo exercises demonstrate that our method produces more precise causal estimates than existing approaches and achieves correct frequentist coverage rates even when sample sizes are small and rich heterogeneities are present in the data. We illustrate the method with two empirical examples from political economy.
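To make the factor-selection-by-shrinkage idea concrete, here is a toy PyMC sketch (not the authors' dynamic multilevel model): each candidate latent factor gets its own scale parameter under a shared shrinkage prior, so factors that contribute little are pulled toward zero. The panel dimensions, the priors, and the simulated outcome y are all assumptions.

```python
import numpy as np
import pymc as pm

# toy panel: N units, T periods, R candidate latent factors (all sizes are assumptions)
N, T, R = 20, 30, 5
rng = np.random.default_rng(0)
y = rng.normal(size=(N, T))                     # placeholder outcome panel

with pm.Model() as factor_model:
    # shrinkage: each factor has its own scale; unneeded factors get scales near zero
    tau = pm.HalfCauchy("tau", beta=1.0, shape=R)
    load = pm.Normal("load", mu=0.0, sigma=tau, shape=(N, R))   # unit loadings, shrunk per factor
    fact = pm.Normal("fact", mu=0.0, sigma=1.0, shape=(R, T))   # latent time factors
    sigma = pm.HalfNormal("sigma", sigma=1.0)
    mu = pm.math.dot(load, fact)                                # N x T mean surface
    pm.Normal("y_obs", mu=mu, sigma=sigma, observed=y)
    idata = pm.sample(500, tune=500, chains=2, target_accept=0.9)
```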


Computation ◽  
2021 ◽  
Vol 9 (7) ◽  
pp. 77
Author(s):  
Oleksandr Terentiev ◽  
Tatyana Prosiankina-Zharova ◽  
Volodymyr Savastiyanov ◽  
Valerii Lakhno ◽  
Vira Kolmakova

The article describes an original information technology for algorithmic trading, designed to solve the problem of forming an optimal portfolio of trading strategies. A robust optimization methodology, using the Ledoit–Wolf shrinkage method to obtain stable estimates of the covariance matrix of the algorithmic strategies, was used to form the portfolio of trading strategies. The corresponding software was implemented with the SAS OPTMODEL procedure. The paper deals with a portfolio of trading strategies built for highly profitable but also highly risky financial instruments: cryptocurrencies. The available bitcoin assets were divided in the corresponding proportions among the recommended portfolio strategies and used for this research during the selected period (one calendar month). The portfolio of trading strategies is rebuilt at the end of each period (every month) based on the trading results during that period, according to the risk-minimization or income-maximization criteria. The trading strategies run in parallel, waiting for a relevant trading signal. Strategies can be adjusted by tuning their parameters to the current state of the financial market, removed if ineffective, and replaced where necessary. The effectiveness of using a robust decision-making method under the uncertainty of cryptocurrency trading was confirmed by the results of real trading on the Bitcoin/Dollar pair. Implementing the proposed information technology in electronic trading systems will reduce the risk of incorrect or delayed decisions in systematic trading.
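A minimal Python sketch of the covariance step (the paper's implementation uses the SAS OPTMODEL procedure, so this is only an analogous illustration with simulated strategy returns): the Ledoit–Wolf estimator from scikit-learn shrinks the sample covariance toward a well-conditioned target, and the resulting matrix feeds a simple minimum-variance weighting.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

# simulated daily returns of 5 hypothetical trading strategies over 60 trading days
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.001, scale=0.02, size=(60, 5))

# Ledoit-Wolf shrinkage yields a well-conditioned covariance estimate
cov = LedoitWolf().fit(returns).covariance_

# simple minimum-variance weights: w = S^-1 1 / (1' S^-1 1)
ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)
w /= w.sum()
print(np.round(w, 3))
```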


2020 ◽  
pp. 096228022094573
Author(s):  
Zhenxun Wang ◽  
Lifeng Lin ◽  
James S Hodges ◽  
Richard MacLehose ◽  
Haitao Chu

Network meta-analysis is a commonly used tool to combine direct and indirect evidence in systematic reviews of multiple treatments to improve estimation compared to traditional pairwise meta-analysis. Unlike the contrast-based network meta-analysis approach, which focuses on estimating relative effects such as odds ratios, the arm-based network meta-analysis approach can estimate absolute risks and other effects, which are arguably more informative in medicine and public health. However, the number of clinical studies involving each treatment is often small in a network meta-analysis, leading to unstable treatment-specific variance estimates in the arm-based network meta-analysis approach when using non- or weakly informative priors under an unequal variance assumption. Additional assumptions, such as equal (i.e. homogeneous) variances for all treatments, may be used to remedy this problem, but such assumptions may be inappropriately strong. This article introduces a variance shrinkage method for an arm-based network meta-analysis. Specifically, we assume different treatment variances share a common prior with unknown hyperparameters. This assumption is weaker than the homogeneous variance assumption and improves estimation by shrinking the variances in a data-dependent way. We illustrate the advantages of the variance shrinkage method by reanalyzing a network meta-analysis of organized inpatient care interventions for stroke. Finally, comprehensive simulations investigate the impact of different variance assumptions on statistical inference, and simulation results show that the variance shrinkage method provides better estimation for log odds ratios and absolute risks.
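As a toy illustration of the shared-prior idea (not the authors' exact arm-based model), the PyMC sketch below gives each treatment its own heterogeneity standard deviation while letting all of them share a common hyperprior scale tau, which shrinks the treatment-specific variances toward each other in a data-dependent way; the arm-level counts are made up.

```python
import numpy as np
import pymc as pm

# made-up arm-level data: events/sample size per arm, with treatment and study indices
treat = np.array([0, 1, 0, 2, 1, 2, 0, 1])      # 3 treatments
study = np.array([0, 0, 1, 1, 2, 2, 3, 3])      # 4 two-arm studies
events = np.array([12, 9, 20, 15, 7, 11, 30, 25])
n = np.array([100, 100, 150, 150, 80, 80, 200, 200])

with pm.Model() as nma_model:
    # treatment-specific heterogeneity SDs share a common hyperprior scale (the shrinkage step)
    tau = pm.HalfCauchy("tau", beta=1.0)
    sigma = pm.HalfNormal("sigma", sigma=tau, shape=3)

    mu = pm.Normal("mu", mu=0.0, sigma=2.0, shape=3)             # treatment-level mean logits
    study_eff = pm.Normal("study_eff", mu=0.0, sigma=1.0, shape=4)
    arm_dev = pm.Normal("arm_dev", mu=0.0, sigma=sigma[treat], shape=len(treat))

    logit_p = mu[treat] + study_eff[study] + arm_dev
    pm.Binomial("y", n=n, p=pm.math.invlogit(logit_p), observed=events)
    idata = pm.sample(1000, tune=1000, chains=2, target_accept=0.9)
```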


Author(s):  
Fangzhou Xie

The recently proposed Wasserstein Index Generation model (WIG) has shown a new direction for automatically generating indices. However, fitting large datasets is challenging in practice for two reasons. First, the Sinkhorn distance is notoriously expensive to compute and suffers severely from high dimensionality. Second, it requires computing a full N × N matrix that must fit into memory, where N is the size of the vocabulary. When the dimensionality is too large, the computation becomes infeasible. I propose a Lasso-based shrinkage method to reduce the vocabulary dimensionality as a pre-processing step prior to fitting the WIG model. After obtaining word embeddings from a Word2Vec model, we cluster these high-dimensional vectors by k-means and pick the most frequent tokens within each cluster to form the “base vocabulary”. Non-base tokens are then regressed on the vectors of the base tokens to obtain transformation weights, so the whole vocabulary can be represented by the “base tokens” alone. This variant, called pruned WIG (pWIG), enables us to shrink the vocabulary dimension at will while still achieving high accuracy. I also provide a wigpy module in Python to carry out the computation in both flavors. An application to the Economic Policy Uncertainty (EPU) index is showcased as a comparison with existing methods for generating time-series sentiment indices.
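The sketch below illustrates the pruning step under stated assumptions: emb and freq are random placeholders for Word2Vec vectors and corpus frequencies, and the cluster count and Lasso penalty are arbitrary. It clusters the vectors with k-means, keeps the most frequent token per cluster as the base vocabulary, and regresses every other token's vector on the base vectors so that the Lasso coefficients give a sparse mapping onto the base tokens; this is not the wigpy implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Lasso

# placeholders: 'emb' would come from a trained Word2Vec model, 'freq' from corpus counts
rng = np.random.default_rng(0)
vocab_size, dim, n_clusters = 500, 50, 20
emb = rng.normal(size=(vocab_size, dim))          # word vectors (stand-in)
freq = rng.integers(1, 1000, size=vocab_size)     # token frequencies (stand-in)

# 1) cluster word vectors; keep the most frequent token of each cluster as "base vocabulary"
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(emb)
base_idx = np.array([np.flatnonzero(labels == c)[np.argmax(freq[labels == c])]
                     for c in range(n_clusters)])
non_base_idx = np.setdiff1d(np.arange(vocab_size), base_idx)

# 2) regress each non-base vector on the base vectors; Lasso shrinks most weights to zero
base_vecs = emb[base_idx]                          # (n_clusters, dim)
weights = np.zeros((vocab_size, n_clusters))
weights[base_idx, np.arange(n_clusters)] = 1.0     # base tokens represent themselves
for i in non_base_idx:
    lasso = Lasso(alpha=0.01, max_iter=10_000).fit(base_vecs.T, emb[i])
    weights[i] = lasso.coef_                       # sparse mapping onto the base vocabulary
```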


2020 ◽  
Vol 17 (4) ◽  
pp. 1818-1825
Author(s):  
S. Josephine ◽  
S. Murugan

In MR machines, surface coils, especially phased arrays, are used extensively for acquiring MR images with high spatial resolution. The signal intensities of images acquired with these coils are non-uniform due to the coil sensitivity profile. Although these smooth intensity variations have little impact on visual diagnosis, they become a critical issue when quantitative information is needed from the images. Medical images are sometimes acquired at a low signal-to-noise ratio (SNR), which makes it difficult to detect anatomical structures because tissue characterization fails on such images. Denoising is therefore an essential step before further processing or analysis. The noise in MR images follows a Rician distribution, so general-purpose filters cannot remove it. Linear spatial filtering blurs object boundaries and degrades sharp details. Existing work has shown that wavelet-based methods can eliminate noisy coefficients, a process called wavelet thresholding. Wavelet thresholding estimates the noise level from the high-frequency content, derives a threshold value, compares the estimated noisy wavelet coefficients against it, and suppresses the noisy values. The Bayesian shrinkage rule is one of the most widely used approaches, but it is designed for Gaussian noise; the proposed method therefore introduces an adaptive technique into the Bayesian shrinkage method to remove Rician noise from MRI images. The results were verified using quantitative measures such as the peak signal-to-noise ratio (PSNR). The proposed Adaptive Bayesian Shrinkage Method (ABSM) outperformed existing methods.
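The abstract does not describe the adaptive Rician correction, so the sketch below implements only the classical Gaussian BayesShrink rule it builds on, using PyWavelets; the choice of wavelet, decomposition level, and the use of the finest diagonal subband for noise estimation follow the standard formulation rather than the paper. Calling bayes_shrink on a 2-D image array returns the denoised image.

```python
import numpy as np
import pywt

def bayes_shrink(image, wavelet="db4", level=3):
    """Classical BayesShrink wavelet denoising (Gaussian-noise formulation)."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # noise SD estimated from the finest diagonal subband via the median absolute deviation
    hh = coeffs[-1][-1]
    sigma_noise = np.median(np.abs(hh)) / 0.6745

    denoised = [coeffs[0]]                       # keep the approximation subband unchanged
    for detail_level in coeffs[1:]:
        new_level = []
        for band in detail_level:
            sigma_band = np.sqrt(max(band.var() - sigma_noise ** 2, 1e-12))
            thresh = sigma_noise ** 2 / sigma_band            # BayesShrink threshold
            new_level.append(pywt.threshold(band, thresh, mode="soft"))
        denoised.append(tuple(new_level))
    return pywt.waverec2(denoised, wavelet)
```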

