Locally optimum image watermark decoder by modeling NSCT domain difference coefficients with vector based Cauchy distribution

Author(s):  
Xiang-yang Wang ◽  
Si-yu Zhang ◽  
Li Wang ◽  
Hong-ying Yang ◽  
Pan-pan Niu
2010 ◽  
Vol 30 (9) ◽  
pp. 2444-2448
Author(s):  
Ke-ji Wang ◽ 
Zhi-wei Kang ◽ 
Xin-huan Liu ◽ 
Bu-zhen Chen

Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 56
Author(s):  
Haoyu Niu ◽  
Jiamin Wei ◽  
YangQuan Chen

The Stochastic Configuration Network (SCN) has a powerful capability for regression and classification analysis. Traditionally, it is quite challenging to determine an appropriate architecture for a neural network so that the trained model achieves excellent performance in both learning and generalization. Compared with known randomized learning algorithms for single-hidden-layer feed-forward neural networks, such as Randomized Radial Basis Function (RBF) Networks and the Random Vector Functional-Link (RVFL) network, the SCN randomly assigns the input weights and biases of the hidden nodes under a supervisory mechanism. Since the parameters in the hidden layers are randomly generated from a uniform distribution, it is natural to ask whether some other form of randomness is optimal. Heavy-tailed distributions have been shown to provide optimal randomness when searching for targets in an unknown environment. Therefore, in this research, the authors used heavy-tailed distributions to randomly initialize the weights and biases, to see whether the new SCN models can achieve better performance than the original SCN. Heavy-tailed distributions such as the Lévy distribution, the Cauchy distribution, and the Weibull distribution were used. Since some mixed distributions show heavy-tailed properties, a mixture of Gaussian and Laplace distributions was also studied in this research work. Experimental results showed improved performance for SCNs with heavy-tailed distributions. For the regression model, SCN-Lévy, SCN-Mixture, SCN-Cauchy, and SCN-Weibull used fewer hidden nodes to achieve performance similar to that of the original SCN. For the classification model, SCN-Mixture, SCN-Lévy, and SCN-Cauchy reached higher test accuracies of 91.5%, 91.7%, and 92.4%, respectively, all above the test accuracy of the original SCN.
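
A minimal sketch of the idea, assuming NumPy: the scale parameters, the simplified correlation-based acceptance rule (a stand-in for the full SCN supervisory inequality), and the function names are illustrative assumptions, not details from the paper.

```python
import numpy as np

def draw_params(dist, size, rng):
    """Draw candidate hidden-node weights/biases from the named distribution.
    Scale parameters are illustrative, not values from the paper."""
    if dist == "uniform":            # original SCN baseline
        return rng.uniform(-1.0, 1.0, size)
    if dist == "cauchy":             # heavy tails, undefined variance
        return 0.1 * rng.standard_cauchy(size)
    if dist == "levy":               # one-sided alpha=1/2 stable, symmetrized
        z = rng.normal(0.0, 1.0, size)
        return 0.01 / z**2 * np.sign(rng.uniform(-1.0, 1.0, size))
    if dist == "weibull":            # symmetrized Weibull draws
        return rng.weibull(1.5, size) * np.sign(rng.uniform(-1.0, 1.0, size))
    if dist == "mixture":            # Gaussian/Laplace mixture
        use_gauss = rng.uniform(size=size) < 0.5
        return np.where(use_gauss, rng.normal(0.0, 1.0, size),
                        rng.laplace(0.0, 1.0, size))
    raise ValueError(f"unknown distribution: {dist}")

def scn_fit(X, y, dist="cauchy", max_nodes=50, n_candidates=20, seed=0):
    """Greedy incremental construction in the spirit of SCN: draw several
    random candidate nodes, keep the one most correlated with the current
    residual, then recompute output weights by least squares.  This is a
    simplified stand-in for the full SCN supervisory inequality."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    H = np.empty((n, 0))
    residual = y.copy()
    weights, biases = [], []
    for _ in range(max_nodes):
        best, best_score = None, -np.inf
        for _ in range(n_candidates):
            w = draw_params(dist, d, rng)
            b = draw_params(dist, 1, rng)
            h = np.tanh(X @ w + b)                 # candidate node output
            score = (residual @ h) ** 2 / (h @ h)  # SCN-style criterion
            if score > best_score:
                best, best_score = (w, b, h), score
        w, b, h = best
        weights.append(w)
        biases.append(b[0])
        H = np.column_stack([H, h])
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
        residual = y - H @ beta
    return np.array(weights), np.array(biases), beta

# Toy regression: fit y = sin(x) and report the residual error.
X = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = scn_fit(X, y, dist="cauchy", max_nodes=25)
pred = np.tanh(X @ W.T + b) @ beta
print("RMSE:", np.sqrt(np.mean((y - pred) ** 2)))
```

Swapping the `dist` argument between "uniform" and the heavy-tailed choices is the experiment the abstract describes: under heavy-tailed draws, occasional large-magnitude weights yield more diverse candidate nodes, which is one plausible reason fewer hidden nodes suffice.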


2021 ◽  
Vol 48 (3) ◽  
pp. 91-96
Author(s):  
Shigeo Shioda

The consensus achieved by a consensus-forming algorithm is generally not a constant but rather a random variable, even if the initial opinions are the same. In the present paper, we investigate the statistical properties of the consensus in a broadcasting-based consensus-forming algorithm. We focus on two extreme cases: consensus forming by two agents and consensus forming by an infinite number of agents. In the two-agent case, we derive several properties of the distribution function of the consensus. In the infinite-number-of-agents case, we show that if the initial opinions follow a stable distribution, then the consensus also follows a stable distribution. In addition, we derive a closed-form expression for the probability density function of the consensus when the initial opinions follow a Gaussian distribution, a Cauchy distribution, or a Lévy distribution.
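
A small simulation sketch of the stability claim, assuming NumPy: the update rule below is a generic broadcasting gossip step (each round one agent broadcasts and all others move a fraction alpha toward it), not necessarily the exact weighting analyzed in the paper, and the function name and parameters are hypothetical.

```python
import numpy as np

def broadcast_consensus(opinions, alpha=0.5, n_rounds=500, rng=None):
    """One run of a broadcasting-based consensus process: each round a
    uniformly chosen agent broadcasts, and every other agent moves a
    fraction alpha toward the broadcast value.  A generic gossip update;
    the exact weighting analyzed in the paper may differ."""
    if rng is None:
        rng = np.random.default_rng()
    x = opinions.astype(float).copy()
    n = len(x)
    for _ in range(n_rounds):
        i = rng.integers(n)
        others = np.arange(n) != i
        x[others] = (1.0 - alpha) * x[others] + alpha * x[i]
    return x.mean()  # opinions are essentially equal by now

rng = np.random.default_rng(1)
n_agents, n_runs = 50, 2000

# Cauchy initial opinions: the resulting consensus should again be
# heavy-tailed, consistent with the stability result in the abstract.
consensus = np.array([
    broadcast_consensus(rng.standard_cauchy(n_agents), rng=rng)
    for _ in range(n_runs)
])

# Summarize with median/IQR, since a Cauchy-like consensus has no
# finite variance and the sample mean would not converge.
q25, q75 = np.percentile(consensus, [25, 75])
print("median:", np.median(consensus))
print("IQR:", q75 - q25)
```

Repeating the run with Gaussian initial opinions (replace `rng.standard_cauchy(n_agents)` with `rng.normal(size=n_agents)`) gives a consensus whose spread is well described by its variance, illustrating how the consensus inherits the family of the initial-opinion distribution.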


Author(s):  
Xiang-yang Wang ◽  
Xin Shen ◽  
Jia-lin Tian ◽  
Pan-pan Niu ◽  
Hong-ying Yang

2022 ◽  
Vol 65 ◽  
pp. 103105