Fixed-Rate Universal Lossy Source Coding and Model Identification: Connection with Zero-Rate Density Estimation and the Skeleton Estimator

Entropy ◽  
2018 ◽  
Vol 20 (9) ◽  
pp. 640 ◽  
Author(s):  
Jorge Silva ◽  
Milan Derpich

This work demonstrates a formal connection between density estimation under a data-rate constraint and the joint objective of fixed-rate universal lossy source coding and model identification introduced by Raginsky in 2008 (IEEE TIT, 2008, 54, 3059–3077). Using an equivalent learning formulation, we derive a necessary and sufficient condition on the class of densities for the achievability of the joint objective. The learning framework used here is the skeleton estimator, a rate-constrained learning scheme that yields achievability results for the joint coding and modeling problem by optimally adapting its learning parameters to the specific conditions of the problem. The results obtained with the skeleton estimator significantly extend the setting in which universal lossy source coding and model identification can be achieved, moving from the known case of parametric collections of densities satisfying certain smoothness and learnability conditions to the rich family of non-parametric L1-totally bounded densities. In addition, in the parametric case we are able to remove one of the assumptions that constrain the applicability of the original result, obtaining similar performance in terms of distortion redundancy and per-letter rate overhead.
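As a rough, hypothetical illustration of the learning scheme mentioned above, the sketch below implements the classical skeleton (minimum-distance) estimator over a finite set of candidate densities in one dimension, approximating the Yatracos-class distance on a grid. The candidate list, sample array, and grid are illustrative assumptions; this is not the paper's rate-constrained coding construction.

```python
# Minimal sketch of a skeleton (minimum-distance) estimator over a finite
# epsilon-covering of a density class, in the spirit of Devroye-Lugosi.
# Candidate densities are assumed to be vectorized callables f(x) on 1-D arrays.
import numpy as np
from itertools import permutations

def skeleton_estimate(candidates, samples, grid):
    """Return the candidate density minimizing the Yatracos-class distance."""
    dx = grid[1] - grid[0]
    grid_vals = np.array([f(grid) for f in candidates])     # (k, |grid|)
    samp_vals = np.array([f(samples) for f in candidates])  # (k, n)
    best_idx, best_score = None, np.inf
    for i in range(len(candidates)):
        score = 0.0
        for j, k in permutations(range(len(candidates)), 2):
            mask_grid = grid_vals[j] > grid_vals[k]   # Yatracos set A_jk on the grid
            mask_samp = samp_vals[j] > samp_vals[k]   # ... and at the sample points
            prob_i = np.sum(grid_vals[i][mask_grid]) * dx   # mass of candidate i on A_jk
            emp = np.mean(mask_samp)                        # empirical mass of A_jk
            score = max(score, abs(prob_i - emp))
        if score < best_score:
            best_idx, best_score = i, score
    return candidates[best_idx]
```

As a usage example, the candidates could be a small collection of Gaussian densities with different means, evaluated on a grid covering their common support, with the estimator selecting the member best matching the empirical measure of the samples.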




Author(s):  
Haimei Zhao ◽  
Wei Bian ◽  
Bo Yuan ◽  
Dacheng Tao

Scene perception and understanding tasks, including depth estimation, visual odometry (VO), and camera relocalization, are fundamental for applications such as autonomous driving, robots, and drones. Driven by the power of deep learning, significant progress has been achieved on the individual tasks, but the rich correlations among the three tasks are largely neglected. In previous studies, VO is generally accurate in local scope yet suffers from drift over long distances. By contrast, camera relocalization performs well in the global sense but lacks local precision. We argue that these two tasks should be strategically combined to leverage their complementary advantages, and can be further improved by exploiting the 3D geometric information from depth data, which is in turn beneficial for depth estimation. Therefore, we present a collaborative learning framework, consisting of DepthNet, LocalPoseNet, and GlobalPoseNet with a joint optimization loss, to estimate depth, VO, and camera relocalization jointly. Moreover, a Geometric Attention Guidance Model is introduced to exploit the geometric relevance among the three branches during learning. Extensive experiments demonstrate that the joint learning scheme is useful for all tasks and that our method outperforms current state-of-the-art techniques in depth estimation and camera relocalization, with highly competitive performance in VO.
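To make the joint optimization concrete, the following PyTorch sketch combines per-branch losses for DepthNet, LocalPoseNet (VO), and GlobalPoseNet (relocalization) into a single weighted objective. The specific loss terms, weights, and tensor shapes are assumptions for illustration, not the authors' exact formulation, and the Geometric Attention Guidance Model is omitted.

```python
# Hedged sketch of a joint loss weighting the three branches described above.
import torch
import torch.nn as nn

class JointLoss(nn.Module):
    def __init__(self, w_depth=1.0, w_vo=1.0, w_reloc=1.0):
        super().__init__()
        self.w_depth, self.w_vo, self.w_reloc = w_depth, w_vo, w_reloc
        self.l1 = nn.L1Loss()    # assumed per-pixel depth loss
        self.mse = nn.MSELoss()  # assumed pose regression loss

    def forward(self, depth_pred, depth_gt,
                rel_pose_pred, rel_pose_gt,
                abs_pose_pred, abs_pose_gt):
        # DepthNet branch: per-pixel error on predicted depth maps
        loss_depth = self.l1(depth_pred, depth_gt)
        # LocalPoseNet branch: error on relative frame-to-frame poses (VO)
        loss_vo = self.mse(rel_pose_pred, rel_pose_gt)
        # GlobalPoseNet branch: error on absolute camera poses (relocalization)
        loss_reloc = self.mse(abs_pose_pred, abs_pose_gt)
        return (self.w_depth * loss_depth
                + self.w_vo * loss_vo
                + self.w_reloc * loss_reloc)
```

Summing weighted per-branch terms lets a single backward pass update all three networks, which is one simple way to realize the collaborative training described in the abstract.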





2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Yang Xu ◽  
Zixi Fu ◽  
Guiyong Xu ◽  
Sicong Zhang ◽  
Xiaoyao Xie

Convolutional neural networks used for steganalysis suffer from problems such as poor versatility, long training times, and limited input image size. To address these problems, we present a heterogeneous-kernel residual learning framework called DRHNet (Dual Residual Heterogeneous Network) that reduces training time. Instead of feeding the image directly into the network, we extract rich-model features from the image, merge them into a feature matrix, and use this feature matrix as the actual network input. The proposed architecture is versatile and reduces both computation and the number of parameters while still achieving higher accuracy. We evaluate the performance of DRHNet on BOSSbase 1.01 in both the spatial and frequency domains. Preliminary experimental results show that DRHNet achieves excellent steganalysis performance against state-of-the-art steganographic algorithms.
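As a rough illustration of the heterogeneous-kernel residual idea, the PyTorch sketch below defines a dual-branch residual block whose branches use different kernel sizes and whose input is assumed to be the rich-model feature matrix rather than the raw image. Channel counts, kernel sizes, and the overall layout are assumptions, not the published DRHNet architecture.

```python
# Hedged sketch of a dual-branch residual block with heterogeneous kernels.
# Rich-model feature extraction is assumed to be performed offline.
import torch
import torch.nn as nn

class HeterogeneousResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Two parallel branches with different (heterogeneous) kernel sizes
        self.branch3 = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.branch1 = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        # x is the rich-model feature matrix, shaped like an image tensor
        out = self.fuse(torch.cat([self.branch3(x), self.branch1(x)], dim=1))
        return torch.relu(out + x)  # residual connection
```

Operating on a compact feature matrix rather than full-resolution images is one plausible reason such a design could cut both training time and parameter count, as the abstract claims.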


