coding efficiency
Recently Published Documents

TOTAL DOCUMENTS: 321 (FIVE YEARS 77)
H-INDEX: 20 (FIVE YEARS 3)

2022 ◽  
Author(s):  
Divyansh Gupta ◽  
Wiktor Mlynarski ◽  
Olga Symonova ◽  
Jan Svaton ◽  
Maximilian Joesch

Visual systems have adapted to the structure of natural stimuli. In the retina, center-surround receptive fields (RFs) of retinal ganglion cells (RGCs) appear to efficiently encode natural sensory signals. Conventionally, it has been assumed that natural scenes are isotropic and homogeneous; thus, the RF properties are expected to be uniform across the visual field. However, natural scene statistics such as luminance and contrast are not uniform and vary significantly across elevation. Here, by combining theory and novel experimental approaches, we demonstrate that this inhomogeneity is exploited by RGC RFs across the entire retina to increase the coding efficiency. We formulated three predictions derived from the efficient coding theory: (i) optimal RFs should strengthen their surround from the dimmer ground to the brighter sky, (ii) RFs should simultaneously decrease their center size and (iii) RFs centered at the horizon should have a marked surround asymmetry due to a stark contrast drop-off. To test these predictions, we developed a new method to image high-resolution RFs of thousands of RGCs in individual retinas. We found that the RF properties match theoretical predictions, and consistently change their shape from dorsal to the ventral retina, with a distinct shift in the RF surround at the horizon. These effects are observed across RGC subtypes, which were thought to represent visual space homogeneously, indicating that functional retinal streams share common adaptations to visual scenes. Our work shows that RFs of mouse RGCs exploit the non-uniform, panoramic structure of natural scenes at a previously unappreciated scale, to increase coding efficiency.
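The center-surround RFs discussed above are commonly modeled as a difference of Gaussians. The sketch below illustrates, with entirely hypothetical parameter values (not the paper's fitted ones), how the predicted sky-facing adaptations — a stronger surround and a smaller center — reshape the RF profile:

```python
import numpy as np

def dog_rf(x, center_sigma, surround_sigma, surround_weight):
    """Difference-of-Gaussians receptive field profile.

    A larger surround_weight and a smaller center_sigma correspond to the
    adaptations predicted for RFs viewing the brighter sky.
    """
    center = np.exp(-x**2 / (2 * center_sigma**2))
    surround = surround_weight * np.exp(-x**2 / (2 * surround_sigma**2))
    return center - surround

x = np.linspace(-5, 5, 201)
# Hypothetical parameters: ground-facing RF (weak surround, wide center)
ground_rf = dog_rf(x, center_sigma=1.2, surround_sigma=3.0, surround_weight=0.2)
# Hypothetical parameters: sky-facing RF (strong surround, narrow center)
sky_rf = dog_rf(x, center_sigma=0.8, surround_sigma=3.0, surround_weight=0.5)
```

The stronger surround suppresses the sky-facing RF's peak response, consistent with prediction (i), while the narrower center implements prediction (ii).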


2021 ◽  
Author(s):  
Nana Shan ◽  
Henglu Wei ◽  
Wei Zhou ◽  
Zhemin Duan

In HEVC, the transform and quantization process produces a large number of blocks whose quantized coefficients are all zero. The coding time spent on transform and quantization can be greatly reduced by skipping these all-zero coefficient blocks. This paper proposes all-zero coefficient block detection algorithms for RDOQ. Stair-like thresholds, obtained through statistical analysis, speed up all-zero block detection for RDOQ and thereby improve coding efficiency. Experimental results show that the method reduces coding time by about 40% with negligible BDBR loss.
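The core idea — compare a cheap residual statistic against a QP-dependent, stair-like threshold and skip transform/quantization when the block is predicted to be all-zero — can be sketched as follows. The threshold values here are placeholders, not the statistically derived ones in the paper:

```python
import numpy as np

def is_all_zero_block(residual, qp, thresholds):
    """Predict whether a residual block would quantize to all zeros.

    `thresholds` maps each QP to a stair-like SAD threshold; a block whose
    sum of absolute residuals falls below it skips transform + RDOQ.
    """
    sad = np.abs(residual).sum()
    return sad < thresholds[qp]

# Hypothetical stair-like thresholds, growing with QP (coarser quantization
# zeroes out larger residuals).
thresholds = {22: 40, 27: 90, 32: 200, 37: 450}

block = np.zeros((4, 4))
print(is_all_zero_block(block, qp=27, thresholds=thresholds))  # flat block -> True
```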


2021 ◽  
Vol 17 (14) ◽  
pp. 135-153
Author(s):  
Haval Tariq Sadeeq ◽  
Thamer Hassan Hameed ◽  
Abdo Sulaiman Abdi ◽  
Ayman Nashwan Abdulfatah

Computer images consist of huge amounts of data and therefore require large memory space. A compressed image requires less memory and less transmission time. Image and video coding technology has evolved steadily in recent years. However, given the popularization of image and video acquisition systems, the growth rate of image data far exceeds the growth of compression ratios. It is generally accepted that further improving coding efficiency within the conventional hybrid coding framework is increasingly challenging. The deep convolutional neural network (CNN), which has revived neural networks and achieved significant success both in artificial intelligence and in signal processing, offers a new and exciting solution to image compression. In this paper we present a systematic, detailed, and current analysis of neural-network-based image compression techniques, tracing the evolution and growth of compression methods built on neural networks. In particular, end-to-end neural-network frameworks are reviewed, revealing fascinating explorations of frameworks and standards for next-generation image coding. The most important studies are highlighted, and future trends in neural-network-based image coding are envisaged.
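As a toy illustration of the end-to-end idea the survey covers — a learned analysis transform mapping an image to a compact code, then a synthesis transform reconstructing it — here is a minimal linear autoencoder forward pass in NumPy. The weights are random and untrained, so this shows only the data flow, not a working codec:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w_enc):
    # Analysis transform: project the patch to a low-dimensional code.
    return np.tanh(x @ w_enc)

def decode(code, w_dec):
    # Synthesis transform: map the code back to pixel space.
    return code @ w_dec

# 64-pixel patches compressed to an 8-dimensional code (8:1 dimensionality
# reduction); training would fit w_enc/w_dec to minimize reconstruction error.
w_enc = rng.normal(scale=0.1, size=(64, 8))
w_dec = rng.normal(scale=0.1, size=(8, 64))

patch = rng.random(64)
reconstruction = decode(encode(patch, w_enc), w_dec)
```

Real learned codecs replace the linear maps with deep CNNs and add quantization plus entropy coding of the code, but the encode/decode structure is the same.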


2021 ◽  
Author(s):  
Matthew F Tang ◽  
Ehsan Kheradpezhouh ◽  
Conrad CY Lee ◽  
J Edwin Dickinson ◽  
Jason B Mattingley ◽  
...  

The efficiency of sensory coding is affected both by past events (adaptation) and by expectation of future events (prediction). Here we employed a novel visual stimulus paradigm to determine whether expectation influences orientation selectivity in the primary visual cortex. We used two-photon calcium imaging (GCaMP6f) in awake mice viewing visual stimuli with different levels of predictability. The stimuli consisted of sequences of grating stimuli that randomly shifted in orientation or systematically rotated with occasionally unexpected rotations. At the single neuron and population level, there was significantly enhanced orientation-selective response to unexpected visual stimuli through a boost in gain, which was prominent in awake mice but also present to a lesser extent under anesthesia. We implemented a computational model to demonstrate how neuronal responses were best characterized when adaptation and expectation parameters were combined. Our results demonstrated that adaptation and prediction have unique signatures on activity of V1 neurons.


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Jinchao Zhao ◽  
Yihan Wang ◽  
Qiuwen Zhang

With the development of broadband networks and high-definition displays, people have higher expectations for video quality, which brings new requirements and challenges to video coding technology. Compared with H.265/High Efficiency Video Coding (HEVC), the latest video coding standard, Versatile Video Coding (VVC), saves about 50% bit rate while maintaining the same subjective quality, but at the cost of extremely high encoding complexity. To reduce this complexity, a fast coding unit (CU) size decision method based on Just Noticeable Distortion (JND) and deep learning is proposed in this paper. Specifically, a hybrid JND threshold model is first designed to distinguish smooth, normal, and complex regions. Then, if a CU belongs to a complex region, Ultra-Spherical SVM (US-SVM) classifiers are trained to forecast the best splitting mode. Experimental results illustrate that the proposed method saves about 52.35% of coding runtime, achieving a trade-off between computational burden and coding efficiency compared with the latest methods.
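The first stage — routing a CU by texture activity against JND-style thresholds so that only complex regions reach the SVM classifier — can be sketched as below. The activity measure (block variance) and the threshold values are illustrative stand-ins, not the paper's hybrid JND model:

```python
import numpy as np

def classify_region(block, jnd_low, jnd_high):
    """Classify a CU by texture activity against two JND-style thresholds.

    Smooth CUs can terminate splitting early; only "complex" CUs would be
    handed to the US-SVM split-mode predictor in the proposed method.
    """
    activity = block.var()  # simple proxy for perceptual texture complexity
    if activity < jnd_low:
        return "smooth"
    if activity < jnd_high:
        return "normal"
    return "complex"

flat_cu = np.full((8, 8), 128.0)
print(classify_region(flat_cu, jnd_low=5.0, jnd_high=50.0))  # -> smooth
```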


Author(s):  
Wei Jia ◽  
Li Li ◽  
Zhu Li ◽  
Xiang Zhang ◽  
Shan Liu

The block-based coding structure in the hybrid video coding framework inevitably introduces compression artifacts such as blocking and ringing. To compensate for these artifacts, extensive filtering techniques have been proposed within the loop of video codecs, capable of boosting the subjective and objective quality of reconstructed videos. Recently, neural-network-based filters have been presented, drawing on the power of deep learning over large amounts of data. Although coding efficiency has been improved over traditional methods in High Efficiency Video Coding (HEVC), the rich features and information generated by the compression pipeline have not been fully utilized in the design of neural networks. Therefore, in this article, we propose the Residual-Reconstruction-based Convolutional Neural Network (RRNet) to further improve coding efficiency, where compression features induced from the bitstream, in the form of the prediction residual, are fed into the network as an additional input alongside the reconstructed frame. In essence, the residual signal provides valuable information about block partitions and can aid the reconstruction of edge and texture regions in a picture, so more adaptive parameters can be trained to handle different texture characteristics. The experimental results show that the proposed RRNet approach delivers significant BD-rate savings compared to HEVC and state-of-the-art CNN-based schemes, indicating that the residual signal plays a significant role in enhancing video frame reconstruction.
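The dual-input design — filtering the reconstructed frame and its prediction residual separately, fusing the features, and adding the result back to the frame — can be sketched in plain NumPy. This is a toy stand-in, not RRNet itself: real filters use many trained convolutional layers rather than one hand-set kernel per input:

```python
import numpy as np

def conv3x3(x, kernel):
    """Valid (no padding) 3x3 convolution over a single-channel image."""
    h, w = x.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (x[i:i + 3, j:j + 3] * kernel).sum()
    return out

def residual_aware_filter(recon, residual, k_recon, k_res):
    """Fuse the reconstructed frame with its prediction residual.

    Each input gets its own filter; the feature maps are summed and added
    back to the (cropped) reconstruction, mirroring a residual-learning
    in-loop filter with an auxiliary residual input.
    """
    features = conv3x3(recon, k_recon) + conv3x3(residual, k_res)
    return recon[1:-1, 1:-1] + features
```

With zero kernels the filter is an identity on the cropped frame; training would shape `k_recon` and `k_res` so the residual branch highlights block boundaries and textured regions.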


2021 ◽  
Vol 17 (11) ◽  
pp. e1009566
Author(s):  
René Larisch ◽  
Lorenz Gönner ◽  
Michael Teichmann ◽  
Fred H. Hamker

Visual stimuli are represented by a highly efficient code in the primary visual cortex, but the development of this code is still unclear. Two distinct factors control coding efficiency: Representational efficiency, which is determined by neuronal tuning diversity, and metabolic efficiency, which is influenced by neuronal gain. How these determinants of coding efficiency are shaped during development, supported by excitatory and inhibitory plasticity, is only partially understood. We investigate a fully plastic spiking network of the primary visual cortex, building on phenomenological plasticity rules. Our results suggest that inhibitory plasticity is key to the emergence of tuning diversity and accurate input encoding. We show that inhibitory feedback (random and specific) increases the metabolic efficiency by implementing a gain control mechanism. Interestingly, this leads to the spontaneous emergence of contrast-invariant tuning curves. Our findings highlight (1) that interneuron plasticity is key to the development of tuning diversity and (2) that efficient sensory representations are an emergent property of the resulting network.


2021 ◽  
Author(s):  
Jan Homann ◽  
Hyewon Kim ◽  
David W Tank ◽  
Michael J Berry

A notable feature of neural activity is sparseness - namely, that only a small fraction of neurons in a local circuit have high activity at any moment. Not only is sparse neural activity observed experimentally in most areas of the brain, but sparseness has been proposed as an optimization or design principle for neural circuits. Sparseness can increase the energy efficiency of the neural code as well as allow for beneficial computations to be carried out. But how does the brain achieve sparseness? Here, we found that when neurons in the primary visual cortex were passively exposed to a set of images over several days, neural responses became more sparse. Sparsification was driven by a decrease in the response of neurons with low or moderate activity, while highly active neurons retained similar responses. We also observed a net decorrelation of neural activity. These changes sculpt neural activity for greater coding efficiency.
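Population sparseness of the kind discussed above is commonly quantified with the Treves-Rolls measure, where 0 means all neurons fire equally and 1 means a single neuron carries all the activity. A minimal sketch (a standard metric, not necessarily the exact one used in this study):

```python
import numpy as np

def treves_rolls_sparseness(rates):
    """Treves-Rolls population sparseness, normalized to [0, 1].

    0 -> perfectly uniform firing across the population;
    1 -> all activity concentrated in a single neuron.
    Assumes `rates` are non-negative and not all zero.
    """
    rates = np.asarray(rates, dtype=float)
    n = rates.size
    a = rates.mean() ** 2 / (rates ** 2).mean()
    return (1.0 - a) / (1.0 - 1.0 / n)

print(treves_rolls_sparseness([5.0, 0.0, 0.0, 0.0]))  # single active neuron -> 1.0
```

Under this metric, the reported sparsification — weakly active neurons decreasing their responses while highly active neurons keep theirs — moves the population toward 1.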


2021 ◽  
Author(s):  
Nana Shan ◽  
Henglu Wei ◽  
Wei Zhou ◽  
Zhemin Duan

HEVC adopts transform and quantization, a process that produces many all-zero coefficient blocks. By detecting these blocks in advance, the complexity of transform and quantization can be greatly reduced. All-zero coefficient blocks under a uniform quantizer can be efficiently detected by comparing the float quantization level of the estimated coefficients against an explicit threshold. Experimental results show that about 50% of the transform and quantization complexity for the uniform quantizer can be removed with negligible loss of video coding efficiency.
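The detection criterion — a coefficient quantizes to zero under a uniform quantizer when its float quantization level falls below the rounding threshold, and a block is all-zero when its largest level does — can be sketched as follows. The 0.5 rounding threshold is the generic uniform-quantizer value, used here as an illustrative stand-in for the paper's explicit threshold:

```python
def block_quantizes_to_zero(coeffs, qstep, threshold=0.5):
    """Detect an all-zero block under a uniform quantizer.

    The float quantization level of a coefficient c is |c| / qstep; if even
    the largest level in the block stays below `threshold`, every coefficient
    rounds to zero and transform/quantization can be skipped.
    """
    max_level = max(abs(c) for c in coeffs) / qstep
    return max_level < threshold

print(block_quantizes_to_zero([1.0, -2.0, 3.0], qstep=10.0))  # levels <= 0.3 -> True
```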


