No-reference cross-layer video quality estimation model over wireless networks

Author(s):  
Yan Yang
Author(s):
Saman Zadtootaghaj, Nabajeet Barman, Rakesh Rao Ramachandra Rao, Steve Göring, Maria G. Martini, ...

Author(s):
Martin Fleury, Rouzbeh Razavi, Laith Al-Jobouri, Salah M. Saleh Al-Majeed, Mohammed Ghanbari

Because of the impact of noise, interference, fading, and shadowing in a wireless network, there has been a growing realization that the strict layering of wireline networks may be unsuitable for wireless. It is this volatility over time that demands an adaptive solution, and adaptation must be grounded in communicating the channel conditions along with the data-link settings. Video communication is particularly vulnerable because, except when reception is decoupled from distribution as in multimedia messaging, real-time decode and display deadlines must be met. The predictive nature of video compression also makes it susceptible to temporal error propagation. In this chapter, case studies from the authors' experiences with broadband wireless access networks and personal area wireless networks illustrate how information exchange across the layers can benefit received video quality. These schemes are all adaptive and represent a small sample of a much larger population of cross-layer techniques. Given the importance of multimedia communications as an engine of growth for networked communication, "cross-layer" should be the first consideration in designing a video application.
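As a minimal sketch of the adaptation loop such schemes share, the snippet below adjusts a video encoder's target bitrate from a packet error rate reported up by the data-link layer. The function name, thresholds, and AIMD-style policy are illustrative assumptions, not a scheme from the chapter.

```python
def adapt_bitrate(current_kbps, packet_error_rate,
                  min_kbps=250, max_kbps=2000):
    """Toy cross-layer rate control: back off multiplicatively when the
    data-link layer reports a degrading channel, probe upward additively
    when it is clean. All thresholds here are invented for illustration."""
    if packet_error_rate > 0.05:      # channel degrading: cut rate quickly
        target = current_kbps * 0.7
    elif packet_error_rate < 0.01:    # channel clean: probe upward slowly
        target = current_kbps + 50
    else:                             # ambiguous region: hold steady
        target = current_kbps
    return max(min_kbps, min(max_kbps, target))
```

A real scheme would also weigh SNR, queue occupancy, and FEC/ARQ settings, but the asymmetry of cautious increase and rapid decrease is the common thread.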


Author(s):
Jose Joskowicz, J. Carlos López Ardao, Rafael Sotelo

In this paper we present an enhancement to the video quality estimation model described in ITU-T Recommendation G.1070, "Opinion model for video-telephony applications", to include the impact of video content for different display sizes and codecs. The enhancement yields a much better fit between the model output and perceptual MOS values for a wide range of video contents. The sum of absolute differences (SAD) is used as an estimate of the video's spatio-temporal activity and is included as a new parameter in the model. The results are based on more than 1500 processed video clips, coded in MPEG-2 and H.264/AVC, at bit rates ranging from 50 kb/s to 12 Mb/s, in SD, VGA, CIF, and QCIF display formats.
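The abstract does not give the exact SAD formulation or normalization used in the model; as a rough illustration, a common proxy for spatio-temporal activity is the per-pixel SAD between consecutive luma frames, averaged over the clip. The function below is a hypothetical sketch under that assumption.

```python
import numpy as np

def clip_sad(luma_frames):
    """Average per-pixel SAD between consecutive luma frames of a clip.

    luma_frames: sequence of 2-D uint8 arrays, all the same shape.
    Returns a rough spatio-temporal activity score (illustrative only;
    the normalization in the paper's model may differ).
    """
    frames = [np.asarray(f, dtype=np.int32) for f in luma_frames]
    if len(frames) < 2:
        raise ValueError("need at least two frames")
    diffs = [np.abs(b - a).sum() for a, b in zip(frames, frames[1:])]
    return sum(diffs) / (len(diffs) * frames[0].size)
```

High-motion, high-detail content scores high on such a measure and tends to need more bits for the same MOS, which is why an activity term can correct a content-blind parametric model.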

