Optimization of Video Cloud Gaming Using Fast HEVC Video Compression Technique

Author(s):  
Mosa Salah ◽  
Ahmad A. Mazhar ◽  
Manar Mizher

Cloud computing is a technology model that offers access to system resources with an advanced level of service capability. These resources are considered reliable, flexible and affordable for many kinds of applications and users. The gaming industry is one field that exploits the benefits of cloud computing, and numerous new cloud gaming designs have been presented. The many advantages of cloud gaming have amplified its success over traditional online gaming. However, cloud gaming suffers from several drawbacks, such as the massive amount of video processing needed and the computational complexity this requires. This paper presents the drawbacks of the original system and develops a new algorithm that speeds up the encoding process, reducing computational complexity by exploiting block type and location. Enhancements to the video codec led to a 12.2% speed-up in overall encoding time with only a slight loss in user satisfaction.

Keywords: Cloud gaming, Computational complexity, Motion estimation, HEVC, Video Encoding
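The abstract does not give the authors' actual algorithm, but the general idea of exploiting block type (flat vs. textured) and location (border vs. interior) to cut motion-search work can be sketched as follows. All function names, thresholds and the activity measure here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def block_activity(block):
    """Mean absolute deviation; a low value suggests a flat, static block."""
    return float(np.mean(np.abs(block - block.mean())))

def choose_search_range(block, x, y, frame_w, frame_h, full_range=16):
    """Illustrative policy: shrink the motion-search range for flat blocks
    and for blocks touching the frame border, where full search pays off least."""
    if block_activity(block) < 2.0:          # nearly uniform block
        return 0                             # reuse the zero / merge candidate
    at_border = (x == 0 or y == 0 or
                 x + block.shape[1] >= frame_w or
                 y + block.shape[0] >= frame_h)
    return full_range // 2 if at_border else full_range

flat = np.full((8, 8), 100.0)
print(choose_search_range(flat, 0, 0, 64, 64))  # flat block -> range 0
```

Skipping or shortening the search for such blocks is one common way encoder speed-ups of this magnitude are obtained without a large quality penalty.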

2018 ◽  
Vol 8 (1) ◽  
pp. 38-56 ◽  
Author(s):  
Shailesh D. Kamble ◽  
Sonam T. Khawase ◽  
Nileshsingh V. Thakur ◽  
Akshay V. Patharkar

Motion estimation has traditionally been used only in video encoding; however, it can also be used to solve various real-life problems, and researchers from different fields are now turning towards it. Motion estimation is an important problem in many video applications. It is a key part of video compression, providing improved bit-rate reduction and coding efficiency. Motion estimation improves compression quality, and fast variants also reduce computation time. Block-based motion estimation algorithms are used because they require less memory when processing a video file and reduce the complexity of the computations involved. In this article, various block-matching motion estimation algorithms are discussed, such as Full Search (FS, also called Exhaustive Search), Three-Step Search (TSS), New Three-Step Search (NTSS), Four-Step Search (FSS) and Diamond Search (DS).
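The Three-Step Search mentioned above can be sketched in a few lines: starting from a large step size, the best of nine candidate displacements is kept and the step is halved until it reaches one. The sum of absolute differences (SAD) cost and the toy frames below are illustrative.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences matching cost."""
    return float(np.sum(np.abs(a.astype(np.int64) - b.astype(np.int64))))

def three_step_search(ref, cur, bx, by, n=8, step=4):
    """Motion vector (dy, dx) for the n x n block of `cur` at (bx, by)."""
    block = cur[by:by + n, bx:bx + n]
    best = (0, 0)
    while step >= 1:
        cy, cx = best
        candidates = [(cy + dy, cx + dx)
                      for dy in (-step, 0, step) for dx in (-step, 0, step)]
        costs = []
        for dy, dx in candidates:
            y, x = by + dy, bx + dx
            if 0 <= y <= ref.shape[0] - n and 0 <= x <= ref.shape[1] - n:
                costs.append((sad(block, ref[y:y + n, x:x + n]), (dy, dx)))
        best = min(costs)[1]        # keep the cheapest candidate
        step //= 2                  # refine: 4 -> 2 -> 1
    return best

ref = np.zeros((32, 32))
ref[10:18, 12:20] = 255             # an 8x8 bright patch
cur = np.zeros((32, 32))
cur[12:20, 14:22] = 255             # same patch shifted by (+2, +2)
print(three_step_search(ref, cur, 14, 12))  # -> (-2, -2)
```

Note the cost: TSS evaluates about 25 candidates here, against 81 for an exhaustive search of the same ±4 window, which is exactly the memory/computation advantage the article attributes to fast block-matching methods.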


In video codecs, researchers focus mainly on improving compression performance, to achieve higher compression rates and to obtain high-quality video after encoding at low bitrates. A great deal of satisfactory research has been done in the field of video encoders. The newly introduced HEVC, or H.265, is a high-efficiency video coding standard that roughly doubles video quality at a similar bitrate compared with its predecessor codecs. In this research work, we focus mainly on the performance and quality of Motion JPEG, H.264 and H.265 using different video encoding libraries. High efficiency is required in video compression to handle computationally complex video codecs. Although HEVC is more efficient at video compression, its computational cost is significantly higher than that of H.264. In the experiments conducted, HEVC showed the best compression quality, better than H.264. Motion JPEG required very little encoding time but generated the worst encoded video quality when using the OpenJPEG library. The encoding speed of H.264 was the slowest among the tested encoders, but it usually generated better video quality than Motion JPEG (Kakadu) encoded videos. This paper focuses on video codecs and their future development.
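Quality comparisons of the kind described above are commonly reported with PSNR between the original and the decoded frame; a minimal sketch (the synthetic frames are illustrative, not the paper's test material):

```python
import numpy as np

def psnr(original, decoded, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit frames."""
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

frame = np.random.default_rng(0).integers(0, 256, (64, 64))
noisy = np.clip(frame + np.random.default_rng(1).normal(0, 5, frame.shape), 0, 255)
print(round(psnr(frame, noisy), 1))  # higher dB = closer to the original
```

A codec that "shows best quality" in such an experiment is simply the one whose decoded frames score the highest PSNR (or a perceptual metric) at a given bitrate.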


Complexity ◽  
2017 ◽  
Vol 2017 ◽  
pp. 1-12
Author(s):  
Sima Ahmadpour ◽  
Tat-Chee Wan ◽  
Zohreh Toghrayee ◽  
Fariba HematiGazafi

Designing an effective, high-performance network requires accurate characterization and modeling of network traffic. Models of video frame sizes are normally applied in simulation studies, in mathematical analysis, and in generating streams for testing and compliance purposes. Moreover, video traffic is expected to be a major source of multimedia traffic in future heterogeneous networks, so the statistical distribution of video data can be used as input for network performance modeling. The contribution of this paper comprises the theoretical definition of the distributions that appear relevant to the video trace in terms of its statistical properties, and the identification of the best-fitting distribution using both a graphical method and a hypothesis test. The data set used in this article consists of layered video traces generated with the Scalable Video Coding (SVC) compression technique from three different movies.
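The fitting workflow described above, estimating a candidate distribution's parameters from frame sizes and then checking the fit with a hypothesis test, can be sketched as follows. The gamma model and the synthetic "frame sizes" are illustrative assumptions, not the paper's traces or its chosen distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
frame_sizes = rng.gamma(shape=3.0, scale=500.0, size=2000)  # stand-in trace (bytes)

# Fit by maximum likelihood, then test the fitted model with
# a Kolmogorov-Smirnov goodness-of-fit test.
shape, loc, scale = stats.gamma.fit(frame_sizes, floc=0)
statistic, p_value = stats.kstest(frame_sizes, "gamma", args=(shape, loc, scale))

print(f"fitted shape={shape:.2f}, scale={scale:.1f}, KS p-value={p_value:.3f}")
```

A large p-value means the test does not reject the fitted distribution; the graphical check mentioned in the abstract would typically be a Q-Q plot of the same fit.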


2018 ◽  
Vol 1 (2) ◽  
pp. 17-23
Author(s):  
Takialddin Al Smadi

This survey outlines the use of computer vision in image and video processing across multidisciplinary applications, in both academia and industry, which are active in this field. The scope of this paper covers theoretical and practical aspects of image and video processing, in addition to computer vision, from essential research to the evolution of applications. Various subjects of image processing and computer vision are demonstrated, spanning from the evolution of mobile augmented reality (MAR) applications to augmented reality for 3D modeling and real-time depth imaging. Video processing algorithms for achieving higher-depth video compression are discussed. On the mobile platform, an automatic computer vision system for citrus fruit has been implemented, along with Bayesian classification with boundary growing to detect text in video scenes. The paper also illustrates the usability of a hand-held interactive method for a portable projector based on augmented reality. © 2018 JASET, International Scholars and Researchers Association


2020 ◽  
Vol 15 (2) ◽  
pp. 144-196 ◽  
Author(s):  
Mohammad R. Khosravi ◽  
Sadegh Samadi ◽  
Reza Mohseni

Background: Real-time video coding is a very interesting area of research with extensive applications in remote sensing and medical imaging. Many research works and multimedia standards have been developed for this purpose. Some processing ideas in the area focus on second-step (additional) compression of videos coded by existing standards such as MPEG 4.14. Materials and Methods: In this article, several techniques with different complexity orders are evaluated for the video compression problem. All compared techniques are based on interpolation algorithms in the spatial domain. In detail, the data are acquired using four interpolators of differing computational complexity: the fixed weights quartered interpolation (FWQI) technique, and the Nearest Neighbor (NN), Bi-Linear (BL) and Cubic Convolution (CC) interpolators. They are used to compress HD color videos in real-time applications, real frames of video synthetic aperture radar (video SAR or ViSAR), and a high-resolution medical sample. Results: Comparative results are described for three metrics, including two reference-based Quality Assessment (QA) measures and an edge-preservation factor, to give a general picture of the various dimensions of the problem. Conclusion: The comparisons show a clear trade-off among the techniques between similarity to a reference, preservation of high-frequency edge information, and low computational complexity.
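The interpolation-based compression idea can be sketched as storing only every other pixel and reconstructing the full frame by interpolation; the NN and BL interpolators mentioned above are shown below (FWQI and cubic convolution follow the same pattern at higher cost). The toy gradient frame is illustrative, not the paper's data.

```python
import numpy as np

def downsample(frame):
    """Keep one pixel per 2x2 block; this is the 'compressed' payload."""
    return frame[::2, ::2]

def upsample_nn(small, shape):
    """Nearest-neighbor reconstruction: repeat each stored pixel."""
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)[:shape[0], :shape[1]]

def upsample_bl(small, shape):
    """Bilinear sketch: stored pixels sit on the even grid; odd samples
    are averaged from their neighbors, and edges are replicated."""
    out = np.zeros(shape)
    out[::2, ::2] = small
    out[1:-1:2, ::2] = 0.5 * (out[0:-2:2, ::2] + out[2::2, ::2])
    out[-1, ::2] = out[-2, ::2]                   # replicate last row
    out[:, 1:-1:2] = 0.5 * (out[:, 0:-2:2] + out[:, 2::2])
    out[:, -1] = out[:, -2]                       # replicate last column
    return out

frame = np.add.outer(np.arange(8.0), np.arange(8.0))  # smooth gradient frame
small = downsample(frame)
err_nn = np.mean(np.abs(upsample_nn(small, frame.shape) - frame))
err_bl = np.mean(np.abs(upsample_bl(small, frame.shape) - frame))
print(err_nn, err_bl)  # bilinear tracks the gradient more closely
```

On this smooth frame the bilinear reconstruction error is a quarter of the nearest-neighbor error at roughly double the arithmetic, a small-scale instance of the quality/complexity trade-off the paper reports.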


2016 ◽  
Vol 26 (04) ◽  
pp. 1750054
Author(s):  
M. Kiruba ◽  
V. Sumathy

The Discrete Cosine Transform (DCT) structure plays a significant role in signal processing applications such as image and video processing. In traditional hardware designs, the 8-point DCT architecture contains a large number of logic slices, as well as a number of multipliers to update the weights; this leads to high area consumption and power dissipation. To mitigate these drawbacks, this paper presents a novel Hierarchical-Based Expression (HBE) Multiple Constant Multiplication (MCM) multiplier architecture for the 8-point DCT structure used in video CODEC applications. The proposed work involves a modified data-path architecture and a Floating Point Processing Element (FPPE) architecture. The proposed multiplier and DCT architectures require a minimum number of components compared with the traditional DCT method. The HBE-MCM multiplier architecture consists of shifters and adders, so the numbers of Flip-Flops (FFs) and Look-Up Tables (LUTs) used in the proposed architecture are reduced, and power consumption falls because the components are smaller. The design is described in Verilog and implemented on a Field Programmable Gate Array (FPGA). The performance of the proposed architecture is evaluated against the traditional DCT architecture in terms of number of FFs, number of LUTs, area, power, delay and speed.
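For reference, the transform such hardware computes is the standard 8-point DCT-II below; an MCM design like the one described realizes the multiplications by the constant cosine coefficients with shift-and-add networks instead of general multipliers. This is a software sketch of the mathematics, not of the proposed architecture.

```python
import numpy as np

N = 8
# Orthonormal 8-point DCT-II basis matrix C, so that X = C @ x.
k = np.arange(N).reshape(-1, 1)          # frequency index
n = np.arange(N).reshape(1, -1)          # sample index
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] /= np.sqrt(2.0)                  # DC row normalization

def dct8(x):
    """8-point DCT-II of a length-8 vector."""
    return C @ x

x = np.ones(N)                           # a flat (DC-only) input
X = dct8(x)
print(np.round(X, 6))                    # all energy lands in coefficient 0
```

The 8 x 8 matrix `C` has only a handful of distinct constant magnitudes, which is exactly what makes multiple-constant-multiplication sharing with shifters and adders effective.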


Author(s):  
Md Mamunur Rashid

Image processing in multimedia applications treats a number of critical topics in multimedia systems with respect to image and video processing techniques and their implementations. These techniques include image and video compression techniques and standards, and image and video indexing and retrieval techniques. Image processing is an important tool in multimedia system design.


Author(s):  
Hanadi Yahya Darwisho

In view of the increasing use of mobile devices and the high demand for the applications they provide, including video streaming, most companies have turned their attention to mobile ad hoc networks and to solutions for the problems and obstacles that hinder video transmission over this type of network. One class of solutions operates at the level of video compression technology: many standards used in video encoding provide good video quality using few bits in the coding process, offer acceptable bandwidth use, and handle errors flexibly; the standard considered here is "H.264/MPEG-4 Part 10". Another solution operates at the routing level during real-time video transmission over mobile ad hoc networks. This paper therefore studies the OLSR routing protocol, which supports video transmission, and evaluates its performance in terms of delay, network load and throughput while varying the node placement model, in both a large network and a small network, and for different video resolutions.


Author(s):  
Matheus Silva Gonçalves ◽  
Felipe Carraro ◽  
Rafael Holdorf Lopez

Bridge weigh-in-motion (BWIM) consists of using sensors on bridges to assess the loads of passing vehicles. Probabilistic bridge weigh-in-motion (pBWIM) is an approach to the inverse problem of finding vehicle axle weights from deformation information. The pBWIM approach uses a probabilistic influence line and seeks the most probable axle weights given in-situ measurements. To compute these weights, the original pBWIM employed a grid search, which can lead to high computational complexity, especially for vehicles with many axles. Hence, this note presents an improved version of pBWIM that modifies how the most probable weights are sought: a gradient-based optimization procedure replaces the computationally expensive grid search of the original algorithm. The required gradients are fully derived and validated in numerical examples. The proposed modification is shown to greatly decrease the computational complexity of the problem.
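The replacement of grid search by gradient-based optimization can be sketched on a simplified least-squares version of the problem. The triangular influence line, sensor layout and noise model below are illustrative assumptions, not the pBWIM formulation or its probabilistic influence line.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Illustrative setup: the measured strain at each bridge coordinate is the
# sum over axles of (axle weight) x (influence line shifted to that axle).
positions = np.linspace(0.0, 30.0, 60)            # sensor coordinates (m)

def influence(x):
    """Toy triangular influence line peaking at mid-span."""
    return np.maximum(0.0, 1.0 - np.abs(x - 15.0) / 15.0)

axle_offsets = np.array([0.0, 4.0])               # two axles, 4 m apart
true_weights = np.array([60.0, 80.0])             # kN, to be recovered
A = np.stack([influence(positions - d) for d in axle_offsets], axis=1)
measured = A @ true_weights + rng.normal(0.0, 0.5, positions.size)

# Most probable weights = minimizer of the squared misfit. Its gradient is
# available in closed form, so no grid over candidate weights is needed.
def misfit(w):
    r = A @ w - measured
    return 0.5 * float(r @ r)

def grad(w):
    return A.T @ (A @ w - measured)

res = minimize(misfit, x0=np.zeros(2), jac=grad, method="BFGS")
print(np.round(res.x, 1))  # close to the true axle weights
```

A grid search over this two-axle weight space already needs thousands of cost evaluations, and the count grows exponentially with axle number, while the gradient-based solver converges in a handful of iterations regardless; this is the complexity reduction the note describes.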

