Video encoding and transcoding using machine learning

Author(s):
Gerardo Fernandez Escribano
Rashid Jillani
Christopher Holder
Hari Kalva
Jose Luis Martinez Martinez
...

Author(s):
Yousef O. Sharrab
Mohammad Alsmirat
Bilal Hawashin
Nabil Sarhan

Machine learning approaches have driven advances in prediction models across a variety of fields, and such modeling is especially valuable for feature engineering. In this research, we show how machine learning can save time in research experiments: we save more than five thousand hours of measuring the energy consumption of video encoding. Because energy consumption must be measured manually, and because more than eleven thousand experiments would be needed to cover all combinations of video sequences, video bit rates, and video encoding settings, we use machine learning to model the energy consumption with linear regression. The VP8 codec was offered by Google as an open video encoder in an effort to replace the popular MPEG-4 Part 10 (H.264/AVC) video encoding standard. This research models energy consumption and describes the major differences between the H.264/AVC and VP8 encoders in terms of energy consumption and performance through experiments based on machine learning modeling. Twenty-nine raw video sequences are used, offering a wide range of resolutions and contents, with frame sizes ranging from QCIF (176x144) to 2160p (3840x2160). For a fair comparison, we use seven settings in the VP8 encoder and fifteen tunings in H.264/AVC, covering a range of video qualities. The performance metrics include video quality, encoding time, and encoding energy consumption.
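As a rough illustration of the modeling approach described above, the sketch below fits a linear regression that predicts encoding energy from resolution, bit rate, and an encoder-setting code. This is not the authors' code: the feature set and the synthetic data are assumptions for demonstration only; the real model would be trained on the measured energy values from the physical experiments.

```python
# Minimal sketch (not the authors' code) of modeling encoding energy
# consumption with linear regression. Features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500

# Hypothetical predictors: frame size in pixels, bit rate (kbps),
# and an integer code for the encoder setting/tuning.
pixels = rng.choice([176 * 144, 1280 * 720, 1920 * 1080, 3840 * 2160], size=n)
bit_rate = rng.uniform(200, 20000, size=n)
setting = rng.integers(0, 7, size=n)

X = np.column_stack([pixels, bit_rate, setting])
# Synthetic stand-in for measured energy (joules); real values would come
# from the thousands of physical measurements described in the abstract.
y = 1e-5 * pixels + 0.02 * bit_rate + 5.0 * setting + rng.normal(0, 10, size=n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))
```

Once fitted, such a model can predict the energy for encoder/bit-rate/sequence combinations that were never physically measured, which is how the time savings quoted above would be realized.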


2020, Vol. 43
Author(s):  
Myrthe Faber

Abstract: Gilead et al. state that abstraction supports mental travel, and that mental travel critically relies on abstraction. I propose an important addition to this theoretical framework, namely that mental travel might also support abstraction. Specifically, I argue that spontaneous mental travel (mind wandering), much like data augmentation in machine learning, provides variability in mental content and context necessary for abstraction.


2020
Author(s):  
Mohammed J. Zaki ◽  
Wagner Meira, Jr

2020
Author(s):  
Marc Peter Deisenroth ◽  
A. Aldo Faisal ◽  
Cheng Soon Ong

Author(s):  
Lorenza Saitta ◽  
Attilio Giordana ◽  
Antoine Cornuejols
