DASH

2014 ◽  
pp. 1432-1449
Author(s):  
Lejla Rovcanin ◽  
Gabriel-Miro Muntean

Multimedia streaming has major commercial potential, as the global community of online video viewers is expanding rapidly following the proliferation of low-cost multimedia-enabled mobile devices. These devices enable increasing amounts of video-based content to be acquired, stored, and distributed across existing best-effort networks that also carry other traffic types. Although a number of protocols are used for video transfer, a significant portion of Internet streaming media is currently delivered over the Hypertext Transfer Protocol (HTTP). Network congestion is one of the most important issues affecting network traffic in general and video content delivery in particular. Among the various solutions proposed, adaptive delivery of content according to the available network bandwidth has been very successful. In this context, the most recent standardisation efforts have focused on the introduction of the Dynamic Adaptive Streaming over HTTP (DASH) (ISO, 2012) standard. DASH offers support for client-based bitrate adaptation of video streams, but as it does not prescribe any particular adaptation mechanism, it relies on third-party solutions to complement it. This chapter provides an overview of the DASH standard and presents a short survey of currently proposed DASH-related video adaptation mechanisms. It also introduces the DASH-aware Performance-Oriented Adaptation Agent (dPOAA), which improves user Quality of Experience (QoE) levels by dynamically selecting the best-performing sources for the delivery of video content. In doing so, dPOAA considers the characteristics of the network links connecting clients with video providers. dPOAA can be utilised as a DASH player plugin or in conjunction with the DASH-based performance-oriented Adaptive Video Distribution solution (DAV) (Rovcanin & Muntean, 2013), which considers local network characteristics, the quantity of requested content available locally, and device and user profiles.
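
Since DASH deliberately leaves the adaptation logic to the client, a common third-party approach is throughput-based representation selection. The minimal Python sketch below illustrates that idea under stated assumptions (the bitrate ladder, smoothing factor, and safety margin are illustrative values; it does not reproduce dPOAA's source-ranking model):

```python
# Sketch of client-side DASH bitrate adaptation (illustrative parameters;
# dPOAA's actual source-selection model is more elaborate than this).

# Available representations advertised in the MPD, in bits per second.
REPRESENTATIONS = [250_000, 500_000, 1_000_000, 2_500_000, 5_000_000]

def estimate_throughput(samples: list[float], alpha: float = 0.3) -> float:
    """Exponentially weighted moving average of per-segment throughputs."""
    estimate = samples[0]
    for sample in samples[1:]:
        estimate = alpha * sample + (1 - alpha) * estimate
    return estimate

def select_representation(throughput_bps: float, safety: float = 0.8) -> int:
    """Pick the highest bitrate that fits within a safety margin of throughput."""
    usable = throughput_bps * safety
    candidates = [r for r in REPRESENTATIONS if r <= usable]
    return max(candidates) if candidates else min(REPRESENTATIONS)

if __name__ == "__main__":
    recent = [3_500_000, 3_200_000, 3_400_000]  # bps, measured per segment
    print(select_representation(estimate_throughput(recent)))  # -> 2500000
```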


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3515
Author(s):  
Sung-Ho Sim ◽  
Yoon-Su Jeong

As IoT technologies have developed rapidly in recent years, most IoT data processing has focused on monitoring and control. However, the cost of collecting and linking diverse IoT data keeps increasing, which calls for the ability to proactively integrate and analyze collected IoT data so that cloud servers (data centers) can process it smartly. In this paper, we propose a blockchain-based IoT big data integrity verification technique to ensure the safety of the Third Party Auditor (TPA), which is responsible for auditing the integrity of AIoT data. The proposed technique aims to minimize IoT information loss by grouping information and signature keys from IoT devices into multiple blockchains. It effectively guarantees the integrity of AIoT data by linking the hash values of arbitrary, constant-size blocks with previous blocks in hierarchical chains. The technique performs synchronization between the central server and IoT devices using location information, so that the integrity of IoT information can be managed at low cost. In order to easily control the locations of a large number of IoT devices, we perform cross-distributed and blockchain linkage processing under constant rules to reduce the load and improve the throughput generated by IoT devices.
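
A minimal sketch of the hash-chaining idea underlying such a scheme, assuming SHA-256 and a fixed block size (field names and parameters are illustrative, not the paper's implementation):

```python
# Constant-size data blocks linked by hashing each block together with its
# predecessor's hash: tampering with any block breaks the chain downstream.
import hashlib
from dataclasses import dataclass

BLOCK_SIZE = 64  # constant block size in bytes (illustrative value)

@dataclass
class Block:
    index: int
    data: bytes
    prev_hash: str
    hash: str

def block_hash(index: int, data: bytes, prev_hash: str) -> str:
    return hashlib.sha256(f"{index}{prev_hash}".encode() + data).hexdigest()

def build_chain(payload: bytes) -> list[Block]:
    """Split an IoT payload into constant-size blocks and chain their hashes."""
    chain, prev = [], "0" * 64  # genesis predecessor
    for i in range(0, len(payload), BLOCK_SIZE):
        data = payload[i:i + BLOCK_SIZE]
        h = block_hash(len(chain), data, prev)
        chain.append(Block(len(chain), data, prev, h))
        prev = h
    return chain

def verify_chain(chain: list[Block]) -> bool:
    """Recompute every hash; any tampered block invalidates the chain."""
    prev = "0" * 64
    for blk in chain:
        if blk.prev_hash != prev or block_hash(blk.index, blk.data, prev) != blk.hash:
            return False
        prev = blk.hash
    return True

chain = build_chain(b"sensor readings ..." * 20)
assert verify_chain(chain)
chain[2].data = b"tampered"
assert not verify_chain(chain)
```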


2021 ◽  
Author(s):  
Tomasz Hadas ◽  
Grzegorz Marut ◽  
Jan Kapłon ◽  
Witold Rohm

The dynamics of water vapor distribution in the troposphere, measured with Global Navigation Satellite Systems (GNSS), is a subject of weather research and climate studies. With GNSS, remote sensing of the troposphere in Europe is performed continuously and operationally under the E-GVAP (http://egvap.dmi.dk/) program with more than 2000 permanent stations. These data are one component of the assimilation systems of mesoscale (10 km scale) weather prediction models for many nations across Europe. However, advancing precise local forecasts for severe weather requires higher-resolution models and observing systems. Further densification of the tracking network, e.g. in urban or mountain areas, would be costly with geodetic-grade equipment. However, the rapid development of GNSS-based applications has resulted in a dynamic release of mass-market GNSS receivers. It has been demonstrated that post-processing of GPS data from a dual-frequency low-cost receiver allows retrieving the Zenith Total Delay (ZTD) with high accuracy. Although low-cost receivers are a promising solution to the problem of densifying GNSS networks for water vapor monitoring, there are still some technological limitations, and they require further development and calibration.

We have developed a low-cost GNSS station dedicated to real-time GNSS meteorology, which provides GPS, GLONASS, and Galileo dual-frequency observations either in RINEX v3.04 format or via an RTCM v3.3 stream, with either Ethernet or GSM data transmission. The first two units are deployed in the close vicinity of the permanent station WROC, which belongs to the International GNSS Service (IGS) network. Therefore, we compare results from real-time and near-real-time processing of GNSS observations from a low-cost unit with IGS Final products. We also investigate the impact of replacing a standard patch antenna with an inexpensive survey-grade antenna. Finally, we deploy a local network of low-cost receivers in and around the city of Wroclaw, Poland, in order to analyze the dynamics of troposphere delay at a very high spatial resolution.

As a measure of accuracy, we use the standard deviation of the differences between the estimated ZTD and the IGS Final product. For the near-real-time mode, this accuracy is 5 mm and 6 mm for the single-frequency (L1) and dual-frequency (L1/L5, E5b) solutions, respectively. We attribute the lower accuracy of the dual-frequency relative solution to the missing antenna phase center correction model for the L5 and E5b frequencies. With the real-time Precise Point Positioning technique, we estimate ZTD with an accuracy of 7.5–8.6 mm. After the antenna replacement, the accuracy improves almost by a factor of two (to 4.1 mm), which is close to the 3.1 mm accuracy we obtain in real time using data from the WROC station.
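
The accuracy measure used above, the standard deviation of the differences between the estimated ZTD and the IGS Final ZTD, is straightforward to reproduce; the sketch below uses synthetic placeholder values, not the campaign data:

```python
# Standard deviation of ZTD differences as the accuracy measure. The values
# are synthetic placeholders for illustration only.
import statistics

ztd_estimated = [2401.3, 2403.8, 2399.1, 2405.6, 2402.2]  # mm, low-cost unit
ztd_igs_final = [2400.0, 2404.5, 2400.7, 2401.2, 2403.9]  # mm, reference

differences = [est - ref for est, ref in zip(ztd_estimated, ztd_igs_final)]
accuracy = statistics.stdev(differences)  # sample standard deviation, in mm
print(f"ZTD accuracy: {accuracy:.1f} mm")
```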


Author(s):  
Shrutika Khobragade ◽  
Rohini Bhosale ◽  
Rahul Jiwahe

Cloud computing makes immense use of the Internet to store huge amounts of data. It provides high-quality service at low cost and with scalability, while requiring less hardware and software management. Security plays a vital role in the cloud because data is handled by a third party, making it the biggest concern. The proposed mechanism focuses on security issues in the cloud. When a complete file is stored at a single location, an attack on that location can lead to loss of the data. Therefore, in this proposed work, instead of storing a complete file at one location, the file is divided into fragments and the fragments are stored at different locations. Each fragment is further secured by assigning it a hash key. Even after a successful attack, this mechanism will not reveal all the information about a particular file. Fragments are also replicated, with a strong authentication process based on key generation. The mechanism additionally supports automatic updates: a file or a fragment can be updated online, and instead of downloading the whole file, only the affected fragment needs to be downloaded, which saves time.
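
A minimal sketch of the fragment-and-hash idea, assuming SHA-256 hash keys and an illustrative fragment size (names and sizes are not the paper's implementation):

```python
# A file is split into fragments, each paired with its own hash key, so the
# fragments can be dispersed to different storage locations and a single
# fragment can be updated without re-uploading the whole file.
import hashlib

FRAGMENT_SIZE = 4096  # bytes per fragment (illustrative)

def fragment_file(content: bytes) -> list[dict]:
    """Split content into fragments, each paired with its hash key."""
    fragments = []
    for i in range(0, len(content), FRAGMENT_SIZE):
        data = content[i:i + FRAGMENT_SIZE]
        fragments.append({
            "index": i // FRAGMENT_SIZE,
            "data": data,
            "hash_key": hashlib.sha256(data).hexdigest(),
        })
    return fragments

def update_fragment(fragments: list[dict], index: int, new_data: bytes) -> None:
    """Update a single fragment in place instead of rewriting the file."""
    fragments[index]["data"] = new_data
    fragments[index]["hash_key"] = hashlib.sha256(new_data).hexdigest()

frags = fragment_file(b"x" * 10000)          # -> 3 fragments
update_fragment(frags, 1, b"patched bytes")  # only fragment 1 changes
```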


Transport ◽  
2015 ◽  
Vol 30 (3) ◽  
pp. 320-329 ◽  
Author(s):  
Erik Wilhelm ◽  
Joshua Siegel ◽  
Simon Mayer ◽  
Leyna Sadamori ◽  
Sohan Dsouza ◽  
...  

We present a novel approach to developing a vehicle communication platform consisting of low-cost, open-source hardware for moving vehicle data to a secure server, a Web Application Programming Interface (API) for the provision of third-party services, and an intuitive user dashboard for access control and service distribution. The CloudThink infrastructure promotes the commoditization of vehicle telematics data by facilitating easier, more flexible, and more secure access. It enables drivers to confidently share their vehicle information across multiple applications to improve the transportation experience for all stakeholders, as well as to potentially monetize their data. The foundations for an application ecosystem have been developed which, taken together with the fair value for driving data and low barriers to entry, will drive adoption of CloudThink as the standard method for projecting physical vehicles into the cloud. The application space initially consists of a few fundamental and important applications (vehicle tethering and remote diagnostics, road-safety monitoring, and fuel economy analysis), but as CloudThink begins to gain widespread adoption, the multiplexing of applications on the same data structure and set will accelerate its uptake further.
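
The abstract does not specify the Web API's endpoints; the sketch below is a hypothetical third-party service reading vehicle telemetry through a CloudThink-style REST API, to show the kind of access the user dashboard would mediate (the URL, token scheme, and field names are all assumptions):

```python
# Hypothetical third-party consumer of vehicle telemetry via a REST-style
# Web API. Endpoint, token scheme, and field names are illustrative
# assumptions; the paper does not specify them.
import json
import urllib.request

BASE_URL = "https://api.example.org/v1"  # placeholder, not a real endpoint
ACCESS_TOKEN = "driver-granted-token"    # access granted via the dashboard

def fetch_telemetry(vehicle_id: str) -> dict:
    """Retrieve the latest telemetry sample for one vehicle."""
    request = urllib.request.Request(
        f"{BASE_URL}/vehicles/{vehicle_id}/telemetry",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

def fuel_economy_kmpl(sample: dict) -> float:
    """Toy fuel-economy metric from assumed odometer/fuel fields."""
    return sample["distance_km"] / sample["fuel_litres"]
```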


2020 ◽  
Vol 14 (4) ◽  
pp. 534-546
Author(s):  
Tianyu Li ◽  
Matthew Butrovich ◽  
Amadou Ngom ◽  
Wan Shen Lim ◽  
Wes McKinney ◽  
...  

The proliferation of modern data processing tools has given rise to open-source columnar data formats. These formats help organizations avoid repeated conversion of data to a new format for each application. However, these formats are read-only, and organizations must use a heavy-weight transformation process to load data from on-line transactional processing (OLTP) systems. As a result, DBMSs often fail to take advantage of full network bandwidth when transferring data. We aim to reduce or even eliminate this overhead by developing a storage architecture for in-memory database management systems (DBMSs) that is aware of the eventual usage of its data and emits columnar storage blocks in a universal open-source format. We introduce relaxations to common analytical data formats to efficiently update records and rely on a lightweight transformation process to convert blocks to a read-optimized layout when they are cold. We also describe how to access data from third-party analytical tools with minimal serialization overhead. We implemented our storage engine based on the Apache Arrow format and integrated it into the NoisePage DBMS to evaluate our work. Our experiments show that our approach achieves comparable performance with dedicated OLTP DBMSs while enabling orders-of-magnitude faster data exports to external data science and machine learning tools than existing methods.
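
A small pyarrow sketch of the columnar hand-off this design targets: a cold storage block frozen as an Apache Arrow record batch can be passed to external tools through Arrow IPC without per-row serialization (the table contents are made up; this is not NoisePage's actual storage code):

```python
# A "cold" storage block emitted in the universal columnar format and consumed
# through Arrow IPC without any per-row conversion.
import pyarrow as pa

cold_block = pa.record_batch(
    [
        pa.array([1, 2, 3], type=pa.int64()),
        pa.array(["alice", "bob", "carol"]),
    ],
    names=["id", "name"],
)

# Write the block to an IPC stream, as a data-export path would.
sink = pa.BufferOutputStream()
with pa.ipc.new_stream(sink, cold_block.schema) as writer:
    writer.write_batch(cold_block)

# An external analytics tool reads the same bytes back directly.
reader = pa.ipc.open_stream(sink.getvalue())
for batch in reader:
    print(batch.to_pydict())  # {'id': [1, 2, 3], 'name': ['alice', ...]}
```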


2014 ◽  
Vol 6 (2) ◽  
pp. 52-69
Author(s):  
Yueyun Shang ◽  
Dengpan Ye ◽  
Zhuo Wei ◽  
Yajuan Xie

Most high-definition video content is still produced in the single-layer MPEG-2 format. Multi-layer Scalable Video Coding (SVC) incurs only a minor penalty in rate-distortion efficiency when compared to single-layer MPEG-2 coding, and a scaled version of the original SVC bitstream can easily be extracted by dropping layers from the bitstream. This paper proposes PTSVC, a parallel transcoder from MPEG-2 to SVC video on a Graphics Processing Unit (GPU). The objective of the transcoder is to migrate MPEG-2 video to the SVC format so that clients with different network bandwidths and terminal devices can seamlessly access the video content. Meanwhile, the transcoded SVC videos are encrypted so that only authorized users can access the corresponding SVC layers. Using SVC test sequences with various scalabilities, experimental results on TM5 and JSVM indicate that PTSVC is a more efficient transcoding system than previous ones and causes only a small quality loss.
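
A toy sketch of the layer-dropping property that makes SVC attractive here: a scaled substream is extracted simply by discarding NAL units above a target layer (the NAL representation is heavily simplified for illustration; real SVC bitstream parsing and PTSVC's GPU pipeline are far more involved):

```python
# Extracting a scaled SVC substream by dropping enhancement-layer NAL units.
from dataclasses import dataclass

@dataclass
class NalUnit:
    layer_id: int  # 0 = base layer, higher values = enhancement layers
    payload: bytes

def extract_substream(bitstream: list[NalUnit], max_layer: int) -> list[NalUnit]:
    """Keep the base layer plus enhancement layers up to max_layer."""
    return [nal for nal in bitstream if nal.layer_id <= max_layer]

stream = [NalUnit(0, b"base"), NalUnit(1, b"enh1"), NalUnit(2, b"enh2")]
low_bandwidth_client = extract_substream(stream, max_layer=0)  # base only
premium_client = extract_substream(stream, max_layer=2)        # all layers
```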

