video sensor
Recently Published Documents


TOTAL DOCUMENTS: 308 (FIVE YEARS: 34)
H-INDEX: 20 (FIVE YEARS: 1)

2021 ◽ Vol 2021 ◽ pp. 1-12
Author(s): Xiaolei Chen ◽ Baoning Cao ◽ Ishfaq Ahmad

Live virtual reality (VR) streaming (a.k.a. 360-degree video streaming) has become increasingly popular owing to the rapid growth of head-mounted displays and 5G network deployment. However, the huge bandwidth and energy required to deliver live VR frames in a wireless video sensor network (WVSN) become bottlenecks, hindering wider deployment of the application. To address the bandwidth and energy challenges, VR video viewport prediction has been proposed as a feasible solution. However, existing works mainly focus on bandwidth usage and prediction accuracy while ignoring the resource consumption of the server. In this study, we propose a lightweight neural network-based viewport prediction method for live VR streaming in WVSN to overcome these problems. In particular, we (1) use a compressed channel lightweight network (C-GhostNet) to reduce the parameters of the whole model and (2) use an improved gated recurrent unit module (GRU-ECA) together with C-GhostNet to process the head movement data and video data separately to improve prediction accuracy. To evaluate the performance of our method, we conducted extensive experiments using an open VR user dataset. The experimental results demonstrate that our method achieves significant server resource savings, real-time performance, and high prediction accuracy, while maintaining low bandwidth usage and low energy consumption in WVSN, meeting the requirements of live VR streaming.
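As a rough illustration of the dual-branch idea described in this abstract, the sketch below pairs a Ghost-style lightweight convolution branch for video frames with a GRU branch for head-movement traces, fusing both into a viewport prediction. All layer sizes, module names, and the fusion step are illustrative assumptions (the ECA attention is omitted for brevity); this is a minimal sketch, not the authors' implementation.

```python
# Hypothetical sketch of a lightweight two-branch viewport predictor.
# Shapes and modules are assumptions for illustration only.
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost convolution: a few 'primary' features plus cheap depthwise copies."""
    def __init__(self, in_ch, out_ch, ratio=2):
        super().__init__()
        primary = out_ch // ratio
        cheap = out_ch - primary
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary, 1, bias=False),
            nn.BatchNorm2d(primary), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(primary, cheap, 3, padding=1, groups=primary, bias=False),
            nn.BatchNorm2d(cheap), nn.ReLU(inplace=True))

    def forward(self, x):
        p = self.primary(x)
        return torch.cat([p, self.cheap(p)], dim=1)

class ViewportPredictor(nn.Module):
    """Frames -> Ghost CNN; head traces -> GRU; fused to a (yaw, pitch) estimate."""
    def __init__(self, hidden=64):
        super().__init__()
        self.visual = nn.Sequential(
            GhostModule(3, 16), nn.MaxPool2d(2),
            GhostModule(16, 32), nn.AdaptiveAvgPool2d(1))
        self.motion = nn.GRU(input_size=3, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(32 + hidden, 2)  # predicted viewport center

    def forward(self, frames, traces):
        v = self.visual(frames).flatten(1)   # (B, 32) visual features
        _, h = self.motion(traces)           # h: (1, B, hidden)
        return self.head(torch.cat([v, h[-1]], dim=1))

frames = torch.randn(4, 3, 64, 64)   # batch of downsampled frames
traces = torch.randn(4, 10, 3)       # 10 timesteps of (yaw, pitch, roll)
print(ViewportPredictor()(frames, traces).shape)  # torch.Size([4, 2])
```

The Ghost module keeps only a fraction of the expensive convolutions and generates the remaining channels with cheap depthwise operations, which is the parameter-saving idea the abstract attributes to C-GhostNet.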


2021
Author(s): Feng Wen ◽ Wenhan Zhao ◽ Zhoujian Chu ◽ Guoqi Zhang ◽ Xiang Zhang ◽ ...

2021
Author(s): Michael Hubner ◽ Christoph Wiesmeyr ◽ Klaus Dittrich ◽ Bernhard Kohn ◽ Heinrich Garn ◽ ...

2021 ◽ pp. 1-24
Author(s): Danilo Avola ◽ Marco Cascio ◽ Luigi Cinque ◽ Gian Luca Foresti ◽ Daniele Pannone

In recent years, the spread of video sensor networks in both public and private areas has grown considerably. Smart algorithms for video semantic content understanding are increasingly developed to support human operators in monitoring different activities by recognizing events that occur in the observed scene. By the term event, we refer to one or more actions performed by one or more subjects (e.g., people or vehicles) acting within the same observed area. When these actions are performed by subjects that do not interact with each other, the events are usually classified as simple. Instead, when any kind of interaction occurs among subjects, the involved events are typically classified as complex. This survey starts by providing formal definitions of both scene and event, and the logical architecture of a generic event recognition system. Subsequently, it presents two taxonomies, based on features and machine learning algorithms respectively, which are used to describe the different approaches for the recognition of events within a video sequence. The paper also discusses key works in the current state of the art of event recognition, providing the list of datasets used to evaluate the performance of the reported methods for video content understanding.
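To make the survey's simple/complex distinction concrete, here is a minimal sketch that labels an event as complex if any two subjects interact. Bounding-box overlap is used as a hypothetical stand-in for the survey's broader notion of interaction; the class and function names are illustrative assumptions.

```python
# Hypothetical simple-vs-complex event labeling based on subject interaction.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Subject:
    sid: str
    bbox: tuple  # (x, y, w, h) in frame coordinates

def overlaps(a, b):
    """Axis-aligned bounding-box intersection test."""
    ax, ay, aw, ah = a.bbox
    bx, by, bw, bh = b.bbox
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def classify_event(subjects):
    """'complex' if any pair of subjects interacts (here: boxes overlap), else 'simple'."""
    for a, b in combinations(subjects, 2):
        if overlaps(a, b):
            return "complex"
    return "simple"

scene = [Subject("person1", (0, 0, 10, 10)), Subject("car1", (50, 50, 20, 10))]
print(classify_event(scene))  # simple: no subjects interact
```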


Author(s): V P Gerasimov ◽ V D Kovalev ◽ A Yu Darzhaniya ◽ R A Magomedov ◽ E V Sokolova ◽ ...
