remote rendering
Recently Published Documents


TOTAL DOCUMENTS: 76 (FIVE YEARS: 13)

H-INDEX: 11 (FIVE YEARS: 0)

2021 ◽ Vol 2021 ◽ pp. 1-16
Author(s):  
Viktor Kelkkanen ◽  
Markus Fiedler ◽  
David Lindero

Remote rendering for VR is a technology that enables high-quality VR on low-powered devices. This is realized by offloading heavy computation and rendering to high-powered servers that stream VR as video to the clients. This article focuses on one specific issue in remote rendering, where imperfect frame timing between client and server may cause recurring frame drops. We propose a system design that executes synchronously and eliminates the aforementioned problem. The design is presented, and an implementation is tested on various networks and hardware. The design cannot drop frames due to synchronization issues but may, on the other hand, stall if temporal disturbances occur, e.g., due to network delay spikes or packet loss. However, experiments confirm that such events can remain rare given an appropriate environment. For example, remote rendering on an intranet at 90 fps with a server located approximately 50 km away yielded just 0.002% stalled frames while rendering with extra latency corresponding to the duration of exactly one frame (11.1 ms at 90 fps). In a LAN without the extra-latency setting, i.e., with latency equal to locally rendered VR, 0.009% stalls were observed over a wired Ethernet connection and 0.058% stalls over 5 GHz wireless IEEE 802.11ac.
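The key property of the synchronous design, as described, is that the client never presents a frame until the matching server frame has arrived, so mistimed frames become stalls rather than drops. Below is a minimal sketch of such a lockstep client loop; the transport, message framing, and helper names are illustrative assumptions, not the authors' implementation.

```python
import socket
import struct
import time

FRAME_INTERVAL = 1.0 / 90.0  # 90 fps target (11.1 ms per frame)

def recv_exact(sock, n):
    """Read exactly n bytes or raise if the connection closes."""
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("server closed connection")
        buf.extend(chunk)
    return bytes(buf)

def client_loop(sock, get_pose, decode_and_present):
    """Synchronous client: send the pose, block on the frame, present.

    Because presentation waits for the matching frame, no frame can be
    dropped by a timing mismatch; a network delay spike instead shows up
    as a stall (the previous image stays on screen one interval longer).
    `get_pose` returns an encoded pose message (bytes); `decode_and_present`
    decodes and displays one received frame.
    """
    stalls = frames = 0
    while True:
        deadline = time.perf_counter() + FRAME_INTERVAL
        sock.sendall(get_pose())                      # 1. upload head pose
        size, = struct.unpack("!I", recv_exact(sock, 4))
        frame = recv_exact(sock, size)                # 2. block on rendered frame
        if time.perf_counter() > deadline:            # 3. late arrival counts as a stall
            stalls += 1
        decode_and_present(frame)                     # 4. present in lockstep
        frames += 1
        if frames % 900 == 0:
            print(f"stall ratio: {stalls / frames:.4%}")
```

A real client would additionally decode video asynchronously and match frames to the pose timestamps echoed by the server; the point of the sketch is only the blocking receive that makes timing-related drops impossible.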


Author(s):  
Abbas Mehrabi ◽  
Matti Siekkinen ◽  
Teemu Kämäräinen ◽  
Antti Ylä-Jääski

The availability of high-bandwidth, low-latency communication in 5G mobile networks enables remotely rendered real-time virtual reality (VR) applications. Remote rendering of VR graphics in a cloud removes the need for a local personal computer for graphics rendering and augments the weak graphics processing unit capacity of stand-alone VR headsets. However, to prevent the added network latency of remote rendering from ruining the user experience, it is necessary to render a locally navigable viewport that is larger than the field of view of the HMD. The size of the required viewport depends on latency: longer latency requires rendering a larger viewport and streaming more content. In this article, we aim to utilize multi-access edge computing to assist the backend cloud in such remotely rendered interactive VR. Given the dependency between latency and the amount and quality of the content streamed, our objective is to jointly optimize the tradeoff between average video quality and delivery latency. Formulating the problem as mixed-integer nonlinear programming, we leverage the interpolation between the client's field-of-view frame size and overall latency to convert the problem into an integer nonlinear programming model, and then design efficient online algorithms to solve it. The results of our simulations, supplemented by real-world user data, reveal that while enabling the desired balance between video quality and latency, our algorithm achieves average improvements of about 22% and 12% in video delivery latency and 8% in video quality compared to order-of-arrival, threshold-based, and random-location strategies, respectively.
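To make the latency/quality coupling concrete, here is a toy sketch of the tradeoff: the viewport margin grows with round-trip latency, so serving a high-latency client at the same quality costs more bitrate, and a shared budget forces a joint choice. All constants, quality tiers, and the greedy upgrade rule below are illustrative assumptions, not the paper's MINLP formulation or algorithms.

```python
def viewport_fov(base_fov_deg, latency_s, max_head_speed_deg_s=120.0):
    """Field of view (degrees) the server must render so the client can
    compensate head rotation locally during `latency_s` of delay.
    The head-speed bound is an illustrative assumption, not a paper value."""
    margin = max_head_speed_deg_s * latency_s          # worst-case drift per side
    return base_fov_deg + 2.0 * margin                 # margin on both sides

def stream_cost_mbps(latency_s, quality_tier, base_fov_deg=90.0,
                     pixels_per_degree=20.0, bits_per_pixel=(0.05, 0.1, 0.2, 0.4)):
    """Rough per-client bitrate: a square viewport at 90 fps, compressed at a
    bits-per-pixel figure that grows with the quality tier (all illustrative)."""
    side = viewport_fov(base_fov_deg, latency_s) * pixels_per_degree
    return side * side * 90 * bits_per_pixel[quality_tier] / 1e6

def greedy_quality(clients, budget_mbps):
    """Toy version of the joint choice: start everyone at the lowest tier,
    then raise tiers (low-latency clients first) while the budget allows."""
    tier = {c["id"]: 0 for c in clients}
    spent = sum(stream_cost_mbps(c["latency_s"], 0) for c in clients)
    upgraded = True
    while upgraded:
        upgraded = False
        for c in sorted(clients, key=lambda c: c["latency_s"]):
            q = tier[c["id"]]
            if q < 3:
                extra = (stream_cost_mbps(c["latency_s"], q + 1)
                         - stream_cost_mbps(c["latency_s"], q))
                if spent + extra <= budget_mbps:
                    tier[c["id"]] = q + 1
                    spent += extra
                    upgraded = True
    return tier

# Example: three clients at different serving latencies share a 2 Gbit/s budget.
print(greedy_quality([{"id": "a", "latency_s": 0.02},
                      {"id": "b", "latency_s": 0.05},
                      {"id": "c", "latency_s": 0.10}], budget_mbps=2000))
```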


Author(s):  
Lu Sun ◽  
Hussein Al Osman ◽  
Jochen Lang

Our augmented reality online assistance platform enables an expert to specify 6DoF movements of a component and apply geometrical and physical constraints in real time. We track the real components on the expert's side to monitor the expert's operations. We leverage a remote rendering technique that we proposed previously to relieve the rendering burden on the augmented reality end devices. Through a user study, we show that the proposed method outperforms conventional instructional videos and sketches. The questionnaire responses show that the proposed method receives higher recommendation ratings than sketching and, compared to conventional instructional videos, stands out in terms of instruction clarity, preference, recommendation, and confidence of task completion. Moreover, regarding the overall user experience, the proposed method has an advantage over the video method.
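As an illustration of what applying a geometrical constraint to an expert-specified 6DoF movement can look like, the following sketch snaps a pose onto a single translation/rotation axis. The pose representation and the constraint itself are hypothetical examples, not the platform's API.

```python
import numpy as np

def constrain_to_axis(pose, axis=np.array([0.0, 0.0, 1.0]), pivot=np.zeros(3)):
    """Project an expert-specified 6DoF pose onto a 1-DoF constraint:
    translation only along `axis` and rotation only about `axis`.
    `pose` is (translation xyz, rotation as an axis-angle vector); all names
    and the constraint are illustrative, not the platform's API."""
    t, r = pose
    axis = axis / np.linalg.norm(axis)
    t_proj = pivot + axis * np.dot(t - pivot, axis)   # keep motion on the rail
    r_proj = axis * np.dot(r, axis)                   # keep spin about the rail
    return t_proj, r_proj

# Example: a slightly off-axis drag gets snapped back onto the constraint.
print(constrain_to_axis((np.array([0.1, 0.02, 0.5]), np.array([0.0, 0.1, 0.3]))))
```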


Author(s):  
Joohwan Kim ◽  
Pyarelal Knowles ◽  
Josef Spjut ◽  
Ben Boudaoud ◽  
Morgan McGuire

End-to-end latency in remote-rendering systems can reduce user task performance. This notably includes aiming tasks on game streaming services, which are presently below the standards of competitive first-person desktop gaming. We evaluate the latency-induced penalty on task completion time in a controlled environment and show that it can be significantly mitigated by adopting and modifying image- and simulation-warping techniques from virtual reality, eliminating up to 80% of the penalty from 80 ms of added latency. This has the potential to enable remote rendering for esports and to increase the effectiveness of remote-rendered content creation and robotic teleoperation. We provide full experimental methodology, analysis, implementation details, and source code.
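A rough sketch of the image-warping idea applied to aiming: the remote frame arrives tagged with the aim direction it was rendered for, and the client shifts it to match the latest local aim before display, so crosshair feedback is decoupled from round-trip latency. The small-angle 2D-shift approximation and the FOV constants below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def pixel_shift(yaw_delta_deg, pitch_delta_deg, width, height,
                hfov_deg=103.0, vfov_deg=71.0):
    """Approximate late warp as a 2D shift: how many pixels the remote frame
    must be translated to match the client's latest aim direction.
    Valid only for small deltas; FOV values are illustrative assumptions."""
    dx = yaw_delta_deg / hfov_deg * width
    dy = pitch_delta_deg / vfov_deg * height
    return dx, dy

def late_warp(frame, rendered_aim, current_aim):
    """Shift the received frame so the crosshair tracks local input.
    `frame` is an HxWx3 array; aim is (yaw, pitch) in degrees."""
    h, w = frame.shape[:2]
    dx, dy = pixel_shift(current_aim[0] - rendered_aim[0],
                         current_aim[1] - rendered_aim[1], w, h)
    warped = np.roll(frame, (-int(round(dy)), -int(round(dx))), axis=(0, 1))
    return warped  # edges revealed by the shift need fill-in in practice

# Example: 80 ms of latency at a 60 deg/s turn rate is ~4.8 deg of yaw to warp away.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
warped = late_warp(frame, rendered_aim=(0.0, 0.0), current_aim=(4.8, 0.0))
```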


2020
Author(s):  
Stefan Zellmann

We propose an image-warping-based remote rendering technique for volumes that decouples the rendering and display phases. Our work builds on prior work in which the server samples the volume using ray casting and reconstructs a z-value based on some heuristic. The color and depth buffers are then sent to the client, which reuses this depth image as a stand-in for subsequent frames by warping it according to the current camera position until new data is received from the server. We augment that method by implementing the client renderer using ray tracing. Representing the pixel contributions as spheres allows us to effectively vary their footprint based on the distance to the viewer, which we find gives better results than point-based rasterization when applied to volumetric data sets.
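A sketch of the client-side sphere representation described above: each depth-image pixel is unprojected into world space and given a radius that scales with its distance to the current viewpoint, so footprints stay closed after warping. The camera model and the radius rule are illustrative assumptions, not the paper's exact heuristic.

```python
import numpy as np

def depth_image_to_spheres(depth, color, inv_view_proj, new_eye,
                           pixel_radius=1.0):
    """Unproject a server depth image into world-space spheres for the
    client-side ray tracer.

    depth: HxW z-buffer values in [0, 1]; color: HxWx3; inv_view_proj: 4x4
    inverse of the server's view-projection matrix; new_eye: current camera
    position on the client. The radius heuristic (scale with distance to the
    new viewpoint) is an illustrative assumption."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ndc = np.stack([(xs + 0.5) / w * 2 - 1,           # x in [-1, 1]
                    1 - (ys + 0.5) / h * 2,           # y in [-1, 1], flipped
                    depth * 2 - 1,                    # z in [-1, 1]
                    np.ones_like(depth)], axis=-1)
    world = ndc @ inv_view_proj.T                     # unproject all pixels at once
    world = world[..., :3] / world[..., 3:4]
    dist = np.linalg.norm(world - new_eye, axis=-1)
    radii = pixel_radius * dist / max(h, w)           # footprint grows with distance
    return world.reshape(-1, 3), radii.reshape(-1), color.reshape(-1, 3)
```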


