Application of Virtools in Virtual Campus Roaming

2013 ◽  
Vol 380-384 ◽  
pp. 2732-2735
Author(s):  
Min Zeng

In the campus roaming system, the 3D scene is modeled using 3ds Max. After texturing, baking, export, and other related operations are completed, the system is designed and developed using Virtools. The 3D modeling and interactive functions of the campus are introduced, and system optimization and other related techniques are discussed. Taking Chengdu Polytechnic as an example, the design of a real-time interactive 3D virtual campus is completed.

Author(s):  
Gaurav Chaurasia ◽  
Arthur Nieuwoudt ◽  
Alexandru-Eugen Ichim ◽  
Richard Szeliski ◽  
Alexander Sorkine-Hornung

We present an end-to-end system for real-time environment capture, 3D reconstruction, and stereoscopic view synthesis on a mobile VR headset. Our solution allows the user to use the cameras on their VR headset as their eyes to see and interact with the real world while still wearing their headset, a feature often referred to as Passthrough. The central challenge when building such a system is the choice and implementation of algorithms under the strict compute, power, and performance constraints imposed by the target user experience and mobile platform. A key contribution of this paper is a complete description of a corresponding system that performs temporally stable passthrough rendering at 72 Hz with only 200 mW power consumption on a mobile Snapdragon 835 platform. Our algorithmic contributions for enabling this performance include the computation of a coarse 3D scene proxy on the embedded video encoding hardware, followed by a depth densification and filtering step, and finally stereoscopic texturing and spatio-temporal up-sampling. We provide a detailed discussion and evaluation of the challenges we encountered, as well as algorithm and performance trade-offs in terms of compute and resulting passthrough quality.

The described system is available to users as the Passthrough+ feature on Oculus Quest. We believe that by publishing the underlying system and methods, we provide valuable insights to the community on how to design and implement real-time environment sensing and rendering on heavily resource-constrained hardware.
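To make the "depth densification and filtering" stage concrete, here is a deliberately simplified sketch: sparse depth samples (such as those recovered from the video encoder's motion vectors) are averaged per pixel, diffused outward until the map is dense, and then smoothed across frames. This is a toy stand-in under our own assumptions, not the paper's production algorithm; all function names and the diffusion scheme are illustrative.

```python
import numpy as np

def densify_depth(sparse_uv, sparse_z, height, width):
    """Fill a dense depth map from sparse (u, v) -> z samples by averaging
    samples per pixel, then repeatedly diffusing into unfilled pixels with
    a 3x3 average. Illustrative only, not the paper's method."""
    dense = np.zeros((height, width), dtype=np.float32)
    weight = np.zeros((height, width), dtype=np.float32)
    for (u, v), z in zip(sparse_uv, sparse_z):
        dense[v, u] += z
        weight[v, u] += 1.0
    filled = weight > 0
    if not filled.any():
        return dense  # no samples: nothing to densify
    dense[filled] /= weight[filled]
    weight[filled] = 1.0
    # Diffuse outward until every pixel has a depth estimate.
    while not filled.all():
        d = np.pad(dense, 1)
        w = np.pad(weight, 1)
        acc = sum(d[dy:dy + height, dx:dx + width]
                  for dy in range(3) for dx in range(3))
        wacc = sum(w[dy:dy + height, dx:dx + width]
                   for dy in range(3) for dx in range(3))
        grow = ~filled & (wacc > 0)
        dense[grow] = acc[grow] / wacc[grow]
        weight[grow] = 1.0
        filled = weight > 0
    return dense

def temporal_filter(prev_depth, cur_depth, alpha=0.8):
    """Exponential smoothing across frames for temporal stability."""
    return alpha * prev_depth + (1 - alpha) * cur_depth
```

The exponential filter is one common way to obtain the temporal stability the abstract emphasizes; the real system's spatio-temporal up-sampling is considerably more involved.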


Fast track article for IS&T International Symposium on Electronic Imaging 2021: Imaging and Multimedia Analytics in a Web and Mobile World 2021 proceedings.


2018 ◽  
pp. 31-63 ◽  
Author(s):  
Lukáš Herman ◽  
Tomáš Řezník ◽  
Zdeněk Stachoň ◽  
Jan Russnák

Various widely available applications such as Google Earth have made interactive 3D visualizations of spatial data popular. While several studies have focused on how users perform when interacting with these with 3D visualizations, it has not been common to record their virtual movements in 3D environments or interactions with 3D maps. We therefore created and tested a new web-based research tool: a 3D Movement and Interaction Recorder (3DmoveR). Its design incorporates findings from the latest 3D visualization research, and is built upon an iterative requirements analysis. It is implemented using open web technologies such as PHP, JavaScript, and the X3DOM library. The main goal of the tool is to record camera position and orientation during a user’s movement within a virtual 3D scene, together with other aspects of their interaction. After building the tool, we performed an experiment to demonstrate its capabilities. This experiment revealed differences between laypersons and experts (cartographers) when working with interactive 3D maps. For example, experts achieved higher numbers of correct answers in some tasks, had shorter response times, followed shorter virtual trajectories, and moved through the environment more smoothly. Interaction-based clustering as well as other ways of visualizing and qualitatively analyzing user interaction were explored.
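The per-user measures reported above (virtual trajectory length, movement smoothness) can be derived directly from the camera positions such a tool records. The sketch below shows one plausible way to compute them; the metric definitions (segment-length sum, mean turning angle) are our own illustrative assumptions, not necessarily 3DmoveR's internal formulas.

```python
import math

def trajectory_length(positions):
    """Total length of the virtual camera path: sum of segment lengths
    over consecutive recorded (x, y, z) positions."""
    return sum(math.dist(a, b) for a, b in zip(positions, positions[1:]))

def smoothness(positions):
    """Mean turning angle (radians) between consecutive path segments;
    lower values indicate smoother movement through the 3D scene."""
    angles = []
    for p0, p1, p2 in zip(positions, positions[1:], positions[2:]):
        v1 = [b - a for a, b in zip(p0, p1)]
        v2 = [b - a for a, b in zip(p1, p2)]
        n1, n2 = math.hypot(*v1), math.hypot(*v2)
        if n1 == 0 or n2 == 0:
            continue  # skip stationary frames
        cosang = sum(x * y for x, y in zip(v1, v2)) / (n1 * n2)
        angles.append(math.acos(max(-1.0, min(1.0, cosang))))
    return sum(angles) / len(angles) if angles else 0.0
```

Given a recorded pose log, these two numbers alone already separate the "shorter, smoother trajectories" of experts from the lay users described in the experiment.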


2021 ◽  
Author(s):  
Zhongyu Zhang ◽  
Zhenjie Zhu ◽  
Jinsheng Zhang ◽  
Jingkun Wang

Abstract With the rapid development of the advanced manufacturing industry worldwide, the transition from traditional production patterns to advanced intelligent manufacturing is being pursued with as little delay as possible, yet new challenges remain: the timeliness, stability, and reliability of such systems are significantly restricted by the lack of real-time communication. Therefore, an intelligent workshop manufacturing system model framework based on the digital twin is proposed in this paper, driving deep information integration among the physical entity, data collection, and information decision-making. The conceptual and obscure aspects of the traditional digital twin are refined, optimized, and upgraded on the basis of four-dimensional collaborative model thinking, and a refined nine-layer intelligent digital twin model framework is established. Firstly, the physical level is refined into the entity layer, auxiliary layer, and interface layer, scientifically managing physical resources as well as the operation and maintenance of instruments, and coordinating the overall system. Secondly, dividing the data level into the data layer and the processing layer greatly improves flexible responsiveness and ensures the synchronization of real-time data. Finally, the system level is subdivided into the information layer, algorithm layer, scheduling layer, and functional layer, allowing flexible manufacturing plans to be developed more reasonably, shortening the production cycle, and reducing logistics costs. Simultaneously, SLP combined with the artificial bee colony algorithm is applied to optimize the production system of a textile workshop. The results indicate that the production efficiency of the optimized production system is increased by 34.46%.
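The abstract pairs SLP with an artificial bee colony (ABC) search. As a rough illustration of the ABC loop itself (not the paper's formulation: the cost function, colony size, and scout limit below are placeholder assumptions), a bare-bones minimizer looks like this:

```python
import random

def abc_minimize(cost, dim, bounds, n_food=10, limit=20, iters=200, seed=1):
    """Bare-bones artificial bee colony: employed and onlooker bees perturb
    food sources toward random peers; scouts reset sources that fail to
    improve for more than `limit` trials. Illustrative sketch only."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    costs = [cost(f) for f in foods]
    trials = [0] * n_food

    def try_neighbor(i):
        k = rng.randrange(n_food - 1)          # a different food source
        if k >= i:
            k += 1
        j = rng.randrange(dim)
        cand = foods[i][:]
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        cand[j] = min(hi, max(lo, cand[j]))
        c = cost(cand)
        if c < costs[i]:                        # greedy acceptance
            foods[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                 # employed bees
            try_neighbor(i)
        total = sum(1 / (1 + c) for c in costs)
        for _ in range(n_food):                 # onlooker bees (roulette)
            r, acc, i = rng.uniform(0, total), 0.0, -1
            while acc < r and i < n_food - 1:
                i += 1
                acc += 1 / (1 + costs[i])
            try_neighbor(max(i, 0))
        for i in range(n_food):                 # scout bees
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                costs[i], trials[i] = cost(foods[i]), 0

    best = min(range(n_food), key=costs.__getitem__)
    return foods[best], costs[best]
```

In a layout-optimization setting such as the paper's, `cost` would encode the SLP-derived material-flow and distance penalties between workstations rather than the toy objective used here.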


1994 ◽  
Vol 18 (4) ◽  
pp. 499-506 ◽  
Author(s):  
Jiandong Liang ◽  
Mark Green

Author(s):  
Wilbert G. Aguilar ◽  
Guillermo A. Rodríguez ◽  
Leandro Álvarez ◽  
Sebastián Sandoval ◽  
Fernando Quisaguano ◽  
...  
