Video analytics, computer vision, and virtual reality technologies

2021 ◽  
pp. 221-231
Author(s):  
Richard Busulwa ◽  
Nina Evans
Author(s):  
Richard Busulwa ◽  
Nina Evans ◽  
Aaron Oh ◽  
Moon Kang

Author(s):  
Arthur C. Depoian ◽  
Lorenzo E. Jaques ◽  
Dong Xie ◽  
Colleen P. Bailey ◽  
Parthasarathy Guturu

2021 ◽  
Vol 18 (6) ◽  
pp. 7936-7954
Author(s):  
Ziyou Zhuang

<abstract> <p>A 5G virtual reality system must render scenes in real time as it interacts with the user, and the tension between scene-model complexity and real-time interaction is the central problem in operating such a system. This article studies model optimization strategies for architectural scenes in virtual reality design, summarizes methods for optimizing architectural scene models, and aims to optimize computer vision software modeling through 5G virtual reality technology. The architectural model is optimized using image gray-scale transformation, computer vision detection technology, and virtual modeling technology. Four experiments are conducted: a comprehensive and quantitative evaluation, a comparison of channel estimation performance across different pilot structures, a comparison of computed and true values of exterior orientation elements, and an analysis of the effect of window-to-wall ratio on energy consumption per unit of residential building. The results show that hollow bricks, as a building material, have a large environmental impact. The X, Y, and Z coordinate values computed by the unit quaternion method are 1.27, 1.30, and -6.11, respectively, while the actual coordinate positions are 1.25, 1.37, and -6.22. The exterior orientation element values obtained by the quaternion-based space resection method thus differ little from the actual values, and the correct result can be computed accurately.</p> </abstract>
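The unit quaternion method referenced in this abstract rests on rotating 3-D points with a unit quaternion via the product q p q*. A minimal sketch of that rotation step (the point and rotation here are illustrative assumptions, not the paper's experimental data or its full space resection procedure):

```python
import math

def quat_mul(a, b):
    # Hamilton product of two quaternions given as (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(p, q):
    # Rotate point p = (x, y, z) by unit quaternion q: p' = q * (0, p) * q_conj
    qc = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = quat_mul(quat_mul(q, (0.0,) + p), qc)
    return (x, y, z)

# Hypothetical example: 90-degree rotation about the z-axis
theta = math.pi / 2
q = (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))
print(rotate((1.0, 0.0, 0.0), q))  # ≈ (0.0, 1.0, 0.0), up to floating-point noise
```

In a space resection, this rotation would be embedded in a least-squares fit of the exterior orientation elements; the sketch shows only the quaternion algebra itself.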


2019 ◽  
Author(s):  
Dmitrii Yurievich Chemodanov

In the event of natural or man-made disasters, geospatial video analytics provides situational awareness that can be extremely helpful for first responders. However, geospatial video analytics demands massive imagery/video data 'collection' from Internet-of-Things (IoT) devices and their seamless 'computation/consumption' within a geo-distributed (edge/core) cloud infrastructure in order to meet user Quality of Experience (QoE) expectations. Thus, edge computing needs to deliver reliable performance while interfacing with the core cloud to run computer vision algorithms, because infrastructure edges near the locations generating imagery/video content are rarely equipped with high-performance computation capabilities. This thesis addresses the challenges of interfacing edge and core cloud computing within the geo-distributed infrastructure through a novel 'function-centric computing' paradigm that brings new insights to the computer vision, edge routing, and network virtualization areas. Specifically, we detail state-of-the-art techniques and illustrate our new/improved solution approaches based on function-centric computing for two problems: (i) high-throughput data collection from IoT devices at the wireless edge, and (ii) seamless data computation/consumption within the geo-distributed (edge/core) cloud infrastructure. To address (i), we present a novel deep learning-augmented geographic edge routing that relies on physical area knowledge obtained from satellite imagery. To address (ii), we describe a novel reliable service chain orchestration framework that builds upon microservices and utilizes a novel 'metapath composite variable' approach supported by a constrained-shortest path finder.
Finally, we show both analytically and empirically how our geographic routing, constrained shortest path finder, and reliable service chain orchestration approaches, which compose our function-centric computing framework, are superior to many traditional and state-of-the-art techniques. As a result, we can significantly speed up (by up to 4 times) data-intensive computing at infrastructure edges, fostering effective disaster relief coordination to save lives.
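A constrained shortest path finder, as mentioned in this abstract, can be sketched as a Dijkstra-style search that tracks an accumulated constraint (e.g. delay) alongside cost and prunes labels that exceed the bound. The graph structure, edge weights, and delay bound below are illustrative assumptions, not the thesis's actual implementation:

```python
import heapq

def constrained_shortest_path(graph, src, dst, max_delay):
    """Least-cost path from src to dst whose total delay stays within max_delay.
    graph: {u: [(v, cost, delay), ...]} (illustrative adjacency structure)."""
    # Priority queue of labels ordered by cost: (cost, node, delay_used, path)
    pq = [(0, src, 0, [src])]
    best_delay = {}  # node -> lowest delay seen so far (at non-decreasing cost)
    while pq:
        cost, u, delay, path = heapq.heappop(pq)
        if u == dst:
            return cost, path
        # Dominated label: u was already reached at lower-or-equal cost with less delay
        if delay >= best_delay.get(u, float("inf")):
            continue
        best_delay[u] = delay
        for v, c, d in graph.get(u, []):
            if delay + d <= max_delay:  # prune paths violating the constraint
                heapq.heappush(pq, (cost + c, v, delay + d, path + [v]))
    return None  # no feasible path within the delay bound

# Toy example: the cheap path A-B-D violates the delay bound, so A-C-D is chosen
g = {"A": [("B", 1, 5), ("C", 2, 1)],
     "B": [("D", 1, 5)],
     "C": [("D", 2, 1)]}
print(constrained_shortest_path(g, "A", "D", max_delay=4))  # (4, ['A', 'C', 'D'])
```

The label-dominance pruning keeps the search tractable: because the queue is ordered by cost, any later label reaching a node with equal-or-higher delay can never lead to a better feasible path.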


2011 ◽  
Vol 2 (2) ◽  
pp. 1
Author(s):  
Carlos Hitoshi Morimoto ◽  
Flávio Coutinho ◽  
Jefferson Silva ◽  
Silvia Ghirotti ◽  
Thiago Santos

This paper introduces the Laboratory of Technologies for Interaction (LaTIn) and briefly describes its current main projects. The main focus of LaTIn has been developing new ways of human-machine interaction using computer vision techniques. The projects are categorized according to the distance between the human user and the machine being operated. For close distances, appropriate for interaction with desktop computers for example, we have developed eye-gaze-based interfaces. We have also built hand and body gesture interfaces appropriate for kiosks and virtual reality settings, and, for large distances, we have developed novel multiple-people tracking techniques that have been used for surveillance and monitoring applications.
