AudienceMR: Extending the Local Space for Large-Scale Audience with Mixed Reality for Enhanced Remote Lecturer Experience

2021 ◽  
Vol 11 (19) ◽  
pp. 9022
Author(s):  
Bin Han ◽  
Gerard Jounghyun Kim

AudienceMR is designed as a multi-user mixed reality space that seamlessly extends the local user space into a large, shared classroom: some audience members are seen seated in the real space, while more members are seen through an extended portal. AudienceMR can provide a sense of the presence of a large-scale crowd/audience with the associated spatial context. In contrast to virtual reality (VR), mixed reality (MR) lets a lecturer deliver content or conduct a performance from a real, comfortable, and familiar local space, while interacting directly with real nearby objects such as a desk, podium, educational props, instruments, and office materials. Such a design elicits a user experience closer to that of an actual classroom, which is currently prohibitive owing to the COVID-19 pandemic. We validated this hypothesis by conducting a comparative experiment assessing the lecturer’s experience with two independent variables: (1) the online classroom platform type, i.e., a 2D desktop video teleconference, a 2D video screen grid in VR, 3D VR, and AudienceMR, and (2) the student depiction, i.e., a 2D upper-body video screen or a 3D full-body avatar. Our experiment showed that AudienceMR exhibits a level of anxiety and fear of public speaking closer to that of a real classroom, and higher social and spatial presence than 2D video grid-based solutions and even 3D VR. Compared to 3D VR, AudienceMR offers more natural and easily usable real-object-based interaction. Most subjects preferred AudienceMR over the alternatives despite the nuisance of having to wear a video see-through headset. Such qualities should yield information conveyance and educational efficacy comparable to those of a real classroom, and better than those achieved through popular 2D desktop teleconferencing or immersive 3D VR solutions.

2021 ◽  
Author(s):  
Hye Jin Kim

Telepresence systems enable people to feel present in a remote space while their bodies remain in their local space. To enhance telepresence, the remote environment needs to be captured and visualised in an immersive way. For instance, 360-degree videos (360-videos) shown on head-mounted displays (HMDs) provide high-fidelity telepresence in a remote place. Mixed reality (MR) in 360-videos enables interaction with virtual objects blended into the captured remote environment, but it allows telepresence only for a single user wearing an HMD. For this reason, it has limitations when multiple users want to experience telepresence together and collaborate naturally within a teleported space.

This thesis presents TeleGate, a novel multi-user teleportation platform for remote collaboration in an MR space. TeleGate provides "semi-teleportation" into the MR space using large-scale displays, acting as a bridge between the local physical communication space and the remote collaboration space created by MR with captured 360-videos. Our proposed platform enables multi-user semi-teleportation for collaborative tasks in the remote MR collaboration (MRC) space while allowing natural communication between collaborators in the same local physical space.

We implemented a working prototype of TeleGate and conducted a user study to evaluate our concept of semi-teleportation. We measured spatial presence and social presence while participants performed remote collaborative tasks in the MRC space. We also explored different control mechanisms within the platform in the remote MR collaboration scenario.

In conclusion, TeleGate enabled multiple co-located users to semi-teleport together using large-scale displays for remote collaboration in MR 360-videos.


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Chih-Chuen Lin ◽  
Phani Motamarri ◽  
Vikram Gavini

We present a tensor-structured algorithm for efficient large-scale density functional theory (DFT) calculations by constructing a Tucker tensor basis that is adapted to the Kohn–Sham Hamiltonian and localized in real space. The proposed approach uses an additive separable approximation to the Kohn–Sham Hamiltonian and an L1 localization technique to generate the 1-D localized functions that constitute the Tucker tensor basis. Numerical results show that the resulting Tucker tensor basis exhibits exponential convergence in the ground-state energy with increasing Tucker rank. Further, the proposed tensor-structured algorithm demonstrates sub-quadratic scaling with system size for systems both with and without a gap, involving many thousands of atoms. This reduced-order scaling also results in the proposed approach outperforming plane-wave DFT implementations for systems beyond 2000 electrons.
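To make the structure of such a basis concrete, a Tucker-format expansion of a three-dimensional field can be written as a sum of products of 1-D functions; the notation below is generic and illustrative, not the paper's exact formulation:

```latex
\psi(x, y, z) \;\approx\; \sum_{i=1}^{r_1} \sum_{j=1}^{r_2} \sum_{k=1}^{r_3}
  \sigma_{ijk}\, u_i(x)\, v_j(y)\, w_k(z)
```

Here \(\sigma_{ijk}\) is the core tensor, \((r_1, r_2, r_3)\) are the Tucker ranks, and \(u_i\), \(v_j\), \(w_k\) play the role of the 1-D localized functions; exponential convergence means the ground-state energy error decays roughly exponentially as these ranks grow.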


2021 ◽  
Vol 502 (3) ◽  
pp. 3976-3992
Author(s):  
Mónica Hernández-Sánchez ◽  
Francisco-Shu Kitaura ◽  
Metin Ata ◽  
Claudio Dalla Vecchia

We investigate higher-order symplectic integration strategies within Bayesian cosmic density field reconstruction methods. In particular, we study the fourth-order discretization of the Hamiltonian equations of motion (EoM). This is achieved by recursively applying the basic second-order leap-frog scheme (considering a single evaluation of the EoM) in a combination of even numbers of forward time integration steps with a single intermediate backward step. This largely reduces the number of evaluations and random gradient computations required in the usual second-order case for high-dimensional problems. We restrict this study to the lognormal-Poisson model, applied to a full-volume halo catalogue in real space on a cubical mesh of 1250 h⁻¹ Mpc side length and 256³ cells. Hence, we neglect selection effects, redshift-space distortions, and displacements. We note that those observational and cosmic evolution effects can be accounted for in subsequent Gibbs-sampling steps within the COSMIC BIRTH algorithm. We find that going from the usual second to fourth order in the leap-frog scheme shortens the burn-in phase by a factor of at least ∼30. This implies that 75–90 independent samples are obtained while the fastest second-order method converges. After convergence, the correlation lengths indicate an improvement of about a factor of 3.0 fewer gradient computations for meshes of 256³ cells. In the considered cosmological scenario, the traditional leap-frog scheme turns out to outperform higher-order integration schemes only for lower-dimensional problems, e.g. meshes with 64³ cells. This gain in computational efficiency can help move towards a full Bayesian analysis of the cosmological large-scale structure for upcoming galaxy surveys.
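As an illustration of how higher-order symplectic integrators are assembled from leap-frog sub-steps, the sketch below shows the standard fourth-order composition with one negative-weight (backward) sub-step, applied to a toy Hamiltonian. It is a minimal sketch of the general technique, not the exact recursion or the COSMIC BIRTH code used in the paper.

```python
# A minimal sketch: a second-order leap-frog step and a fourth-order composition
# of it, showing how forward and backward sub-steps combine into a higher-order
# symplectic integrator.
import numpy as np

def leapfrog(q, p, grad_U, dt):
    """One second-order leap-frog step for Hamiltonian dynamics (unit mass matrix)."""
    p = p - 0.5 * dt * grad_U(q)   # half kick
    q = q + dt * p                 # drift
    p = p - 0.5 * dt * grad_U(q)   # half kick
    return q, p

def leapfrog4(q, p, grad_U, dt):
    """Fourth-order integrator: three leap-frog sub-steps, the middle one backward."""
    w1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))   # forward sub-step weight
    w2 = 1.0 - 2.0 * w1                      # negative (backward) sub-step weight
    for w in (w1, w2, w1):
        q, p = leapfrog(q, p, grad_U, w * dt)
    return q, p

# Toy usage: quadratic potential U(q) = 0.5 * q @ q, so grad_U(q) = q.
q = np.random.randn(16)
p = np.random.randn(16)
q, p = leapfrog4(q, p, lambda x: x, dt=0.1)
```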


2021 ◽  
Vol 13 (3) ◽  
pp. 364
Author(s):  
Han Gao ◽  
Jinhui Guo ◽  
Peng Guo ◽  
Xiuwan Chen

Recently, deep learning has become the most innovative trend for a variety of high-spatial-resolution remote sensing imaging applications. However, large-scale land cover classification via traditional convolutional neural networks (CNNs) with sliding windows is computationally expensive and produces coarse results. Additionally, although such supervised learning approaches have performed well, collecting and annotating datasets for every task is extremely laborious, especially in fully supervised cases where dense pixel-level ground-truth labels are required. In this work, we propose a new object-oriented deep learning framework that leverages residual networks of different depths to learn adjacent feature representations by embedding a multibranch architecture in the deep learning pipeline. The idea is to exploit limited training data at different neighboring scales to make a tradeoff between weak semantics and strong feature representations for operational land cover mapping tasks. We draw on established geographic object-based image analysis (GEOBIA) as an auxiliary module to reduce the computational burden of spatial reasoning and optimize the classification boundaries. We evaluated the proposed approach on two subdecimeter-resolution datasets covering both urban and rural landscapes. It achieved better classification accuracy (88.9%) than traditional object-based deep learning methods and an excellent inference time (11.3 s/ha).
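A minimal sketch of the multibranch idea follows; the branch depths, fusion by concatenation, class count, and patch size are assumptions for illustration, not the authors' released architecture.

```python
# A minimal sketch: two residual branches of different depths process the same
# image patch, and their features are concatenated before a land-cover classifier.
import torch
import torch.nn as nn
from torchvision.models import resnet18, resnet34

class MultiBranchLandCoverNet(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.shallow = resnet18()        # shallower branch: stronger spatial detail
        self.deep = resnet34()           # deeper branch: stronger semantics
        self.shallow.fc = nn.Identity()  # expose 512-d features instead of ImageNet logits
        self.deep.fc = nn.Identity()
        self.classifier = nn.Linear(512 + 512, num_classes)

    def forward(self, x):
        features = torch.cat([self.shallow(x), self.deep(x)], dim=1)
        return self.classifier(features)

# Toy usage on a batch of RGB patches (e.g., one patch per GEOBIA segment).
model = MultiBranchLandCoverNet(num_classes=6)
logits = model(torch.randn(4, 3, 224, 224))   # -> shape (4, 6)
```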


2019 ◽  
Vol 12 (1) ◽  
pp. 96 ◽  
Author(s):  
James Brinkhoff ◽  
Justin Vardanega ◽  
Andrew J. Robson

Land cover mapping of intensive cropping areas facilitates an enhanced regional response to biosecurity threats and to natural disasters such as drought and flooding. Such maps also provide information for natural resource planning and analysis of the temporal and spatial trends in crop distribution and gross production. In this work, 10 meter resolution land cover maps were generated over a 6200 km² area of the Riverina region in New South Wales (NSW), Australia, with a focus on locating the most important perennial crops in the region. The maps discriminated between 12 classes, including nine perennial crop classes. A satellite image time series (SITS) of freely available Sentinel-1 synthetic aperture radar (SAR) and Sentinel-2 multispectral imagery was used. A segmentation technique grouped spectrally similar adjacent pixels together, to enable object-based image analysis (OBIA). K-means unsupervised clustering was used to filter training points and classify some map areas, which improved supervised classification of the remaining areas. The support vector machine (SVM) supervised classifier with radial basis function (RBF) kernel gave the best results among several algorithms trialled. The accuracies of maps generated using several combinations of the multispectral and radar bands were compared to assess the relative value of each combination. An object-based post classification refinement step was developed, enabling optimization of the tradeoff between producers’ accuracy and users’ accuracy. Accuracy was assessed against randomly sampled segments, and the final map achieved an overall count-based accuracy of 84.8% and area-weighted accuracy of 90.9%. Producers’ accuracies for the perennial crop classes ranged from 78 to 100%, and users’ accuracies ranged from 63 to 100%. This work develops methods to generate detailed and large-scale maps that accurately discriminate between many perennial crops and can be updated frequently.
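A minimal sketch of the training-point filtering and RBF-SVM classification steps is shown below; the feature dimensions, cluster count, and hyperparameters are placeholders, not the values used in the study.

```python
# A minimal sketch: k-means flags training points whose cluster disagrees with their
# nominal class, and an RBF-kernel SVM is then trained on the retained points.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def filter_training_points(X, y, n_clusters=30):
    """Keep training points whose k-means cluster is dominated by their own class."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    keep = np.zeros(len(y), dtype=bool)
    for c in np.unique(clusters):
        idx = clusters == c
        labels, counts = np.unique(y[idx], return_counts=True)
        keep |= idx & (y == labels[counts.argmax()])  # retain majority-class points only
    return X[keep], y[keep]

# X: per-object features from the Sentinel-1/2 time series; y: class labels (toy data here).
X, y = np.random.rand(500, 20), np.random.randint(0, 12, 500)
X_f, y_f = filter_training_points(X, y)
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_f, y_f)
predictions = clf.predict(X)
```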


Author(s):  
Alexander Miropolsky ◽  
Anath Fischer

The inspection of machined objects is one of the most important quality control tasks in the manufacturing industry. Contemporary scanning technologies have provided the impetus for the development of computational inspection methods, where the computer model of the manufactured object is reconstructed from the scan data and then verified against its digital design model. Scan data, however, are typically very large scale (i.e., many points), unorganized, noisy, and incomplete. Therefore, reconstruction is problematic. To overcome these problems, reconstruction methods may exploit diverse feature data, that is, diverse information about the properties of the scanned object. Based on this concept, the paper proposes a new method for denoising and reduction of scan data by an extended geometric filter. The proposed method is applied directly to the scanned points and is automatic, fast, and straightforward to implement. The paper demonstrates the integration of the proposed method into the framework of the computational inspection process.
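Since the extended geometric filter itself is not detailed in the abstract, the sketch below only illustrates the kind of direct point-level processing described: a generic neighbourhood-averaging denoise followed by a voxel-based reduction. Both functions and their parameters are assumptions for illustration, not the paper's method.

```python
# A minimal sketch: each scanned point is replaced by the centroid of its k nearest
# neighbours (denoising), and a voxel grid then thins the cloud (reduction).
import numpy as np
from scipy.spatial import cKDTree

def denoise(points, k=8):
    """Smooth a point cloud by averaging each point with its k nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    return points[idx].mean(axis=1)

def reduce_by_voxel(points, voxel=1.0):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    out = np.zeros((inverse.max() + 1, 3))
    for d in range(3):
        out[:, d] = np.bincount(inverse, weights=points[:, d]) / counts
    return out

# Toy usage on a synthetic scan of 10,000 points.
scan = np.random.rand(10000, 3) * 100.0
cleaned = reduce_by_voxel(denoise(scan, k=8), voxel=2.0)
```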


2021 ◽  
Author(s):  
Béla Kovács ◽  
Márton Pál ◽  
Fanni Vörös

The use of aerial photography in topography started in the first decades of the 20th century. Remotely sensed data have become indispensable for cartographers and GIS staff doing large-scale mapping: especially topographic, orienteering and thematic maps. The use of UAVs (unmanned aerial vehicles) for this purpose has also become widespread in recent years. Various drones and sensors (RGB, multispectral and hyperspectral) with many specifications are used to capture and process the physical properties of an examined area. In parallel with the development of the hardware, new software solutions are emerging to visualize and analyse photogrammetric material: a large set of algorithms with different approaches is available for image processing.

Our study focuses on the large-scale topographic mapping of vegetation and land cover. Most traditional analogue and digital maps use these layers either as background or for highlighted thematic purposes. We propose to use the theory of OBIA (Object-based Image Analysis) to differentiate cover types. This method groups pixels into larger polygon units based on either spectral or other variables (e.g. elevation, aspect, curvature in the case of DEMs). The neighbours of initial seed points are examined to decide whether they should be added to the region according to the similarity of their attributes. Using OBIA, different land cover types (trees, grass, soils, bare rock surfaces) can be distinguished with either supervised or unsupervised classification, depending on the purposes of the analyst. Our base data were high-resolution RGB and multispectral images (with 5 bands).

Following this methodology, not only elevation data (e.g. shaded relief or vector contour lines) can be derived from UAV imagery, but vector land cover data also become available for cartographers and GIS analysts. As the number of distinct land cover groups is free to choose, even quite complex thematic layers can be produced. These layers can serve as subjects of further analyses or for cartographic visualization.

BK is supported by the "Application Domain Specific Highly Reliable IT Solutions" project, implemented with support provided from the National Research, Development and Innovation Fund of Hungary, financed under the Thematic Excellence Programme TKP2020-NKA-06 (National Challenges Subprogramme) funding scheme.

MP and FV are supported by EFOP-3.6.3-VEKOP-16-2017-00001: Talent Management in Autonomous Vehicle Control Technologies. The Project is financed by the Hungarian Government and co-financed by the European Social Fund.
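A minimal sketch of the seeded region-growing step described above follows: a single seed and a spectral-similarity criterion only. The threshold, band count, and synthetic data are illustrative; a full OBIA workflow would also use attributes such as elevation, aspect, or curvature.

```python
# A minimal sketch: starting from a seed pixel, 4-connected neighbours are merged
# into the region while their spectral values stay close to the running region mean.
import numpy as np
from collections import deque

def grow_region(image, seed, threshold=0.1):
    """image: (H, W, bands) array; seed: (row, col); returns a boolean region mask."""
    h, w, _ = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    region_sum, region_n = image[seed].astype(float), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if np.linalg.norm(image[nr, nc] - region_sum / region_n) < threshold:
                    mask[nr, nc] = True
                    region_sum += image[nr, nc]
                    region_n += 1
                    queue.append((nr, nc))
    return mask

# Toy usage on a synthetic 5-band image patch (values roughly in [0, 1]).
patch = np.random.rand(100, 100, 5) * 0.05   # near-uniform "grass" background
patch[40:60, 40:60] += 0.5                   # a brighter "bare rock" block
rock_mask = grow_region(patch, seed=(50, 50), threshold=0.3)
```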


2018 ◽  
Vol 20 (suppl_6) ◽  
pp. vi184-vi184
Author(s):  
Tsukasa Koike ◽  
Taichi Kin ◽  
Taketo Shiode ◽  
Shunsaku Takayanagi ◽  
Shota Tanaka ◽  
...  
