EVM: An Educational Virtual Reality Modeling Tool; Evaluation Study with Freshman Engineering Students

2021 ◽  
Vol 12 (1) ◽  
pp. 390
Author(s):  
Julián Conesa-Pastor ◽  
Manuel Contero

Educational Virtual Modeling (EVM) is a novel VR-based application for sketching and modeling in an immersive environment, designed to introduce freshman engineering students to modeling concepts and reinforce their understanding of the spatial connection between an object and its 2D projections. It was built on the Unity 3D game engine and Microsoft’s Mixed Reality Toolkit (MRTK). EVM was designed to support the creation of the typical parts used in exercises in basic engineering graphics courses, with special emphasis on a fast learning curve and a simple way to provide exercises and tutorials to students. To analyze the feasibility of using EVM for this purpose, a user study was conducted with 23 freshman and sophomore engineering students who used both EVM and Trimble SketchUp to model six parts using an axonometric view as the input. Students had no previous experience with either system. Each participant went through a brief training session and was allowed to use each tool freely for 20 min. At the end of the modeling exercises with each system, the participants rated its usability by answering the System Usability Scale (SUS) questionnaire. Additionally, they filled out a questionnaire assessing the system’s functionality. The results showed a very high SUS score for EVM (M = 92.93, SD = 6.15), whereas Trimble SketchUp obtained a mean score of only 76.30 (SD = 6.69). The completion times for the modeling tasks with EVM showed its suitability for regular class use, although exercises usually take longer to complete in EVM than in Trimble SketchUp. There were no statistically significant differences in the functionality assessment. At the end of the experimental session, participants were asked to express their opinions about the systems and provide suggestions for improving EVM. All participants preferred EVM as a potential tool for performing exercises in the engineering graphics course.
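Several of the studies collected here report System Usability Scale scores. As a reminder of how those 0–100 scores are derived from the ten 1–5 Likert items, here is a minimal sketch of the standard SUS scoring rule (the function name is illustrative):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are multiplied by 2.5, giving a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5

# A respondent who strongly agrees with every positive item (5) and
# strongly disagrees with every negative item (1) scores the maximum:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

A reported mean such as EVM’s 92.93 is simply the average of such per-respondent scores across all participants.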

2021 ◽  
Vol 2 ◽  
Author(s):  
Gonzalo Suárez ◽  
Sungchul Jung ◽  
Robert W. Lindeman

This article reports on a study evaluating the effectiveness of virtual human (VH) role-players as leadership training tools within two computer-generated environments, virtual reality (VR) and mixed reality (MR), compared to a traditional training method, real human (RH) role-players in a real-world (RW) environment. We developed an experimental training platform to assess the three conditions: RH role-players in RW (RH-RW), VH role-players in VR (VH-VR), and VH role-players in MR (VH-MR), during two practice-type opportunities, namely a pre-session and a post-session. We conducted a user study in which 30 participants played the role of leaders interacting with either RHs or VHs before and after receiving a leadership training session. We then investigated (1) whether VH role-players were as effective as RH role-players during the pre- and post-sessions, and (2) the impact that the human type (RH, VH) in conjunction with the environment type (RW, VR, MR) had on the outcomes. We also collected user reactions and learning data from the overall training experience. The results showed a consistent increase in performance from pre- to post-sessions in all three conditions; however, we did not find a significant difference between VHs and RHs. Interestingly, the VH-MR condition had a greater influence on performance and task engagement than the VH-VR and RH-RW conditions. Based on our findings, we conclude that VH role-players can be as effective as RH role-players in supporting the practice of leadership skills, with VH-MR potentially being the most effective method.


2020 ◽  
Vol 4 (4) ◽  
pp. 78
Author(s):  
Andoni Rivera Pinto ◽  
Johan Kildal ◽  
Elena Lazkano

In the context of industrial production, a worker who wants to program a robot using the hand-guidance technique needs the robot to be available for programming and not in operation, which means that production with that robot is stopped during that time. A way around this constraint is to perform the same manual guidance steps on a holographic representation of the robot’s digital twin, using augmented reality technologies. However, this approach is limited by the lack of tangibility of the visual holograms that the user tries to grab. We present an interface in which some of this tangibility is provided through ultrasound-based mid-air haptic actuation. We report a user study evaluating the impact that such haptic feedback has on a pick-and-place task involving the wrist of a holographic robot arm, and we found the feedback to be beneficial.


2019 ◽  
Vol 10 (04) ◽  
pp. 655-669
Author(s):  
Gaurav Trivedi ◽  
Esmaeel R. Dadashzadeh ◽  
Robert M. Handzel ◽  
Wendy W. Chapman ◽  
Shyam Visweswaran ◽  
...  

Abstract Background Despite advances in natural language processing (NLP), extracting information from clinical text is expensive. Interactive tools that ease the construction, review, and revision of NLP models can reduce this cost and improve the utility of clinical reports for clinical and secondary use. Objectives We present the design and implementation of an interactive NLP tool for identifying incidental findings in radiology reports, along with a user study evaluating the performance and usability of the tool. Methods Expert reviewers provided gold standard annotations for 130 patient encounters (694 reports) at the sentence, section, and report levels. We performed a user study with 15 physicians to evaluate the accuracy and usability of our tool. Participants reviewed encounters split into intervention (with predictions) and control (no predictions) conditions. We measured changes in model performance, the time spent, and the number of user actions needed. The System Usability Scale (SUS) and an open-ended questionnaire were used to assess usability. Results Starting from bootstrapped models trained on 6 patient encounters, we observed an average increase in F1 score from 0.31 to 0.75 for reports, from 0.32 to 0.68 for sections, and from 0.22 to 0.60 for sentences on a held-out test data set over an hour-long study session. We found that the tool helped significantly reduce the time spent reviewing encounters (134.30 vs. 148.44 seconds in the intervention and control conditions, respectively), while maintaining the overall quality of labels as measured against the gold standard. The tool was well received by the study participants, with a very good overall SUS score of 78.67. Conclusion The user study demonstrated successful use of the tool by physicians for identifying incidental findings. These results support the viability of adopting interactive NLP tools in clinical care settings for a wider range of clinical applications.
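The F1 scores reported above (and in the colonoscopy-variable study later in this collection) are the standard harmonic mean of precision and recall over predicted versus gold-standard annotations. A minimal sketch, with illustrative counts:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, computed from
    true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# e.g. 6 correct detections, 2 spurious ones, 4 missed:
# precision = 6/8 = 0.75, recall = 6/10 = 0.6
print(round(f1_score(tp=6, fp=2, fn=4), 3))  # 0.667
```

Because F1 penalizes imbalance between precision and recall, an increase from 0.31 to 0.75 implies the interactive revisions improved both the coverage and the correctness of the models’ predictions.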


Author(s):  
Steve Beitzel ◽  
Josiah Dykstra ◽  
Paul Toliver ◽  
Jason Youzwak

We investigate the feasibility of using Microsoft HoloLens, a mixed reality device, to visually analyze network capture data and locate anomalies. We developed MINER, a prototype application that visualizes details from network packet captures as 3D stereogram charts. MINER employs a novel approach to time-series visualization that extends the time dimension across two axes, thereby taking advantage of the immersive 3D space available via the HoloLens. Users navigate the application through eye gaze and hand gestures to view summary and detailed bar graphs. Callouts display additional detail based on the user’s immediate gaze. In a user study, volunteers used MINER to locate network attacks in a dataset from the 2013 VAST Challenge. We compared the time and effort with a similar test using traditional tools on a desktop computer. Our findings suggest that network anomaly analysis with the HoloLens achieved comparable effectiveness, efficiency, and satisfaction. We describe user metrics and feedback collected from these experiments, lessons learned, and suggested future work.


10.29007/7jch ◽  
2019 ◽  
Author(s):  
James Stigall ◽  
Sharad Sharma

Building occupants must know how to properly exit a building should the need ever arise. Awareness of appropriate evacuation procedures eliminates (or reduces) the risk of injury and death during a catastrophe. Augmented reality (AR) is increasingly sought after as a teaching and training tool because it offers visualization and interaction capabilities that capture the learner’s attention and enhance the learner’s capacity to retain what was learned. Motivated by these capabilities and the need for emergency evacuation training, this paper explores a mobile AR application (MARA) constructed to help users evacuate a building in the event of an emergency such as a building fire, an active shooter, an earthquake, or similar circumstances. The MARA was built for Android-based devices using Unity and Vuforia. Its features include intelligent signs (i.e., visual cues that guide users to the exits) to help users evacuate a building. Inter alia, this paper discusses the MARA’s implementation and its evaluation through a user study based on the Technology Acceptance Model (TAM) and System Usability Scale (SUS) frameworks. The results indicate that participants found the MARA both usable and effective in helping users evacuate a building.


2021 ◽  
Author(s):  
Hye Jin Kim

<p><b>Telepresence systems enable people to feel present in a remote space while their bodies remain in their local space. To enhance telepresence, the remote environment needs to be captured and visualised in an immersive way. For instance, 360-degree videos (360-videos) shown on head-mounted displays (HMDs) provide high-fidelity telepresence in a remote place. Mixed reality (MR) in 360-videos enables interaction with virtual objects blended into the captured remote environment, but it allows telepresence only for a single user wearing an HMD. For this reason, it has limitations when multiple users want to experience telepresence together and collaborate naturally within a teleported space. </b></p><p>This thesis presents TeleGate, a novel multi-user teleportation platform for remote collaboration in an MR space. TeleGate provides "semi-teleportation" into the MR space using large-scale displays, acting as a bridge between the local physical communication space and the remote collaboration space created by MR with captured 360-videos. Our proposed platform enables multi-user semi-teleportation for collaborative tasks in the remote MR collaboration (MRC) space while allowing natural communication between collaborators in the same local physical space. </p><p>We implemented a working prototype of TeleGate and conducted a user study to evaluate our concept of semi-teleportation. We measured spatial presence and social presence while participants performed remote collaborative tasks in the MRC space. Additionally, we explored the different control mechanisms within the platform in the remote MR collaboration scenario. </p><p>In conclusion, TeleGate enabled multiple co-located users to semi-teleport together using large-scale displays for remote collaboration in MR 360-videos.</p>


2020 ◽  
Vol 190 (1) ◽  
pp. 58-65
Author(s):  
Yi Guo ◽  
Li Mao ◽  
Gongsen Zhang ◽  
Zhi Chen ◽  
Xi Pei ◽  
...  

Abstract To help minimise occupational radiation exposure in interventional radiology, we conceptualised a virtual reality-based radiation safety training system to help operators understand complex radiation fields and avoid high-radiation areas through game-like interactive simulations. Preliminary development suggests that the system can calculate and report the radiation exposure after each training session, based on a database precalculated from computational phantoms and Monte Carlo simulations and on the position information provided by the Microsoft HoloLens headset. In addition, the real-time dose rate and cumulative dose will be displayed to trainees to help them adjust their practice. This paper presents the conceptual design of the overall hardware and software, as well as preliminary results on combining the HoloLens headset with complex 3D X-ray field spatial distribution data to create a mixed reality environment for safety training in interventional radiology.


Author(s):  
Anders Henrysson ◽  
Mark Ollila ◽  
Mark Billinghurst

Mobile phones are evolving into an ideal platform for Augmented Reality (AR). In this chapter we describe how augmented reality applications can be developed for mobile phones and the interaction metaphors that are ideally suited to this platform. Several sample applications are described which explore different interaction techniques. User study results show that moving the phone to interact with virtual content is an intuitive way to select and position virtual objects. A collaborative AR game is also presented along with an evaluation study. Users preferred playing with the collaborative AR interface over a non-AR interface and also found physical phone motion to be a very natural input method. The results discussed in this chapter should assist researchers in developing their own mobile phone-based AR applications.


2017 ◽  
Vol 25 (1) ◽  
pp. 81-87 ◽  
Author(s):  
Gaurav Trivedi ◽  
Phuong Pham ◽  
Wendy W Chapman ◽  
Rebecca Hwa ◽  
Janyce Wiebe ◽  
...  

Abstract The gap between domain experts and natural language processing expertise is a barrier to extracting understanding from clinical text. We describe a prototype tool for interactive review and revision of natural language processing models of binary concepts extracted from clinical notes. We evaluated our prototype in a user study involving 9 physicians, who used our tool to build and revise models for 2 colonoscopy quality variables. We report changes in performance relative to the quantity of feedback. Using initial training sets as small as 10 documents, expert review led to final F1 scores for the “appendiceal-orifice” variable between 0.78 and 0.91 (with improvements ranging from 13.26% to 29.90%). F1 for “biopsy” ranged between 0.88 and 0.94 (−1.52% to 11.74% improvements). The average System Usability Scale score was 70.56. Subjective feedback also suggests possible design improvements.

