Real-World and Mixed-Reality Applications of Trustworthiness and Trust in Autonomous Cyber-Physical-Human Systems

2021
Author(s): B. D. Allen, Natalia Alexandrov


2019, Vol 2019 (1), pp. 237-242
Author(s): Siyuan Chen, Minchen Wei

Color appearance models have been extensively studied for characterizing and predicting the perceived color appearance of physical color stimuli under different viewing conditions. These stimuli are either surface colors reflecting illumination or self-luminous stimuli emitting radiation. With the rapid development of augmented reality (AR) and mixed reality (MR), it is critically important to understand how the color appearance of objects produced by AR and MR is perceived, especially when these objects are overlaid on the real world. In this study, nine lighting conditions, with different correlated color temperature (CCT) levels and light levels, were created in a real-world environment. Under each lighting condition, human observers adjusted the color appearance of a virtual stimulus, overlaid on the real-world luminous environment, until it appeared the whitest. The CCT and light level of the real-world environment significantly affected the color appearance of the white stimulus, especially when the light level was high. Moreover, a lower degree of chromatic adaptation was found when viewing the virtual stimulus overlaid on the real world.
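For context, the "degree of chromatic adaptation" reported above is conventionally quantified by the adaptation factor D of the CAT02 transform in CIECAM02, where D = 1 denotes complete adaptation to the ambient illumination. A sketch of the standard formula (not the authors' fitted values):

```latex
D = F\left[ 1 - \frac{1}{3.6}\, e^{-(L_A + 42)/92} \right]
```

Here F is the surround factor (1.0 for an average surround) and L_A is the adapting luminance in cd/m². Read against this convention, the finding amounts to a smaller effective D for virtual stimuli than a model tuned on physical stimuli would predict.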


2006, Vol 5 (3), pp. 53-58
Author(s): Roger K. C. Tan, Adrian David Cheok, James K. S. Teh

For better or worse, technological advancement has changed the world: at a professional level, working executives face demands for ever more hours in the office or on business trips; at a social level, the population (especially the younger generation) is glued to the computer, playing video games or surfing the internet. Traditional leisure activities, especially interaction with pets, have been neglected or forgotten. This paper introduces Metazoa Ludens, a new computer-mediated gaming system that allows pets to play mixed-reality computer games with humans via custom-built technologies and applications. During gameplay, the real pet chases a physical movable bait within a predefined area in the real world; an infrared camera tracks the pet's movements and translates them into the virtual world of the system, mapping them to the movement of a virtual pet avatar running after a virtual human avatar. The human player plays the game by controlling the human avatar's movements in the virtual world, which in turn drives the physical bait in the real world, so the bait moves as the human avatar does. This unique way of playing computer games gives rise to a whole new mode of mixed-reality interaction between pet owners and their pets, thereby bringing technology and its influence on leisure and social activities to the next level.
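A minimal sketch of the real/virtual coupling described above, assuming a simple linear mapping between the tracked arena and the game world (the names, dimensions, and mapping are illustrative, not the authors' implementation):

```python
# Illustrative sketch of the Metazoa Ludens coupling: the IR-tracked pet
# drives the pet avatar, and the human avatar drives the physical bait.
from dataclasses import dataclass

@dataclass
class Vec2:
    x: float
    y: float

ARENA = (2.0, 2.0)      # physical play area in metres (assumed)
WORLD = (100.0, 100.0)  # virtual world units (assumed)

def real_to_virtual(p: Vec2) -> Vec2:
    """Map the IR-tracked pet position into virtual-world coordinates."""
    return Vec2(p.x / ARENA[0] * WORLD[0], p.y / ARENA[1] * WORLD[1])

def virtual_to_real(p: Vec2) -> Vec2:
    """Map the human avatar's virtual position onto the movable bait."""
    return Vec2(p.x / WORLD[0] * ARENA[0], p.y / WORLD[1] * ARENA[1])

# Each frame: pet position updates the pet avatar; the human player's
# avatar position sets the target for the physical bait the pet chases.
pet_avatar = real_to_virtual(Vec2(0.8, 1.1))
bait_target = virtual_to_real(Vec2(40.0, 55.0))
```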


2021
Author(s): Lohit Petikam

Art direction is crucial for films and games to maintain a cohesive visual style. It involves carefully controlling visual elements like lighting and colour to unify the director's vision of a story. With today's computer graphics (CG) technology, 3D animated films and games have become increasingly photorealistic. Unfortunately, art direction using CG tools remains laborious. Since realistic lighting can go against artistic intentions, art direction is almost impossible to preserve in real-time and interactive applications. New live applications like augmented and mixed reality (AR and MR) now demand automatically art-directed compositing under unpredictably changing real-world lighting.

This thesis addresses the problem of dynamically art-directed compositing of 3D content into real scenes. Realism is a basic component of art direction, so we begin by optimising scene-geometry capture for realistic composites. We find low perceptual thresholds for retaining perceived seamlessness with respect to optimised real-scene fidelity. We then propose new techniques for automatically preserving art-directed appearance and shading for virtual 3D characters. Our methods allow artists to specify their intended appearance for different lighting conditions. Unlike previous work, artists can direct and animate stylistic edits that automatically adapt to changing real-world environments. We achieve this with a new framework for look development and art direction using a novel latent space of varied lighting conditions. For more dynamic stylised lighting, we also propose a new framework for art-directing stylised shadows using novel parametric shadow-editing primitives. This is a first approach that preserves art direction and stylisation under varied lighting in AR/MR.
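As a rough illustration of the look-development idea above (artist-specified appearance per lighting condition, automatically blended for the current real-world lighting), here is a hedged sketch; the lighting features, look parameters, and inverse-distance weighting are assumptions for illustration, not the thesis's latent-space method:

```python
# Sketch: blend artist-authored "looks" based on the current lighting.
import numpy as np

# Reference lighting conditions as feature vectors (e.g., [CCT/1000, log10 lux]).
ref_lighting = np.array([[2.7, 1.0], [4.0, 2.0], [6.5, 3.0]])
# Artist-authored look parameters per condition (e.g., [warmth, shadow softness]).
ref_looks = np.array([[0.8, 0.3], [0.5, 0.5], [0.2, 0.7]])

def blend_look(current: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Inverse-distance blend of the artist looks at the current lighting."""
    d = np.linalg.norm(ref_lighting - current, axis=1)
    w = 1.0 / (d + eps)
    w /= w.sum()
    return w @ ref_looks

# Live AR/MR loop would re-estimate the lighting each frame and re-blend.
print(blend_look(np.array([5.0, 2.5])))
```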


2009, Vol 108 (2), pp. 623-630
Author(s): Igor Dolgov, David A. Birchfield, Michael K. McBeath, Harvey Thornburg, Christopher G. Todd

Perception of floor-projected moving geometric shapes was examined in the context of the Situated Multimedia Arts Learning Laboratory (SMALLab), an immersive, mixed-reality learning environment. As predicted, the projected destinations of shapes that retreated in depth (proximal origin) were judged significantly less accurately than those that approached (distal origin). Participants maintained similar magnitudes of error throughout the session, and no effect of practice was observed. Shape perception in an immersive multimedia environment thus appears comparable to that in the real world. One may conclude that systematic exploration of basic psychological phenomena in novel mediated environments is integral to an understanding of human behavior in novel human-computer interaction architectures.


2020, Vol 3 (1), pp. 9-10
Author(s): Rehan Ahmed Khan

In the field of surgery, major changes in recent decades include the advent of minimally invasive surgery and the realization of the importance of 'systems' in the surgical care of the patient (Pierorazio & Allaf, 2009). Challenges in surgical training are two-fold: (i) to train surgical residents to manage a patient clinically, and (ii) to train them in operative skills (Singh & Darzi, 2013). In Pakistan, another issue is that we have the shortest duration of surgical training in general surgery, only four years, compared to six to eight years in Europe and America (Zafar & Rana, 2013). The low ratio of patients to surgical residents is a further problem. This warrants formal training outside the operating room. Many authors have reported that changes are required in the current surgical training system because of significant deficiencies in graduating surgeons (Carlsen et al., 2014; Jarman et al., 2009; Parsons, Blencowe, Hollowood, & Grant, 2011). It is imperative that a surgeon is competent in clinical management and operative skills at the end of surgical training. To achieve this outcome in such a challenging scenario, a resident surgeon should be given opportunities to train outside the operating theatre before performing procedures on a real patient. The need for such training was felt more acutely when the Institute of Medicine in the USA published the report 'To Err is Human' (Stelfox, Palmisani, Scurlock, Orav, & Bates, 2006), with the aim of reducing medical errors; better training and objective assessment of surgical residents are required. Options for this training include, but are not limited to, mannequins, virtual patients, virtual simulators, virtual reality, augmented reality, and mixed reality. Simulation is a technique that substitutes for or adds to real experiences with guided, often immersive, experiences that reproduce substantial aspects of the real world in a fully interactive way. Mannequins and virtual simulators have been in use for a long time; they range from low to high fidelity and help residents understand surgical anatomy and the operative site and practice their skills. Virtual patients can be discussed with students in a simple format of text, pictures, and videos as case files available online, or in the form of customized software applications based on algorithms. Courteille et al. reported that residents' knowledge retention increases when teaching is delivered through virtual patients rather than lectures (Courteille et al., 2018). But learning the skills component requires hands-on practice, and this gap can be bridged with virtual, augmented, or mixed reality. There are three types of virtual reality (VR) technology: (i) non-immersive, (ii) semi-immersive, and (iii) fully immersive. Non-immersive VR involves only software and computers. In semi-immersive and fully immersive VR, the virtual image is presented through a head-mounted display (HMD); in the fully immersive type, the actual world is completely obscured by the virtual image. Using handheld devices with haptic feedback, the trainee can perform a procedure in the virtual environment (Douglas, Wilke, Gibson, Petricoin, & Liotta, 2017). Augmented reality (AR) can be divided into complete AR and mixed reality (MR).
Through AR and MR, a trainee can see a virtual and a real-world image at the same time, making it easy for the supervisor to explain the steps of the surgery. As with VR, in AR and MR the user wears an HMD that shows both images; in AR the virtual image is transparent, whereas in MR it appears solid (Douglas et al., 2017). Virtual, augmented, and mixed reality have great potential for training surgeons, as they provide fidelity very close to the real situation while requiring fewer physical resources and less space than simulators. But they are costlier, and affordability is an issue. To overcome this, low-cost virtual reality solutions have been developed. It is high time that we start thinking along the same lines and develop these means of training our surgeons at an affordable cost.
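The transparent-versus-solid distinction above comes down to how the virtual image is composited over the real view; a toy sketch with placeholder pixel values (not any particular HMD's pipeline):

```python
# One pixel channel: AR blends the virtual image semi-transparently over the
# real view; MR composites it as opaque, so it occludes the real world.
def composite(real: float, virtual: float, alpha: float) -> float:
    """Standard alpha blend of one pixel channel."""
    return alpha * virtual + (1.0 - alpha) * real

real_px, virtual_px = 0.6, 0.9
ar_px = composite(real_px, virtual_px, alpha=0.4)  # ghost-like AR overlay
mr_px = composite(real_px, virtual_px, alpha=1.0)  # solid MR object
```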


2021
Author(s): Regan Petrie

Early, intense practice of functional, repetitive rehabilitation interventions has shown positive results for lower-limb recovery in stroke patients. However, long-term engagement in daily physical activity is necessary to maximise the physical and cognitive benefits of rehabilitation. The mundane, repetitive nature of traditional physiotherapy interventions, along with personal, environmental, and physical factors, creates barriers to participation. It is well documented that stroke patients engage in as little as 30% of their rehabilitation therapies. Digital gamified systems have shown positive results in addressing these barriers to engagement in rehabilitation, but there is a lack of low-cost, commercially available systems designed and personalised for home use. At the same time, emerging mixed reality technologies offer the ability to seamlessly integrate digital objects into the real world, generating an immersive, unique virtual world that leverages the physicality of the real world for a personalised, engaging experience.

This thesis explored how the design of an augmented reality exergame can facilitate engagement in independent lower-limb stroke rehabilitation. Our system converted prescribed exercises into active gameplay using commercially available augmented reality mobile technology, introducing an engaging, interactive alternative to mundane physiotherapy exercises. The development of the system was based on a user-centered iterative design process. The involvement of health-care professionals and stroke patients throughout each stage of design and development helped us understand users' needs, requirements, and environment, refine the system, and ensure its validity as a substitute for traditional rehabilitation interventions. The final output was an augmented reality exergame that progressively facilitates sit-to-stand exercises by offering immersive interactions with digital exotic wildlife. We hypothesize that the immersive, active nature of a mobile, mixed reality exergame will increase engagement in independent task training for lower-limb rehabilitation.
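As a concrete illustration of converting a prescribed exercise into gameplay input, here is a minimal sketch of counting sit-to-stand repetitions from a tracked vertical position; the thresholds and hysteresis logic are assumptions for illustration, not the thesis implementation:

```python
# Count sit -> stand -> sit cycles from the device's tracked height with
# simple hysteresis, so jitter near a single threshold is not double-counted.
SIT_HEIGHT = 0.6    # metres above floor when seated (assumed)
STAND_HEIGHT = 1.0  # metres above floor when standing (assumed)

def count_reps(heights: list[float]) -> int:
    """Count completed sit-to-stand repetitions in a height trace."""
    reps, standing = 0, False
    for h in heights:
        if not standing and h > STAND_HEIGHT:
            standing = True      # patient has risen
        elif standing and h < SIT_HEIGHT:
            standing = False     # patient has sat back down
            reps += 1
    return reps

print(count_reps([0.55, 0.7, 1.05, 1.1, 0.58, 0.5, 1.02, 0.57]))  # -> 2
```

In a game loop, each completed repetition would trigger a reward event (e.g., a wildlife interaction) rather than just incrementing a counter.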


2021, Vol 2
Author(s): Holly C. Gagnon, Yu Zhao, Matthew Richardson, Grant D. Pointon, Jeanine K. Stefanucci, ...

Measures of perceived affordances—judgments of action capabilities—are an objective way to assess whether users perceive mediated environments similarly to the real world. Previous studies suggest that judgments of stepping over a virtual gap using augmented reality (AR) are underestimated relative to judgments of real-world gaps, which are generally overestimated. Across three experiments, we investigated whether two factors associated with AR devices contributed to the observed underestimation: weight and field of view (FOV). In the first experiment, observers judged whether they could step over virtual gaps while wearing the HoloLens (virtual gaps) or not (real-world gaps). The second experiment tested whether weight contributes to underestimation of perceived affordances by having participants wear the HoloLens during judgments of both virtual and real gaps. We replicated the effect of underestimation of step capabilities in AR as compared to the real world in both Experiments 1 and 2. The third experiment tested whether FOV influenced judgments by simulating a narrow (similar to the HoloLens) FOV in virtual reality (VR). Judgments made with a reduced FOV were compared to judgments made with the wider FOV of the HTC Vive Pro. The results showed relative underestimation of judgments of stepping over gaps in narrow vs. wide FOV VR. Taken together, the results suggest that there is little influence of weight of the HoloLens on perceived affordances for stepping, but that the reduced FOV of the HoloLens may contribute to the underestimation of stepping affordances observed in AR.
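For readers unfamiliar with how such affordance judgments are scored, one common approach (not necessarily the authors') is to estimate the gap width at which "yes, I can step over it" responses cross 50% and compare that perceived action boundary across conditions; the response proportions below are illustrative placeholders only:

```python
# Estimate the perceived action boundary (50% "yes" crossover) per condition.
import numpy as np

gaps = np.array([0.4, 0.5, 0.6, 0.7, 0.8])          # gap widths in metres
p_yes_real = np.array([1.0, 0.95, 0.7, 0.3, 0.05])  # real-world block (placeholder)
p_yes_ar = np.array([0.95, 0.8, 0.45, 0.15, 0.0])   # AR (HoloLens) block (placeholder)

def boundary(gaps: np.ndarray, p_yes: np.ndarray) -> float:
    """Linearly interpolate the gap width at 50% 'yes' responses."""
    # np.interp needs ascending x, so reverse the descending proportions.
    return float(np.interp(0.5, p_yes[::-1], gaps[::-1]))

# A smaller boundary in AR than in the real world indicates the
# underestimation of stepping capability reported above.
print(boundary(gaps, p_yes_real), boundary(gaps, p_yes_ar))
```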


Author(s): Robert E. Wendrich

Design and engineering in real-world projects is often shaped by reduction of the problem definition, trade-offs during decision-making, possible loss of information, and monetary issues such as budget constraints or value-for-money problems. In many engineering projects, various stakeholders take part in the project process at various levels of communication, engineering, and decision-making. During project meetings and value engineering (VE) sessions between the different stakeholders, information and data are gathered, recorded analogously and/or digitally, and consequently stored in reports, minutes, and other representations. The results and conclusions derived from these interactions are often coloured by each user's field of experience and expertise. Personal stakes, idiosyncrasies, expectations, preferences, and interpretations of the various project parts can interfere with the collaborative setting and process, leading to non-functionality, possible rupture, and diminished project targets, requirements, and solutions. We present a hybrid tool that acts as a Virtual Assistant (VA) during a collaborative VE session in a real-world design and engineering case. The tool supports interaction and decision-making in conjunction with a physical workbench as focal point and user interfaces that guide the user during processing. The hybrid environment allows users to interact untethered with real-world materials, images, drawings, objects, and drawing instruments. In the course of a session, captures are made of the various topics or issues at stake and logged as iterative instances in a database. The captured instances are visualized in real time on a monitor and progressively listed in the on-screen user interface. During or after the session, stakeholders can go through the iterative time-listing and synthesize the instances according to, for example, topic, dominance, choice, or degree of priority. After structuring and sorting, the data sets can be exported to a data or video file. All stakeholders receive or have access to the data files and can trace back the complete process progression. The system and the information it generates afford reflection, knowledge sharing, and cooperation, and redistribution of data sets to other stakeholders, management, or third parties becomes more efficient and congruous. The approach we took in this experiment was to research the communication, interaction, and decision-making progressions of the various stakeholders during the VE session. We observed behavioral aspects during the various stages of user interaction, following the decision-making process and the use of the tool over the course of the session. We captured the complete session on video for analysis and evaluation of the VE process within a hybrid design environment.
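A minimal sketch of the capture, log, sort, and export loop described above, assuming a simple timestamped record per capture and CSV export (field names and storage format are illustrative, not the authors' system):

```python
# Log each workbench capture with a timestamp and topic, then let the
# stakeholders re-sort the iterative time-listing and export the session.
import csv
import time
from dataclasses import dataclass, asdict

@dataclass
class Capture:
    timestamp: float
    topic: str
    priority: int
    image_path: str  # snapshot taken at the workbench

log: list[Capture] = []

def capture(topic: str, priority: int, image_path: str) -> None:
    """Record one iterative instance in the session database."""
    log.append(Capture(time.time(), topic, priority, image_path))

def export(path: str, key: str = "timestamp") -> None:
    """Sort the log (e.g., by topic or priority) and export it to CSV."""
    rows = sorted(log, key=lambda c: getattr(c, key))
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["timestamp", "topic", "priority", "image_path"]
        )
        writer.writeheader()
        writer.writerows(asdict(r) for r in rows)

capture("budget trade-off", 1, "cap_001.png")
capture("material choice", 2, "cap_002.png")
export("ve_session.csv", key="priority")
```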

