Detection of Removed Objects in 3D Meshes Using Up-to-Date Images for Mixed-Reality Applications

Electronics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 377
Author(s):  
Olivier Roupin ◽  
Matthieu Fradet ◽  
Caroline Baillard ◽  
Guillaume Moreau

Precise knowledge of the real environment is a prerequisite for the integration of the real and virtual worlds in mixed-reality applications. However, real-time updating of a real environment model is a costly and difficult process; therefore, hybrid approaches have been developed: an updated world model can be inferred from an offline acquisition of the 3D world, which is then updated online using live image sequences, provided that fast and robust change detection algorithms are available. Current algorithms are biased toward object insertion and often fail at object removal detection; when the background is uniform in color and intensity, the disappearance of foreground objects between the 3D scan of a scene and the capture of several new pictures of that scene is difficult to detect. The novelty of our approach is that we circumvent this issue by focusing on areas of least change in parts of the scene that should be occluded by the foreground. Through experiments on realistic datasets, we show that this approach results in better detection and localization of removed objects. The technique can be paired with an insertion detection algorithm to provide a complete change detection framework.
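The core idea, detecting removal by checking whether background that the model predicts as occluded is in fact visible, can be sketched as follows. This is a minimal illustration under assumed inputs; `detect_removed`, its threshold, and the toy images are hypothetical, not the paper's actual pipeline.

```python
import numpy as np

def detect_removed(live, bg_render, fg_mask, sim_thresh=10.0):
    """Flag predicted-foreground pixels where the live image matches the
    background that the 3D model says should be occluded.

    live, bg_render : HxW grayscale images (float)
    fg_mask         : HxW bool, True where the model predicts a
                      foreground object should occlude the background
    Returns an HxW bool mask of pixels voting "object removed".
    """
    diff = np.abs(live - bg_render)
    # Low difference inside the predicted-occluded area means we are
    # seeing background that should be hidden -> the object is gone.
    return fg_mask & (diff < sim_thresh)

# Toy example: a 4x4 scene where the object that occupied the
# top-left 2x2 block has been removed.
bg = np.full((4, 4), 100.0)
live = bg.copy()                  # live view shows background everywhere
fg = np.zeros((4, 4), dtype=bool)
fg[:2, :2] = True                 # model still expects an object here
removed = detect_removed(live, bg, fg)
```

Only pixels inside the expected-occlusion mask can vote "removed", which is what makes the test robust to uniform backgrounds elsewhere in the frame.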


Author(s):  
Gulnaz Alimjan ◽  
Yiliyaer Jiaermuhamaiti ◽  
Huxidan Jumahong ◽  
Shuangling Zhu ◽  
Pazilat Nurmamat

Various UNet-based image change detection algorithms have advanced the field, but some defects remain. First, under the encoder–decoder framework, low-level features are extracted many times in multiple dimensions, which generates redundant information; second, the relationships between feature layers are not modeled sufficiently to produce an optimal feature-difference representation. This paper proposes a remote-sensing image change detection algorithm based on a multi-feature self-attention fusion UNet network, abbreviated as MFSAF UNet (multi-feature self-attention fusion UNet). We add a multi-feature self-attention mechanism between the encoder and decoder of UNet to capture richer context dependence and overcome the two restrictions above. Since the capacity of a convolution-based UNet is directly proportional to network depth, and a deeper convolutional network means more training parameters, the convolution in each layer of UNet is replaced with a separable convolution, which makes the entire network lighter and slightly more efficient to execute than the traditional convolution operation. Another contribution of this paper is the use of a preference term to control the loss function and meet demands for different precision and recall rates. Simulation results verify the validity and robustness of the approach.
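Two of the ideas mentioned, the parameter savings of separable convolutions and a preference weight that trades precision against recall in the loss, can be sketched as follows. The function names and the exact weighting scheme are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def conv_params(k, cin, cout):
    """Parameter count of a standard k x k convolution."""
    return k * k * cin * cout

def separable_params(k, cin, cout):
    """Depthwise (k*k*cin) plus pointwise (cin*cout) convolution."""
    return k * k * cin + cin * cout

def preference_bce(pred, target, beta=1.0, eps=1e-7):
    """Class-weighted binary cross-entropy, a common way to trade
    precision against recall. beta > 1 penalises missed changes more
    heavily (favours recall); beta < 1 penalises false alarms more
    heavily (favours precision)."""
    pred = np.clip(pred, eps, 1.0 - eps)
    loss = -(beta * target * np.log(pred)
             + (1.0 - target) * np.log(1.0 - pred))
    return loss.mean()

target = np.array([1.0, 1.0, 0.0, 0.0])
pred = np.array([0.6, 0.4, 0.3, 0.2])   # one under-confident change pixel
base = preference_bce(pred, target, beta=1.0)
recall_biased = preference_bce(pred, target, beta=2.0)
```

Raising `beta` increases the cost of the under-detected change pixel, pushing the trained network toward higher recall; the separable variant uses far fewer parameters at equal depth (e.g. 4,672 vs 36,864 for a 3x3, 64-to-64-channel layer).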



2021 ◽  
Vol 3 (1) ◽  
pp. 6-7
Author(s):  
Kathryn MacCallum

Mixed reality (MR) provides new opportunities for creative and innovative learning. MR supports the merging of real and virtual worlds to produce new environments and visualisations where physical and digital objects co-exist and interact in real time (MacCallum & Jamieson, 2017). The MR continuum links virtual and augmented reality: virtual reality (VR) immerses learners in a completely virtual world, while augmented reality (AR) blends the real and the virtual world. MR embraces the spectrum between the real and the virtual; the mix of the two may vary depending on the application. The integration of MR into education provides specific affordances that make it uniquely suited to supporting learning (Parson & MacCallum, 2020; Bacca, Baldiris, Fabregat, Graf & Kinshuk, 2014). These affordances give students unique opportunities to learn and to develop 21st-century capabilities (Schrier, 2006; Bower, Howe, McCredie, Robinson, & Grover, 2014). In general, most integration of MR in the classroom has focused on students as consumers of these experiences. However, enabling students to create their own experiences allows a wider range of learning outcomes to be incorporated into the learning experience. Making students the creators and designers of their own MR experiences provides a unique opportunity to integrate learning across the curriculum and supports the development of computational thinking and stronger digital skills. The integration of student-created artefacts has been shown to deliver greater engagement and outcomes for all students (Ananiadou & Claro, 2009). In the past, the development of student-created MR experiences has been difficult, especially because of the steep learning curve of technology adoption and the overall expense of acquiring the necessary tools.
The recent development of low-cost mobile and online MR tools and technologies has, however, provided new opportunities for a scaffolded approach to the development of student-driven artefacts that does not require significant technical ability (MacCallum & Jamieson, 2017). Thanks to these advances, students can now create their own MR digital experiences, which can drive learning across the curriculum. This presentation explores how teachers at two high schools in New Zealand have started to integrate MR into their STEAM classes. It draws on the results of a Teaching and Learning Research Initiative (TLRI) project investigating the experiences and reflections of a group of secondary teachers exploring the use and adoption of mixed reality (augmented and virtual reality) for cross-curricular teaching, and shows how these teachers have begun to engage with MR to support student-created digital experiences integrated into STEAM domains.



2020 ◽  
Vol 10 (3) ◽  
pp. 1135 ◽  
Author(s):  
Mulun Wu ◽  
Shi-Lu Dai ◽  
Chenguang Yang

This paper proposes a novel control system for the path planning of an omnidirectional mobile robot based on mixed reality. Most research on mobile robots is carried out in a completely real or a completely virtual environment; however, a real environment containing virtual objects has important practical applications. The proposed system can control the movement of the mobile robot in the real environment, as well as the interaction between the robot's motion and virtual objects added to the real environment. First, an interactive interface is presented in the mixed-reality device HoloLens. The interface can display the map, path, control commands, and other information related to the mobile robot, and it can add virtual objects to the real map to realize real-time interaction between the mobile robot and the virtual objects. Then, the original path planning algorithm, vector field histogram* (VFH*), is modified in its threshold, candidate direction selection, and cost function, to make it more suitable for scenes with virtual objects, reduce the number of calculations required, and improve safety. Experimental results demonstrate that the proposed method can generate a motion path for the mobile robot according to the specific requirements of the operator and achieve good obstacle avoidance performance.
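A minimal VFH-style direction selection over a polar histogram, with virtual obstacles simply merged into the same histogram as real ones, might look like this. It is a sketch of the general idea only; `vfh_select` and its cost weights are hypothetical, not the paper's modified VFH*.

```python
import numpy as np

def vfh_select(real_obs, virtual_obs, target_deg, n_sectors=36,
               threshold=1.0, mu_target=1.0, mu_turn=0.2):
    """Pick a candidate heading from a polar obstacle histogram.

    real_obs / virtual_obs : lists of (angle_deg, weight) obstacle hits;
    virtual objects added through the MR interface are merged into the
    same histogram as real ones, so both block sectors identically.
    Returns the free-sector direction (deg) minimising a cost mixing
    deviation from the target heading and turning effort.
    """
    width = 360 // n_sectors
    hist = np.zeros(n_sectors)
    for ang, w in list(real_obs) + list(virtual_obs):
        hist[int(ang % 360) // width] += w
    free = np.where(hist < threshold)[0]          # unblocked sectors
    dirs = free * width + width / 2.0             # sector centres (deg)

    def ang_diff(a, b):
        return abs((a - b + 180.0) % 360.0 - 180.0)

    cost = [mu_target * ang_diff(d, target_deg) + mu_turn * ang_diff(d, 0.0)
            for d in dirs]
    return dirs[int(np.argmin(cost))]

# A real obstacle blocks the target heading; the robot deviates slightly.
heading = vfh_select([(90, 5.0)], [], target_deg=90)
# Adding a virtual obstacle next to it forces a larger deviation.
with_virtual = vfh_select([(90, 5.0)], [(80, 5.0)], target_deg=90)
```

Treating virtual objects as first-class histogram entries is what lets the planner avoid obstacles that exist only in the HoloLens scene.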



Lex Russica ◽  
2020 ◽  
pp. 86-96
Author(s):  
E. E. Bogdanova

In the paper, the author notes that the development of modern technologies, including artificial intelligence, unmanned transport, robotics, and portable and embedded digital devices, already has a great impact on the daily life of a person and can fundamentally change the existing social order in the near future. Virtual reality as a technology was born at the intersection of research in three-dimensional computer graphics and human-machine interaction. The spectrum of mixed reality includes the real world itself, the one before our eyes; the world of augmented reality, an improved reality that results from introducing sensory data into the field of perception in order to supplement information about the surrounding world and improve its perception; and the world of virtual reality, which is created using technologies that provide full immersion in the environment. Some studies also include augmented virtuality in the spectrum, which implies supplementing virtual reality with elements of the real world (combining the virtual and real worlds). The paper substantiates the conclusion that in the near future both the legislator and judicial practice will have to find a balance between the interests of the creators of virtual worlds and virtual artists in exclusive control over their virtual works, on the one hand, and the interest of society in using these virtual works and building on them, on the other. It is necessary to allow users to participate, interact, and create new forms of creative expression in the virtual environment. The author concludes that a broader interpretation of the fair use doctrine should be applied in this area, especially for those virtual worlds and virtual objects that imitate the real world and reality. However, it is necessary to distinguish between cases where the protection of such objects justifies licensing and those where it is advisable to encourage unrestricted use of the results for the further development of new technologies.



Author(s):  
Rimjhim Padam Singh ◽  
Poonam Sharma

Background subtraction is a prerequisite and often the very first step employed in several high-level, real-time computer vision applications. Several parametric and non-parametric change detection algorithms employing multiple feature spaces have been proposed to date, but none has proven robust against all the challenges that a complex real-time environment can pose. Among these challenges, illumination variations, shadows, dynamic backgrounds, camouflage, and bootstrapping artifacts are some of the best known. This paper presents a lightweight hybrid change detection algorithm that integrates a novel combination of the RGB color space and conditional YCbCr-based XCS-LBP texture descriptors (YXCS-LBP) into a modified pixel-based background model. The conditional use of lightweight YXCS-LBP texture features with a modified Visual Background Extractor (ViBe), aimed at reducing false positives, produces superior results without incurring much memory or computational cost. The random, time-subsampled update strategy employed with the proposed classification procedure ensures efficient suppression of shadows and bootstrapping artifacts along with complete retention of long-term static objects in the foreground masks. Comprehensive performance analysis on the publicly available Change Detection dataset (2014 CDnet) demonstrates the superiority of the proposed technique over different state-of-the-art methods across varied challenges.
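The ViBe-style pixel test with a random, time-subsampled update that the method builds on can be sketched as follows. This shows the colour test only; the paper's conditional YXCS-LBP texture stage is omitted, and `classify_and_update` is a hypothetical helper.

```python
import random

def classify_and_update(pixel, samples, radius=20, min_matches=2,
                        subsample=16, rng=random):
    """ViBe-style background test for one pixel.

    samples : list of stored background sample values for this pixel.
    A pixel is background if at least min_matches samples lie within
    radius of it. On a background match, one stored sample is replaced
    with probability 1/subsample (time-subsampled, conservative update,
    so long-term static foreground objects are retained in the mask).
    Returns True if classified as background.
    """
    matches = sum(abs(pixel - s) < radius for s in samples)
    is_bg = matches >= min_matches
    if is_bg and rng.randrange(subsample) == 0:
        # Random replacement rather than FIFO: sample lifetimes decay
        # exponentially, which smooths the model's temporal window.
        samples[rng.randrange(len(samples))] = pixel
    return is_bg

rng = random.Random(0)
samples = [100] * 20                      # background model for one pixel
bg = classify_and_update(102, samples, rng=rng)   # close to model
fg = classify_and_update(200, samples, rng=rng)   # far from model
```

Because the model is only updated on background matches, a removed or long-static foreground object never leaks into the background samples.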



Proceedings ◽  
2019 ◽  
Vol 31 (1) ◽  
pp. 83
Author(s):  
Lasala ◽  
Jara ◽  
Alamán

During the last decade, some technologies have achieved the maturity needed for the widespread emergence of virtual worlds, and many people are living an alternative life in them. One objective of this paper is to argue that the blending of real and virtual worlds has been happening for centuries and is in fact the mark of "civilization". The project presents a proposal to improve student motivation in the classroom through a new form of recreation of a mixed-reality environment. To this end, two applications have been created that work together between the real and virtual environments: "Virtual Craft" and "Virtual Touch". Virtual Craft relates to the real world and Virtual Touch to the virtual world. The applications are in constant communication with each other, since both students and teachers carry out actions that influence the real or virtual world. Gamification mechanics were used in the recreated environment to motivate the students to carry out the activities assigned by the teacher. For the evaluation of the proposal, a pilot experiment with Virtual Craft was carried out in a secondary education center in Valls (Spain).



Author(s):  
Stefano Di Tore ◽  
Paola Aiello ◽  
Pio Alfredo Di Tore ◽  
Maurizio Sibilio

To what point can people consider the Pong racket, or an on-screen avatar over which they exert direct motor control, as part of their body? When individuals move in a virtual environment, do the proprioceptors convey information about the location of which body? In which environment? How does the information from the two contaminate each other? How does the temperature felt in the real environment influence interaction in the virtual environment? This paper is not intended to answer these questions; rather, it is intended to raise fundamental questions of perception and phenomenology in a digital context in which bodies "are not born; they are made" (Haraway, 1991). The work should act as a positio quaestionis, with the aim of affirming the urgent need for a necessarily interdisciplinary reflection on the overall design of the body - perception - cognition - technology perimeter; it also identifies in Berthoz's simplexity and Ginzburg's evidential paradigms, and in Hansen's concept of mixed reality, the building blocks of a theoretical framework aimed at the solution of these questions.



2021 ◽  
pp. 1-18
Author(s):  
Maxim Igorevich Sorokin ◽  
Dmitry Dmitrievich Zhdanov ◽  
Ildar Vagizovich Valiev

The paper examines the causes of visual discomfort in mixed-reality systems and algorithmic solutions that eliminate one of the main causes of discomfort, namely the mismatch between the lighting conditions of objects in the real and virtual worlds. To eliminate this cause of discomfort, an algorithm is proposed that consists of constructing groups of shadow rays from points on the boundaries of shadows to points on the boundaries of objects. The subset of rays corresponding to the real lighting conditions forms caustics in the area of the real light source, which makes it possible to determine the source of illumination of virtual objects for their correct embedding into the mixed-reality system. Convolutional neural networks and computer vision algorithms were used to classify shadows in the image. Examples of reconstructing the coordinates of a light source from RGBD data are presented.
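In 2D, the idea of shadow rays converging at the light source can be illustrated with a least-squares ray intersection. This is a simplified sketch, not the paper's caustic-based 3D method, and it assumes the shadow-boundary/object-boundary correspondences are already given.

```python
import numpy as np

def light_from_shadow_rays(shadow_pts, object_pts):
    """Estimate a 2D light position from matched boundary points.

    Each ray runs from a shadow-boundary point through the matching
    object-boundary point toward the light. Returns the point minimising
    the summed squared distance to all rays.
    """
    s = np.asarray(shadow_pts, float)
    o = np.asarray(object_pts, float)
    d = o - s
    d /= np.linalg.norm(d, axis=1, keepdims=True)   # unit ray directions
    # (I - d d^T) projects onto each ray's normal; summing these
    # projectors gives a small linear system for the closest point.
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, u in zip(s, d):
        P = np.eye(2) - np.outer(u, u)
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# Toy scene: a light at (0, 10) casts two object points onto the ground.
shadow = [(2.0, 0.0), (-4.0, -4.0)]
obj = [(1.0, 5.0), (-2.0, 3.0)]
light = light_from_shadow_rays(shadow, obj)
```

With noise-free correspondences the rays intersect exactly at the light; with real shadow classifications, the least-squares point gives a robust estimate from many rays.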



2019 ◽  
Vol 2019 (1) ◽  
pp. 237-242
Author(s):  
Siyuan Chen ◽  
Minchen Wei

Color appearance models have been extensively studied for characterizing and predicting the perceived color appearance of physical color stimuli under different viewing conditions. These stimuli are either surface colors reflecting illumination or self-luminous emitting radiations. With the rapid development of augmented reality (AR) and mixed reality (MR), it is critically important to understand how the color appearance of the objects that are produced by AR and MR are perceived, especially when these objects are overlaid on the real world. In this study, nine lighting conditions, with different correlated color temperature (CCT) levels and light levels, were created in a real-world environment. Under each lighting condition, human observers adjusted the color appearance of a virtual stimulus, which was overlaid on a real-world luminous environment, until it appeared the whitest. It was found that the CCT and light level of the real-world environment significantly affected the color appearance of the white stimulus, especially when the light level was high. Moreover, a lower degree of chromatic adaptation was found for viewing the virtual stimulus that was overlaid on the real world.
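The degree of chromatic adaptation the study refers to is commonly modelled in CIECAM02 by the factor D, which grows with adapting luminance. The standard formula is shown below for context; the study's finding is that the effective degree for virtual stimuli overlaid on the real world is lower than such surround-based predictions.

```python
import math

def degree_of_adaptation(la, f=1.0):
    """CIECAM02 degree-of-adaptation factor D, clamped to [0, 1].

    la : adapting luminance in cd/m^2
    f  : surround factor (1.0 average, 0.9 dim, 0.8 dark)
    D approaches f at high adapting luminance (near-complete
    adaptation) and falls off at low luminance.
    """
    d = f * (1.0 - (1.0 / 3.6) * math.exp((-la - 42.0) / 92.0))
    return min(1.0, max(0.0, d))

d_dim = degree_of_adaptation(10.0)      # dim adapting field
d_bright = degree_of_adaptation(1000.0) # bright adapting field
```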


