Virtual Object Position Manipulation Using ARToolKit

2015 ◽  
Vol 1 (2) ◽  
pp. 306
Author(s):  
Hoger Mahmud Hussen

This paper presents the outcome of a project that aims to modify and improve one of the most widely used Augmented Reality (AR) tools. AR is a fast-growing area of virtual reality research: an emerging technology in which the user's view of the real world is augmented with additional information from a computer model. ARToolKit is one of the most widely used toolkits for AR applications. The toolkit tracks optical markers and overlays virtual objects on them. In the current version of the toolkit the overlaid object is stationary, or loops regardless of the optical target's position; the overlaid object cannot be animated or changed based on the movement of the optical target. To improve the toolkit, a design solution was devised and implemented so that users can manipulate the position of the overlaid virtual object through movements of the optical target. The solution centers on a mathematically defined link between the position of the optical target and the overlaid virtual object. Test cases were developed to validate the solution, and the results show that the design is effective and that the principal idea can be used to develop applications in sectors such as education and health.
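The mathematical link between marker and object position could be sketched roughly as follows (a minimal illustration only; the function name and the `gain` parameter are assumptions, not the paper's actual implementation):

```python
import numpy as np

def marker_to_object_offset(prev_marker_pos, curr_marker_pos, obj_pos, gain=1.0):
    """Move the virtual object by the marker's frame-to-frame displacement.

    All positions are 3-vectors in the camera coordinate system, as
    ARToolKit reports in the translation column of the marker transform.
    `gain` (assumed here) scales how strongly marker motion drives the object.
    """
    delta = np.asarray(curr_marker_pos, float) - np.asarray(prev_marker_pos, float)
    return np.asarray(obj_pos, float) + gain * delta

# Example: the marker moves 2 units along x, so the object follows.
new_pos = marker_to_object_offset([0, 0, 50], [2, 0, 50], [10, 10, 0])
```

Repeated each frame, this couples the overlaid object's position to the optical target's movement instead of leaving it stationary.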

Author(s):  
Kevin Lesniak ◽  
Conrad S. Tucker

The method presented in this work reduces the frequency of virtual objects incorrectly occluding real-world objects in Augmented Reality (AR) applications. Current AR rendering methods cannot properly represent occlusion between real and virtual objects because the objects are not represented in a common coordinate system. These occlusion errors can give users an incorrect perception of the environment around them: a real-world object may go unnoticed because a virtual object incorrectly occludes it, and incorrect occlusions distort the user's perception of depth and distance. The authors present a method that brings both real-world and virtual objects into a common coordinate system so that distant virtual objects do not obscure nearby real-world objects in an AR application. The method captures and processes RGB-D data in real time, allowing it to be used in a variety of environments and scenarios. A case study shows the effectiveness and usability of the proposed method in correctly occluding real-world and virtual objects and providing a more realistic representation of the combined real and virtual environments. The results of the case study show that the proposed method can detect at least 20 real-world objects with the potential to be incorrectly occluded while processing and fixing occlusion errors at least 5 times per second.
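The core idea of resolving occlusion in a common coordinate system can be sketched as a per-pixel depth test between the sensor's depth map and the renderer's z-buffer (a simplified illustration under assumed inputs, not the paper's pipeline):

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel depth test in a shared camera coordinate system.

    real_depth comes from the RGB-D sensor; virt_depth from the
    renderer's z-buffer (np.inf where no virtual geometry is drawn).
    A virtual pixel is shown only where it is nearer than the real
    surface, so distant virtual objects no longer hide nearby real ones.
    """
    virtual_wins = np.asarray(virt_depth) < np.asarray(real_depth)
    return np.where(virtual_wins[..., None], virt_rgb, real_rgb)
```

With both depth maps expressed in the same units and frame, this yields correct front-to-back ordering without any per-object bookkeeping.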


2019 ◽  
Vol 9 (14) ◽  
pp. 2933 ◽  
Author(s):  
Ju Young Oh ◽  
Ji Hyung Park ◽  
Jung-Min Park

This paper proposes an interaction method to conveniently manipulate a virtual object by combining touch interaction and head movements for a head-mounted display (HMD) that provides mobile augmented reality (AR). A user can conveniently manipulate a virtual object with touch interaction, recognized from an inertial measurement unit (IMU) attached to the index finger's nail, and head movements, tracked by the IMU embedded in the HMD. We design two interactions that combine touch and head movements to manipulate a virtual object on a mobile HMD. Each designed interaction method manipulates virtual objects by controlling ray casting and adjusting widgets. To evaluate the usability of the designed interaction methods, a user evaluation is performed in comparison with hand interaction using the HoloLens. The designed interaction methods received positive feedback, indicating that virtual objects can be manipulated easily in a mobile AR environment.
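The head-driven ray casting used for selection could be sketched as a simple ray-sphere test against a virtual object's bounds (an assumed simplification; the function and its parameters are illustrative, not the paper's code):

```python
import numpy as np

def head_ray_hits(head_pos, head_dir, obj_center, obj_radius):
    """Ray-sphere test: does the ray from the HMD's IMU-tracked head
    pose point at the virtual object?  head_dir must be normalized."""
    d = np.asarray(head_dir, float)
    oc = np.asarray(obj_center, float) - np.asarray(head_pos, float)
    t = oc.dot(d)                       # closest approach along the ray
    if t < 0:
        return False                    # object is behind the user
    closest = np.asarray(head_pos, float) + t * d
    return float(np.linalg.norm(np.asarray(obj_center, float) - closest)) <= obj_radius
```

A touch event from the finger-worn IMU would then confirm the selection that the head ray currently targets.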


2019 ◽  
Vol 9 (9) ◽  
pp. 1797
Author(s):  
Chen ◽  
Lin

Augmented reality (AR) is an emerging technology that allows users to interact with simulated environments, including those emulating scenes in the real world. Most current AR technologies involve the placement of virtual objects within these scenes. However, difficulties in modeling real-world objects greatly limit the scope of the simulation, and thus the depth of the user experience. In this study, we developed a process by which to realize virtual environments that are based entirely on scenes in the real world. In modeling the real world, the proposed scheme divides scenes into discrete objects, which are then replaced with virtual objects. This enables users to interact in and with virtual environments without limitations. An RGB-D camera is used in conjunction with simultaneous localization and mapping (SLAM) to obtain the movement trajectory of the user and derive information related to the real environment. In modeling the environment, graph-based segmentation is used to segment point clouds and perform object segmentation, enabling the subsequent replacement of objects with equivalent virtual entities. Superquadrics are used to derive shape parameters and location information from the segmentation results, ensuring that the scale of the virtual objects matches that of the original objects in the real world. Only after the objects have been replaced with their virtual counterparts is the real environment converted into a virtual scene. Experiments involving the emulation of real-world locations demonstrated the feasibility of the proposed rendering scheme. A rock-climbing application scenario is finally presented to illustrate the potential use of the proposed system in AR applications.
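The step of deriving scale and position for a replacement virtual object from a segmented point cluster could be sketched as below. This is a deliberately simplified axis-aligned bounding-box stand-in for the paper's superquadric fit, with assumed names:

```python
import numpy as np

def fit_box_params(points):
    """Estimate position and scale for a replacement virtual object
    from one segmented point cluster.

    Returns (center, extents): the cluster centroid of the bounding
    box and its per-axis size, so the virtual model can be scaled to
    match the real object's footprint.
    """
    pts = np.asarray(points, float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    return (lo + hi) / 2.0, hi - lo
```

A superquadric fit would additionally recover shape exponents (roundness/squareness), but the position and scale outputs play the same role in matching virtual objects to real ones.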


2019 ◽  
Vol 9 (13) ◽  
pp. 2597 ◽  
Author(s):  
Koukeng Yang ◽  
Thomas Brown ◽  
Kelvin Sung

Recently released, depth-sensing-capable, and moderately priced handheld devices support the implementation of augmented reality (AR) applications without the requirement of tracking visually distinct markers. This relaxed constraint allows for applications with significantly increased augmentation space dimension, virtual object size, and user movement freedom. Being relatively new, there is currently a lack of study on issues concerning direct virtual object manipulation for AR applications on these devices. This paper presents the results from a survey of the existing object manipulation methods designed for traditional handheld devices and identifies potentially viable ones for newer, depth-sensing-capable devices. The paper then describes the following: a test suite that implements the identified methods, test cases designed specifically for the characteristics offered by the new devices, the user testing process, and the corresponding results. Based on the study, this paper concludes that AR applications on newer, depth-sensing-capable handheld devices should manipulate small-scale virtual objects by mapping directly to device movements and large-scale virtual objects by supporting separate translation and rotation modes. Our work and results are the first step in better understanding the requirements to support direct virtual object manipulation for AR applications running on a new generation of depth-sensing-capable handheld devices.
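The paper's conclusion (direct device-motion mapping for small objects, separate translation and rotation modes for large ones) can be sketched as a mode-switched update function; the names and the yaw-only rotation are illustrative assumptions:

```python
import numpy as np

def apply_device_motion(obj_pos, obj_yaw, device_delta, device_yaw_delta,
                        mode="direct"):
    """Drive a virtual object from handheld-device motion.

    mode="direct"    : small objects follow the device's translation
                       and rotation at once (1:1 mapping).
    mode="translate" : large objects, translation only.
    mode="rotate"    : large objects, rotation only.
    """
    pos = np.asarray(obj_pos, float)
    if mode in ("direct", "translate"):
        pos = pos + np.asarray(device_delta, float)
    yaw = obj_yaw + (device_yaw_delta if mode in ("direct", "rotate") else 0.0)
    return pos, yaw
```

Depth sensing is what makes the "direct" mapping workable: the device pose is tracked in world coordinates without visual markers, so its motion can be replayed onto the object.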


2020 ◽  
Vol 8 (5) ◽  
pp. 4149-4155

Augmented reality (AR) is growing rapidly, and much attention has focused on interaction techniques between users and virtual objects, such as the user directly manipulating virtual objects with his or her bare hands. The authors therefore believe that more accurate overlay techniques will be required for seamless interaction. In AR, because the 3-dimensional (3D) model is superimposed on the image of the real space after capture, it is always displayed in front of the hand, which produces an unnatural scene in some cases (the occlusion problem). In this study, the system resolves the depth ordering between the user's hand and the virtual object by acquiring depth information of the user's fingers with a depth sensor. The system defines the color range of the user's hand by performing principal component analysis (PCA) on the color information near the finger positions obtained from the depth sensor and setting a threshold, then extracts the hand region using this color-range definition. The fingers are distinguished using the Canny edge detector. In this way, the system realizes hidden-surface removal along the area of the user's hand. An evaluation experiment confirmed that the hidden-surface removal in this study makes it possible to distinguish finger boundaries and to clarify and process finger contours.
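The PCA-plus-threshold color model described above could be sketched as follows (a minimal illustration with assumed names and an assumed `n_std` threshold; edge detection and depth handling are omitted):

```python
import numpy as np

def hand_color_mask(image, samples, n_std=2.0):
    """Classify pixels as hand by color, in the spirit of the method above.

    `samples` are RGB pixels taken near the depth-sensed fingertips.
    PCA of the samples gives the principal color axes; a pixel is
    classified as hand when its projection lies within n_std standard
    deviations of the sample mean on every axis.
    """
    s = np.asarray(samples, float)
    mean = s.mean(axis=0)
    cov = np.cov(s, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)          # principal components
    flat = np.asarray(image, float).reshape(-1, 3) - mean
    proj = flat @ vecs                        # coordinates along the PCs
    limits = n_std * np.sqrt(np.maximum(vals, 1e-12))
    mask = np.all(np.abs(proj) <= limits, axis=1)
    return mask.reshape(np.asarray(image).shape[:2])
```

The resulting mask marks the hand region; hidden-surface removal would then suppress virtual-object pixels wherever the mask is set.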


Author(s):  
Arpita M Hegde

Augmented Reality supplements, or augments, the real environment with computer-generated virtual objects or images. Augmented Reality adds things to the existing world: it is an enhancement of the real world in which we mix the real world with virtual objects. In this paper we implement a methodology that builds a preview of a room's interior design, containing virtual objects alongside the real environment. Using this application, a user can place selected objects such as furniture, lamps, and vases in their personal space. This reduces the challenge of purchasing and then adjusting unsuitable objects for his or her room, as the user gets a preview before purchasing the actual item. The application is well suited to this busy, digitalizing world.


Author(s):  
Oleksandr Bezpalko

Unlike in a purely virtual world, it is much more difficult for the user to believe in the reality of augmented-reality objects. Without proper lighting or shadows, an object may appear to float in the air, detached from the real objects around it. One obvious problem in augmented reality is that a virtual object that lies behind a real object still appears in front of it. An approach is proposed that allows real and virtual objects to interact: both can be moved and rotated in the scene while overlaps are preserved, and a virtual object can be placed in front of or behind a real object relative to the camera, which determines whether or not they overlap. The proposed algorithm consists of five stages, and the system architecture is described. The evaluation is based on five defined criteria. Results and directions for future research are presented.


ORL ◽  
2021 ◽  
pp. 1-10
Author(s):  
Claudia Scherl ◽  
Johanna Stratemeier ◽  
Nicole Rotter ◽  
Jürgen Hesser ◽  
Stefan O. Schönberg ◽  
...  

<b><i>Introduction:</i></b> Augmented reality can improve planning and execution of surgical procedures. Head-mounted devices such as the HoloLens® (Microsoft, Redmond, WA, USA) are particularly suitable to achieve these aims because they are controlled by hand gestures and enable contactless handling in a sterile environment. <b><i>Objectives:</i></b> So far, these systems have not yet found their way into the operating room for surgery of the parotid gland. This study explored the feasibility and accuracy of augmented reality-assisted parotid surgery. <b><i>Methods:</i></b> 2D MRI holographic images were created, and 3D holograms were reconstructed from MRI DICOM files and made visible via the HoloLens. 2D MRI slices were scrolled through, 3D images were rotated, and 3D structures were shown and hidden using only hand gestures. The 3D model and the patient were aligned manually. <b><i>Results:</i></b> The use of augmented reality with the HoloLens in parotid surgery was feasible. Gestures were recognized correctly. Mean accuracy of superimposition of the holographic model and the patient's anatomy was 1.3 cm. Highly significant differences were seen in the position error of registration between central and peripheral structures (<i>p</i> = 0.0059), with the least deviation centrally (10.9 mm) and the highest deviation for the peripheral parts (19.6 mm). <b><i>Conclusion:</i></b> This pilot study offers a first proof of concept of the clinical feasibility of the HoloLens for parotid tumor surgery. Workflow is not affected, but additional information is provided. Surgical performance could become safer through the navigation-like application of reality-fused 3D holograms, and ergonomics improve without compromising sterility. Superimposition of the 3D holograms onto the surgical field was possible, but further work is necessary to improve the accuracy.

