3D VIEW: Designing of a Deception from Distorted View-dependent Images and Explaining interaction with virtual World

2018 ◽  
Vol 1 (1) ◽  
pp. 11
Author(s):  
Syed Muhammad Ali ◽  
Zeeshan Mahmood ◽  
Dr. Tahir Qadri

This paper presents an intuitive and interactive computer-simulated augmented reality interface that gives the illusion of a 3D immersive environment. A projector displays a rendered virtual scene on a flat 2D surface (floor or table) based on the user's viewpoint to create a head-coupled perspective. The projected image is view-dependent: it changes and deforms relative to the user's position in space. The perspective projection is distorted and anamorphic, so that the deformations in the image give the illusion of a virtual three-dimensional holographic scene in which objects pop out of or float above the projection plane like real 3D objects. The user can also manipulate and interact with 3D objects in the virtual environment by controlling the position and orientation of 3D models, interacting with a GUI incorporated into the virtual scene, and viewing, moving, manipulating and observing the details of objects from any angle naturally using their hands. Head and hand tracking is achieved with a low-cost 3D depth sensor, the Kinect. We describe the implementation of the system in OpenGL and the Unity3D game engine. Stereoscopic 3D, along with other enhancements, is also introduced to further improve the 3D perception. The approach does not require head-mounted displays or expensive 3D hologram projectors, as it is based on a perspective projection technique. Our experiments show the potential of the system to provide users with a powerful, realistic illusion of 3D.
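The head-coupled effect rests on an off-axis (anamorphic) perspective projection that is recomputed every frame from the tracked head position. The paper does not give its projection code; the following is a minimal NumPy sketch of the standard generalized off-axis frustum construction, with the screen corners, eye position and clip planes as assumed placeholders rather than the authors' values.

```python
# Minimal sketch of an off-axis (head-coupled) projection for a flat projection
# surface defined by three corner points and a tracked eye position. This is the
# standard generalized perspective projection, not the authors' exact code.
import numpy as np

def off_axis_projection(pa, pb, pc, eye, near=0.1, far=100.0):
    """pa, pb, pc: lower-left, lower-right, upper-left screen corners (metres).
    eye: tracked head position (e.g. from Kinect skeleton data)."""
    pa, pb, pc, eye = map(np.asarray, (pa, pb, pc, eye))
    vr = pb - pa; vr /= np.linalg.norm(vr)           # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)           # screen up axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # screen normal

    va, vb, vc = pa - eye, pb - eye, pc - eye        # corners relative to eye
    d = -np.dot(va, vn)                              # eye-to-screen distance
    l = np.dot(vr, va) * near / d                    # asymmetric frustum extents
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d

    # Standard glFrustum-style matrix for the asymmetric frustum.
    P = np.array([
        [2*near/(r-l), 0.0,          (r+l)/(r-l),            0.0],
        [0.0,          2*near/(t-b), (t+b)/(t-b),            0.0],
        [0.0,          0.0,          -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0,          0.0,          -1.0,                   0.0],
    ])
    # Rotate the world into screen space and move the eye to the origin.
    M = np.eye(4); M[:3, :3] = np.stack([vr, vu, vn])
    T = np.eye(4); T[:3, 3] = -eye
    return P @ M @ T

# Example: a 2 m x 1.5 m table-top surface with the viewer 1 m above it.
proj = off_axis_projection(pa=[-1, 0, 0], pb=[1, 0, 0], pc=[-1, 1.5, 0],
                           eye=[0.2, 0.6, 1.0])
```

Re-running this every frame with the tracked head position is what makes the projected image deform with the viewer and sustains the illusion.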


Author(s):  
Quentin Kevin Gautier ◽  
Thomas G. Garrison ◽  
Ferrill Rushton ◽  
Nicholas Bouck ◽  
Eric Lo ◽  
...  

Purpose: Digital documentation techniques of tunneling excavations at archaeological sites are becoming more common. These methods, such as photogrammetry and LiDAR (Light Detection and Ranging), are able to create precise three-dimensional models of excavations to complement traditional forms of documentation with millimeter to centimeter accuracy. However, these techniques require either expensive pieces of equipment or a long processing time that can be prohibitive during short field seasons in remote areas. This article aims to determine the effectiveness of various low-cost sensors and real-time algorithms to create digital scans of archaeological excavations.
Design/methodology/approach: The authors used a class of algorithms called SLAM (Simultaneous Localization and Mapping) along with depth-sensing cameras. While these algorithms have largely improved over recent years, the accuracy of the results still depends on the scanning conditions. The authors developed a prototype of a scanning device, collected 3D data at a Maya archaeological site and refined the instrument in a system of natural caves. This article presents an analysis of the resulting 3D models to determine the effectiveness of the various sensors and algorithms employed.
Findings: While not as accurate as commercial LiDAR systems, the prototype presented, employing a time-of-flight depth sensor and a feature-based SLAM algorithm, is a rapid and effective way to document archaeological contexts at a fraction of the cost.
Practical implications: The proposed system is easy to deploy, provides real-time results and would be particularly useful in salvage operations as well as in high-risk areas where cultural heritage is threatened.
Originality/value: This article compares many different low-cost scanning solutions for underground excavations and presents a prototype that can be easily replicated for documentation purposes.
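At the heart of such a depth-camera SLAM pipeline, the front end repeatedly aligns each incoming frame against the previous frame or the growing map. The sketch below is a generic point-to-point ICP iteration, not the authors' feature-based pipeline, and assumes the depth frames have already been converted to point clouds.

```python
# Minimal point-to-point ICP sketch: aligns a new depth frame (source) to a
# reference frame (target). Illustrative only; a real SLAM front end adds
# feature matching, robust weighting, keyframes and loop closure.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """source, target: (N, 3) point clouds. Returns a 4x4 rigid transform."""
    T = np.eye(4)
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        # 1. Match each source point to its nearest target point.
        _, idx = tree.query(src)
        matched = target[idx]
        # 2. Solve for the best rigid transform (Kabsch / SVD).
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        # 3. Apply the increment and accumulate it into the total transform.
        src = src @ R.T + t
        step = np.eye(4); step[:3, :3] = R; step[:3, 3] = t
        T = step @ T
    return T
```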


2021 ◽  
Vol 11 (12) ◽  
pp. 5321
Author(s):  
Marcin Barszcz ◽  
Jerzy Montusiewicz ◽  
Magdalena Paśnikowska-Łukaszuk ◽  
Anna Sałamacha

In the era of the global pandemic caused by the COVID-19 virus, 3D digitisation of selected museum artefacts is becoming an increasingly frequent practice, but the vast majority is performed by specialised teams. The paper presents the results of comparative studies of 3D digital models of the same museum artefacts from the Silk Road area generated by two completely different technologies: Structure from Motion (SfM), a method belonging to the so-called low-cost technologies, and Structured-light 3D Scanning (3D SLS). Moreover, procedural differences in data acquisition and in processing the data to generate three-dimensional models are presented. Models built using a point cloud were created from data collected in the Afrasiyab museum in Samarkand (Uzbekistan) during “The 1st Scientific Expedition of the Lublin University of Technology to Central Asia” in 2017. Photos for creating 3D models in SfM technology were taken during a virtual expedition carried out under the “3D Digital Silk Road” program in 2021. The obtained results show that the quality of the 3D models generated with SfM differs from that of the models obtained with 3D SLS, but they may still be placed in the galleries of a virtual museum. The models obtained from SfM carry no information about their size, which means that they are not fully suitable for archiving cultural heritage, unlike the models from SLS.
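Because SfM reconstructions carry no absolute scale, a known reference distance (measured on the physical artefact or taken from the SLS model) has to be applied before the model is of archival use. A hedged sketch of that rescaling step, with the marker indices and measured distance as placeholders:

```python
# Sketch: restore metric scale to an SfM point cloud using one reference
# distance measured on the real artefact. Names and values are illustrative.
import numpy as np

def rescale(points, marker_a, marker_b, true_distance_mm):
    """points: (N, 3) SfM cloud in arbitrary units.
    marker_a, marker_b: indices of two identifiable points in the cloud.
    true_distance_mm: distance between them measured on the artefact."""
    d_model = np.linalg.norm(points[marker_a] - points[marker_b])
    scale = true_distance_mm / d_model
    centroid = points.mean(axis=0)
    return (points - centroid) * scale + centroid   # uniform metric rescale

# e.g. two marked rim points on a vessel, 184.0 mm apart by calliper:
# cloud_mm = rescale(cloud, marker_a=120, marker_b=987, true_distance_mm=184.0)
```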


2004 ◽  
Vol 13 (6) ◽  
pp. 692-707 ◽  
Author(s):  
Sara Keren ◽  
Ilan Shimshoni ◽  
Ayellet Tal

This paper discusses the problem of inserting 3D models into a single image. The main focus of the paper is on the accurate recovery of the camera's parameters, so that 3D models can be inserted in the “correct” position and orientation. The paper addresses two issues. The first is an automatic extraction of the principal vanishing points from an image. The second is a theoretical and an experimental analysis of the errors. To test the concept, a system that “plants” virtual 3D objects in the image was implemented. It was tested on many indoor augmented-reality scenes. Our analysis and experiments have shown that errors in the placement of the objects are unnoticeable.
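Recovering camera parameters from vanishing points typically exploits the orthogonality of the corresponding scene directions. As a minimal illustration (not the paper's full method or its error analysis), the classical constraint yields the focal length from two orthogonal vanishing points when the principal point is assumed known, e.g. at the image centre:

```python
# Sketch: focal length from two orthogonal vanishing points, assuming square
# pixels, zero skew and a known principal point. Illustrative values only.
import numpy as np

def focal_from_vanishing_points(v1, v2, principal_point):
    """v1, v2: pixel coordinates of the vanishing points of two orthogonal
    scene directions. Returns the focal length in pixels."""
    v1, v2, p = map(np.asarray, (v1, v2, principal_point))
    # Orthogonality of the two directions gives (v1 - p).(v2 - p) + f^2 = 0.
    s = -np.dot(v1 - p, v2 - p)
    if s <= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return np.sqrt(s)

# Example with made-up vanishing points in a 1920x1080 image:
# f = focal_from_vanishing_points((2450.0, 560.0), (-380.0, 505.0), (960.0, 540.0))
```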


Author(s):  
Jinmiao Huang ◽  
Rahul Rai

We introduce an intuitive gesture-based interaction technique for creating and manipulating simple three-dimensional (3D) shapes. Specifically, the developed interface utilizes a low-cost depth camera to capture the user's hand gestures as input, maps different gestures to system commands, and generates 3D models from midair 3D sketches (as opposed to traditional two-dimensional (2D) sketches). Our primary contribution is the development of an intuitive gesture-based interface that enables novice users to rapidly construct conceptual 3D models. Our development extends current work by proposing both design and technical solutions to the challenges of a gestural modeling interface for conceptual 3D shapes. The preliminary user study results suggest that the developed framework is intuitive to use and able to create a variety of 3D conceptual models.
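The core of such an interface is a mapping from recognised gestures to modeling commands. The sketch below shows one way that dispatch could look; the gesture names, payload fields and commands are assumptions for illustration, not the paper's actual vocabulary.

```python
# Sketch of gesture-to-command dispatch for a conceptual 3D modeling session.
# Gesture names and payload fields are illustrative placeholders.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModelingSession:
    shapes: list = field(default_factory=list)
    selected: Optional[int] = None

    def extrude_sketch(self, stroke):
        # A midair 3D stroke becomes a new extruded shape.
        self.shapes.append({"type": "extrusion", "profile": stroke})

    def select(self, index):
        self.selected = index

    def translate(self, delta):
        if self.selected is not None:
            shape = self.shapes[self.selected]
            off = shape.setdefault("offset", [0.0, 0.0, 0.0])
            shape["offset"] = [o + d for o, d in zip(off, delta)]

GESTURE_COMMANDS = {
    "pinch_drag": lambda s, data: s.extrude_sketch(data["stroke"]),
    "point":      lambda s, data: s.select(data["hit_index"]),
    "grab_move":  lambda s, data: s.translate(data["delta"]),
}

def dispatch(session, gesture, data):
    handler = GESTURE_COMMANDS.get(gesture)
    if handler is not None:
        handler(session, data)

# Example: sketch a profile, select it, then move it.
session = ModelingSession()
dispatch(session, "pinch_drag", {"stroke": [(0, 0, 0), (1, 0, 0), (1, 1, 0)]})
dispatch(session, "point", {"hit_index": 0})
dispatch(session, "grab_move", {"delta": (0.1, 0.0, 0.2)})
```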


2014 ◽  
Vol 926-930 ◽  
pp. 1517-1521
Author(s):  
Xiang Jin Wang ◽  
Guo Dong Li ◽  
Zhi Lu Zhang ◽  
Zhe Li

This paper takes the light geodetic instrument as its research object, proposes a design for a semi-physical simulation training system based on a virtual scene, and realizes three-dimensional modeling, real-time scene rendering and real-time data-driven display through Virtools and Visual C++. An ARM7 processor and a general-purpose single-chip microcomputer are adopted to simulate the functions of the equipment. The simulation training system is characterized by low cost, low power consumption and a high degree of simulation fidelity.
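In a setup like this, the real-time data-driven display amounts to decoding readings sent by the microcontroller and pushing them into the virtual scene every frame. A loose sketch under an assumed packet layout and field names (not taken from the paper):

```python
# Loose sketch of a real-time data-driven display update: fixed-format packets
# from a microcontroller are decoded and applied to the virtual instrument.
# The packet layout (horizontal angle, vertical angle, level) is an assumption.
import struct

PACKET_FMT = "<fff"                      # little-endian: h_angle, v_angle, level
PACKET_SIZE = struct.calcsize(PACKET_FMT)

def update_scene(scene_state, packet: bytes):
    h_angle, v_angle, level = struct.unpack(PACKET_FMT, packet)
    scene_state["telescope_azimuth"] = h_angle
    scene_state["telescope_elevation"] = v_angle
    scene_state["bubble_offset"] = level
    return scene_state

# The training system would read PACKET_SIZE bytes from the serial link each
# frame and call update_scene before rendering the virtual scene.
scene = update_scene({}, struct.pack(PACKET_FMT, 123.4, -2.1, 0.05))
```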


Author(s):  
Ismail Elkhrachy

This paper analyses and evaluates the precision, accuracy and capability of low-cost terrestrial photogrammetry using multiple digital cameras to construct a 3D model of an object. To this end, a building façade was imaged with two inexpensive digital cameras, a Canon and a Pentax. Bundle adjustment and image processing were performed using Agisoft PhotoScan software. Several factors are considered in this study, including different cameras and control points. Several photogrammetric point clouds were generated, and their accuracy was compared against natural control points on the same building collected with a laser total station. Cloud-to-cloud distances were computed between the different 3D models to investigate the effect of the different variables. The practical field experiment showed that the spatial positioning accuracy achieved by the investigated technique was between 2 and 4 cm in the 3D coordinates of the façade. This accuracy is promising, since the captured images were processed without any control points.
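Cloud-to-cloud comparison is typically done by taking, for each point of the evaluated cloud, the distance to its nearest neighbour in the reference data and summarising those distances. A minimal sketch of that comparison, not the exact workflow used in the paper:

```python
# Sketch of a cloud-to-cloud comparison: nearest-neighbour distances from the
# photogrammetric cloud to reference (total-station) points, summarised.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(evaluated, reference):
    """evaluated: (N, 3) photogrammetric cloud; reference: (M, 3) points, in metres."""
    dists, _ = cKDTree(reference).query(evaluated)
    return {
        "mean_m": float(dists.mean()),
        "rmse_m": float(np.sqrt((dists ** 2).mean())),
        "p95_m":  float(np.percentile(dists, 95)),
    }
```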


2019 ◽  
Vol 16 (161) ◽  
pp. 20190674 ◽  
Author(s):  
Nuria Melisa Morales-García ◽  
Thomas D. Burgess ◽  
Jennifer J. Hill ◽  
Pamela G. Gill ◽  
Emily J. Rayfield

Finite-element (FE) analysis has been used in palaeobiology to assess the mechanical performance of the jaw. It uses two types of models: tomography-based three-dimensional (3D) models (very accurate, but not always accessible) and two-dimensional (2D) models (quick and easy to build and good for broad-scale studies, but unable to provide absolute stress and strain values). Here, we introduce extruded FE models, which provide fairly accurate mechanical performance results while remaining low-cost, quick and easy to build. These are simplified 3D models built from lateral outlines of a relatively flat jaw and extruded to its average width. There are two types: extruded (flat mediolaterally) and enhanced extruded (which accounts for width differences in the ascending ramus). Here, we compare mechanical performance values resulting from four types of FE models (i.e. tomography-based 3D, extruded, enhanced extruded and 2D) in Morganucodon and Kuehneotherium. In terms of absolute values, both types of extruded model perform well in comparison to the tomography-based 3D models, but enhanced extruded models perform better. In terms of overall patterns, all models produce similar results. Extruded FE models constitute a viable alternative to the use of tomography-based 3D models, particularly for relatively flat bones.
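Geometrically, an extruded model is just the lateral (2D) outline of the jaw swept mediolaterally to the jaw's average width. A hedged sketch of that construction, with the outline coordinates as placeholders (the enhanced variant would vary the width over the ascending ramus instead of keeping it constant):

```python
# Sketch: build an "extruded" mesh by sweeping a 2D lateral outline of a jaw
# to its average mediolateral width. Outline coordinates are placeholders; a
# real model would be meshed and loaded into an FE package afterwards.
import numpy as np

def extrude_outline(outline_xy, width):
    """outline_xy: (N, 2) closed outline in the lateral plane (x, y).
    width: average mediolateral width of the jaw.
    Returns (2N, 3) vertices and the quad side faces of the prism."""
    outline_xy = np.asarray(outline_xy, dtype=float)
    n = len(outline_xy)
    front = np.column_stack([outline_xy, np.full(n, +width / 2)])
    back  = np.column_stack([outline_xy, np.full(n, -width / 2)])
    vertices = np.vstack([front, back])
    # Side walls: one quad between consecutive outline points on both faces.
    faces = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return vertices, faces
```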


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 3952 ◽  
Author(s):  
* ◽  
*

Three-dimensional (3D) models are widely used in clinical applications, geosciences, cultural heritage preservation, and engineering; this, together with new emerging needs such as building information modeling (BIM), drives the development of new data capture techniques and devices that are low cost, have a reduced learning curve, and allow non-specialized users to employ them. This paper presents a simple, self-assembly device for capturing 3D point cloud data with an estimated base price under €2500; furthermore, a workflow for the calculations is described that includes a Visual SLAM-photogrammetric threaded algorithm implemented in C++. Another purpose of this work is to validate the proposed system in BIM working environments. To achieve this, several 3D point clouds were obtained in outdoor tests and the coordinates of 40 targets were measured with the device, with data capture distances ranging between 5 and 20 m. These were then compared to the coordinates of the same targets measured by a total station. The Euclidean average distance errors and root mean square errors (RMSEs) ranged between 12–46 mm and 8–33 mm respectively, depending on the data capture distance (5–20 m). Furthermore, the proposed system was compared with a commonly used photogrammetric methodology based on Agisoft Metashape software. The results obtained demonstrate that the proposed system satisfies (in each case) the tolerances of ‘level 1’ (51 mm) and ‘level 2’ (13 mm) for point cloud acquisition in urban design and historic documentation, according to the BIM Guide for 3D Imaging (U.S. General Services Administration).
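The accuracy figures quoted above come down to Euclidean distance errors and RMSEs of checkpoint coordinates against the total-station reference, tested against the BIM Guide tolerances. A small sketch of that comparison with placeholder data:

```python
# Sketch: compare device coordinates of checkpoints with total-station
# references and test them against the tolerances quoted in the abstract
# (51 mm for 'level 1', 13 mm for 'level 2'). Data are placeholders.
import numpy as np

def accuracy_report(measured_mm, reference_mm):
    """measured_mm, reference_mm: (N, 3) coordinates of the same targets."""
    errors = np.linalg.norm(measured_mm - reference_mm, axis=1)
    rmse = float(np.sqrt((errors ** 2).mean()))
    return {
        "mean_error_mm": float(errors.mean()),
        "rmse_mm": rmse,
        "meets_level_1_51mm": rmse <= 51.0,
        "meets_level_2_13mm": rmse <= 13.0,
    }
```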


2015 ◽  
Vol 100 ◽  
pp. 55-62 ◽  
Author(s):  
Akihiro Nakamura ◽  
Hiroyuki Funaya ◽  
Naohiro Uezono ◽  
Kinichi Nakashima ◽  
Yasumasa Ishida ◽  
...  
