A Study on the Practicalities of Using Volumetric Three-Dimensional Imaging Devices in Design

Author(s):  
Eliab Z. Opiyo ◽  
Imre Horváth

Standard two-dimensional (2D) computer displays are traditionally used in engineering design to display the three-dimensional (3D) images generated by computer-aided design and engineering (CAD/CAE) systems. These displays serve primarily as passive visualization tools. Interaction with the displayed images is only possible through archaic 2D peripheral input devices such as keyboards and mice, via Windows, Icons, Menus and Pointing (WIMP)-style graphical user interfaces. It is widely acknowledged in the design community that such visualization and interaction methods do not match the way designers think and work. Overall, the emerging volumetric 3D displays are seen as the obvious replacement for flat displays in the future. This paper explores the possibility of stepping beyond present 2D desktop computer monitors and investigates the practicalities of using the emerging volumetric 3D displays, coupled with non-encumbering natural interaction means such as gestures, hand motions and haptics, for designing in 3D space. We first explore the need for spatial visualization and interaction in design, and outline how volumetric 3D imaging devices could be used in design. We then review the existing volumetric 3D display configurations and investigate how they would assist designing in 3D space. Next, we present a study conducted to seek designers' views on what kind of volumetric 3D display configuration would most likely match their needs. We finally highlight the consequences and benefits of using volumetric 3D displays instead of the canonical flat-screen displays and 2D input devices in design.
It has been established that the designers who participated as subjects in the above-mentioned preliminary field study feel that dome-shaped and aerial volumetric 3D imaging devices, which allow for both visualization and interaction with virtual objects, are the imaging options that would not only best suit their visualization and interaction needs but would also satisfy most of the usability requirements. However, apart from closing the remaining basic technological gaps, the challenge also lies in combining the prevailing, proven CAD/CAE technologies and the emerging interaction technologies with the emerging volumetric 3D imaging technologies. Turning to volumetric 3D imaging devices also raises the challenge of putting in place a formal methodology for designing in 3D space with these devices.

2021 ◽  
Author(s):  
Marius Fechter ◽  
Benjamin Schleich ◽  
Sandro Wartzack

Abstract Virtual and augmented reality allow the use of natural user interfaces, such as realistic finger interaction, even for purposes that were previously dominated by the WIMP paradigm. This new form of interaction is particularly suitable for applications involving manipulation tasks in 3D space, such as CAD assembly modeling. The objective of this paper is to evaluate the suitability of natural interaction for CAD assembly modeling in virtual reality. An advantage of natural interaction over conventional operation by computer mouse would indicate development potential for the user interfaces of current CAD applications. Our approach is based on two main elements. First, a novel natural user interface for realistic finger interaction enables the user to interact with virtual objects much as with physical ones. Second, an algorithm automatically detects constraints between CAD components based solely on their geometry and spatial location. To prove the usability of the natural CAD assembly modeling approach in comparison with the assembly procedure in current WIMP-operated CAD software, we present a comparative user study. Results show that the VR method with natural finger interaction significantly outperforms the desktop-based CAD application in terms of efficiency and ease of use.
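The abstract describes constraint detection based solely on component geometry and spatial location, but gives no algorithmic detail. A minimal sketch of one such check, detecting a coaxial (shaft-in-hole) mating constraint between two cylindrical features, might look as follows; the function name, interface, and tolerances are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def coaxial_constraint(axis_a, point_a, axis_b, point_b,
                       ang_tol=1e-3, dist_tol=1e-3):
    """Return True if two cylinder axes are (nearly) coaxial.

    Each axis is given by a direction vector and a point on it.
    Tolerances are hypothetical; a real CAD kernel would derive
    them from model units and feature sizes.
    """
    a = np.asarray(axis_a, float)
    b = np.asarray(axis_b, float)
    a /= np.linalg.norm(a)
    b /= np.linalg.norm(b)
    # Directions must be parallel or anti-parallel.
    if 1.0 - abs(np.dot(a, b)) > ang_tol:
        return False
    # A point on axis B must lie on axis A: perpendicular distance ~ 0.
    d = np.asarray(point_b, float) - np.asarray(point_a, float)
    perp = d - np.dot(d, a) * a
    return bool(np.linalg.norm(perp) < dist_tol)

# A shaft along z and a hole offset only along z are coaxial.
print(coaxial_constraint([0, 0, 1], [0, 0, 0], [0, 0, 1], [0, 0, 5]))  # True
```

When the user's fingers bring two components close together, a system of this kind would run such geometric predicates over nearby feature pairs and snap the parts into the detected constraint.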


Author(s):  
Tushar H. Dani ◽  
Rajit Gadh

Abstract Despite advances in Computer-Aided Design (CAD) and the evolution of graphical user interfaces, rapid creation, editing and visualization of three-dimensional (3D) shapes remains a tedious task. Though the availability of Virtual Reality (VR)-based systems allows enhanced three-dimensional interaction and visualization, the use of VR for ab initio shape design, as opposed to ‘importing’ models from existing CAD systems, is a relatively new area of research. Of interest are computer-human interaction issues and the design and geometric tools for shape modeling in a Virtual Environment (VE). The focus of this paper is on the latter, i.e., on defining the geometric tools required for a VR-CAD system and on describing a framework that meets those requirements. This framework, the Virtual Design Software Framework (VDSF), consists of the interaction and design tools and an underlying geometric engine that provides the representation and algorithms required by these tools. The geometric engine, called the Virtual Modeler, uses a graph-based representation (Shape-Graph) for modeling the shapes created by the user. The Shape-Graph facilitates interactive editing by localizing the effect of editing operations and, in addition, provides constraint-based design and editing mechanisms that are useful in a 3D interactive virtual environment. The paper concludes with a description of the prototype system, called the Virtual Design Studio (VDS), that is currently being implemented.


2013 ◽  
Vol 4 (3) ◽  
pp. 118-123
Author(s):  
Lauren Gardner ◽  
Toby Gillgrass ◽  
Mark Devlin

Three-dimensional (3D) imaging is revolutionising patient assessment, diagnosis, management and treatment planning. Restorative dentistry is using optical scanning, such as computer-aided design/computer-aided manufacture (CAD/CAM) systems, to help with tooth preparation design and the construction of fixed prosthodontics. Other specialties in dentistry frequently employ cone beam computed tomography (CBCT) to facilitate 3D imaging. This article outlines how CBCT and 3D stereophotogrammetry have been used in the management of cleft lip and palate, with reference to the cleft team based at Glasgow Dental Hospital.


2001 ◽  
Vol 123 (11) ◽  
pp. 60-62
Author(s):  
Jean Thilmany

This article reviews how the rate of discovery from an experiment or a computational model is enhanced and accelerated by parallel computing techniques, visualization algorithms, and advanced visualization hardware. The National Institute of Standards and Technology (NIST) team in Gaithersburg, MD, believes that high-performance computing speeds discovery within the sciences. It defines advanced computing methods as those technologies whose capabilities go beyond current state-of-the-art desktop computing. Visualization tools, for example, now extend beyond the three-dimensional computer-aided design model viewable on a desktop computer to include virtual reality software and hardware. A cave automatic virtual environment (CAVE) features four walls onto which an image is projected in 3D, so that engineers feel they are standing in front of an object. Researchers at Iowa State and NIST's engineers both say the future of technology won't happen without advanced computing methods, including visualization, virtual reality, and parallel computing.


2019 ◽  
Vol 9 (6) ◽  
pp. 1182 ◽  
Author(s):  
Hongyue Gao ◽  
Fan Xu ◽  
Jicheng Liu ◽  
Zehang Dai ◽  
Wen Zhou ◽  
...  

In this paper, we propose a holographic three-dimensional (3D) head-mounted display based on 4K spatial light modulators (SLMs). This work aims to overcome the limitations of stereoscopic 3D virtual reality and augmented reality head-mounted displays. We build and compare two systems using 2K and 4K SLMs with pixel pitches of 8.1 μm and 3.74 μm, respectively. One is a monocular system for each eye, and the other is a binocular system using two tiled SLMs for two eyes. The viewing angle of the holographic head-mounted 3D display is enlarged from 3.8° to 16.4° by SLM tiling, which demonstrates the potential of true 3D displays for applications in virtual reality and augmented reality.
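The viewing angle of a holographic display is bounded by the SLM's maximum diffraction angle, which for pixel pitch p and wavelength λ is roughly θ = 2·arcsin(λ/(2p)), with side-by-side tiling multiplying the usable angle. The short sketch below reproduces figures close to the reported 3.8° and 16.4° under the assumption of green illumination at λ ≈ 532 nm (the abstract does not state the wavelength, and the simple tiling model is an approximation):

```python
import math

def viewing_angle_deg(wavelength_m, pixel_pitch_m, tiles=1):
    """Full diffraction-limited viewing angle of an SLM, in degrees.

    theta = 2 * arcsin(lambda / (2 * p)); tiling n SLMs side by side
    multiplies the usable angle by roughly n (a simplification).
    """
    half = math.asin(wavelength_m / (2 * pixel_pitch_m))
    return tiles * 2 * math.degrees(half)

lam = 532e-9  # assumed green laser wavelength; not given in the abstract
print(round(viewing_angle_deg(lam, 8.1e-6), 1))            # ~3.8  (2K SLM)
print(round(viewing_angle_deg(lam, 3.74e-6, tiles=2), 1))  # ~16.3 (two tiled 4K SLMs)
```

The small residual gap to the reported 16.4° plausibly comes from the exact wavelength and tiling geometry used in the actual system.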


Author(s):  
Mária Babicsné-Horváth ◽  
Károly Hercegfi

Eye-tracking-based usability testing and User Experience (UX) research are widespread in the development processes of various types of software; however, specific difficulties arise during usability tests of three-dimensional (3D) software. When analysing screen recordings with gaze plots, heatmaps of fixations, and statistics of Areas of Interest (AOIs), methodological problems occur when the participant wants to rotate, zoom, or move the 3D space. The data regarding the menu bar are largely interpretable; however, the data regarding the 3D environment are hardly so, or not at all. With this problem in mind, our research tested four software applications: the ViveLab and Jack Digital Human Modelling (DHM) tools and the ArchiCAD and CATIA Computer-Aided Design (CAD) tools. Our original goal was twofold. First, with these usability tests, we aimed to identify issues in the software. Second, we tested the utility of a new methodology that was included in the tests. This paper summarizes the results on the methodology, based on individual experiments with different software applications. One of the main ideas behind the adopted methodology is to tell the participants, during certain subtasks of the tests, not to move the 3D space while they perform the given tasks. During the experiments, we applied a Tobii eye-tracking device, and after task completion, each participant was interviewed. Based on these experiences, the methodology appears to be both useful and applicable, and its visualisation techniques for one or more participants are interpretable.


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Jianyu Hua ◽  
Erkai Hua ◽  
Fengbin Zhou ◽  
Jiacheng Shi ◽  
Chinhua Wang ◽  
...  

Abstract Glasses-free three-dimensional (3D) displays are one of the game-changing technologies that will redefine the display industry in portable electronic devices. However, because of the limited resolution of state-of-the-art display panels, current 3D displays suffer from a critical trade-off among spatial resolution, angular resolution, and viewing angle. Inspired by the spatially variant resolution imaging found in vertebrate eyes, we propose a 3D display with spatially variant information density. Stereoscopic experiences with smooth motion parallax are maintained at the central view, while the viewing angle is enlarged at the peripheral view. This is enabled by a large-scale 2D-metagrating complex that manipulates dot-, linear-, and rectangular-shaped hybrid views. Furthermore, a video-rate full-color 3D display with an unprecedented 160° horizontal viewing angle is demonstrated. With its thin and light form factor, the proposed 3D system can be integrated with off-the-shelf flat panels, making it promising for applications in portable electronics.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Takashi Nishitsuji ◽  
Takashi Kakue ◽  
David Blinder ◽  
Tomoyoshi Shimobaba ◽  
Tomoyoshi Ito

Abstract Holography is a promising technology for photo-realistic three-dimensional (3D) displays because of its ability to replay the light reflected from an object using a spatial light modulator (SLM). However, the enormous computational requirements for calculating computer-generated holograms (CGHs), which are displayed on an SLM as a diffraction pattern, are a significant problem for practical uses (e.g., for interactive 3D displays for remote navigation systems). Here, we demonstrate an interactive 3D display system using electro-holography that can operate on a consumer CPU. The proposed system integrates an efficient and fast CGH computation algorithm for line-drawn 3D objects with inter-frame differencing, so that the trajectory of a line-drawn object handwritten on a drawing tablet can be played back interactively using only the CPU. In this system, we used an SLM with 1,920 × 1,080 pixels and a pixel pitch of 8 μm × 8 μm, a drawing tablet as the interface, and an Intel Core i9-9900K 3.60 GHz CPU. Numerical and optical experiments using a dataset of handwritten inputs show that the proposed system can reproduce handwritten 3D images in real time with sufficient interactivity and image quality.
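The key idea, computing a CGH incrementally with inter-frame differencing, can be illustrated with a toy point-source superposition model: each object point contributes a Fresnel zone-plate pattern, and each new frame adds only the contributions of newly drawn stroke points instead of recomputing the whole hologram. The grid size, wavelength, and distances below are illustrative assumptions, and this real-valued superposition is a simplification of the paper's line-drawing algorithm:

```python
import numpy as np

# Grid and optics parameters (illustrative; the paper used a
# 1,920 x 1,080 SLM with 8 um pitch).
N = 256                 # hologram resolution
pitch = 8e-6            # pixel pitch [m]
lam = 532e-9            # wavelength [m] (assumed)
z = 0.1                 # object distance [m]
xs = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(xs, xs)

def point_contribution(px, py):
    """Fresnel zone-plate pattern of one object point (real-valued CGH term)."""
    return np.cos(np.pi / (lam * z) * ((X - px) ** 2 + (Y - py) ** 2))

# Inter-frame differencing: keep a running hologram and, per frame,
# add only the newly drawn points rather than recomputing every
# point of the line-drawn object from scratch.
hologram = np.zeros((N, N))

def update(new_points):
    for px, py in new_points:
        hologram[...] += point_contribution(px, py)
    return hologram

update([(0.0, 0.0)])                 # first stroke sample
update([(1e-4, 0.0), (2e-4, 0.0)])  # next frame adds two more samples
```

The per-frame cost thus scales with the number of newly drawn points, not with the total length of the trajectory, which is what makes CPU-only interactive playback plausible.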


Author(s):  
Neil Rowlands ◽  
Jeff Price ◽  
Michael Kersker ◽  
Seichi Suzuki ◽  
Steve Young ◽  
...  

Three-dimensional (3D) microstructure visualization on the electron microscope requires that the sample be tilted to different positions to collect a series of projections. This tilting should be performed rapidly for on-line stereo viewing and precisely for off-line tomographic reconstruction. Usually, a projection series is collected using mechanical stage tilt alone. The stereo pairs must then be viewed off-line, and the 60 to 120 tomographic projections must be aligned with fiduciary markers or digital correlation methods. The delay in viewing stereo pairs and the alignment problems in tomographic reconstruction could be eliminated or improved by tilting the beam, if such tilt could be accomplished without image translation.

A microscope capable of beam tilt with simultaneous image shift to eliminate tilt-induced translation has been investigated for 3D imaging of thick (1 μm) biologic specimens. By tilting the beam above and through the specimen and bringing it back below the specimen, a brightfield image with a projection angle corresponding to the beam tilt angle can be recorded (Fig. 1a).
