Depth sensing with disparity space images and three-dimensional recursive search

Automatika, 2018, Vol. 59(2), pp. 131-142
Author(s): Miroslav Rožić, Tomislav Pribanić

2014, Vol. 70(a1), pp. C683-C683
Author(s): Irfan Kuvvetli, Carl Budtz-Jørgensen, Natalia Auricchio, John Stephen, Ezio Caroli, et al.

A high-resolution three-dimensional (3D) position-sensitive CdZnTe-based detector was developed to detect high-energy photons (10 keV-1 MeV). The design of the 3D CZT detector, developed at DTU Space, is based on the CZT drift strip detector principle. The prototype detector contains 12 drift cells, each comprising one collecting anode strip with 3 drift strips, biased such that the electrons are focused and collected by the anode strips. The anode pitch is 1.6 mm. Position determination perpendicular to the anodes (the X-direction) is performed using a novel interpolating technique. Position determination along the detector depth direction (the Y-direction) is made using the depth sensing technique. Position determination along the anode strips (the Z-direction) is made with the help of 10 cathode strips orthogonal to the anode strips. REDLEN CZT crystals (20 mm × 20 mm × 5 mm) were used for the prototype detectors. IMEM-CNR fabricated the prototype detectors using a special surface treatment method and electrode attachment process. A novel method was applied to reduce the surface leakage current between the strips. The prototype detector was investigated at the European Synchrotron Radiation Facility, Grenoble, which provided a fine 50 × 50 μm² collimated X-ray beam covering an energy band up to 600 keV. At 400 keV we measured position resolutions of 0.2 mm FWHM in the X- and Y-directions and 0.6 mm FWHM in the Z-direction. The measured energy resolution of the detector was ~5.5 keV FWHM at 400 keV. The electronic noise contribution of the detector setup was 3.7 keV FWHM. The detector provides 3D position with very good spatial resolution as well as high-resolution energy information and is therefore a well-suited candidate, e.g., as a Compton telescope detector, or for any application field (medicine, security, science) where imaging and spectroscopy of high-energy photons in the 10 keV-1 MeV range are required.
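The abstract does not give the exact depth-sensing formula, but a common approach in CZT drift-strip detectors is to derive the interaction depth from the ratio of the cathode signal to the anode signal. The sketch below illustrates that idea only; the linear mapping and calibration constants are assumptions, not the authors' calibration.

```python
# Minimal sketch (not the authors' code): estimating interaction depth in a
# CZT drift-strip detector from the cathode-to-anode signal ratio, a common
# depth-sensing approach. The calibration constants are hypothetical.
import numpy as np

def interaction_depth(cathode_signal, anode_signal, thickness_mm=5.0,
                      ratio_at_anode=0.0, ratio_at_cathode=1.0):
    """Map the cathode/anode pulse-height ratio linearly onto detector depth.

    thickness_mm      -- detector thickness along the depth (Y) direction
    ratio_at_anode    -- C/A ratio expected for events next to the anode
    ratio_at_cathode  -- C/A ratio expected for events next to the cathode
    """
    ratio = np.asarray(cathode_signal, float) / np.asarray(anode_signal, float)
    frac = (ratio - ratio_at_anode) / (ratio_at_cathode - ratio_at_anode)
    return np.clip(frac, 0.0, 1.0) * thickness_mm

# Example: three events with an increasing cathode share of the signal
print(interaction_depth([0.1, 0.5, 0.9], [1.0, 1.0, 1.0]))  # depths in mm
```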


MRS Bulletin, 2004, Vol. 29(3), pp. 177-181
Author(s): Ian K. Robinson, Jianwei Miao

X-rays have been widely used in the structural analysis of materials because of their significant penetration ability, at least on the length scale of the granularity of most materials. This allows, in principle, for fully three-dimensional characterization of the bulk properties of a material. One of the main advantages of x-ray diffraction over electron microscopy is that destructive sample preparation to create thin sections is often avoidable. A major disadvantage of x-ray diffraction with respect to electron microscopy is its inability to produce real-space images of the materials under investigation; there are simply no suitable lenses available. There has been significant progress in x-ray microscopy associated with the development of lenses, usually based on zone plates, Kirkpatrick–Baez mirrors, or compound refractive lenses. These technologies are far behind the development of electron optics, particularly for the large magnification ratios needed to attain high resolution. In this article, the authors report progress toward the development of an alternative general approach to imaging, the direct inversion of diffraction patterns by computational methods. By avoiding the use of an objective lens altogether, the technique is free from the aberrations that limit resolution, and it can be highly efficient with respect to radiation damage of the samples. It can take full advantage of the three-dimensional capability that comes from x-ray penetration. The inversion step employs computational methods based on oversampling to obtain a general solution of the diffraction phase problem.
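As an illustration of the oversampling-based inversion described here, the following minimal sketch implements the classic error-reduction flavour of iterative phase retrieval: the estimate alternately has its Fourier magnitudes replaced by the measured (oversampled) diffraction data and is forced to vanish outside a known support. This is a generic textbook variant, not the authors' specific reconstruction code.

```python
# Minimal sketch (illustration only) of error-reduction phase retrieval:
# alternate between enforcing the measured (oversampled) diffraction magnitudes
# in Fourier space and a known support (plus positivity) in real space.
import numpy as np

def error_reduction(measured_magnitude, support, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    # start from random phases attached to the measured magnitudes
    phase = np.exp(2j * np.pi * rng.random(measured_magnitude.shape))
    estimate = np.fft.ifftn(measured_magnitude * phase).real
    for _ in range(n_iter):
        F = np.fft.fftn(estimate)
        # Fourier-space constraint: keep phases, replace magnitudes by the data
        F = measured_magnitude * np.exp(1j * np.angle(F))
        estimate = np.fft.ifftn(F).real
        # real-space constraints: object vanishes outside the support, and is
        # non-negative
        estimate *= support
        estimate[estimate < 0] = 0.0
    return estimate

# toy usage: a small object padded into a larger (oversampled) array
obj = np.zeros((64, 64)); obj[24:40, 24:40] = 1.0
support = obj > 0
magnitude = np.abs(np.fft.fftn(obj))
recovered = error_reduction(magnitude, support)
```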


Author(s): Sree Shankar, Rahul Rai

Primary among all the activities involved in conceptual design is freehand sketching. There have been significant efforts in recent years to enable digital design methods that leverage humans' sketching skills. Conventional sketch-based digital interfaces are built on two-dimensional touch-based devices like sketchers and drawing pads. The transition from two-dimensional to three-dimensional (3-D) digital sketch interfaces represents the latest trend in developing new interfaces that embody intuitiveness and human–human interaction characteristics. In this paper, we outline a novel screenless 3-D sketching system. The system uses a noncontact depth-sensing RGB-D camera for user input. Only depth information (no RGB information) is used in the framework. The system tracks the user's palm during the sketching process and converts the data into a 3-D sketch. Because the generated data is noisy, making sense of what is sketched is facilitated through a beautification process suited to 3-D sketches. To evaluate the performance of the system and the beautification scheme, user studies were performed with multiple participants for both single-stroke and multistroke sketching scenarios.
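The paper's beautification process is not detailed here, so the sketch below shows only a generic stand-in: smoothing a noisy palm trajectory with a moving average and resampling it to evenly spaced points. The window size and sample count are hypothetical choices.

```python
# Minimal sketch (not the authors' pipeline): "beautifying" a noisy 3-D stroke
# captured from palm tracking by moving-average smoothing followed by
# arc-length resampling to evenly spaced points.
import numpy as np

def smooth_stroke(points, window=7):
    """points: (N, 3) array of palm positions; returns a smoothed copy."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(points, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([np.convolve(padded[:, d], kernel, mode="valid")
                     for d in range(3)], axis=1)

def resample_stroke(points, n_samples=100):
    """Resample a polyline to n_samples points spaced evenly by arc length."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, s[-1], n_samples)
    return np.stack([np.interp(t, s, points[:, d]) for d in range(3)], axis=1)

# toy usage: a random-walk stand-in for a noisy palm trajectory
noisy = np.cumsum(np.random.default_rng(1).normal(size=(200, 3)), axis=0)
clean = resample_stroke(smooth_stroke(noisy))
```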


2019, Vol. 17(2), pp. 220-235
Author(s): Bess Krietemeyer, Amber Bartosh, Lorne Covington

This article presents the ongoing development and testing of a “shared realities” computational workflow to support iterative user-centered design with an interactive system. The broader aim is to address the challenges associated with observing and recording user interactions within the context of use in order to improve the performance of an interactive system. A museum installation is used as an initial test bed to validate the following hypothesis: by integrating three-dimensional depth sensing and virtual reality for interaction design and user behavior observations, the shared realities workflow provides an iterative feedback loop that allows for remote observations and recordings for faster and more effective decision-making. The methods presented focus on the software development for gestural interaction and user point cloud observations, as well as the integration of virtual reality tools for iterative design of the interface and system performance assessment. Experimental testing demonstrates the viability of the shared realities workflow for observing and recording user interaction behaviors and evaluating system performance. Contributions to computational design, technical challenges, and ethical considerations are discussed, as well as directions for future work.
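As a rough illustration of the observation-and-recording side of such a workflow (not the installation's actual software), the sketch below logs time-stamped user point-cloud frames from a depth sensor so that interaction sessions could later be replayed and inspected remotely; the class and file format are hypothetical.

```python
# Minimal sketch (hypothetical): buffering time-stamped point-cloud frames of
# user activity and saving them for later remote review, e.g. in a VR viewer.
import time
import numpy as np

class ObservationRecorder:
    def __init__(self):
        self.frames = []  # list of (timestamp, point cloud) pairs

    def record(self, points_xyz):
        """points_xyz: (N, 3) array from the depth sensor for one frame."""
        self.frames.append((time.time(),
                            np.asarray(points_xyz, dtype=np.float32)))

    def save(self, path):
        """Write all frames and their timestamps to a compressed archive."""
        np.savez_compressed(
            path,
            timestamps=np.array([t for t, _ in self.frames]),
            **{f"frame_{i:06d}": p for i, (_, p) in enumerate(self.frames)})

recorder = ObservationRecorder()
recorder.record(np.random.rand(1000, 3))  # stand-in for a live depth frame
recorder.save("session_001.npz")
```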


Author(s): Quentin Kevin Gautier, Thomas G. Garrison, Ferrill Rushton, Nicholas Bouck, Eric Lo, et al.

Purpose: Digital documentation techniques for tunneling excavations at archaeological sites are becoming more common. These methods, such as photogrammetry and LiDAR (Light Detection and Ranging), can create precise three-dimensional models of excavations with millimeter-to-centimeter accuracy to complement traditional forms of documentation. However, these techniques require either expensive equipment or long processing times that can be prohibitive during short field seasons in remote areas. This article aims to determine the effectiveness of various low-cost sensors and real-time algorithms for creating digital scans of archaeological excavations.
Design/methodology/approach: The authors used a class of algorithms called SLAM (Simultaneous Localization and Mapping) along with depth-sensing cameras. While these algorithms have improved substantially over recent years, the accuracy of the results still depends on the scanning conditions. The authors developed a prototype scanning device, collected 3D data at a Maya archaeological site, and refined the instrument in a system of natural caves. This article presents an analysis of the resulting 3D models to determine the effectiveness of the various sensors and algorithms employed.
Findings: While not as accurate as commercial LiDAR systems, the prototype presented, which employs a time-of-flight depth sensor and a feature-based SLAM algorithm, is a rapid and effective way to document archaeological contexts at a fraction of the cost.
Practical implications: The proposed system is easy to deploy, provides real-time results, and would be particularly useful in salvage operations as well as in high-risk areas where cultural heritage is threatened.
Originality/value: This article compares many different low-cost scanning solutions for underground excavations and presents a prototype that can be easily replicated for documentation purposes.
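Independent of any particular SLAM front end, the core bookkeeping of such a pipeline is composing frame-to-frame rigid transforms into global poses and accumulating the depth points into one map. The sketch below shows only that step, with synthetic stand-ins for the relative transforms and depth points a feature-based front end would supply; it is not the prototype's code.

```python
# Minimal sketch (illustration only): SLAM-style map accumulation by composing
# frame-to-frame rigid transforms into global poses and transforming each
# frame's depth points into the world frame.
import numpy as np

def compose(global_pose, relative_transform):
    """Both are 4x4 homogeneous transforms; returns the updated global pose."""
    return global_pose @ relative_transform

def transform_points(pose, points_xyz):
    """Apply a 4x4 pose to an (N, 3) array of points in the sensor frame."""
    homogeneous = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    return (pose @ homogeneous.T).T[:, :3]

# toy stand-in for the front end: small forward motions and random depth points
rng = np.random.default_rng(0)
frames = []
for _ in range(5):
    rel = np.eye(4)
    rel[:3, 3] = [0.1, 0.0, 0.0] + rng.normal(scale=0.01, size=3)
    frames.append((rel, rng.random((500, 3))))

pose = np.eye(4)                       # world frame anchored at the first scan
world_map = []
for relative, depth_points in frames:
    pose = compose(pose, relative)
    world_map.append(transform_points(pose, depth_points))
map_points = np.vstack(world_map)      # accumulated map, here (2500, 3)
```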


Author(s): M. Takei, M. Ochi, Y. Saito, K. Horii

Particle concentration distribution images of a dense solid-air two-phase (plug) flow have been obtained at 10 ms intervals upstream of a bend pipe in a horizontal pipeline by means of capacitance computed tomography. The three-dimensional images (time and two-dimensional space) have been decomposed into wavelet time levels to extract the dominant particle concentration distribution using three-dimensional discrete wavelet multiresolution analysis. As a result, the dominant particle distribution at a specific time-frequency level is visualized in a cross-section. In detail, the high concentration of the particle spatial distribution at the dense flow front, which comprises high time-frequency levels 6 and 7, is located at the center above the stationary layer.
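A comparable decomposition can be sketched with PyWavelets (an assumed tool choice, not necessarily the authors'): apply a three-dimensional discrete wavelet transform to a (time, x, y) concentration array and reconstruct the contribution of a single detail level, analogous to isolating a dominant time-frequency level.

```python
# Minimal sketch (assumed workflow using PyWavelets): 3-D discrete wavelet
# decomposition of a (time, x, y) array and reconstruction of one detail level.
import numpy as np
import pywt

data = np.random.rand(64, 32, 32)      # stand-in for the tomography frames
level = 3
coeffs = pywt.wavedecn(data, wavelet="db2", level=level)

def keep_level(coeffs, keep):
    """Zero every level except detail level `keep` (1 = coarsest detail)."""
    kept = [np.zeros_like(coeffs[0])]  # drop the approximation as well
    for i, detail in enumerate(coeffs[1:], start=1):
        if i == keep:
            kept.append(detail)
        else:
            kept.append({k: np.zeros_like(v) for k, v in detail.items()})
    return kept

# contribution of a single detail level to the original signal
dominant = pywt.waverecn(keep_level(coeffs, keep=2), wavelet="db2")
```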


Sensors, 2020, Vol. 20(4), pp. 1021
Author(s): Patrick Hübner, Kate Clintworth, Qingyi Liu, Martin Weinmann, Sven Wursthorn

The Microsoft HoloLens is a head-worn mobile augmented reality device that is capable of mapping its immediate environment in real time as triangle meshes and of simultaneously localizing itself within these three-dimensional meshes. The device is equipped with a variety of sensors, including four tracking cameras and a time-of-flight (ToF) range camera. Sensor images and their poses estimated by the built-in tracking system can be accessed by the user. This makes the HoloLens potentially interesting as an indoor mapping device. In this paper, we introduce the different sensors of the device and evaluate the complete system with respect to the task of mapping indoor environments. The overall quality of such a system depends mainly on the quality of the depth sensor together with its associated pose derived from the tracking system. For this purpose, we first evaluate the performance of the HoloLens depth sensor and its tracking system separately. Finally, we evaluate the overall system regarding its capability for mapping multi-room environments.
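As a minimal sketch of how the two data streams combine (assuming pinhole intrinsics and camera-to-world poses as 4×4 matrices; this is not the HoloLens API), each ToF depth image can be back-projected into a world-frame point cloud using the pose reported by the tracking system:

```python
# Minimal sketch (assumptions: pinhole intrinsics, pose given as a 4x4
# camera-to-world transform): back-projecting a depth image into a
# world-frame point cloud.
import numpy as np

def depth_to_world(depth, fx, fy, cx, cy, camera_to_world):
    """depth: (H, W) range image in metres; returns (M, 3) world points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    pts_cam = np.stack([x, y, depth, np.ones_like(depth)], axis=-1).reshape(-1, 4)
    pts_world = (camera_to_world @ pts_cam.T).T[:, :3]
    return pts_world[depth.reshape(-1) > 0]   # drop invalid (zero-depth) pixels

# toy usage with made-up intrinsics and an identity pose
depth = np.full((288, 320), 2.0); depth[:10, :10] = 0.0
cloud = depth_to_world(depth, fx=250.0, fy=250.0, cx=160.0, cy=144.0,
                       camera_to_world=np.eye(4))
```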

