One-Shot M-Array Pattern Based on Coded Structured Light for Three-Dimensional Object Reconstruction

2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Xiaojun Jia ◽  
Zihao Liu

Pattern encoding and decoding are two challenging problems in a three-dimensional (3D) reconstruction system using coded structured light (CSL). In this paper, a one-shot pattern is designed as an M-array with eight embedded geometric shapes, in which each 2 × 2 subwindow appears only once. A robust pattern decoding method for reconstructing objects from a one-shot pattern is then proposed. The decoding approach relies on the robust pattern element tracking algorithm (PETA) and generic features of pattern elements to segment and cluster the projected structured light pattern from a single captured image. A deep convolutional neural network (DCNN) and chain sequence features are used to accurately classify pattern elements and key points (KPs), respectively. Meanwhile, a training dataset is established, which contains many pattern elements with various blur levels and distortions. Experimental results show that the proposed approach can be used to reconstruct 3D objects.
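The defining property of the pattern above is that every 2 × 2 subwindow of symbols occurs at most once, so any local view of the projected pattern identifies its position. A minimal sketch of that uniqueness check, assuming the pattern is given as a grid of symbol indices 0–7 (standing for the eight geometric shapes); the random grid below is only a stand-in for a properly generated M-array:

```python
import numpy as np

def unique_2x2_windows(pattern: np.ndarray) -> bool:
    """True if every 2x2 subwindow of symbols occurs at most once,
    the defining property of the M-array pattern."""
    seen = set()
    rows, cols = pattern.shape
    for i in range(rows - 1):
        for j in range(cols - 1):
            window = tuple(pattern[i:i + 2, j:j + 2].ravel())
            if window in seen:
                return False    # duplicate window: position is ambiguous
            seen.add(window)
    return True

# A random grid of the eight shape indices will usually fail this check;
# a real M-array is constructed so that it passes over the whole pattern.
rng = np.random.default_rng(0)
print(unique_2x2_windows(rng.integers(0, 8, size=(20, 20))))
```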

Author(s):  
Yang Qi ◽  
Yuan Li

Efficient and precise three-dimensional (3D) measurement is an important issue in the field of machine vision. In this paper, a measurement method for indoor key points is proposed that combines structured light with an omnidirectional vision system, achieving both a wide field of view and accurate results. The process of obtaining indoor key points is as follows: First, through analysis of the system imaging model, an omnidirectional vision system based on structured light is constructed. Second, a fully convolutional neural network is used to estimate the scene for the dataset. Then, a method is presented for obtaining the 3D coordinates of non-structured-light points from the geometric relationship between a scene point and its reference point in the structured light. Finally, combining the fully convolutional network model and the structured light 3D vision model, the 3D mathematical representation of the key points of the indoor scene frame is completed. Experimental results show that the proposed method can accurately reconstruct indoor scenes, with a measurement error of about 2%.
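The geometric step above amounts to structured-light triangulation. A minimal sketch under simplified assumptions: a conventional pinhole camera (the paper uses an omnidirectional model) and a calibrated light plane; the camera center, plane parameters, and pixel ray are all hypothetical:

```python
import numpy as np

def ray_plane_intersection(ray_origin, ray_dir, plane_n, plane_d):
    """Intersect a viewing ray with a calibrated light plane
    n . X + d = 0; returns the 3D point on the plane."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    t = -(plane_n @ ray_origin + plane_d) / (plane_n @ ray_dir)
    return ray_origin + t * ray_dir

# Hypothetical calibration: camera at the origin, light plane x = 0.5 m.
cam_center = np.zeros(3)
n, d = np.array([1.0, 0.0, 0.0]), -0.5
ray = np.array([0.2, 0.1, 1.0])        # back-projected pixel direction
print(ray_plane_intersection(cam_center, ray, n, d))
```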


Author(s):  
Mamta H. Wankhade ◽  
Satish G. Bahaley

3D printing is a form of additive manufacturing in which a three-dimensional object is created by laying down successive layers of material. It is a mechanized method whereby 3D objects are quickly made on a reasonably sized machine connected to a computer containing blueprints for the object. As 3D printing grows fast and gives a boost to product development, factories doing 3D printing need to continuously meet printing requirements and maintain an adequate inventory of filament. Because manufacturers have to buy these filaments from various vendors, the cost of 3D printing increases. To overcome this problem faced by manufacturers and small workshop owners, the need for a 3D filament-making machine arises. This project focuses on designing and fabricating a portable fused deposition 3D printer filament-making machine from cheap and easily available components to draw 1.75 mm diameter ABS filament.
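As a rough sanity check on what such a machine must produce, the filament length obtainable from a given mass of pellets follows from length = mass / (density × cross-sectional area). A worked example, assuming a typical ABS density of about 1.04 g/cm³ (a datasheet value, not a figure from the paper):

```python
import math

# Rough estimate of filament length per kilogram of ABS pellets.
density = 1.04                            # g/cm^3, typical ABS (assumed)
diameter_cm = 0.175                       # 1.75 mm target filament
area = math.pi * (diameter_cm / 2) ** 2   # cross-section in cm^2

mass_g = 1000.0                           # 1 kg of pellets
length_m = mass_g / (density * area) / 100.0
print(f"~{length_m:.0f} m of 1.75 mm filament per kg")   # ~400 m
```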


Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6097
Author(s):  
Taichu Shi ◽  
Yang Qi ◽  
Cheng Zhu ◽  
Ying Tang ◽  
Ben Wu

In this paper, we propose and experimentally demonstrate a three-dimensional (3D) microscopic system that reconstructs a 3D image based on structured light illumination. The spatial pattern of the structured light changes according to the profile of the object, and by measuring the change, a 3D image of the object is reconstructed. The structured light is generated with a digital micro-mirror device (DMD), which controls the structured light pattern to change at a kHz rate and enables the system to record the 3D information in real time. The working distance of the imaging system is 9 cm at a resolution of 20 μm. The resolution, working distance, and real-time 3D imaging enable the system to be applied in bridge and road crack examinations, and structure fault detection of transportation infrastructures.
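The abstract does not give the decoding algorithm, but a common way to measure how a projected pattern deforms is N-step phase-shifting profilometry. A minimal sketch, assuming four sinusoidal fringe patterns shifted by π/2; the proportionality relation in the comments and the random test images are placeholders, not the paper's method:

```python
import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Wrapped phase from four fringe images shifted by pi/2 each."""
    return np.arctan2(I3 - I1, I0 - I2)

# Phase difference between deformed and reference fringes; under the
# usual small-angle triangulation approximation the surface height is
# proportional to it, h ~ L / (2*pi*f0*D) * dphi, where L is the
# stand-off distance, D the projector-camera baseline, and f0 the
# fringe frequency (all hypothetical calibration constants).
phi_obj = four_step_phase(*[np.random.rand(64, 64) for _ in range(4)])
phi_ref = four_step_phase(*[np.random.rand(64, 64) for _ in range(4)])
dphi = np.angle(np.exp(1j * (phi_obj - phi_ref)))   # rewrap to (-pi, pi]
```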


Author(s):  
F. W. TO ◽  
K. M. TSANG

The analysis and recognition of 2D shapes using the orthogonal complex AR model has been extended to the recognition of arbitrary 3D objects. A 3D object is placed at one of its stable orientations and sectioned into a fixed number of "slices" of equal thickness, such that the "slices" are parallel to the object's stable plane. The surface of an object can then be represented by a sequence of these parallel 2D closed contours. A complex AR model is fitted to each of these contours. An orthogonal estimator is implemented to determine the correct model order and to estimate the associated model parameters. The estimated AR model parameters, magnitude ratios, and the relative centroid associated with each 2D contour are used as essential features for 3D object recognition. An algorithm with a hierarchical structure for the recognition of 3D objects is derived based on matching the sequence of 2D contours. Simulation studies are included to show the effectiveness of different criteria applied at different stages of the recognition process. Test results have shown that the proposed approach provides a feasible and effective means of recognizing arbitrary 3D objects, which may be self-occluded and have a number of stable orientations.
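A minimal sketch of the contour-modeling step: fit a complex AR model to a closed contour sampled as complex points z = x + iy by linear least squares. The ellipse test contour and the fixed order are illustrative; the paper's orthogonal estimator additionally selects the model order:

```python
import numpy as np

def fit_complex_ar(z, order):
    """Least-squares fit of z[t] = a_1*z[t-1] + ... + a_p*z[t-p] + e[t]
    to a closed contour given as complex samples z = x + iy."""
    z = np.asarray(z, dtype=complex)
    z = z - z.mean()                  # refer the contour to its centroid
    rows = [z[t - order:t][::-1] for t in range(order, len(z))]
    X, y = np.array(rows), z[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# Hypothetical contour: an ellipse sampled at 128 points; an ellipse
# satisfies an exact second-order recurrence, so AR(2) fits it well.
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
print(fit_complex_ar(3 * np.cos(t) + 2j * np.sin(t), order=2))
```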


1995 ◽  
Vol 7 (2) ◽  
pp. 408-423 ◽  
Author(s):  
Shimon Edelman

How does the brain represent visual objects? In simple perceptual generalization tasks, the human visual system performs as if it represents the stimuli in a low-dimensional metric psychological space (Shepard 1987). In theories of three-dimensional (3D) shape recognition, the role of feature-space representations [as opposed to structural (Biederman 1987) or pictorial (Ullman 1989) descriptions] has long been a major point of contention. If shapes are indeed represented as points in a feature space, patterns of perceived similarity among different objects must reflect the structure of this space. The feature space hypothesis can then be tested by presenting subjects with complex parameterized 3D shapes, and by relating the similarities among subjective representations, as revealed in the response data by multidimensional scaling (Shepard 1980), to the objective parameterization of the stimuli. The results of four such tests, accompanied by computational simulations, support the notion that discrimination among 3D objects may rely on a low-dimensional feature space representation, and suggest that this space may be spanned by explicitly encoded class prototypes.
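For readers unfamiliar with the analysis step, multidimensional scaling (MDS) recovers a low-dimensional configuration whose pairwise distances approximate the observed dissimilarities. A toy sketch with synthetic data using scikit-learn's MDS; the hidden 2-D "parameter space" is made up for illustration:

```python
import numpy as np
from sklearn.manifold import MDS

# Hidden 2-D "parameter space" standing in for the stimulus parameters.
rng = np.random.default_rng(0)
points = rng.normal(size=(12, 2))
diss = np.linalg.norm(points[:, None] - points[None, :], axis=-1)

# MDS recovers the configuration (up to rotation/reflection) from the
# dissimilarity matrix alone, as is done with perceived-similarity data.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
embedding = mds.fit_transform(diss)
print(embedding.shape)    # (12, 2)
```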


Science ◽  
2019 ◽  
Vol 363 (6431) ◽  
pp. 1075-1079 ◽  
Author(s):  
Brett E. Kelly ◽  
Indrasen Bhattacharya ◽  
Hossein Heidari ◽  
Maxim Shusteff ◽  
Christopher M. Spadaccini ◽  
...  

Additive manufacturing promises enormous geometrical freedom and the potential to combine materials for complex functions. The speed, geometry, and surface quality limitations of additive processes are linked to their reliance on material layering. We demonstrated concurrent printing of all points within a three-dimensional object by illuminating a rotating volume of photosensitive material with a dynamically evolving light pattern. We printed features as small as 0.3 millimeters in engineering acrylate polymers and printed soft structures with exceptionally smooth surfaces into a gelatin methacrylate hydrogel. Our process enables us to construct components that encase other preexisting solid objects, allowing for multimaterial fabrication. We developed models to describe speed and spatial resolution capabilities and demonstrated printing times of 30 to 120 seconds for diverse centimeter-scale objects.
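The printing scheme is tomographic in spirit: the dynamically evolving patterns delivered to the rotating volume are, to first order, projections of the target dose distribution. A minimal sketch of that idea using a plain Radon transform; the paper's actual pattern computation involves further optimization and physical constraints:

```python
import numpy as np
from skimage.transform import radon

# Toy target cross-section to be solidified.
target = np.zeros((128, 128))
target[40:88, 40:88] = 1.0

# One light pattern per rotation angle: the projections (sinogram)
# of the target distribution over a half turn.
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(target, theta=angles)
print(sinogram.shape)    # (projection bins, 180)
```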


2003 ◽  
Vol 11 (5) ◽  
pp. 406 ◽  
Author(s):  
C. Guan ◽  
L. G. Hassebrook ◽  
D. L. Lau

Author(s):  
Elmar Zeitler

Considering any finite three-dimensional object, a "projection" is here defined as a two-dimensional representation of the object's mass per unit area on a plane normal to a given projection axis, here taken as the y-axis. Since the object can be seen as being built from parallel, thin slices, the relation between object structure and its projection can be reduced by one dimension. It is assumed that an electron microscope equipped with a tilting stage records the projection p_θ(x) = ∫ p(r, ϕ) dy_θ, integrated along the beam direction, where the object has a spatial density distribution p(r, ϕ) within a limiting radius taken to be unity, and the stage is tilted by an angle θ with respect to the x-axis of the recording plane.
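A discrete counterpart of this projection, assuming the density is given on a grid: rotate by the tilt angle and sum along the beam axis. The grid size and the disk test object are illustrative:

```python
import numpy as np
from scipy.ndimage import rotate

def project(density: np.ndarray, theta_deg: float) -> np.ndarray:
    """Mass per unit area recorded at tilt angle theta: rotate the
    density grid and integrate (sum) along the beam axis."""
    tilted = rotate(density, theta_deg, reshape=False, order=1)
    return tilted.sum(axis=0)

# Uniform disk of radius 50 on a 128x128 grid; its projection is the
# familiar chord-length profile at every tilt angle.
yy, xx = np.mgrid[-64:64, -64:64]
disk = (xx**2 + yy**2 < 50**2).astype(float)
print(project(disk, theta_deg=30.0).shape)    # (128,)
```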


2020 ◽  
Vol 2020 (10) ◽  
pp. 181-1-181-7
Author(s):  
Takahiro Kudo ◽  
Takanori Fujisawa ◽  
Takuro Yamaguchi ◽  
Masaaki Ikehara

Image deconvolution has been an important issue recently. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is a classic image deblurring problem, which assumes that the PSF is known and spatially invariant. Recently, Convolutional Neural Networks (CNNs) have been used for non-blind deconvolution. Though CNNs can deal with complex changes for unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs encountered in the real world. In this paper we propose a non-blind deconvolution framework based on a CNN that can remove large-scale ringing in a deblurred image. Our method has three key points. The first is that our network architecture is able to preserve both large and small features in the image. The second is that the training dataset is created to preserve the details. The third is that we extend the images to minimize the effects of large ringing on the image borders. In our experiments, we used three kinds of large PSFs and were able to observe high-precision results from our method both quantitatively and qualitatively.
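The third key point, extending the image so that border ringing falls outside the region of interest, can be sketched independently of the network. A minimal, hypothetical wrapper: deconv_fn stands for any deconvolution routine (including a CNN such as the paper's), and the reflect padding and margin size are illustrative choices, not the paper's exact scheme:

```python
import numpy as np

def pad_deconv_crop(image, deconv_fn, psf, margin=64):
    """Pad the image beyond the PSF support, deconvolve, then crop,
    so that border ringing lands on the discarded margin."""
    padded = np.pad(image, margin, mode="reflect")
    restored = deconv_fn(padded, psf)    # hypothetical routine
    return restored[margin:-margin, margin:-margin]
```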

