Optical scanner calibration by object-image planar mapping and 3-D object-shape reconstruction accuracy

1992 ◽  
Vol 25 (6) ◽  
pp. 683 ◽  
Author(s):  
J. Kofman ◽  
B. Lindström ◽  
A. Karlsson ◽  
K. Öberg
2001 ◽  
Author(s):  
Dong Xu ◽  
LiangZheng Xia ◽  
Shizhou Yang

2019 ◽  
Vol 16 (2(SI)) ◽  
pp. 0504 ◽  
Author(s):  
Abu Bakar et al.

Zernike moments have been widely used in shape-based image retrieval studies because of their powerful shape representation. However, their strengths and weaknesses have not been clearly highlighted in previous studies, so this representational power could not be fully exploited. In this paper, a method that fully captures the shape representation properties of Zernike moments is implemented and tested on single objects in binary and grey-level images. The proposed method works by determining the boundary of the shape object and then resizing the object to the boundary of the image. Three case studies were made. In Case 1, Zernike moments are computed on the original shape object image. In Case 2, the centroid of the shape object in Case 1 is relocated to the center of the image. In Case 3, the proposed method first detects the outer boundary of the shape object and then resizes the object to the boundary of the image. Experiments on two benchmark shape image datasets showed that the proposed method in Case 3 provides superior image retrieval performance compared with both Case 1 and Case 2. In conclusion, to fully capture the powerful shape representation properties of Zernike moments, a shape object should be resized to the boundary of the image.
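
A minimal sketch of the Case 3 idea, as it might be implemented for a binary image: the outer boundary of the object is taken as its bounding box, the crop is rescaled to fill the image frame, and Zernike moments are computed over the full frame. The OpenCV/mahotas calls, the frame size and the moment degree are illustrative assumptions, not the authors' code.

```python
# Sketch of Case 3: crop the shape to its outer boundary, rescale it to the
# image frame, then compute Zernike moments. Frame size and degree are assumed.
import cv2
import numpy as np
import mahotas

FRAME = 128          # assumed working image size
DEGREE = 12          # assumed maximum Zernike order

def zernike_case3(binary_img: np.ndarray) -> np.ndarray:
    """Compute Zernike moments after resizing the object to the image boundary."""
    # Outer boundary of the shape: bounding box of all foreground pixels.
    ys, xs = np.nonzero(binary_img)
    if len(xs) == 0:
        return np.zeros(1)
    cropped = binary_img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # Resize the cropped object so it spans the full frame (Case 3).
    resized = cv2.resize(cropped.astype(np.uint8), (FRAME, FRAME),
                         interpolation=cv2.INTER_NEAREST)

    # Zernike moments over a disc covering the whole frame.
    radius = FRAME // 2
    return mahotas.features.zernike_moments(resized, radius, degree=DEGREE)
```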


2018 ◽  
Vol 15 (01) ◽  
pp. 1850015 ◽  
Author(s):  
Simon Ottenhaus ◽  
Lukas Kaul ◽  
Nikolaus Vahrenkamp ◽  
Tamim Asfour

Active tactile perception is a powerful mechanism for collecting contact information by touching an unknown object with a robot finger, in order to enable further interaction with, or grasping of, the object. The acquired knowledge can be used to build object shape models from such typically sparse tactile contact information. In this paper, we address the problem of object shape reconstruction from sparse tactile data gained from a robot finger that yields contact information and surface orientation at the contact points. To this end, we present an exploration algorithm which determines the next best touch target in order to maximize the estimated information gain and to minimize the expected costs of exploration actions. We introduce the Information Gain Estimation Function (IGEF), which combines several objectives into a single measure quantifying the cost-aware information gain during exploration. The IGEF-based exploration strategy is validated in simulation using 48 publicly available object models and compared to state-of-the-art Gaussian-process-based exploration approaches. The results demonstrate the performance of the approach in terms of exploration efficiency, cost-awareness and suitability for application in real tactile sensing scenarios.
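
As a rough illustration of cost-aware next-best-touch selection in the spirit of this approach, the sketch below scores candidate targets by a surrogate information gain (here, predictive uncertainty of the surface estimate) minus a motion-cost term. The specific terms and weights are assumptions and do not reproduce the published IGEF.

```python
# Schematic cost-aware next-best-touch selection; the gain and cost terms
# below are illustrative assumptions, not the published IGEF definition.
import numpy as np

def next_best_touch(candidates, surface_uncertainty, finger_pos, alpha=1.0, beta=0.5):
    """Pick the candidate point maximizing estimated gain minus travel cost.

    candidates          : (N, 3) candidate touch targets on the estimated surface
    surface_uncertainty : (N,) predictive uncertainty at each candidate (e.g. GP variance)
    finger_pos          : (3,) current fingertip position
    """
    gain = alpha * surface_uncertainty                              # information-gain proxy
    cost = beta * np.linalg.norm(candidates - finger_pos, axis=1)   # expected motion cost
    score = gain - cost                                             # cost-aware objective
    return candidates[np.argmax(score)]
```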


Author(s):  
Mohamed Ibrahim Shujaa ◽  
Ammar Alauldeen Abdulmajeed

This paper considers depth estimation from a 2D image of an object and its reconstruction into a 3D object image. The 2D image is described by slices containing a set of points located along the object contours and within the object body. The depth of each slice is estimated using a neural network (NN), where five factors (the slice length, the angle of the incident light, and the illumination at selected points along the 2D object, namely control points) are used as inputs to the network. The estimated slice depths are mapped onto a 3D surface using Bézier spline surface interpolation. The experimental results showed effective performance of the proposed approach.
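
A minimal sketch of how such a pipeline could be assembled: a small multilayer perceptron maps the five per-slice factors to a depth value, and a Bézier surface is evaluated from a grid of control points via Bernstein polynomials. The network size and feature layout are illustrative assumptions, not the authors' implementation.

```python
# Sketch: MLP from five per-slice features to depth, plus Bezier surface
# evaluation for mapping estimated depths onto a smooth 3D surface.
import numpy as np
from scipy.special import comb
from sklearn.neural_network import MLPRegressor

def train_depth_net(slice_features, slice_depths):
    """slice_features: (N, 5) per-slice factors, slice_depths: (N,) target depths."""
    net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
    net.fit(slice_features, slice_depths)
    return net

def bernstein(i, n, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * t**i * (1.0 - t)**(n - i)

def bezier_surface(control, u, v):
    """Evaluate a Bezier surface at (u, v) from an (m+1) x (n+1) x 3 control grid."""
    m, n = control.shape[0] - 1, control.shape[1] - 1
    point = np.zeros(3)
    for i in range(m + 1):
        for j in range(n + 1):
            point += bernstein(i, m, u) * bernstein(j, n, v) * control[i, j]
    return point
```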


2020 ◽  
Vol 10 (17) ◽  
pp. 5803
Author(s):  
Yiping Zhang ◽  
Xinzhe Que ◽  
Mengxian Hu ◽  
Yongchao Zhou

This work proposes a method to reconstruct the 3D bubble shape in a transparent medium from three orthographic digital images. The bubble is divided into several elliptical slices. The azimuth angle and projection parameters are extracted from the top-view image, while formulas for the dimensionless semi-axes are derived from the geometric projection relationship. The elliptical axes of each layer are calculated by substituting the projection width into these formulas, and all slices are stacked to form the 3D bubble shape. Reconstruction accuracy was evaluated with spheres, ellipsoids, and inverted teardrops. The results show that position contributes greatly to the reconstruction accuracy of bubbles with severe horizontal deformation. The method in Bian et al. (2013) is sensitive to both horizontal and vertical deformation. Vertical deformation has little influence on the method in Fujiwara et al. (2004), whereas horizontal deformation greatly impacts its accuracy. The method in this paper is negligibly affected by vertical deformation and does better in reconstructing single bubbles with large horizontal deformation. The azimuth angle affects the accuracy of the methods in Bian et al. (2013) and Fujiwara et al. (2004) more than the method in this paper.
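
A simplified sketch of the slice-by-slice idea: for each horizontal layer, the ellipse semi-axes are recovered from two orthogonal projection widths and the top-view azimuth angle, and the layers are stacked into a point cloud. The closed-form relation below assumes ideal orthographic projection of a rotated ellipse and may differ from the paper's exact formulas.

```python
# Sketch of per-layer ellipse recovery and stacking into a 3D bubble shape.
import numpy as np

def slice_semi_axes(width_x, width_y, azimuth):
    """Solve a^2*c^2 + b^2*s^2 = (wx/2)^2 and a^2*s^2 + b^2*c^2 = (wy/2)^2 for a, b."""
    c2, s2 = np.cos(azimuth) ** 2, np.sin(azimuth) ** 2
    rhs = np.array([(width_x / 2.0) ** 2, (width_y / 2.0) ** 2])
    A = np.array([[c2, s2], [s2, c2]])
    a2, b2 = np.linalg.solve(A, rhs)          # note: degenerate near azimuth = 45 deg
    return np.sqrt(max(a2, 0.0)), np.sqrt(max(b2, 0.0))

def reconstruct_bubble(widths_x, widths_y, heights, azimuth):
    """Stack elliptical slices (one per height) into a crude 3D point cloud."""
    t = np.linspace(0.0, 2.0 * np.pi, 64)
    points = []
    for wx, wy, z in zip(widths_x, widths_y, heights):
        a, b = slice_semi_axes(wx, wy, azimuth)
        # Parametric rotated ellipse in the horizontal plane at height z.
        x = a * np.cos(t) * np.cos(azimuth) - b * np.sin(t) * np.sin(azimuth)
        y = a * np.cos(t) * np.sin(azimuth) + b * np.sin(t) * np.cos(azimuth)
        points.append(np.column_stack([x, y, np.full_like(t, z)]))
    return np.vstack(points)
```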

