AN ITERATIVE CLUSTERING PROCEDURE FOR INTERPRETING AN IMPERFECT LINE DRAWING

Author(s):  
Ronald Chung
Kin-Lap Leung

Recovering the three-dimensional shape of an object from a single line drawing is a classical problem in computer vision. Proposed methods range from Huffman–Clowes junction labeling, to Kanade's gradient-space and skew-symmetry analysis, to Sugihara's necessary and sufficient condition for a realizable polyhedral object, to Marill's MSDA shape-recovery procedure, to Leclerc and Fischler's shape-recovery procedure that ensures planar faces, to the recent Baird–Wang gradient-descent algorithm with favorable time complexity. Yet all of these assume a perfect line drawing as input. We propose a method that, through iterative clustering, interprets an imperfect line drawing of a polyhedral scene. It distinguishes true surface boundaries from extraneous ones such as surface markings, fills in missing surface boundaries, and recovers 3-D shape satisfying constraints of face planarity and parallel symmetry of lines, all at the same time. Experiments also show that the 3-D interpretation agrees with human perception.
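The face-planarity constraint mentioned in the abstract can be illustrated with a short sketch: a reconstructed face is acceptable only if all of its 3-D vertices lie on a common plane. The function names and tolerance below are illustrative, not taken from the paper; the paper's actual procedure enforces planarity within its iterative clustering, which this sketch does not reproduce.

```python
# Hypothetical sketch: test whether the vertices of a reconstructed face
# are coplanar. The first three vertices define the candidate plane; every
# further vertex must give a (near-)zero scalar triple product with the
# two spanning edge vectors.

def scalar_triple(u, v, w):
    """Scalar triple product u . (v x w)."""
    cross = (v[1] * w[2] - v[2] * w[1],
             v[2] * w[0] - v[0] * w[2],
             v[0] * w[1] - v[1] * w[0])
    return u[0] * cross[0] + u[1] * cross[1] + u[2] * cross[2]

def is_planar(face_vertices, tol=1e-6):
    """True if all vertices of the face are coplanar within tol."""
    if len(face_vertices) <= 3:
        return True  # three points are always coplanar
    p0, p1, p2 = face_vertices[:3]
    e1 = tuple(a - b for a, b in zip(p1, p0))
    e2 = tuple(a - b for a, b in zip(p2, p0))
    for p in face_vertices[3:]:
        e = tuple(a - b for a, b in zip(p, p0))
        if abs(scalar_triple(e, e1, e2)) > tol:
            return False
    return True
```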

2015
Vol 21 (2)
pp. 442-458
Author(s):  
Ik-Hyun Lee
Muhammad Tariq Mahmood
Tae-Sun Choi

Abstract
Shape from focus (SFF) is a passive optical technique that reconstructs object shape from a sequence of images taken at different focus levels. In SFF techniques, computing a focus measurement for each pixel in the image sequence, through a focus measure operator, is the fundamental step. Commonly used focus measure operators compute focus quality in Cartesian space, suffer from erroneous focus quality estimates, and lack robustness; they therefore produce erroneous depth maps. In this paper, we introduce a new focus measure operator that computes focus quality in the log-polar transform (LPT) domain. Properties of the LPT, such as its biological inspiration, data selection, and edge invariance, enable computation of better focus quality in the presence of noise. Moreover, instead of using a fixed image patch, we suggest the use of an adaptive window. Focus quality is assessed by computing variation in the LPT. The effectiveness of the proposed technique is evaluated through experiments on image sequences of various simulated and real objects. The comparative analysis shows that the proposed method is robust and effective in the presence of various types of noise.
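For context, a conventional Cartesian-space focus measure of the kind this paper's LPT-based operator is proposed to replace can be sketched as the sum-modified-Laplacian: sharp (in-focus) pixels have large second-derivative responses, blurred ones do not. This is a minimal pure-Python sketch for a grayscale image given as a list of lists; the window size and function names are assumptions, not the paper's operator.

```python
# Illustrative Cartesian-space focus measure: sum-modified-Laplacian (SML).
# Per-pixel response is the absolute second difference in x and y; the
# focus measure is the sum of responses over a small window.

def modified_laplacian(img, y, x):
    """|2*I - left - right| + |2*I - up - down| at pixel (y, x)."""
    c = img[y][x]
    return (abs(2 * c - img[y][x - 1] - img[y][x + 1])
            + abs(2 * c - img[y - 1][x] - img[y + 1][x]))

def focus_measure(img, y, x, half=1):
    """Sum of modified-Laplacian responses over a (2*half+1)^2 window."""
    total = 0
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            total += modified_laplacian(img, y + dy, x + dx)
    return total
```

In an SFF pipeline, the depth at each pixel is the focus level (frame index) that maximizes this measure; the paper's contribution replaces the operator itself with one computed in the LPT domain over an adaptive window.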


Author(s):  
Sanjay Bakshi
Yee-Hong Yang

Due to the complexity of the shape-from-shading problem, most solutions rely on idealistic conditions: orthographic imaging, a known distant point light source, and known surface reflectance properties are usually assumed. Furthermore, most real surfaces are neither perfectly diffuse (Lambertian) nor ideally specular (mirror-like); nevertheless, most shape-from-shading algorithms assume Lambertian reflectance. The behavior of shape-from-shading algorithms that rely on idealistic conditions is unpredictable in real imaging situations. In this paper, the LIRAS (LIght, Reflectance, And Shape) Recovery System is proposed. LIRAS is a practical approach to the shape-from-shading problem in which many of these assumptions are relaxed. LIRAS is also a modular system: one component recovers the surface reflectance properties, relaxing the assumption of Lambertian reflectance; rather than assuming a known illuminant direction, another component recovers the light orientation; and once the reflectance map is determined, a further LIRAS module uses this information to recover the shape of non-Lambertian surfaces. Each of these modules is described, and a discussion of how the components cooperate to recover three-dimensional shape information in real environments is given. Extensive experimental evaluation is conducted using both synthetic and real images, and the results are very encouraging. The contributions of this paper include the design and implementation of LIRAS and the extensive quantitative and qualitative experimental results, which can provide guidelines for future refinements of other shape recovery systems.
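The Lambertian assumption that most shape-from-shading methods make, and that LIRAS relaxes, can be stated in a few lines: observed brightness depends only on the albedo and the angle between the surface normal and the light direction. This is a minimal sketch of that standard model, not LIRAS itself; the function and variable names are illustrative.

```python
import math

def _normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def lambertian_brightness(normal, light, albedo=1.0):
    """I = albedo * max(0, n . l), the Lambertian reflectance model.

    Brightness is independent of the viewing direction; surfaces facing
    away from the light (negative dot product) receive no illumination.
    """
    n = _normalize(normal)
    l = _normalize(light)
    ndotl = sum(a * b for a, b in zip(n, l))
    return albedo * max(0.0, ndotl)
```

Shape-from-shading inverts this map: given brightness I and the light direction, it solves for the normal field, which is exactly where a wrong Lambertian assumption corrupts the recovered shape.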


Perception
1993
Vol 22 (11)
pp. 1271-1285
Author(s):  
Tatiana Tambouratzis
Michael J Wright

In a series of experiments, subjects were asked to make judgments concerning the three-dimensional constructibility of line drawings depicting possible and impossible objects. A spectrum of objects was employed in which complexity as well as, for impossible objects, the cause and saliency of the contradiction in three-dimensional structure varied widely. The line drawings were presented under varying viewing conditions and exposure times. It was found that line drawings of possible objects were more often correctly identified than those of impossible ones. Parallel (simultaneous) viewing was more efficient than serial viewing (in which a line drawing moved behind a narrow stationary aperture). The orientation of the aperture did not cause differences in the subjects' performance. Line-drawing complexity and contradiction in three-dimensional structure were not found to be significant for accurate recognition. Finally, no consistent effect of exposure duration on performance could be determined in the range 60–1000 ms.


2021
Vol 11 (16)
pp. 7536
Author(s):  
Kyungho Yu
Juhyeon Noh
Hee-Deok Yang

Recently, three-dimensional (3D) content used in various fields has attracted attention owing to the development of virtual reality and augmented reality technologies. Producing 3D content requires modeling objects as vertices, but high-quality modeling is time-consuming and costly. Drawing-based modeling is a technique that shortens the modeling time: a 3D model is created from a user's line drawing, a representation of 3D features with two-dimensional (2D) lines. The extracted line drawing provides information about a 3D model in 2D space. It is sometimes necessary to generate a line drawing from a 2D cartoon image to represent its 3D information. Extracting consistent line drawings from 2D cartoons is difficult because styles and techniques differ depending on the designer who produces them. It is therefore necessary to extract line drawings that capture the geometric characteristics of 2D cartoon shapes across various styles. This paper proposes a method for automatically extracting line drawings. A conditional generative adversarial network model is trained on pairs of 2D cartoon shading images and line drawings, and outputs the line drawing of the cartoon artwork. Experimental results show that the proposed method obtains line drawings that represent 3D geometric characteristics with 2D lines when a 2D cartoon painting is used as input.
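The abstract does not spell out the training objective, but paired image-to-image translation with a conditional GAN is commonly trained with a pix2pix-style loss, which this assumed sketch summarizes: the generator G maps a cartoon shading image x to a line drawing, the discriminator D judges (input, drawing) pairs, and an L1 term ties the output to the ground-truth drawing y.

```latex
\mathcal{L}_{\mathrm{cGAN}}(G,D) =
  \mathbb{E}_{x,y}\bigl[\log D(x,y)\bigr] +
  \mathbb{E}_{x}\bigl[\log\bigl(1 - D(x,G(x))\bigr)\bigr]
\qquad
G^{*} = \arg\min_{G}\max_{D}\;
  \mathcal{L}_{\mathrm{cGAN}}(G,D)
  + \lambda\,\mathbb{E}_{x,y}\bigl[\lVert y - G(x)\rVert_{1}\bigr]
```

Here λ weights the L1 reconstruction term against the adversarial term; whether the paper uses exactly this combination is an assumption based on standard practice.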

