DOPS: Learning to Detect 3D Objects and Predict Their 3D Shapes

Author(s):  
Mahyar Najibi ◽  
Guangda Lai ◽  
Abhijit Kundu ◽  
Zhichao Lu ◽  
Vivek Rathod ◽  
...


2010 ◽  
Vol 159 ◽  
pp. 128-131
Author(s):  
Jiang Zhou ◽  
Xin Yu Ma

Traditional 3D shape retrieval systems rely mainly on low-level features computed to detect so-called regions of interest. This paper focuses on obtaining the retrieved objects in a machine-understandable and intelligent manner. We explore different kinds of semantic descriptions for the retrieval of 3D shapes. Based on ontology technology, we decompose 3D objects into meaningful parts semi-automatically. Each part can be regarded as a 3D object in its own right and is further annotated semantically according to an ontology vocabulary for Chinese cultural relics. Three kinds of semantic models, namely description semantics of domain knowledge, spatial semantics, and scenario semantics, are presented for describing annotations from different viewpoints. Together, these annotations capture complete semantic descriptions of 3D shapes.
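As an illustration of how such part-level annotations might be organized, the sketch below uses a minimal in-memory data model with the three semantic viewpoints named above. The class names, fields, and vocabulary terms are hypothetical placeholders, not the ontology defined in the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Annotation:
    """Semantic statements about one part, grouped by viewpoint."""
    description: Dict[str, str] = field(default_factory=dict)  # domain-knowledge semantics
    spatial: Dict[str, str] = field(default_factory=dict)      # spatial relations to other parts
    scenario: Dict[str, str] = field(default_factory=dict)     # usage/scenario semantics


@dataclass
class ShapePart:
    part_id: str
    mesh_file: str  # geometry of the segmented part
    annotation: Annotation = field(default_factory=Annotation)


@dataclass
class AnnotatedShape:
    shape_id: str
    parts: List[ShapePart] = field(default_factory=list)

    def annotate(self, part_id: str, viewpoint: str, key: str, value: str) -> None:
        """Attach one semantic statement of the given viewpoint to a part."""
        part = next(p for p in self.parts if p.part_id == part_id)
        getattr(part.annotation, viewpoint)[key] = value


# Usage: a vessel decomposed into two parts, annotated from the three viewpoints.
vessel = AnnotatedShape("ding_001", [ShapePart("p1", "body.obj"),
                                     ShapePart("p2", "handle_left.obj")])
vessel.annotate("p1", "description", "class", "vessel_body")
vessel.annotate("p2", "spatial", "attached_to", "p1")
vessel.annotate("p1", "scenario", "function", "cooking vessel")
```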


Author(s):  
J. A. Romero ◽  
L. A. Diago ◽  
C. Nara ◽  
J. Shinoda ◽  
I. Hagiwara

Creating complex 3D objects from a flat sheet of material using origami folding techniques has attracted attention in science and engineering. Here, we introduce the concept of "Norigami", a blend of three Japanese words: "Nori" (glue), "Ori" (folding), and "Kami"/"Gami" (paper). With traditional origami, spherical and other spatial objects are very difficult for a robot to produce because of the complexity of the movements involved. In Norigami, complex 3D shapes can be achieved by a machine or robot that combines simple origami folds with pasting patterns. In the current work, a Norigami robot is designed and developed using Lego NXT technology in order to create a spherical object that can be mass-produced.


2022 ◽  
Vol 41 (1) ◽  
pp. 1-16
Author(s):  
Jian Liu ◽  
Shiqing Xin ◽  
Xifeng Gao ◽  
Kaihang Gao ◽  
Kai Xu ◽  
...  

Wrapping objects with ropes is a common practice in daily life. However, it is difficult to design and tie ropes on a 3D object with complex topological and geometric features while ensuring secure wrapping and easy operation. In this article, we propose to compute a rope net that can tightly wrap around various 3D shapes. Our computed rope net not only immobilizes the object but also maintains load balance during lifting. Based on the key observation that if every knot of the net has four adjacent curve edges, then only a single rope is needed to construct the entire net, we reformulate the rope net computation problem as a constrained curve network optimization. We propose a discrete-continuous optimization approach, in which the topological constraints are satisfied in the discrete phase and the geometric goals are achieved in the continuous phase. We also develop a hoist-planning step that picks anchor points so that the rope net distributes the load evenly during hoisting. Furthermore, we simulate the wrapping process and use it to guide the physical construction of the rope net. We demonstrate the effectiveness of our method on 3D objects of varying geometric and topological complexity. In addition, we conduct physical experiments to demonstrate the practicability of our method.
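The single-rope observation is, in graph terms, the Eulerian-circuit condition: a connected net whose knots all have even degree (four here) can be traversed by one closed rope. The sketch below checks that condition and extracts one traversal with Hierholzer's algorithm; it illustrates only this observation, not the paper's discrete-continuous optimization or hoist planning.

```python
from collections import defaultdict


def single_rope_traversal(edges):
    """Given the knot graph of a rope net as a list of (u, v) edges, return one
    closed walk that uses every edge exactly once (Hierholzer's algorithm),
    or None if no such walk exists. A connected 4-regular net always has one."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    # Eulerian-circuit condition: every knot has even degree (4 in a rope net).
    if any(len(nbrs) % 2 for nbrs in adj.values()):
        return None
    used = [False] * len(edges)
    stack, circuit = [edges[0][0]], []
    while stack:
        v = stack[-1]
        # Discard edges out of v that were already traversed.
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()
        if adj[v]:
            w, eid = adj[v].pop()
            used[eid] = True
            stack.append(w)
        else:
            circuit.append(stack.pop())
    return circuit if all(used) else None  # all(used) fails if the net is disconnected


# Usage: a tiny 4-regular net (the octahedron graph); one rope suffices.
octahedron = [(0, 1), (0, 2), (0, 3), (0, 4), (5, 1), (5, 2), (5, 3), (5, 4),
              (1, 2), (2, 3), (3, 4), (4, 1)]
print(single_rope_traversal(octahedron))
```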


2017 ◽  
Vol 18 (2) ◽  
Author(s):  
D.L. ŞTEFAN ŢĂLU

The purpose of this paper is to present a CAD study for generating 3D shapes with superellipsoids, supertoroids, supercylinders, and supercones based on computational geometry. To obtain the relevant geometric information concerning the shape and profile of different 3D objects, the Madsie Freestyle 1.5.3 application was used. Results from this study are applied in geometric constructions and computer-aided design used in engineering and sculpture design.
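For reference, the superellipsoid family mentioned here has a standard signed-power parametric form; the NumPy sketch below samples such a surface. It is a generic illustration of the geometry, independent of the Madsie Freestyle application used in the study.

```python
import numpy as np


def superellipsoid(a=(1.0, 1.0, 1.0), e1=0.5, e2=0.5, n=60):
    """Sample points on a superellipsoid with semi-axes a and shape exponents
    e1 (north-south) and e2 (east-west). e1 = e2 = 1 gives an ordinary
    ellipsoid; small exponents give box-like shapes, large ones pinched shapes."""
    def f(w, e):
        # Signed power: keep the sign of cos/sin while raising |.| to exponent e.
        return np.sign(w) * np.abs(w) ** e

    eta = np.linspace(-np.pi / 2, np.pi / 2, n)   # latitude-like parameter
    omega = np.linspace(-np.pi, np.pi, n)         # longitude-like parameter
    eta, omega = np.meshgrid(eta, omega)

    x = a[0] * f(np.cos(eta), e1) * f(np.cos(omega), e2)
    y = a[1] * f(np.cos(eta), e1) * f(np.sin(omega), e2)
    z = a[2] * f(np.sin(eta), e1)
    return x, y, z


# Usage: a rounded-box shape (small exponents).
x, y, z = superellipsoid(a=(2.0, 1.0, 1.0), e1=0.2, e2=0.2)
print(x.shape, y.shape, z.shape)  # (60, 60) each
```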


1995 ◽  
Vol 7 (2) ◽  
pp. 408-423 ◽  
Author(s):  
Shimon Edelman

How does the brain represent visual objects? In simple perceptual generalization tasks, the human visual system performs as if it represents the stimuli in a low-dimensional metric psychological space (Shepard 1987). In theories of three-dimensional (3D) shape recognition, the role of feature-space representations [as opposed to structural (Biederman 1987) or pictorial (Ullman 1989) descriptions] has long been a major point of contention. If shapes are indeed represented as points in a feature space, patterns of perceived similarity among different objects must reflect the structure of this space. The feature space hypothesis can then be tested by presenting subjects with complex parameterized 3D shapes, and by relating the similarities among subjective representations, as revealed in the response data by multidimensional scaling (Shepard 1980), to the objective parameterization of the stimuli. The results of four such tests, accompanied by computational simulations, support the notion that discrimination among 3D objects may rely on a low-dimensional feature space representation, and suggest that this space may be spanned by explicitly encoded class prototypes.
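The analysis pipeline described here, recovering a low-dimensional psychological space from similarity data via multidimensional scaling and relating it to the objective stimulus parameterization, can be sketched with scikit-learn's metric MDS. The dissimilarity matrix below is random placeholder data, not the response data from these experiments.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Placeholder: pairwise dissimilarity judgments for 12 parameterized 3D stimuli.
n_stimuli = 12
d = rng.random((n_stimuli, n_stimuli))
dissim = (d + d.T) / 2.0        # symmetrize
np.fill_diagonal(dissim, 0.0)   # zero self-dissimilarity

# Recover a low-dimensional psychological space from the judgments.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
psych_space = mds.fit_transform(dissim)  # (12, 2) recovered coordinates

# The feature-space hypothesis would then be tested by comparing these
# recovered coordinates with the objective shape parameters that generated
# the stimuli (e.g., via correlation or Procrustes alignment).
print(psych_space.shape)
```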


2020 ◽  
Vol 34 (07) ◽  
pp. 11362-11369 ◽  
Author(s):  
Jun Li ◽  
Chengjie Niu ◽  
Kai Xu

Learning powerful deep generative models for 3D shape synthesis is largely hindered by the difficulty of ensuring plausibility, encompassing correct topology and reasonable geometry. Indeed, learning the distribution of plausible 3D shapes seems a daunting task for holistic approaches, given the significant topological variations of 3D objects even within the same category. Motivated by the fact that 3D shape structure is characterized by part composition and placement, we propose to model 3D shape variations with a part-aware deep generative network, coined PAGENet. The network is composed of an array of per-part VAE-GANs, generating the semantic parts that compose a complete shape, followed by a part assembly module that estimates a transformation for each part to correlate and assemble the parts into a plausible structure. By delegating the learning of part composition and part placement to separate networks, the difficulty of modeling structural variations of 3D shapes is greatly reduced. We demonstrate through both qualitative and quantitative evaluations that PAGENet generates 3D shapes with plausible, diverse and detailed structure, and show two applications, i.e., semantic shape segmentation and part-based shape editing.
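A minimal sketch of the part-aware idea, assuming PyTorch: one small decoder per semantic part maps its own latent code to a part point cloud, and an assembly module predicts a per-part translation and scale that places the parts into a whole. This is a schematic of the architecture described above, not the published PAGENet implementation (which uses per-part VAE-GANs and a learned assembly network).

```python
import torch
import torch.nn as nn


class PartDecoder(nn.Module):
    """Decode one part's latent code into a small point cloud (n_pts x 3)."""
    def __init__(self, z_dim=64, n_pts=256):
        super().__init__()
        self.n_pts = n_pts
        self.net = nn.Sequential(
            nn.Linear(z_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, n_pts * 3),
        )

    def forward(self, z):
        return self.net(z).view(-1, self.n_pts, 3)


class PartAssembler(nn.Module):
    """Predict a per-part translation and scale from all part codes jointly,
    so that independently generated parts are placed into a coherent whole."""
    def __init__(self, z_dim=64, n_parts=4):
        super().__init__()
        self.n_parts = n_parts
        self.net = nn.Sequential(
            nn.Linear(z_dim * n_parts, 256), nn.ReLU(),
            nn.Linear(256, n_parts * 4),  # per part: (tx, ty, tz, log_scale)
        )

    def forward(self, zs):  # zs: (batch, n_parts, z_dim)
        params = self.net(zs.flatten(1)).view(-1, self.n_parts, 4)
        return params[..., :3], params[..., 3:].exp()


# Usage: generate a 4-part shape from random latent codes.
z_dim, n_parts = 64, 4
decoders = nn.ModuleList([PartDecoder(z_dim) for _ in range(n_parts)])
assembler = PartAssembler(z_dim, n_parts)

zs = torch.randn(1, n_parts, z_dim)
trans, scale = assembler(zs)  # (1, 4, 3) and (1, 4, 1)
parts = [decoders[i](zs[:, i]) * scale[:, i:i + 1] + trans[:, i:i + 1]
         for i in range(n_parts)]
shape = torch.cat(parts, dim=1)  # (1, 4*256, 3) assembled point cloud
print(shape.shape)
```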


2002 ◽  
Vol 14 (4) ◽  
pp. 357-365
Author(s):  
Takahiro Doi ◽  
Shigeo Hirose

Recent developments in 3D sensors have raised the possibility of using them in an increasing number of engineering applications. However, since most 3D sensors, such as the laser range finder, are based on light, which travels in straight lines, the measurement area is limited to the front of an object, leaving the back an "invisible" surface. To estimate such unmeasurable areas, a system is required that memorizes shapes often encountered in objects and superimposes them on the scene. To realize this kind of system, an appropriate 3D shape representation is needed. This representation should 1) be able to handle and compare partial and complete sets of object shape data, and 2) operate quickly enough to be applicable to real-time tasks. We developed a novel shape representation framework called "Internal Radiated-light Projection (IRP)" to represent and compare 3D objects. This representation projects local shape information of an object onto a sphere by imaginary rays emanating from the "kernel" of the object. To describe local shape information and arrange shapes properly, we propose Harmonic Contour Analysis (HCA) and the Shape Matrix. These concepts are characterized by 1) simplicity; 2) the use of local shapes and their adjacency information; and, through the Shape Matrix, 3) consideration of the effect of gravity and stable poses of objects. With the IRP representation, we can categorize objects into known classes and calculate their positions and attitudes. This paper explains the basic concept behind IRP, which is a way of representing local 3D shapes by HCA and categorizing them using the Shape Matrix. We then present experiments in object recognition for both virtual and real objects to demonstrate its efficiency and feasibility.
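The core projection step, mapping each surface point to a direction and a radius measured from an internal "kernel" point, can be sketched as a spherical depth map with NumPy. This is a generic illustration of that radial projection, not the authors' full IRP, HCA, or Shape Matrix pipeline; the binning resolution and the farthest-hit rule are assumptions.

```python
import numpy as np


def radial_projection(points, kernel, n_theta=32, n_phi=64):
    """Project surface points onto a sphere around the kernel: bin each point's
    direction into a (theta, phi) grid and record its radius, producing a
    spherical depth map. NaN cells are directions not hit by any point
    (e.g., unmeasured back surfaces)."""
    d = points - kernel                              # vectors from kernel to surface
    r = np.linalg.norm(d, axis=1)
    theta = np.arccos(np.clip(d[:, 2] / r, -1, 1))   # polar angle in [0, pi]
    phi = np.arctan2(d[:, 1], d[:, 0]) + np.pi       # azimuth in [0, 2*pi)

    ti = np.minimum((theta / np.pi * n_theta).astype(int), n_theta - 1)
    pj = np.minimum((phi / (2 * np.pi) * n_phi).astype(int), n_phi - 1)

    depth = np.full((n_theta, n_phi), np.nan)
    for a, b, rad in zip(ti, pj, r):
        # Keep the farthest hit per direction (outer surface as seen from inside).
        if np.isnan(depth[a, b]) or rad > depth[a, b]:
            depth[a, b] = rad
    return depth


# Usage: project a unit sphere sampled only on its front half (z > 0),
# mimicking a range scan whose back surface is invisible.
rng = np.random.default_rng(0)
pts = rng.normal(size=(5000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
depth_map = radial_projection(pts[pts[:, 2] > 0], kernel=np.zeros(3))
print(np.isnan(depth_map).mean())  # fraction of unobserved directions
```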


2010 ◽  
Vol 159 ◽  
pp. 124-127
Author(s):  
Jiang Zhou ◽  
Xin Yu Ma

Recently, semantic-based 3D object retrieval has received increasing attention because it focuses on obtaining the retrieved objects in a machine-understandable and intelligent manner. In this paper, we propose an approach for semantic-based annotation of 3D shapes. To enable semantic-based annotation, the object segmentation method decomposes 3D objects into meaningful parts semi-automatically. Each part can then be regarded as a 3D object in its own right and further annotated semantically according to an ontology vocabulary for Chinese cultural relics. Such segmentation and annotation provide the basis for future retrieval of 3D shapes.
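To show how such annotations could later support retrieval, the sketch below matches a key/value query against part annotations stored as plain dictionaries grouped by the three semantic viewpoints used above. The database contents and vocabulary are hypothetical placeholders; the paper itself does not specify a query mechanism.

```python
def retrieve_parts(annotated_shapes, query):
    """Return (shape_id, part_id) pairs whose annotations contain every
    key/value pair in the query, across all three semantic viewpoints.
    `annotated_shapes` maps shape_id -> {part_id -> annotation dict}."""
    hits = []
    for shape_id, parts in annotated_shapes.items():
        for part_id, ann in parts.items():
            merged = {}
            for viewpoint in ("description", "spatial", "scenario"):
                merged.update(ann.get(viewpoint, {}))
            if all(merged.get(k) == v for k, v in query.items()):
                hits.append((shape_id, part_id))
    return hits


# Usage with two toy annotated shapes (placeholder vocabulary).
db = {
    "ding_001": {
        "p1": {"description": {"class": "vessel_body"}, "scenario": {"function": "cooking"}},
        "p2": {"description": {"class": "handle"}, "spatial": {"attached_to": "p1"}},
    },
    "hu_002": {
        "p1": {"description": {"class": "vessel_body"}, "scenario": {"function": "wine storage"}},
    },
}
print(retrieve_parts(db, {"class": "vessel_body", "function": "cooking"}))  # [('ding_001', 'p1')]
```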


1989 ◽  
Vol 136 (2) ◽  
pp. 124
Author(s):  
Ming-Hong Chan ◽  
Hung-Tat Tsui
