2D Sketches
Recently Published Documents

TOTAL DOCUMENTS: 21 (FIVE YEARS: 2)
H-INDEX: 5 (FIVE YEARS: 0)

Author(s):  
Sophia Mouajjah ◽  
Cedric Plessiet

2021 ◽  
pp. 26-34
Author(s):  
Yuqian Li ◽  
Weiguo Xu

Abstract: Architects usually develop design ideation and conception through hand-sketching. Sketching is a direct expression of the architect's creativity, but 2D sketches are often vague, intentional and even ambiguous. In sketch-based modeling research, the most difficult part is making the computer recognize the sketches. With the development of artificial intelligence, especially deep learning, Convolutional Neural Networks (CNNs) have shown clear advantages in feature extraction and matching, and Generative Adversarial Networks (GANs) have made great breakthroughs in architectural generation, making image-to-image translation increasingly popular. As building images gradually develop from the original sketches, in this research we develop a system that translates sketches into images of buildings using the CycleGAN algorithm. The experiment demonstrates that this method can achieve the mapping from sketches to images, and the results show that the sketches' features are recognized in the process. Through the learning and training of sketch reconstruction, the features of the images are also mapped back onto the sketches, which strengthens the architectural relationships in the sketch, so that the original sketch gradually approaches the building image and sketch-based modeling becomes achievable.
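For readers who want to connect the abstract's description to code, below is a minimal, hypothetical PyTorch sketch of the CycleGAN cycle-consistency idea: one generator maps sketches to building images, a second maps images back to sketches, and an L1 cycle loss pulls each round trip toward its input. The tiny networks, tensor shapes and omission of the adversarial terms are illustrative assumptions, not the authors' implementation.

# A minimal sketch of CycleGAN's cycle-consistency objective, assuming
# toy networks and random placeholder data; not the paper's actual model.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # 3x3 conv + instance norm + ReLU, a common CycleGAN building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TinyGenerator(nn.Module):
    """Toy stand-in for a ResNet-style CycleGAN generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 16),
            conv_block(16, 16),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

G = TinyGenerator()   # sketch -> building image
F = TinyGenerator()   # building image -> sketch
l1 = nn.L1Loss()

sketch = torch.randn(1, 3, 64, 64)   # placeholder batch of sketches
photo  = torch.randn(1, 3, 64, 64)   # placeholder batch of building images

fake_photo  = G(sketch)
fake_sketch = F(photo)

# Cycle-consistency: translating there and back should reproduce the input,
# which is what lets sketch features survive the mapping in both directions.
cycle_loss = l1(F(fake_photo), sketch) + l1(G(fake_sketch), photo)
cycle_loss.backward()  # adversarial discriminator terms are omitted here

In a full CycleGAN each generator is also trained against a discriminator; the cycle term shown here is what preserves the sketch's features across the translation, which is the effect the abstract reports.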



With increasing technological advancement, there is a growing need for automation, which promises improved efficiency, faster work and enhanced capabilities. Sketch-to-image translation is an image-processing application that can serve as a helping hand in a variety of fields. One approach uses Generative Adversarial Networks to translate edge maps into photographs, with image generators and discriminators working in tandem to produce realistic images. We have also incorporated the Histogram of Oriented Gradients (HOG) as a feature/image descriptor. The HOG technique counts gradient orientations to differentiate the target from the background. A Support Vector Machine (SVM) is used as the classifier. This HOG and SVM model can be improved, altered and executed as multi-program software.
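As a concrete illustration of the HOG-plus-SVM pipeline outlined above, the following minimal Python example uses scikit-image and scikit-learn; the synthetic 64x64 images, random labels and parameter choices are placeholder assumptions rather than the configuration used in the work.

# A minimal HOG + linear SVM classification sketch on synthetic data.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
# Placeholder dataset: 40 random 64x64 grayscale "sketches" with binary labels.
images = rng.random((40, 64, 64))
labels = rng.integers(0, 2, size=40)

def hog_descriptor(img):
    # Histograms of gradient orientations over local cells; this is the
    # descriptor that separates target strokes from background.
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

X = np.array([hog_descriptor(img) for img in images])

clf = LinearSVC()        # linear SVM classifier on the HOG features
clf.fit(X, labels)
print(clf.predict(X[:5]))

The descriptor length follows from the cell and block layout: a 64x64 image gives 7 x 7 overlapping blocks of 2 x 2 cells with 9 orientation bins each, i.e. 7 * 7 * 2 * 2 * 9 = 1,764 features per image.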



2019 ◽  
Vol 13 (4) ◽  
pp. 482-489 ◽  
Author(s):  
Fumiki Tanaka ◽  
Makoto Tsuchida ◽  
Masahiko Onosato

Virtual reality (VR), augmented reality (AR), and mixed reality technologies are utilized at various stages of the product lifecycle. For products with long lifecycles, such as bridges and dams, the maintenance and inspection stages are very important to keep the product safe and well-functioning. One of the advantages of VR/AR is the ability to overlay important information such as past inspection data. Past inspection information is summarized in a document consisting of 2D sketches of bridge degradation. However, these degradation sketches are 2D and have no correspondence with the 3D world. In this study, we propose a method to associate the important information in 2D sketches with a 3D Industry Foundation Classes (IFC) model, a standardized computer-aided design model. To display a VR image of a bridge during the inspection process, the proposed method is applied to the 3D IFC model of the bridge and the 2D degradation sketches of the inspection report.
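One simple way to realize the 2D-to-3D association the abstract describes is to treat a degradation sketch as a parametric map onto a planar face of the bridge geometry. The sketch below is a minimal, hypothetical NumPy example under that assumption; the face corners, sketch resolution and crack coordinates are made-up illustration data, and the paper's actual IFC geometry handling is not reproduced here.

# A minimal sketch: map 2D degradation-sketch pixels onto a planar 3D face.
import numpy as np

# A planar face of the bridge (e.g. taken from the IFC model), in metres:
origin = np.array([0.0, 0.0, 5.0])   # sketch pixel (0, 0) maps here
u_axis = np.array([12.0, 0.0, 0.0])  # direction of the sketch's x axis
v_axis = np.array([0.0, 0.0, -3.0])  # direction of the sketch's y axis (down)

SKETCH_W, SKETCH_H = 800, 200        # degradation sketch size in pixels

def sketch_to_world(px, py):
    """Map a 2D sketch pixel onto the 3D face by linear interpolation."""
    s, t = px / SKETCH_W, py / SKETCH_H   # normalized sketch coordinates
    return origin + s * u_axis + t * v_axis

# A crack annotation drawn on the 2D inspection sketch:
crack_pixels = [(100, 50), (150, 60), (200, 80)]
crack_world = [sketch_to_world(x, y) for x, y in crack_pixels]
for p in crack_world:
    print(np.round(p, 2))   # 3D points to attach to the bridge element

The resulting 3D points could then be attached as annotations to the corresponding IFC element, so that past inspection data can be overlaid in the VR view of the bridge.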



2018 ◽  
Vol 141 (2) ◽  
Author(s):  
Christian E. Lopez ◽  
Scarlett R. Miller ◽  
Conrad S. Tucker

The objective of this work is to explore the possible biases that individuals may have toward the perceived functionality of machine generated designs, compared to human created designs. Toward this end, 1,187 participants were recruited via Amazon Mechanical Turk (AMT) to analyze the perceived functional characteristics of both human created two-dimensional (2D) sketches and sketches generated by a deep learning generative model. In addition, a computer simulation was used to test the capability of the sketched ideas to perform their intended function and to check the validity of participants' responses. The results reveal that the participant and computer-simulation evaluations were in agreement, indicating that sketches generated by the deep generative design model were more likely to perform their intended function than the human created sketches used to train the model. The results also reveal that participants were subject to biases while evaluating the sketches, and that their age and domain knowledge were positively correlated with the perceived functionality of the sketches. These results provide evidence supporting the capability of deep learning generative design tools to generate functional ideas and their potential to assist designers in creative tasks such as ideation.



Author(s):  
Christian Lopez ◽  
Scarlett R. Miller ◽  
Conrad S. Tucker

The objective of this work is to explore the perceived visual and functional characteristics of computer generated sketches compared to human created sketches, as well as the possible biases that humans may have toward the perceived functionality of computer generated sketches. Recent advancements in deep generative design methods allow designers to use computational tools to automatically generate large pools of new design ideas. However, if computational tools are to co-create ideas and solutions alongside designers, their ability to generate not only novel but also functional ideas needs to be explored. Moreover, since decision-makers must select creative ideas for further development to ensure innovation, their possible biases toward computer generated ideas need to be examined. In this study, 619 participants were recruited to analyze the perceived visual and functional characteristics of 50 human created 2D sketches and 50 2D sketches generated by a deep learning generative model (i.e., computer generated). The results indicate that participants perceived the computer generated sketches as more functional than the human created sketches, and this perception was not biased by labels that explicitly presented the sketches as human or computer generated. Moreover, participants were not able to classify the 2D sketches as human or computer generated with accuracy greater than random chance. These results provide evidence supporting the capabilities of deep learning generative design tools and their potential to assist designers in creative tasks such as ideation.



2017 ◽  
pp. 1-18


2016 ◽  
Vol 35 (5) ◽  
pp. 89-100 ◽  
Author(s):  
Xuekun Guo ◽  
Juncong Lin ◽  
Kai Xu ◽  
Siddhartha Chaudhuri ◽  
Xiaogang Jin

