Exploring Biases Between Human and Machine Generated Designs

2018 ◽  
Vol 141 (2) ◽  
Author(s):  
Christian E. Lopez ◽  
Scarlett R. Miller ◽  
Conrad S. Tucker

The objective of this work is to explore the possible biases that individuals may have toward the perceived functionality of machine generated designs, compared to human created designs. Toward this end, 1187 participants were recruited via Amazon Mechanical Turk (AMT) to analyze the perceived functional characteristics of both human created two-dimensional (2D) sketches and sketches generated by a deep learning generative model. In addition, a computer simulation was used to test the capability of the sketched ideas to perform their intended function and to assess the validity of participants' responses. The results reveal that participant and computer simulation evaluations were in agreement, indicating that sketches generated via the deep generative design model were more likely to perform their intended function than the human created sketches used to train the model. The results also reveal that participants were subject to biases while evaluating the sketches, and that age and domain knowledge were positively correlated with the perceived functionality of the sketches. The results provide evidence that supports the capabilities of deep learning generative design tools to generate functional ideas and their potential to assist designers in creative tasks such as ideation.
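The abstract reports that age and domain knowledge were positively correlated with perceived functionality. The correlation measure is not specified in the abstract; a minimal sketch of the commonly used Pearson correlation coefficient, with hypothetical ratings, is:

```python
# Pearson correlation coefficient, a plausible (assumed) measure for the
# reported correlation between participant age and perceived functionality.
# The data below are hypothetical, not from the study.
from math import sqrt

def pearson_r(xs, ys):
    """Return the Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: ages vs. mean functionality ratings (1-7 scale).
ages = [22, 35, 41, 58, 63]
ratings = [4.1, 4.8, 5.0, 5.9, 6.2]
r = pearson_r(ages, ratings)  # close to +1 for this made-up sample
```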

Author(s):  
Christian Lopez ◽  
Scarlett R. Miller ◽  
Conrad S. Tucker

The objective of this work is to explore the perceived visual and functional characteristics of computer generated sketches, compared to human created sketches. In addition, this work explores the possible biases that humans may have toward the perceived functionality of computer generated sketches. Recent advancements in deep generative design methods have allowed designers to implement computational tools to automatically generate large pools of new design ideas. However, if computational tools are to co-create ideas and solutions alongside designers, their ability to generate not only novel but also functional ideas needs to be explored. Moreover, since decision-makers must select creative ideas for further development to ensure innovation, their possible biases toward computer generated ideas need to be explored. In this study, 619 human participants were recruited to analyze the perceived visual and functional characteristics of 50 human created 2D sketches and 50 2D sketches generated by a deep learning generative model (i.e., computer generated). The results indicate that participants perceived the computer generated sketches as more functional than the human generated sketches. This perceived functionality was not biased by the presence of labels that explicitly presented the sketches as either human or computer generated. Moreover, the results reveal that participants were not able to classify the 2D sketches as human or computer generated with accuracies greater than random chance. The results provide evidence that supports the capabilities of deep learning generative design tools and their potential to assist designers in creative tasks such as ideation.
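The abstract states that participants could not classify sketches as human or computer generated above chance. The statistical test used is not given in the abstract; one standard way to check this claim is a one-sided exact binomial test against chance (p = 0.5), sketched here with hypothetical counts:

```python
# One-sided exact binomial test: probability of observing at least
# `successes` correct classifications out of `trials` if the true
# accuracy is chance-level p. The test choice and the example counts
# are assumptions, not taken from the study.
from math import comb

def binomial_pvalue(successes, trials, p=0.5):
    """Return P(X >= successes) for X ~ Binomial(trials, p)."""
    return sum(comb(trials, k) * p**k * (1 - p) ** (trials - k)
               for k in range(successes, trials + 1))

# Hypothetical example: a participant labels 100 sketches, 54 correctly.
p_value = binomial_pvalue(54, 100)  # large p-value: not above chance
```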


2021 ◽  
Vol 74 ◽  
pp. 101728
Author(s):  
Carolyn M. Ritchey ◽  
Toshikazu Kuroda ◽  
Jillian M. Rung ◽  
Christopher A. Podlesnik

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Maiki Higa ◽  
Shinya Tanahara ◽  
Yoshitaka Adachi ◽  
Natsumi Ishiki ◽  
Shin Nakama ◽  
...  

In this report, we propose a deep learning technique for high-accuracy estimation of the intensity class of a typhoon from a single satellite image, by incorporating meteorological domain knowledge. By using the Visual Geometry Group's model, VGG-16, with images preprocessed with fisheye distortion, which enhances a typhoon's eye, eyewall, and cloud distribution, we achieved much higher classification accuracy than that of a previous study, even with sequential-split validation. Through comparison of t-distributed stochastic neighbor embedding (t-SNE) plots for the feature maps of VGG with the original satellite images, we also verified that the fisheye preprocessing facilitated cluster formation, suggesting that our model could successfully extract image features related to the typhoon intensity class. Moreover, gradient-weighted class activation mapping (Grad-CAM) was applied to highlight the eye and the cloud distributions surrounding the eye, which are important regions for intensity classification; the results suggest that our model qualitatively gained a viewpoint similar to that of domain experts. A series of analyses revealed that the data-driven approach using only deep learning has limitations, and that the integration of domain knowledge could bring new breakthroughs.
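The fisheye preprocessing described above radially remaps a satellite image so that the central region (the typhoon's eye and eyewall) occupies more pixels than the periphery. The exact mapping the authors used is not given in the abstract; the power-law remap and nearest-neighbor sampling below are illustrative assumptions:

```python
# Illustrative sketch of fisheye-style preprocessing: magnify the image
# center via a radial power-law remap. The mapping r_src = r_norm**strength
# and the nearest-neighbor sampling are assumptions for illustration only.
import numpy as np

def fisheye(image: np.ndarray, strength: float = 2.0) -> np.ndarray:
    """Magnify the center of a 2D image by a radial power-law remap."""
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = yy - cy, xx - cx
    r = np.sqrt(dy**2 + dx**2)
    r_max = np.sqrt(cy**2 + cx**2)
    # Source radius grows faster than destination radius, so pixels near
    # the center are spread out (magnified) and the periphery compressed.
    r_src = (r / r_max) ** strength * r_max
    with np.errstate(invalid="ignore", divide="ignore"):
        scale = np.where(r > 0, r_src / r, 0.0)
    src_y = np.clip(np.round(cy + dy * scale), 0, h - 1).astype(int)
    src_x = np.clip(np.round(cx + dx * scale), 0, w - 1).astype(int)
    return image[src_y, src_x]
```

In a full pipeline, the remapped images would then be fed to a VGG-16 classifier; only the preprocessing step is sketched here.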


Author(s):  
Soyoung Yoo ◽  
Sunghee Lee ◽  
Seongsin Kim ◽  
Kwang Hyeon Hwang ◽  
Jong Ho Park ◽  
...  

2011 ◽  
Vol 37 (2) ◽  
pp. 413-420 ◽  
Author(s):  
Karën Fort ◽  
Gilles Adda ◽  
K. Bretonnel Cohen

2015 ◽  
Vol 16 (S1) ◽  
Author(s):  
John WG Seamons ◽  
Marconi S Barbosa ◽  
Jonathan D Victor ◽  
Dominique Coy ◽  
Ted Maddess

Author(s):  
F. Jurčíček ◽  
S. Keizer ◽  
Milica Gašić ◽  
François Mairesse ◽  
B. Thomson ◽  
...  
