A Novel Artistic Image Generation Technique: Making Relief Effects Through Evolution

Author(s):  
Jingsong He ◽  
Boping Shi ◽  
Mingguo Liu


2001 ◽  
Vol 01 (04) ◽  
pp. 565-574 ◽  
Author(s):  
Wenyin Liu ◽  
Xin Tong ◽  
Yingqing Xu ◽  
Harry Shum ◽  
Hua Zhong

This paper describes a novel technique (deviation mapping) that generates images with artistic appearances, e.g., spray-painted wall, embossment, and cashmere painting. Our technique employs a deviation map constructed from a single background image in the image generation process. Rather than recovering the exact geometry from the background image, we regard the deviation map as a virtual surface. This virtual surface is then painted with a foreground image and illuminated to generate the final result. Interestingly, the synthesized images exhibit artistic appearances. Our method is fast and simple to implement.
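
As a reading aid, the sketch below mocks up the pipeline the abstract describes: derive a deviation map from a background image, treat it as a virtual surface, paint a foreground image onto it, and illuminate it. The gradient-based height field and Lambertian shading here are assumptions for illustration, not the paper's actual construction.

```python
import numpy as np

def deviation_map(background, strength=1.0):
    """Derive a virtual height field from a grayscale background image.
    (Illustrative: the paper's exact construction is not reproduced here.)"""
    bg = background.astype(np.float64)
    bg = (bg - bg.min()) / (bg.max() - bg.min() + 1e-8)   # normalize to [0, 1]
    return strength * bg                                   # treat intensity as deviation

def shade(height, light=(0.5, 0.5, 1.0)):
    """Lambertian shading of the virtual surface defined by the height field."""
    gy, gx = np.gradient(height)
    # Surface normals from the height-field gradients.
    normals = np.dstack((-gx, -gy, np.ones_like(height)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    l = np.asarray(light, dtype=np.float64)
    l /= np.linalg.norm(l)
    return np.clip(normals @ l, 0.0, 1.0)                  # per-pixel diffuse term

def paint_and_illuminate(background, foreground, strength=2.0):
    """Paint the foreground onto the virtual surface and illuminate it."""
    shading = shade(deviation_map(background, strength))
    fg = foreground.astype(np.float64) / 255.0
    if fg.ndim == 3:                                        # broadcast over RGB channels
        shading = shading[..., None]
    return (np.clip(fg * shading, 0.0, 1.0) * 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    wall = rng.random((256, 256))                           # stand-in background texture
    picture = rng.integers(0, 256, (256, 256, 3))           # stand-in foreground image
    result = paint_and_illuminate(wall, picture)
    print(result.shape, result.dtype)
```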


Author(s):  
GARY R. GREENFIELD

Perhaps the best-known example of user-guided evolution is furnished by evolving expressions, an image generation technique first introduced by Sims. In this version of artificial evolution, images are evolved for aesthetic purposes, so any fitness measure used must be based on aesthetics. We consider the problem of guiding image evolution autonomously on the basis of computational, as opposed to user-assigned, aesthetic fitness. Because of the difficulty of formulating an absolute criterion for aesthetic fitness, we adopt a coevolutionary approach, relying on hosts and parasites to establish relative criteria for aesthetic fitness. To sustain the coevolutionary arms race, we allow coevolution to proceed in stages, which permits appropriate fitness levels to be maintained within the parasite populations we use to infect host image populations. Staged coevolution produces two beneficial results: (1) longer survival times for subpopulations of host images, and (2) stable phenotypic lineages for host images.
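
For orientation, here is a toy host-parasite loop in the spirit of the staged coevolution the abstract describes. Hosts stand in for evolved images and parasites for the agents that establish relative fitness; every representation, score, and schedule below is a placeholder rather than Greenfield's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
SIZE = 8                    # toy "image" resolution
HOSTS, PARASITES = 16, 16

def host_score(host, parasite):
    """Relative fitness: how poorly the parasite 'matches' the host image.
    (Placeholder criterion; the paper's aesthetic measures are not reproduced.)"""
    return float(np.mean(np.abs(host - parasite)))

def select(pop, fitness, keep=0.5):
    """Truncation selection; clone the survivors back up to population size."""
    order = np.argsort(fitness)[::-1]
    survivors = pop[order[: int(len(pop) * keep)]]
    return np.concatenate([survivors, survivors])

def mutate(pop, sigma=0.05):
    return np.clip(pop + rng.normal(0.0, sigma, pop.shape), 0.0, 1.0)

def evolve(generations=50, stages=5):
    hosts = rng.random((HOSTS, SIZE, SIZE))
    parasites = rng.random((PARASITES, SIZE, SIZE))
    for gen in range(generations):
        # Each host is infected by a random parasite; record both sides' payoffs.
        pairing = rng.integers(0, PARASITES, HOSTS)
        h_fit = np.array([host_score(hosts[i], parasites[pairing[i]]) for i in range(HOSTS)])
        p_fit = np.zeros(PARASITES)
        for i, j in enumerate(pairing):
            p_fit[j] += 1.0 - h_fit[i]          # the parasite gains what the host loses
        hosts = mutate(select(hosts, h_fit))
        # Staged coevolution: parasites are only re-selected every few generations,
        # which keeps their pressure from outrunning the host population.
        if gen % stages == 0:
            parasites = mutate(select(parasites, p_fit))
    return hosts

if __name__ == "__main__":
    final_hosts = evolve()
    print(final_hosts.shape)
```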


Author(s):  
Shin-ichi Ito ◽  
Momoyo Ito ◽  
Minoru Fukumi

This paper proposes a method to classify words in Japanese Sign Language (JSL). The approach combines a gathered-image generation technique with a neural network that has convolutional and pooling layers (CNN). The gathered-image generation produces images based on mean images: for each block, the maximum difference value between the mean image and the JSL motion images is computed, and the gathered image comprises the blocks having the calculated maximum difference values. The CNN extracts features from the gathered images, and a support vector machine for multi-class classification and a multilayer perceptron are then employed to classify 20 JSL words. In the experiments, the proposed method achieved a mean recognition accuracy of 94.1%. These results suggest that the proposed method captures enough information to classify the sample words.
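
A minimal sketch of one plausible reading of this pipeline follows: block-wise gathered-image construction from a mean image, a small CNN feature extractor, and an SVM classifier (PyTorch and scikit-learn). The block size, network shape, and random data are invented for illustration and are not the authors' configuration.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

BLOCK = 8  # illustrative block size

def gathered_image(frames, mean_image):
    """For each block position, keep the block (across frames) whose absolute
    difference from the mean image is largest; this is one reading of the
    paper's gathered-image idea, not the authors' exact procedure."""
    h, w = mean_image.shape
    out = np.zeros_like(mean_image)
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            ref = mean_image[y:y+BLOCK, x:x+BLOCK]
            diffs = [np.abs(f[y:y+BLOCK, x:x+BLOCK] - ref).max() for f in frames]
            best = int(np.argmax(diffs))
            out[y:y+BLOCK, x:x+BLOCK] = frames[best][y:y+BLOCK, x:x+BLOCK]
    return out

class SmallCNN(nn.Module):
    """Convolution + pooling feature extractor for the gathered images."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.features(x).flatten(1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy motion frames and their mean image, then one gathered image.
    frames = [rng.random((64, 64)) for _ in range(10)]
    g = gathered_image(frames, np.mean(frames, axis=0))
    print("gathered image:", g.shape)

    # Toy dataset: 40 gathered images spread over 20 word classes.
    images = rng.random((40, 1, 64, 64)).astype(np.float32)
    labels = np.repeat(np.arange(20), 2)

    cnn = SmallCNN().eval()
    with torch.no_grad():
        feats = cnn(torch.from_numpy(images)).numpy()

    clf = SVC(kernel="linear").fit(feats, labels)   # multi-class SVM over CNN features
    print("train accuracy:", clf.score(feats, labels))
```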


2019 ◽  
Author(s):  
K Herdinai ◽  
S Urbán ◽  
Z Besenyi ◽  
L Pávics ◽  
N Zsótér ◽  
...  
