The Way I Paint—How Image Composition Emerges During the Creation of Abstract Artworks

i-Perception ◽  
2020 ◽  
Vol 11 (3) ◽  
pp. 204166952092509
Author(s):  
Christoph Redies

In recent years, there has been an increasing number of studies on objective image properties in visual artworks. Little is known, however, about how these image properties emerge while artists create their artworks. In order to study this matter, I produced five colored abstract artworks myself and recorded state images at all stages of their creation. For each image, I then calculated low-level features from deep neural networks, which served as a model of response properties in visual cortex. Two-dimensional plots of variances derived from these features showed that the drawings differ greatly at early stages of their creation, but then follow a narrow common path to terminate at or close to a position where traditional paintings cluster in the plots. Whether other artists use similar perceptual strategies while they create artworks remains to be studied.
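The analysis described above might look like the following minimal sketch: early layers of a pretrained VGG network stand in for low-level visual responses, per-filter variances summarize each state image, and PCA projects the stage-by-stage variance vectors into a 2-D plot. The file-name pattern, layer choice, and variance summary here are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: track an artwork's trajectory in a 2-D feature-variance plot.
# Assumes state images saved as stage_00.png, stage_01.png, ... (hypothetical
# file names); the layer cut and variance summary are assumptions as well.
import glob
import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms
from sklearn.decomposition import PCA

# Early VGG19 layers as a stand-in model of low-level visual responses
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features[:5].eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def feature_variances(path):
    """Per-filter spatial variance of early conv responses for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        fmap = vgg(img).squeeze(0)           # (filters, H, W)
    return fmap.var(dim=(1, 2)).numpy()      # one variance per filter

stages = sorted(glob.glob("stage_*.png"))
X = np.stack([feature_variances(p) for p in stages])

# Project the per-stage variance vectors to 2-D to visualize the creation path
xy = PCA(n_components=2).fit_transform(X)
for path, (x, y) in zip(stages, xy):
    print(f"{path}: ({x:.2f}, {y:.2f})")
```

Plotting the successive (x, y) points would show whether the stages converge toward a common region, as the abstract reports.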

Author(s):  
Gary Smith ◽  
Jay Cordes

Computer software, particularly deep neural networks and Monte Carlo simulations, is extremely useful for the specific tasks it has been designed to do, and it will get even better, much better. However, we should not assume that computers are smarter than us just because they can tell us the first 2000 digits of pi or show us a street map of every city in the world. One of the paradoxical things about computers is that they can excel at things humans consider difficult (like calculating square roots) while failing at things humans consider easy (like recognizing stop signs). They cannot pass simple tests like the Winograd Schema Challenge because they do not understand the world the way humans do. They have neither common sense nor wisdom. They are our tools, not our masters.


2018 ◽  
Vol 1085 ◽  
pp. 042034 ◽  
Author(s):  
Wahid Bhimji ◽  
Steven Andrew Farrell ◽  
Thorsten Kurth ◽  
Michela Paganini ◽  
Prabhat ◽  
...  

2019 ◽  
Vol 166 (6) ◽  
pp. A886-A896 ◽  
Author(s):  
Neal Dawson-Elli ◽  
Suryanarayana Kolluri ◽  
Kishalay Mitra ◽  
Venkat R. Subramanian

2020 ◽  
Vol 20 (11) ◽  
pp. 556
Author(s):  
Jeffrey Wammes ◽  
Kailong Peng ◽  
Kenneth Norman ◽  
Nicholas Turk-Browne

2017 ◽  
Vol 1 (3) ◽  
pp. 83 ◽  
Author(s):  
Chandrasegar Thirumalai ◽  
Ravisankar Koppuravuri

In this paper, we use deep neural networks to predict bike-sharing usage from previous years' usage data. We choose deep neural networks because they can deliver higher prediction accuracy: unlike many other machine learning techniques, they let us stack additional hidden layers and tune the training procedure until the model reaches the accuracy we need. Many AI practitioners currently regard deep learning as the most powerful technique available. Here, we apply it to forecast the bike-sharing usage of a rental company so that the company can make well-grounded business decisions based on previous years' data.
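A minimal sketch of this setup, assuming the public UCI hourly bike-sharing data (hour.csv) and using scikit-learn's MLPRegressor as the deep feedforward network; the file name and feature columns are assumptions, not necessarily the paper's exact data:

```python
# Minimal sketch: a feedforward network regressing hourly bike-rental counts
# on weather/calendar features. Column names follow the public UCI
# bike-sharing dataset and are assumptions here.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("hour.csv")  # hypothetical path to previous years' usage data
features = ["season", "hr", "holiday", "workingday", "weathersit",
            "temp", "hum", "windspeed"]
X, y = df[features], df["cnt"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# hidden_layer_sizes is the "add more hidden layers" knob the abstract
# alludes to: widen or deepen it and retrain to trade compute for accuracy.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
model.fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))
```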


The Analyst ◽  
2018 ◽  
Vol 143 (22) ◽  
pp. 5380-5387 ◽  
Author(s):  
DaeHan Ahn ◽  
JiYeong Lee ◽  
SangJun Moon ◽  
Taejoon Park

In-line holographic microscopes have paved the way for portable cell-counting systems based on deep neural networks.


2017 ◽  
Author(s):  
B. B. Bankson ◽  
M.N. Hebart ◽  
I.I.A. Groen ◽  
C.I. Baker

Visual object representations are commonly thought to emerge rapidly, yet it has remained unclear to what extent early brain responses reflect purely low-level visual features of these objects and how strongly those features contribute to later categorical or conceptual representations. Here, we aimed to estimate a lower temporal bound for the emergence of conceptual representations by defining two criteria that characterize such representations: 1) conceptual object representations should generalize across different exemplars of the same object, and 2) these representations should reflect high-level behavioral judgments. To test these criteria, we compared magnetoencephalography (MEG) recordings between two groups of participants (n = 16 per group) exposed to different exemplar images of the same object concepts. Further, we disentangled low-level from high-level MEG responses by estimating the unique and shared contributions of models of behavioral judgments, semantics, and different layers of deep neural networks of visual object processing. We find that 1) both generalization across exemplars and generalization of object-related signals across time increase after 150 ms, peaking around 230 ms; and 2) behavioral judgments explain the most unique variance in the response after 150 ms. Collectively, these results suggest a lower bound for the emergence of conceptual object representations around 150 ms following stimulus onset.
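The unique-variance logic in this abstract can be sketched as a hierarchical regression: the unique contribution of, say, the behavioral-judgment model is the R² lost when it is dropped from the full set of predictors. The synthetic data below are purely illustrative, not the study's actual predictors or MEG responses.

```python
# Minimal sketch of variance partitioning: unique variance of one model
# = R^2(full model) - R^2(model with that predictor removed).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200  # illustrative number of observations

# Stand-ins for the three predictor models named in the abstract
behavior = rng.normal(size=(n, 1))    # behavioral judgments
semantics = rng.normal(size=(n, 1))   # semantic model
dnn_layer = rng.normal(size=(n, 1))   # a deep-network layer

# Synthetic "MEG response" driven mostly by the behavior predictor
meg = 0.6 * behavior[:, 0] + 0.3 * semantics[:, 0] + rng.normal(size=n)

full = np.hstack([behavior, semantics, dnn_layer])
reduced = np.hstack([semantics, dnn_layer])  # drop the behavior model

r2_full = LinearRegression().fit(full, meg).score(full, meg)
r2_reduced = LinearRegression().fit(reduced, meg).score(reduced, meg)
print("unique variance of behavioral judgments:", r2_full - r2_reduced)
```

Repeating this at each timepoint yields the time course of unique variance that the abstract summarizes.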


2019 ◽  
Author(s):  
Georgin Jacob ◽  
R. T. Pramod ◽  
Harish Katti ◽  
S. P. Arun

Deep neural networks have revolutionized computer vision, and their object representations coarsely match those of the brain. As a result, it is widely believed that any fine-scale differences between deep networks and brains can be fixed with increased training data or minor changes in architecture. But what if there are qualitative differences between brains and deep networks? Do deep networks even see the way we do? To answer this question, we chose a deep neural network optimized for object recognition and asked whether it exhibits well-known perceptual and neural phenomena despite not being explicitly trained to do so. To our surprise, many phenomena were present in the network, including the Thatcher effect, mirror confusion, Weber's law, relative size, multiple object normalization, and sparse coding along multiple dimensions. However, some perceptual phenomena were notably absent, including processing of 3D shape, patterns on surfaces, occlusion, natural parts, and a global advantage. Our results elucidate the computational challenges of vision by showing that learning to recognize objects suffices to produce some perceptual phenomena but not others, and they reveal perceptual properties that could be incorporated into deep networks to improve their performance.
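One of these tests, mirror confusion, can be sketched as follows: in a network showing the effect, an image and its left-right mirror should lie closer in feature space than two unrelated images. The choice of a pretrained ResNet-50 and the stimulus file names below are assumptions, not the authors' exact setup.

```python
# Minimal sketch: probe mirror confusion in a pretrained object-recognition
# network by comparing feature-space similarities. Stimulus files are
# hypothetical placeholders.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

net = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
# Drop the classifier head to get a 2048-d penultimate-layer embedding
backbone = torch.nn.Sequential(*list(net.children())[:-1])

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def embed(img):
    """Penultimate-layer embedding of a PIL image."""
    with torch.no_grad():
        return backbone(prep(img).unsqueeze(0)).flatten()

img_a = Image.open("object_a.png").convert("RGB")   # hypothetical stimuli
img_b = Image.open("object_b.png").convert("RGB")
mirror_a = img_a.transpose(Image.Transpose.FLIP_LEFT_RIGHT)

sim_mirror = F.cosine_similarity(embed(img_a), embed(mirror_a), dim=0)
sim_other = F.cosine_similarity(embed(img_a), embed(img_b), dim=0)
print(f"mirror pair: {sim_mirror:.3f}  unrelated pair: {sim_other:.3f}")
```

A markedly higher similarity for the mirror pair than for unrelated pairs, across many stimuli, would be the network-level analogue of mirror confusion.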

