The Zombification of Art History

2019 ◽  
Vol 11 (2) ◽  
pp. 28-35
Author(s):  
Tsila Hassine ◽  
Ziv Neeman

In the past few years, deep-learning neural networks have achieved major milestones in artistic image analysis and generation, producing what some refer to as ‘art.’ We reflect critically on the artistic shortcomings of several projects that occupied the spotlight in recent years. We introduce the term ‘Zombie Art’ to describe the generation of new images of dead masters, as well as ‘The AI Reproducibility Test.’ We identify the problems inherent in AI and in its application to art history. In conclusion, we propose new directions for both AI-generated art and art history in light of these powerful new AI technologies of artistic image analysis and generation.

Author(s):  
Ruofan Liao ◽  
Paravee Maneejuk ◽  
Songsak Sriboonchitta

In the past, in many areas, the best prediction models were linear and nonlinear parametric models. In the last decade, in many application areas, deep learning has been shown to lead to more accurate predictions than parametric models. Deep learning-based predictions are reasonably accurate, but not perfect. How can we achieve better accuracy? To achieve this objective, we propose to combine neural networks with a parametric model: namely, to train neural networks not on the original data, but on the differences between the actual data and the predictions of the parametric model. Using the example of predicting currency exchange rates, we show that this idea indeed leads to more accurate predictions.
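The residual-combination idea can be illustrated with a minimal sketch (not the authors' implementation; for brevity, a nearest-neighbour residual lookup stands in for the neural network that would be trained on the residuals):

```python
# Sketch of the hybrid scheme: parametric model + corrector trained on residuals.
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b (the parametric model)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Toy "exchange-rate" series: linear trend plus a nonlinear wiggle.
xs = [i / 10 for i in range(50)]
ys = [0.5 * x + 1.0 + 0.2 * ((x % 1.0) - 0.5) for x in xs]

a, b = fit_linear(xs, ys)
residuals = [y - (a * x + b) for x, y in zip(xs, ys)]

def correct(x):
    """Stand-in for the neural network: nearest-neighbour residual lookup."""
    i = min(range(len(xs)), key=lambda j: abs(xs[j] - x))
    return residuals[i]

def predict(x):
    """Final forecast = parametric prediction + learned correction."""
    return (a * x + b) + correct(x)
```

In the paper's setup, the corrector is a neural network fitted to `residuals`; the structure of the final prediction is the same.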


Author(s):  
Carlos Lassance ◽  
Vincent Gripon ◽  
Antonio Ortega

For the past few years, deep learning (DL) robustness (i.e., the ability to maintain the same decision when inputs are subject to perturbations) has become a question of paramount importance, in particular in settings where misclassification can have dramatic consequences. To address this question, authors have proposed different approaches, such as adding regularizers or training using noisy examples. In this paper we introduce a regularizer based on the Laplacian of similarity graphs obtained from the representation of training data at each layer of the DL architecture. This regularizer penalizes large changes (across consecutive layers in the architecture) in the distance between examples of different classes, and as such enforces smooth variations of the class boundaries. We provide theoretical justification for this regularizer and demonstrate its effectiveness in improving robustness on classical supervised learning vision datasets for various types of perturbations. We also show it can be combined with existing methods to increase overall robustness.
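The ingredients of such a regularizer can be sketched as follows (hypothetical code, not the authors' implementation: a dense RBF similarity graph per layer and the label-signal smoothness y^T L y, whose change across consecutive layers is penalized):

```python
import math

def similarity_graph(reps, sigma=1.0):
    """Dense RBF similarity graph over example representations at one layer."""
    n = len(reps)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                d2 = sum((a - b) ** 2 for a, b in zip(reps[i], reps[j]))
                W[i][j] = math.exp(-d2 / (2 * sigma ** 2))
    return W

def label_smoothness(W, labels):
    """y^T L y with L = D - W, written as (1/2) * sum_ij W_ij (y_i - y_j)^2."""
    n = len(W)
    return 0.5 * sum(W[i][j] * (labels[i] - labels[j]) ** 2
                     for i in range(n) for j in range(n))

def laplacian_regularizer(layer_reps, labels):
    """Penalize changes in label-signal smoothness between consecutive layers."""
    s = [label_smoothness(similarity_graph(r), labels) for r in layer_reps]
    return sum(abs(s[k + 1] - s[k]) for k in range(len(s) - 1))
```

If consecutive layers keep classes equally well separated, the smoothness values match and the penalty vanishes; abrupt changes in class-boundary geometry are penalized.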


2022 ◽  
pp. 1-27
Author(s):  
Clifford Bohm ◽  
Douglas Kirkpatrick ◽  
Arend Hintze

Deep learning (primarily using backpropagation) and neuroevolution are the preeminent methods of optimizing artificial neural networks. However, they often create black boxes that are as hard to understand as the natural brains they seek to mimic. Previous work has identified an information-theoretic tool, referred to as R, which allows us to quantify and identify mental representations in artificial cognitive systems. The use of such measures has allowed us to make previous black boxes more transparent. Here we extend R to not only identify where complex computational systems store memory about their environment but also to differentiate between different time points in the past. We show how this extended measure can identify the location of memory related to past experiences in neural networks optimized by deep learning as well as a genetic algorithm.
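The abstract describes R only conceptually. A common building block of such information-theoretic measures is the mutual information between a network's hidden states and features of its environment, which for discrete sequences can be estimated as below (an illustrative sketch of the building block, not the authors' definition of R):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits for two aligned discrete sequences (plug-in estimate)."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```

A hidden unit whose state sequence shares high mutual information with a past environment feature is a candidate location of memory about that feature.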


In recent years, deep learning methodologies, especially convolutional neural networks (CNNs), have been utilized in a variety of applications. CNNs have shown a key capacity to automatically extract large volumes of information from big data, and their uses have proven especially valuable in classifying natural images. However, there have been significant obstacles to deploying CNNs in the medical domain because of the scarcity of real training data. Consequently, general imaging benchmarks such as ImageNet have been widely used in the medical domain, even though they are not ideal training material for medical CNNs. In this paper, a comparative study of LeNet, AlexNet, and GoogLeNet is carried out. Building on it, the paper proposes an improved theoretical framework for classifying medical anatomy images using CNNs. Based on the proposed structure of the framework, the CNN architecture is expected to outperform the previous three designs in classifying medical images.


EDIS ◽  
2021 ◽  
Vol 2021 (5) ◽  
Author(s):  
Amr Abd-Elrahman ◽  
Katie Britt ◽  
Vance Whitaker

This publication presents a guide to image analysis for researchers and farm managers who use ArcGIS software. Anyone with basic geographic information system analysis skills may follow along with the demonstration and learn to implement the Mask Region-based Convolutional Neural Network (Mask R-CNN) model, a widely used model for object detection, to delineate strawberry canopies using the ArcGIS Pro Image Analyst Extension in a simple workflow. This process is useful for precision agriculture management.


2019 ◽  
Vol 491 (2) ◽  
pp. 2280-2300 ◽  
Author(s):  
Kaushal Sharma ◽  
Ajit Kembhavi ◽  
Aniruddha Kembhavi ◽  
T Sivarani ◽  
Sheelu Abraham ◽  
...  

Due to the ever-expanding volume of observed spectroscopic data from surveys such as SDSS and LAMOST, it has become important to apply artificial intelligence (AI) techniques for analysing stellar spectra to solve spectral classification and regression problems like the determination of stellar atmospheric parameters Teff, $\rm {\log g}$, and [Fe/H]. We propose an automated approach for the classification of stellar spectra in the optical region using convolutional neural networks (CNNs). Traditional machine learning (ML) methods with ‘shallow’ architectures (usually up to two hidden layers) have been trained for these purposes in the past. However, deep learning methods with a larger number of hidden layers allow the use of finer details in the spectrum, which results in improved accuracy and better generalization. Studying finer spectral signatures also enables us to determine accurate differential stellar parameters and find rare objects. We examine various machine and deep learning algorithms, such as artificial neural networks, Random Forest, and CNNs, to classify stellar spectra using the Jacoby Atlas, ELODIE, and MILES spectral libraries as training samples. We test the performance of the trained networks on the Indo-U.S. Library of Coudé Feed Stellar Spectra (CFLIB). We show that using CNNs, we are able to lower the error to 1.23 spectral subclasses, compared with the two subclasses achieved in past studies with ML approaches. We further apply the trained model to classify stellar spectra retrieved from the SDSS database with SNR > 20.
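The ‘finer spectral signatures’ argument rests on convolution, the local sliding operation a CNN applies along a spectrum. A toy sketch of a single 1-D filter responding to an absorption dip (illustrative only; the kernel and spectrum are invented, not the paper's architecture):

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D cross-correlation: the core operation a CNN layer
    slides along a spectrum to respond to local features such as lines."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(v):
    return [max(0.0, x) for x in v]

# Toy flat continuum with an absorption dip at index 5.
spectrum = [1.0] * 10
spectrum[5] = 0.2

# A matched "dip detector": positive wings, negative centre.
kernel = [0.5, -1.0, 0.5]
response = relu(conv1d(spectrum, kernel))
```

The filter output peaks exactly where the absorption feature sits and is zero on the flat continuum; a deep CNN stacks many such learned filters.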


Author(s):  
Derya Soydaner

In recent years, we have witnessed the rise of deep learning. Deep neural networks have proved their success in many areas. However, the optimization of these networks has become more difficult as neural networks go deeper and datasets become bigger. Therefore, more advanced optimization algorithms have been proposed over the past years. In this study, widely used optimization algorithms for deep learning are examined in detail. To this end, these algorithms, called adaptive gradient methods, are implemented for both supervised and unsupervised tasks. The behavior of the algorithms during training, and their results on four image datasets, namely MNIST, CIFAR-10, Kaggle Flowers, and Labeled Faces in the Wild, are compared by pointing out their differences against basic optimization algorithms.
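As an example of the adaptive gradient methods such studies examine, here is a minimal sketch of one Adam update step (following the published Adam update rule of Kingma and Ba; the parameter names and toy usage are illustrative, not the study's code):

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update over a list of scalar parameters."""
    new_theta, new_m, new_v = [], [], []
    for th, g, mi, vi in zip(theta, grad, m, v):
        mi = b1 * mi + (1 - b1) * g        # biased first-moment estimate
        vi = b2 * vi + (1 - b2) * g * g    # biased second-moment estimate
        mhat = mi / (1 - b1 ** t)          # bias-corrected first moment
        vhat = vi / (1 - b2 ** t)          # bias-corrected second moment
        th -= lr * mhat / (math.sqrt(vhat) + eps)
        new_theta.append(th)
        new_m.append(mi)
        new_v.append(vi)
    return new_theta, new_m, new_v

# Usage: minimize f(x) = x^2 (gradient 2x) starting from x = 1.
x, m, v = [1.0], [0.0], [0.0]
for t in range(1, 501):
    x, m, v = adam_step(x, [2.0 * x[0]], m, v, t)
```

The per-parameter second-moment estimate is what makes the method ‘adaptive’: each parameter gets its own effective step size.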


2019 ◽  
Author(s):  
Mark Rademaker ◽  
Laurens Hogeweg ◽  
Rutger Vos

Knowledge of global biodiversity remains limited by geographic and taxonomic sampling biases. The scarcity of species data restricts our understanding of the underlying environmental factors shaping distributions and the ability to draw comparisons among species. Species distribution models (SDMs) were developed in the early 2000s to address this issue. Although SDMs based on single-layered neural networks were experimented with in the past, these performed poorly. However, the past two decades have seen a strong increase in the use of Deep Learning (DL) approaches, such as Deep Neural Networks (DNNs). Despite the large improvement in predictive capacity DNNs provide over shallow networks, to our knowledge they have not yet been applied to SDMs. The aim of this research was to provide a proof of concept of a DL-SDM. We used a pre-existing dataset of the world’s ungulates and abiotic environmental predictors that had recently been used in a MaxEnt SDM, to allow for a direct comparison of performance between the two methods. Our DL-SDM consisted of a binary classification DNN containing four hidden layers and drop-out regularization between each layer. Performance of the DL-SDM was similar to MaxEnt for species with relatively large sample sizes and worse for species with relatively low sample sizes. Increasing the number of occurrences further improved DL-SDM performance for species that already had relatively high sample sizes. We then tried to further improve performance by altering the sampling procedure of negative instances and increasing the number of environmental predictors, including species interactions. This led to a large increase in model performance across the range of sample sizes in the species datasets. We conclude that DL-SDMs provide a suitable alternative to traditional SDMs such as MaxEnt and have the advantage of being able both to directly include species interactions and to handle correlated input features.
Further improvements to the model would include increasing its scalability by turning it into a multi-class classification model, as well as developing a more user-friendly DL-SDM Python package.
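The forward pass of the described DL-SDM (ReLU hidden layers with dropout between them, sigmoid presence output) can be sketched as follows; the helper names and layer sizes are illustrative assumptions, not the authors' code:

```python
import math
import random

def dense(v, W, b):
    """Fully connected layer: W is a list of weight rows, one per output unit."""
    return [sum(wi * xi for wi, xi in zip(row, v)) + bi
            for row, bi in zip(W, b)]

def relu(v):
    return [max(0.0, x) for x in v]

def dropout(v, p, training=True):
    """Inverted dropout: drop units with probability p, rescale survivors."""
    if not training or p == 0.0:
        return list(v)
    return [0.0 if random.random() < p else x / (1.0 - p) for x in v]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def presence_probability(x, layers, out_W, out_b, p=0.5, training=False):
    """Binary SDM forward pass: hidden ReLU+dropout layers, sigmoid output."""
    h = x
    for W, b in layers:
        h = dropout(relu(dense(h, W, b)), p, training)
    return sigmoid(dense(h, out_W, out_b)[0])
```

At prediction time dropout is disabled, so the model maps a vector of environmental predictors deterministically to a presence probability.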


Author(s):  
Meng Gao ◽  
Yue Wang ◽  
Haipeng Xu ◽  
Congcong Xu ◽  
Xianhong Yang ◽  
...  

Since the results of basic and specific classification in male androgenic alopecia are subjective, and trichoscopic data, such as hair density and diameter distribution, are potential quantitative indicators, the aim of this study was to develop a deep learning framework for automatic trichoscopic image analysis and a quantitative model for predicting basic and specific classifications in male androgenic alopecia. A total of 2,910 trichoscopic images were collected, and a deep learning framework was created based on convolutional neural networks. Using the trichoscopic data provided by the framework, correlations with basic and specific classifications were analysed, and a quantitative model was developed for predicting basic and specific classifications using multiple ordinal logistic regression. The study thus delivers a deep learning framework that can accurately analyse hair density and diameter distribution on trichoscopic images, together with a quantitative model that predicts basic and specific classifications in male androgenic alopecia with high accuracy.
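The prediction step of an ordinal (proportional-odds) logistic regression like the one described can be sketched as follows; the coefficients, cutpoints, and grade count are hypothetical, not the study's fitted model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ordinal_probs(x, beta, cutpoints):
    """Proportional-odds model: P(Y <= k) = sigmoid(theta_k - x . beta)."""
    eta = sum(b * xi for b, xi in zip(beta, x))
    cum = [sigmoid(t - eta) for t in cutpoints] + [1.0]
    return [cum[0]] + [cum[k] - cum[k - 1] for k in range(1, len(cum))]

def predict_grade(x, beta, cutpoints):
    """Most probable grade (0 .. len(cutpoints))."""
    p = ordinal_probs(x, beta, cutpoints)
    return max(range(len(p)), key=lambda k: p[k])

# Hypothetical example: one trichoscopic predictor (e.g. a density score) and
# three cutpoints separating four ordered severity grades.
beta = [0.5]
cutpoints = [-1.0, 0.0, 1.0]
```

The single coefficient vector shared across all cutpoints is what distinguishes the ordinal model from fitting separate binary classifiers per grade.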

