From Swarm Art Toward Ecosystem Art

2012 ◽  
Vol 3 (3) ◽  
pp. 1-18 ◽  
Author(s):  
Stefan Bornhofen ◽  
Vincent Gardeux ◽  
Andréa Machizaud

Swarm intelligence deals with the study of collectively intelligent behavior that emerges from a decentralized system of non-intelligent individual agents. The concept is widely used in the fields of simulation, optimization, and robotics, but less known in the domain of generative art. This paper presents the swarm paradigm in the context of artistic creation, and more particularly explores the value of enhancing swarm models with dynamics inspired by natural ecosystems. The authors introduce an energy budget for the agents of a swarm system, and show how mapping the energy level to visual information such as line width or color, combined with mechanisms such as resource chasing and consumption, enriches the search space of possible images. Moreover, the authors highlight that the approach allows the user to partially control the creation process of the drawings. The authors argue that the exploration of ecosystem dynamics in generative systems may open up novel artistic opportunities and shift the perspective from swarm art toward ecosystem art.
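As a loose illustration of the mechanism described (not the authors' actual model), an agent can carry an energy budget that is depleted by movement, replenished by consuming resources, and mapped to a visual attribute such as stroke width. The movement rule, cost, gain, and width mapping below are all illustrative choices:

```python
class Agent:
    """A swarm agent whose energy budget shapes its visual trace."""

    def __init__(self, x, y, energy=10.0):
        self.x, self.y, self.energy = x, y, energy

    def step(self, resources, cost=1.0, gain=5.0):
        """Chase the nearest resource, consume it if reached, pay a
        metabolic cost, and return a drawing primitive (x, y, width)."""
        if resources:
            # Move one grid step toward the nearest resource (resource chasing).
            tx, ty = min(resources,
                         key=lambda r: abs(r[0] - self.x) + abs(r[1] - self.y))
            self.x += (tx > self.x) - (tx < self.x)
            self.y += (ty > self.y) - (ty < self.y)
            if (self.x, self.y) in resources:      # consumption
                resources.remove((self.x, self.y))
                self.energy += gain
        self.energy -= cost                        # metabolic cost per step
        # Map the energy level to a visual attribute: here, stroke width.
        width = max(0.5, self.energy / 5.0)
        return (self.x, self.y, width)
```

Rendering each returned `(x, y, width)` tuple as a line segment makes well-fed agents draw bold strokes and starving agents fade out, which is the kind of enriched image space the abstract describes.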

2019 ◽  
Vol 7 (4) ◽  
pp. 43-49
Author(s):  
Katyaa Nakova-Tahchieva

The present work is part of a research paper for which I extend my heartfelt thanks to Assoc. Prof. Dr. Valko Kanev. It examines some specifics of the artistic creation process that lead to one type of written student text: the narration. Its variations, "narration by imagination" and "narration by set supports", are regulated in the new fifth-grade curriculum. The requirements for writing a narration and the exemplary thematic curriculum of optional literature classes in the 7th grade have proven to be applicable in the literature education process.


Mathematics ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1636
Author(s):  
Noé Ortega-Sánchez ◽  
Diego Oliva ◽  
Erik Cuevas ◽  
Marco Pérez-Cisneros ◽  
Angel A. Juan

The techniques of halftoning are widely used in marketing because they reduce the cost of printing while maintaining the quality of graphics. Halftoning converts a digital image into a binary image composed of dots. The output of halftoning contains less visual information; a possible benefit of this is the reduction of ink when graphics are printed. The human eye is not able to detect the absence of information, and the printed image still has good quality. The most widely used method for halftoning is the Floyd-Steinberg algorithm, which defines a specific matrix for the halftoning conversion. However, most of the proposed techniques in halftoning use predefined kernels that do not permit adaptation to different images. This article introduces the use of the harmony search algorithm (HSA) for halftoning. The HSA is a popular evolutionary algorithm inspired by musical improvisation. The different operators of the HSA permit an efficient exploration of the search space. The HSA is applied to find the best configuration of the kernel for halftoning; meanwhile, the structural similarity index (SSIM) is proposed as the objective function. A set of rules is also introduced to reduce the regular patterns that could be created by inappropriate kernels. The SSIM is used because it is a perception model that permits comparing images and interpreting the differences between them numerically. The aim of combining the HSA with the SSIM for halftoning is to generate an adaptive method that estimates the best kernel for each image based on its intrinsic attributes. The graphical quality of the proposed algorithm has been compared with classical halftoning methodologies. Experimental results and comparisons provide evidence of the quality of the images obtained by the proposed optimization-based approach; in this context, classical algorithms have lower graphical quality than our proposal. The results have been validated by a statistical analysis based on independent experiments over the set of benchmark images, using the mean and standard deviation.
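For context, the kernel being optimized here is an error-diffusion kernel. A minimal sketch of error-diffusion halftoning, parameterized by the kernel weights (the defaults are the classic Floyd-Steinberg weights; an optimizer such as the HSA would search over these weights instead of fixing them):

```python
def error_diffusion(img,
                    kernel=((0, 1, 7/16), (1, -1, 3/16),
                            (1, 0, 5/16), (1, 1, 1/16))):
    """Binarize a grayscale image (nested lists of values in [0, 1]) by
    diffusing each pixel's quantization error to its unvisited neighbors.
    Each kernel entry is (row offset, column offset, weight)."""
    img = [row[:] for row in img]          # work on a copy
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = 1 if img[y][x] >= 0.5 else 0   # threshold to a dot
            err = img[y][x] - out[y][x]                # quantization error
            for dy, dx, wt in kernel:                  # push error forward
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    img[ny][nx] += err * wt
    return out
```

An SSIM-based objective, as the abstract proposes, would then score `error_diffusion(img, candidate_kernel)` against the original image for each kernel the HSA improvises.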


2017 ◽  
Vol 262 (2) ◽  
pp. 673-681 ◽  
Author(s):  
Rafael de Carvalho Miranda ◽  
José Arnaldo Barra Montevechi ◽  
Aneirson Francisco da Silva ◽  
Fernando Augusto Silva Marins

2021 ◽  
Author(s):  
Bruce C. Hansen ◽  
Michelle R. Greene ◽  
David J. Field

A chief goal of systems neuroscience is to understand how the brain encodes information in our visual environments. Understanding that neural code is crucial to explaining how visual content is transformed via subsequent semantic representations to enable intelligent behavior. Although the visual code is not static, this reality is often obscured in voxel-wise encoding models of BOLD signals due to fMRI's poor temporal resolution. We leveraged the high temporal resolution of EEG to develop an encoding technique based on state-space theory. This approach maps neural signals to each pixel within a given image and reveals location-specific transformations of the visual code, providing a spatiotemporal signature for the image at each electrode. This technique offers a spatiotemporal visualization of the evolution of the neural code of visual information thought impossible to obtain from EEG, and promises to provide insight into how visual meaning develops through dynamic feedforward and recurrent processes.


2018 ◽  
Author(s):  
Jianfu Wang

Distributed Ledger Technology (DLT) creates a decentralized system for trust and transaction validation, using executable smart contracts to update information across a distributed database. This type of ecosystem can be applied to Commodity Trade Finance to alleviate the critical issues of information asymmetry and the cost of transacting, which are the leading causes of the Trade Finance Gap (i.e., the lack of supply of capital to meet total trade finance demand). Scaling up such ecosystems with a number of institutional investors and micro, small, and medium enterprises (MSMEs) would be advantageous; however, it brings its own set of challenges, including the stability of the system design. Agent-based modeling (ABM) is a powerful method for assessing financial ecosystem dynamics, and DLT ecosystems model well under ABM because the agents present a clearly defined taxonomy. In this study, we use ABM to assess the Aquifer Institute Platform, a DLT-based Commodity Trade Finance system in which a growing number of participating parties is closely related to the circulation of utility tokens and transaction flows. We study the system dynamics of the platform and propose an appropriate setup for different transaction loads.
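The shape of such an agent-based model can be sketched in a few lines. The toy below is not the Aquifer platform model; the agent types (investors holding utility tokens, MSMEs requesting financing) follow the taxonomy in the abstract, but every parameter and rule is illustrative:

```python
import random

def simulate(n_investors=5, n_msmes=20, tokens_per_investor=100,
             trade_size=10, rounds=50, seed=1):
    """Toy ABM of a trade-finance platform: investors hold utility tokens,
    each MSME requests financing of a fixed trade size each round, and a
    funded trade settles next round, returning tokens to its investor."""
    rng = random.Random(seed)
    investors = [tokens_per_investor] * n_investors
    funded = []              # (investor index, tokens locked) per open trade
    volume = 0               # cumulative transaction volume
    for _ in range(rounds):
        # Open trades settle; locked tokens flow back to their investors.
        for i, t in funded:
            investors[i] += t
        funded = []
        # Each MSME asks a random investor; fund if capacity allows.
        for _ in range(n_msmes):
            i = rng.randrange(n_investors)
            if investors[i] >= trade_size:
                investors[i] -= trade_size
                funded.append((i, trade_size))
                volume += trade_size
    total_supply = sum(investors) + sum(t for _, t in funded)
    return volume, total_supply

volume, supply = simulate()
```

The model conserves token supply by construction while transaction volume accumulates, which is the kind of circulation-versus-load relationship the study examines under different transaction loads.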


2018 ◽  
Vol 38 (3) ◽  
Author(s):  
Catalin Brylla

Mainstream narratives depicting blind people who create visual art have repeatedly used the supercrip trope. For a seeing audience this trope highlights an artist's extraordinary skill and perseverance in creating aesthetic artefacts despite lacking – what is presumed to be – the essential sensory input of sight. This type of representation fails to portray the diversity and complexity of individual character traits but conveniently places blindness at the story's center; this turns the artistic process into a simplistic manifestation of 'abnormality' and 'otherness'. My own documentary practice explores filmic strategies that bypass the supercrip trope by emphasizing the 'everydayness' of the artistic creation process. The aim is for a seeing audience to experience the creation process as an ordinary, everyday act – amongst many others – in which blindness is neither foregrounded nor 'backgrounded'. This is illustrated through discussion of my documentary The Terry Fragments (2018), a film that represents a blind artist's painting process through narrative fragments and the depiction of improvisation and failure. These strategies evoke the multi-layered and heterarchical plurality of everydayness, which potentially resists the formation of the supercrip trope. This method can be applied to a variety of disability contexts that are prone to perpetuating the supercrip stereotype.


Author(s):  
Alex A. Freitas ◽  
Rafael S. Parpinelli ◽  
Heitor S. Lopes

Ant colony optimization (ACO) is a relatively new computational intelligence paradigm inspired by the behavior of natural ants (Dorigo & Stutzle, 2004). Ants often find the shortest path between a food source and the nest of the colony without using visual information. In order to exchange information about which path should be followed, ants communicate with each other by means of a chemical substance called pheromone. As ants move, a certain amount of pheromone is dropped on the ground, creating a pheromone trail. The more ants that follow a given trail, the more attractive that trail becomes to other ants. This process involves a positive feedback loop, in which the probability that an ant chooses a path is proportional to the number of ants that have already passed along that path. Hence, individual ants, following very simple rules, interact to produce intelligent behavior at the higher level of the ant colony. In other words, intelligence is an emergent phenomenon. In this article we present an overview of Ant-Miner, an ACO algorithm for discovering classification rules in data mining (Parpinelli, Lopes, & Freitas, 2002a, 2002b), as well as a review of several Ant-Miner variations and related ACO algorithms. All the algorithms reviewed in this article address the classification task of data mining. In this task each case (record) of the data being mined consists of two parts: a goal attribute, whose value is to be predicted, and a set of predictor attributes. The aim is to predict the value of the goal attribute for a case, given the values of the predictor attributes for that case (Fayyad, Piatetsky-Shapiro, & Smyth, 1996).
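The positive feedback loop can be made concrete with a deterministic mean-field caricature (not Ant-Miner itself): each iteration, a unit of ant flow splits across paths in proportion to pheromone, pheromone evaporates, and each path receives a deposit inversely proportional to its length. All parameter values are illustrative:

```python
def run_colony(lengths, iters=30, evaporation=0.1):
    """Mean-field sketch of the ACO feedback loop over parallel paths.

    lengths: path lengths; shorter paths get larger deposits per unit of
    ant flow, so they accumulate pheromone and attract more flow."""
    pheromone = [1.0] * len(lengths)
    for _ in range(iters):
        total = sum(pheromone)
        # Fraction of ants taking each path, proportional to pheromone.
        flow = [p / total for p in pheromone]
        # Evaporate, then deposit flow/length on each path.
        pheromone = [(1 - evaporation) * p + f / l
                     for p, f, l in zip(pheromone, flow, lengths)]
    return pheromone

trail = run_colony([1.0, 2.0, 4.0])   # path 0 is the shortest
```

Starting from equal pheromone, the shortest path's larger per-ant deposit compounds through the loop, so its trail ends strongest, which is the emergent shortest-path behavior the paragraph describes.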


Entropy ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. 1284
Author(s):  
Diogo de Andrade ◽  
Nuno Fachada ◽  
Carlos M. Fernandes ◽  
Agostinho C. Rosa

We present a generative swarm art project that creates 3D animations by running a Particle Swarm Optimization algorithm over synthetic landscapes produced by an objective function. Different kinds of functions are explored, including mathematical expressions, Perlin noise-based terrain, and several image-based procedures. A method for displaying the particle swarm exploring the search space in aesthetically pleasing ways is described. Several experiments are detailed and analyzed, and a number of interesting visual artifacts are highlighted.
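A minimal Particle Swarm Optimization loop of the kind that drives such a project looks as follows; the coefficients are common textbook values, not the authors' settings, and in an art setting one would render every particle position per iteration rather than report only the optimum:

```python
import random

def pso(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Minimize f over a 'landscape' with inertia w and cognitive/social
    pulls c1, c2; returns the best position found and its value."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

sphere = lambda p: sum(x * x for x in p)    # a simple synthetic landscape
best, best_val = pso(sphere)
```

Swapping `sphere` for a Perlin-noise terrain or an image-derived height map gives the landscape families the abstract explores.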


2018 ◽  
Vol 8 (11) ◽  
pp. 2153 ◽  
Author(s):  
Shih-Cheng Horng ◽  
Shieh-Shing Lin

Probabilistic constrained simulation optimization problems (PCSOP) are concerned with allocating limited resources to optimize a stochastic objective function subject to a probabilistic inequality constraint. PCSOP are NP-hard problems whose goal is to find optimal solutions, using simulation, in a large search space. Ordinal optimization (OO) theory has proven effective for determining an outstanding solution to such NP-hard problems in a reasonable amount of time, but a probabilistic inequality constraint greatly decreases its effectiveness and efficiency. In this work, a method that embeds OO into the tree-seed algorithm (TSA), called OOTSA, is proposed for solving the PCSOP. The OOTSA method consists of three modules: surrogate model, exploration, and exploitation. The proposed OOTSA approach is then applied to minimize the expected lead time of semi-finished products in a pull-type production system, formulated as a PCSOP that comprises a well-defined search space. Test results obtained by the OOTSA are compared with the results obtained by three heuristic approaches. Simulation results demonstrate that the OOTSA method yields an outstanding solution of much higher quality, with much higher computing efficiency, than the three heuristic approaches.
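The OO screening idea underlying the surrogate-model module can be sketched as follows (this is not the authors' OOTSA, which further couples OO with the tree-seed algorithm): a cheap surrogate ranks the large search space, and the expensive exact simulation is spent only on a small selected subset. The surrogate and objective here are toy stand-ins:

```python
import math

def ordinal_selection(candidates, crude_eval, exact_eval, s=20):
    """Rank all candidates with a cheap surrogate, keep the top s (the
    selected 'good enough' subset), then run the costly exact evaluation
    only on that shortlist."""
    ranked = sorted(candidates, key=crude_eval)   # cheap, approximate ranking
    shortlist = ranked[:s]                        # exploration narrows the space
    return min(shortlist, key=exact_eval)         # exploitation with exact model

exact = lambda x: (x - 300) ** 2                  # stand-in for a costly simulation
crude = lambda x: exact(x) + 3000 * math.sin(x)   # surrogate with bounded error
best = ordinal_selection(range(1000), crude, exact)
```

Because the surrogate's error is bounded, the true optimum survives the ordinal screening even though the crude ranking is wrong in detail; OO theory quantifies how large `s` must be for such guarantees.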

