SpaceSheets: Design Experimentation in Latent Space

2021 ◽  
Author(s):  
Bryan Loh

Computational design tools enable designers to construct and manipulate representations of design artifacts to arrive at a solution. However, the constraints of deterministic programming make exploring design alternatives through these models tedious and inflexible: they require designers to express high-level design intent through sequences of low-level operations. Generative neural networks can construct generalised models of images which capture principles implicit within them. The latent spaces of these models can be sampled to create novel images and to perform semantic operations. This presents an opportunity for more meaningful and efficient design experimentation, in which designers express design intent through principles inferred by the model instead of through sequences of low-level operations. A general-purpose software prototype has been devised and evaluated to investigate the affordances of such a tool. This software, termed a SpaceSheet, takes the form of a spreadsheet interface and enables users to explore a latent space of fonts. User testing and observation of task-based evaluations revealed that the tool enabled a novel top-down approach to design experimentation. This mode of working required a new set of skills for users to derive meaning and navigate within the model effectively. Despite this, a rudimentary understanding was observed to be sufficient to enable designers and non-designers alike to explore design possibilities more effectively.
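As an illustration of the kind of latent-space operations such a spreadsheet can expose, the following is a minimal Python sketch, not the SpaceSheet implementation itself; the `decode` function is a stand-in for a trained generative decoder, and the cell names are purely illustrative.

```python
import numpy as np

LATENT_DIM = 64
rng = np.random.default_rng(0)

def decode(z: np.ndarray) -> np.ndarray:
    # Stand-in for a trained generative decoder (e.g. a font VAE/GAN); here we simply
    # fold the latent vector into an 8x8 grayscale array so the sketch runs end to end.
    return np.tanh(z.reshape(8, 8))

# Two sampled latent vectors, treated like spreadsheet cells A1 and B1.
a1 = rng.standard_normal(LATENT_DIM)
b1 = rng.standard_normal(LATENT_DIM)

# Interpolation cell: a formula such as AVERAGE(A1, B1) becomes the latent midpoint.
c1 = 0.5 * (a1 + b1)

# Extrapolation cell: move past B1 along the A1 -> B1 direction, exaggerating whatever
# attribute distinguishes the two samples (e.g. the weight or slant of a glyph).
d1 = b1 + 1.5 * (b1 - a1)

# Every cell is realised as an image by decoding its latent vector.
images = [decode(z) for z in (a1, b1, c1, d1)]
```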


1963 ◽  
Vol 67 (635) ◽  
pp. 706-710 ◽  
Author(s):  
I. C. Taig

The emergence of electronic computers has, in the past decade, brought about a revolution in the analysis of aircraft structures which is now generally accepted throughout the industry. But the major impact of computers on design has, in my opinion, not yet been realised, and I hope, in this short paper, to show how powerful a tool they can become in the hands of imaginative design engineers. The direct benefits to the designer lie in the ability of computers to perform large amounts of routine arithmetic in a short space of time. This capability enables us to obtain an insight into structural behaviour and to consider the influence of the aircraft environment and design alternatives from the earliest stages of design. By developing general-purpose computer programmes for handling routine calculations, the structural designer can be freed from much of the laborious calculation associated with complex structures and is able to devote more time to creative work. An even more important indirect benefit is the basic re-thinking of design processes which is necessary in order to reduce them to routine arithmetic. We begin to realise that many aspects of efficient design, such as continuity, cost and weight, can be introduced as quantitative parameters instead of relying on intuitive compromise.


2020 ◽  
Vol 39 (10-11) ◽  
pp. 1259-1278
Author(s):  
Ryan C Julian ◽  
Eric Heiden ◽  
Zhanpeng He ◽  
Hejia Zhang ◽  
Stefan Schaal ◽  
...  

We present a strategy for simulation-to-real transfer, which builds on recent advances in robot skill decomposition. Rather than focusing on minimizing the simulation–reality gap, we propose a method for increasing the sample efficiency and robustness of existing simulation-to-real approaches which exploits hierarchy and online adaptation. Instead of learning a unique policy for each desired robotic task, we learn a diverse set of skills and their variations, and embed those skill variations in a continuously parameterized space. We then interpolate, search, and plan in this space to find a transferable policy which solves more complex, high-level tasks by combining low-level skills and their variations. In this work, we first characterize the behavior of this learned skill space, by experimenting with several techniques for composing pre-learned latent skills. We then discuss an algorithm which allows our method to perform long-horizon tasks never seen in simulation, by intelligently sequencing short-horizon latent skills. Our algorithm adapts to unseen tasks online by repeatedly choosing new skills from the latent space, using live sensor data and simulation to predict which latent skill will perform best next in the real world. Importantly, our method learns to control a real robot in joint-space to achieve these high-level tasks with little or no on-robot time, despite the fact that the low-level policies may not be perfectly transferable from simulation to real, and that the low-level skills were not trained on any examples of high-level tasks. In addition to our results indicating a lower sample complexity for families of tasks, we believe that our method provides a promising template for combining learning-based methods with proven classical robotics algorithms such as model-predictive control.
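The online skill-selection step can be pictured with a short sketch. This is a hypothetical outline rather than the authors' code: `simulator`, `policy.act`, and the reward signal are assumed interfaces standing in for the paper's simulation-in-the-loop prediction of which latent skill to execute next.

```python
import numpy as np

def predict_return(simulator, policy, state, skill, horizon=50):
    """Roll the skill-conditioned policy forward in simulation and score the outcome."""
    total_reward = 0.0
    sim_state = simulator.reset_to(state)        # start from the live sensor estimate
    for _ in range(horizon):
        action = policy.act(sim_state, skill)    # low-level action conditioned on the latent skill
        sim_state, reward, done = simulator.step(action)
        total_reward += reward
        if done:
            break
    return total_reward

def choose_next_skill(simulator, policy, state, skill_candidates):
    """Pick the latent skill whose simulated short-horizon rollout scores best."""
    returns = [predict_return(simulator, policy, state, z) for z in skill_candidates]
    return skill_candidates[int(np.argmax(returns))]
```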


2020 ◽  
Author(s):  
Bryan Loh ◽  
Tom White

Generative models capture properties and relationships of images in a generic vector space representation called a latent space. Latent spaces can be sampled to create novel images and to perform semantic operations consistent with the principles inferred from the training set. Designers can use representations learned by generative models to express design intent, enabling more effective design experimentation. We present the SpaceSheet, a general-purpose spreadsheet interface designed to support the experimentation and exploration of latent spaces.
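One semantic operation a spreadsheet cell could hold is an attribute direction computed from labelled examples. The sketch below is illustrative only; `bold_latents` and `regular_latents` are assumed to come from encoding labelled samples and are faked with random data here.

```python
import numpy as np

rng = np.random.default_rng(1)
bold_latents = rng.standard_normal((10, 64))     # stand-ins for encodings of bold glyphs
regular_latents = rng.standard_normal((10, 64))  # stand-ins for encodings of regular glyphs

# An attribute direction is estimated as the difference of the class means.
bold_direction = bold_latents.mean(axis=0) - regular_latents.mean(axis=0)

# Adding the direction to a new latent vector nudges the decoded image toward the
# attribute, much like a spreadsheet formula that references a named "direction" cell.
z = rng.standard_normal(64)
z_bolder = z + 0.8 * bold_direction
```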


2016 ◽  
Vol 24 (1) ◽  
pp. 113-141 ◽  
Author(s):  
John H. Drake ◽  
Ender Özcan ◽  
Edmund K. Burke

Hyper-heuristics are high-level methodologies for solving complex problems that operate on a search space of heuristics. In a selection hyper-heuristic framework, a heuristic is chosen from an existing set of low-level heuristics and applied to the current solution to produce a new solution at each point in the search. The use of crossover low-level heuristics is possible in an increasing number of general-purpose hyper-heuristic tools such as HyFlex and Hyperion. However, little work has been undertaken to assess how best to utilise it. Since a single-point search hyper-heuristic operates on a single candidate solution, and two candidate solutions are required for crossover, a mechanism is required to control the choice of the other solution. The frameworks we propose maintain a list of potential solutions for use in crossover. We investigate the use of such lists at two conceptual levels. First, crossover is controlled at the hyper-heuristic level where no problem-specific information is required. Second, it is controlled at the problem domain level where problem-specific information is used to produce good-quality solutions to use in crossover. A number of selection hyper-heuristics are compared using these frameworks over three benchmark libraries with varying properties for an NP-hard optimisation problem: the multidimensional 0-1 knapsack problem. It is shown that allowing crossover to be managed at the domain level outperforms managing crossover at the hyper-heuristic level in this problem domain.
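The role of the maintained solution list can be pictured with a short sketch. This is a generic outline under assumed interfaces (`evaluate`, a set of unary low-level heuristics, and `crossover`), not the HyFlex or Hyperion API; it shows a single-point search that keeps a small list of potential partner solutions for crossover.

```python
import random

def hyper_heuristic_search(initial, evaluate, unary_heuristics, crossover,
                           iterations=1000, memory_size=5, rng=random.Random(0)):
    current = initial
    current_score = evaluate(current)
    potential = [initial]                       # list of potential partners for crossover

    for _ in range(iterations):
        if rng.random() < 0.3 and potential:    # heuristic selection: sometimes choose crossover
            partner = rng.choice(potential)     # second parent drawn from the maintained list
            candidate = crossover(current, partner)
        else:
            h = rng.choice(unary_heuristics)    # otherwise apply a unary low-level heuristic
            candidate = h(current)

        score = evaluate(candidate)
        if score >= current_score:              # naive accept-improving-or-equal move acceptance
            current, current_score = candidate, score
            potential.append(candidate)         # good solutions become future crossover partners
            if len(potential) > memory_size:
                potential.pop(0)
    return current, current_score
```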


Electronics ◽  
2021 ◽  
Vol 10 (8) ◽  
pp. 926
Author(s):  
Elyas Zamiri ◽  
Alberto Sanchez ◽  
Marina Yushkova ◽  
Maria Sofia Martínez-García ◽  
Angel de Castro

This paper aims to compare different design alternatives of hardware-in-the-loop (HIL) for emulating power converters in Field Programmable Gate Arrays (FPGAs). It proposes various numerical formats (fixed and floating-point) and different approaches (pure VHSIC Hardware Description Language (VHDL), Intellectual Properties (IPs), automated MATLAB HDL code, and High-Level Synthesis (HLS)) to design power converters. Although the proposed models are simple power electronics HIL systems, the idea can be extended to any HIL system. This study compares the design effort of different coding methods and numerical formats considering possible synthesis tools (Precision and Vivado), and it comprises an analytical discussion in terms of area and speed. The different models are synthesized as ad-hoc modules in general-purpose FPGAs, but also using the NI myRIO device as an example of a commercial tool capable of implementing HIL models. The comparison confirms that the optimum design alternative must be chosen based on the application (complexity, frequency, etc.) and designers’ constraints, such as available area, coding expertise, and design effort.
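To make the numerical-format trade-off concrete, here is a minimal Python sketch (the paper's designs are written in VHDL/HLS and synthesized to FPGAs, not Python) of one explicit-Euler step of a simple buck-converter HIL model in floating point and in an illustrative Q16.16 fixed-point format; the component values are made up.

```python
L, C, R = 100e-6, 100e-6, 1.0      # inductance (H), capacitance (F), load resistance (ohm)
DT = 1e-7                          # simulation step of the HIL model (s)

def step_float(i_l, v_c, v_in):
    """Explicit-Euler step of inductor current and capacitor voltage in floating point."""
    di = (v_in - v_c) / L * DT
    dv = (i_l - v_c / R) / C * DT
    return i_l + di, v_c + dv

FRAC = 16                          # Q16.16 fixed point: 16 fractional bits

def to_fix(x): return int(round(x * (1 << FRAC)))

def step_fixed(i_l, v_c, v_in):
    """Same step with state held as Q16.16 integers, as an FPGA datapath might compute it."""
    di = ((v_in - v_c) * to_fix(DT / L)) >> FRAC
    dv = ((i_l - ((v_c * to_fix(1 / R)) >> FRAC)) * to_fix(DT / C)) >> FRAC
    return i_l + di, v_c + dv
```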


2019 ◽  
Vol 1 (1) ◽  
pp. 31-39
Author(s):  
Ilham Safitra Damanik ◽  
Sundari Retno Andani ◽  
Dedi Sehendro

Milk is an important part of the diet for meeting nutritional needs, consumed by both children and adults. Indonesia has many producers of fresh milk, but production is not sufficient to meet national demand. Data mining is a branch of computer science that is widely used in research; one of its techniques is clustering, a method for grouping data that becomes more effective as the amount of data grows. The data used are provincial data for Indonesia from 2000 to 2017, obtained from the Central Statistics Agency. The result of this study is a partition into two clusters of milk-producing regions: high-producing and low-producing. From the 27 records of fresh milk production in Indonesia, two provinces fall into the high-level cluster, namely West Java and East Java, while the remaining 25, together with 7 provinces not included in the K-Means clustering calculation, fall into the low-level cluster.
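A minimal sketch of the clustering step, assuming each row holds one province's production figures; the numbers here are invented for illustration, and scikit-learn's KMeans stands in for the study's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows: provinces; columns: fresh-milk production for a few years (illustrative values only).
production = np.array([
    [255_000, 262_000, 270_000],   # high-producing province (illustrative)
    [248_000, 251_000, 259_000],   # high-producing province (illustrative)
    [1_200,   1_350,   1_100],     # low-producing province (illustrative)
    [900,     880,     950],       # low-producing province (illustrative)
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(production)
print(kmeans.labels_)              # cluster assignment: high vs. low producers
```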


Author(s):  
Margarita Khomyakova

The author analyzes definitions of the concept of determinants of crime given by various scholars and offers her own definition. In this study, determinants of crime are understood as the set of its causes, the circumstances that contribute to its commission, and the dynamics of crime. It is noted that in Article 244 of the Criminal Code the Russian legislator defines the object of this criminal assault as public morality. Despite the use of evaluative concepts both in the disposition of this norm and in determining the specific object of the crime, the position of criminologists is unequivocal: crimes of this kind are immoral and stand in irreconcilable conflict with generally accepted moral and legal norms. The paper also considers some views on making value judgments that could hardly apply to legal norms. According to the author, the reasons for abuse of the bodies of the dead include the economic problems of the perpetrator and a low level of culture and legal awareness; this list is not exhaustive. The main circumstances that contribute to the commission of abuse of the bodies of the dead and of their burial places are the following: low income and unemployment, a low level of criminological prevention, and poor maintenance and protection of medical institutions and cemeteries due to the underperformance of state and municipal bodies. This list of circumstances is also open-ended. Owing to several factors, including a high level of latency, it is not possible to reflect the dynamics of such crimes objectively. At the same time, identifying the determinants of abuse of the bodies of the dead will help reduce the number of such crimes.


2021 ◽  
pp. 002224372199837
Author(s):  
Walter Herzog ◽  
Johannes D. Hattula ◽  
Darren W. Dahl

This research explores how marketing managers can avoid the so-called false consensus effect—the egocentric tendency to project personal preferences onto consumers. Two pilot studies were conducted to provide evidence for the managerial importance of this research question and to explore how marketing managers attempt to avoid false consensus effects in practice. The results suggest that the debiasing tactic most frequently used by marketers is to suppress their personal preferences when predicting consumer preferences. Four subsequent studies show that, ironically, this debiasing tactic can backfire and increase managers’ susceptibility to the false consensus effect. Specifically, the results suggest that these backfire effects are most likely to occur for managers with a low level of preference certainty. In contrast, the results imply that preference suppression does not backfire but instead decreases false consensus effects for managers with a high level of preference certainty. Finally, the studies explore the mechanism behind these results and show how managers can ultimately avoid false consensus effects—regardless of their level of preference certainty and without risking backfire effects.

