Print-and-Play Fabrication

2021 ◽  
Author(s):  
Carlos E. Tejada

In recent years, creating interactive applications for screen-based devices has become increasingly accessible. In contrast, and despite their numerous benefits, creating tangible interactive devices remains a task reserved for experts, requiring extensive knowledge of electronics and manual assembly. While digital fabrication equipment holds promise to alleviate this situation, the majority of research exploring this avenue still presents significant barriers for non-experts and experts from other domains, often requiring assembly of electronic circuits and printed parts, prohibitive fabrication pipelines, or intricate calibration of machine learning models. This thesis introduces Print-and-Play Fabrication: a digital fabrication paradigm in which tangible interactive devices are printed rather than assembled. By embedding interior structures that leverage distinct properties of fluid behavior inside three-dimensional models, this thesis presents a variety of techniques to construct tangible devices that can sense, process, and respond to users' interactions without requiring assembly of parts or circuits, or calibration of machine learning models. Chapter 2 provides an overview of the literature on fabricating tangible devices through the lens of Print-and-Play Fabrication, highlighting the post-print activities required by each effort in the literature and reflecting on the status of the field. Chapters 3 and 4 introduce two novel techniques for constructing tangible devices that can sense users' interactions. AirTouch uses basic principles of fluid behavior to enable the construction of touch-sensing devices capable of detecting interactions in up to 12 locations with an accuracy of up to 98%. Blowhole builds on this concept, employing principles of acoustic resonance to construct tangible devices that can detect where a user gently blows on them.
Blowhole-enabled devices can provide up to seven interactive locations, with an accuracy of up to 98%. In Chapter 6, I introduce a technique to encapsulate logic computation in 3D-printed objects. Inspired by concepts from the Cold War era, I embed structures that represent basic logic operations through interacting jets of air into three-dimensional models. AirLogic takes the form of a toolkit, enabling non-expert designers to add a variety of input, logic-processing, and output mechanisms to three-dimensional models. Chapter 5 describes a toolkit for fabricating objects capable of changing their physical shape using pneumatic actuation. MorpheesPlug introduces a design environment, a set of pneumatically actuated widgets, and a control module that, in tandem, enable non-experts to construct devices that change their physical shape in order to provide output. Last, I conclude with reflections on the status of Print-and-Play Fabrication and possible directions for future work.
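The sensing principle behind Blowhole can be sketched in a few lines: each interactive hole is coupled to a cavity tuned to a distinct resonant frequency, so blowing on a hole excites a characteristic tone that a microphone can pick up. The following sketch is not the thesis implementation; it simulates only the classification step, with hypothetical hole frequencies and a synthetic signal.

```python
# Hedged sketch: classify a simulated "blow" by its dominant spectral peak.
# Hole names, frequencies, and signal parameters are illustrative assumptions.
import numpy as np

SAMPLE_RATE = 8000  # Hz, assumed
HOLE_FREQS = {"hole_a": 440.0, "hole_b": 880.0, "hole_c": 1320.0}  # hypothetical

def classify_blow(signal: np.ndarray, sample_rate: int = SAMPLE_RATE) -> str:
    """Return the hole whose resonant frequency is closest to the
    dominant spectral peak of the recorded signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    peak = freqs[np.argmax(spectrum)]
    return min(HOLE_FREQS, key=lambda h: abs(HOLE_FREQS[h] - peak))

# Simulate a blow on hole_b: its resonance dominates, plus a little noise.
t = np.arange(0, 0.5, 1.0 / SAMPLE_RATE)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 880.0 * t) + 0.1 * rng.standard_normal(t.size)
print(classify_blow(signal))  # → hole_b
```

In a real device the signal would come from a microphone near the printed object rather than from a synthesizer, and the frequency table would be derived from the cavity geometry.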

Leonardo ◽  
2021 ◽  
pp. 1-8
Author(s):  
Guido Salimbeni ◽  
Frederic Fol Leymarie ◽  
William Latham

Abstract: We present a system built to generate arrangements of three-dimensional models for aesthetic evaluation, with the aim to support an artist in their creative process. We explore how this system can automatically generate aesthetically pleasing content for use in the media and design industry, based on standards originally developed in master artworks. We demonstrate the effectiveness of our process in the context of paintings using a collection of images inspired by the work of the artist Giorgio Morandi (Bologna, 1890–1964). Finally, we compare the results of our system with the results of a well-known Generative Adversarial Network (GAN).
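Systems of this kind typically pair a generator of candidate arrangements with a fitness function that ranks them. The toy scorer below is our own illustration, not the authors' system: objects are (position, width) pairs on a one-dimensional shelf, and the score penalizes overlap while rewarding horizontal balance, two heuristics loosely inspired by still-life composition.

```python
# Hedged sketch: a toy aesthetic fitness function for candidate arrangements.
# All names and weightings here are our illustrative assumptions.

def arrangement_score(objects):
    """Higher is better: penalize pairwise overlap, reward centering."""
    overlap = 0.0
    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            (x1, w1), (x2, w2) = objects[i], objects[j]
            left, right = max(x1, x2), min(x1 + w1, x2 + w2)
            overlap += max(0.0, right - left)  # shared extent, if any
    centroid = sum(x + w / 2 for x, w in objects) / len(objects)
    balance = -abs(centroid - 0.5)  # shelf spans [0, 1]; reward centering
    return balance - overlap

spread = [(0.1, 0.2), (0.4, 0.2), (0.7, 0.2)]    # non-overlapping, centered
stacked = [(0.1, 0.2), (0.15, 0.2), (0.2, 0.2)]  # heavily overlapping
print(arrangement_score(spread) > arrangement_score(stacked))  # → True
```

A generator would propose many such layouts (in 3D, with orientation and depth) and keep the highest-scoring ones for the artist to review.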


2021 ◽  
pp. 097275312199017
Author(s):  
Mahender Kumar Singh ◽  
Krishna Kumar Singh

Background: The noninvasive study of the structure and functions of the brain using neuroimaging techniques is increasingly being used in both clinical and research settings. Morphological and volumetric changes in several regions and structures of the brain are associated with the prognosis of neurological disorders such as Alzheimer’s disease, epilepsy, and schizophrenia, and the early identification of such changes can have major clinical significance. Accurate segmentation of three-dimensional brain magnetic resonance images into tissue types (i.e., grey matter, white matter, cerebrospinal fluid) and brain structures is therefore important, as these measurements can act as early biomarkers. Manual segmentation, though considered the “gold standard,” is time-consuming, subjective, and not suitable for larger neuroimaging studies. Several automatic segmentation tools and algorithms have been developed over the years; machine learning models, particularly those using deep convolutional neural network (CNN) architectures, are increasingly being applied to improve the accuracy of automatic methods. Purpose: The purpose of the study is to understand the current and emerging state of automatic segmentation tools, their comparison, machine learning models, their reliability, and their shortcomings, with an intent to focus on the development of improved methods and algorithms. Methods: The study reviews publicly available neuroimaging tools, their comparison, and emerging machine learning models, particularly those based on CNN architectures, developed and published during the last five years. Conclusion: Several software tools developed by various research groups and made publicly available for automatic segmentation of the brain show variability in their results in several comparison studies and have not attained the level of reliability required for clinical studies. Machine learning models, particularly three-dimensional fully convolutional network models, can provide a robust and efficient alternative to publicly available tools but perform poorly on unseen datasets. The challenges related to training, computational cost, reproducibility, and validation across distinct scanning modalities for machine learning models need to be addressed.
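The core operation of the three-dimensional fully convolutional networks discussed above is a 3D convolution over a volumetric image patch. The minimal NumPy implementation below (single channel, valid padding) is our own illustration of that operation, not any published tool; production systems use optimized deep-learning frameworks.

```python
# Hedged sketch: a naive valid 3D cross-correlation, the building block
# of 3D fully convolutional segmentation networks. Illustrative only.
import numpy as np

def conv3d(volume: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid 3D cross-correlation of a single-channel volume with a kernel."""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(volume[z:z+d, y:y+h, x:x+w] * kernel)
    return out

volume = np.random.default_rng(0).random((8, 8, 8))  # toy volumetric patch
kernel = np.ones((3, 3, 3)) / 27.0                   # 3x3x3 mean filter
features = conv3d(volume, kernel)
print(features.shape)  # → (6, 6, 6)
```

A fully convolutional segmentation network stacks many such convolutions (with learned kernels, nonlinearities, and up-/down-sampling) so that the output retains spatial shape and assigns a tissue label per voxel.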


Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3303
Author(s):  
Anastasia V. Demidova ◽  
Olga V. Druzhinina ◽  
Olga N. Masina ◽  
Alexey A. Petrov

The problems of synthesis and analysis of multidimensional controlled models of population dynamics are of both theoretical and applied interest. The need to solve numerical optimization problems for this class of models is associated with expanding requirements for ecosystem control. The need to solve the stochastization problem is associated with new questions about the properties of ecological systems under the influence of random factors. The aim of this work is to develop a new approach to studying the properties of population dynamics systems using methods of numerical optimization, stochastization, and machine learning. The synthesis of nonlinear three-dimensional models of the dynamics of interconnected species numbers, taking into account trophic chains and competition in prey populations, is studied. Theorems on the asymptotic stability of equilibrium states are proved. A qualitative and numerical study of the models is carried out. Using computational experiments, the analytical results on stability and permanent coexistence are verified. The search for equilibrium states belonging to the region of stability and permanent coexistence is performed using the developed intelligent algorithm and evolutionary computation. A transition is made from the model specified by a vector ordinary differential equation to the corresponding stochastic model. A comparative analysis of the deterministic and stochastic models with competition and trophic chains is carried out. New effects characteristic of three-dimensional models that account for competition in prey populations are revealed. A formulation of the optimal control problem for the model with competition and trophic chains is proposed. To find optimal trajectories, new generalized algorithms for numerical optimization are developed. Methods for the synthesis of controllers based on artificial neural networks and machine learning are also developed. Results on the search for optimal trajectories and the generation of control functions are presented. The obtained results can be used in modeling ecological, demographic, socio-economic, and chemical kinetics systems.
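The paper's exact equations are not reproduced in the abstract, so the sketch below shows one plausible model of the kind described: two competing prey species (x, y) and one predator (z) forming a trophic chain, integrated with a simple Euler scheme. All parameter values, and the specific functional form, are our illustrative assumptions.

```python
# Hedged sketch: a three-dimensional population model with prey competition
# and a trophic chain, integrated by forward Euler. Parameters are assumed.

def step(state, dt=0.001):
    x, y, z = state
    dx = x * (2.0 - x - 0.5 * y - 1.0 * z)  # prey 1: growth, competition, predation
    dy = y * (2.0 - y - 0.5 * x - 1.0 * z)  # prey 2: symmetric to prey 1
    dz = z * (-1.0 + 0.5 * x + 0.5 * y)     # predator: dies without prey
    return (x + dt * dx, y + dt * dy, z + dt * dz)

state = (1.2, 0.8, 0.3)
for _ in range(200_000):  # integrate to t = 200
    state = step(state)
print(tuple(round(v, 3) for v in state))  # → (1.0, 1.0, 0.5)
```

With these parameters the interior equilibrium (1, 1, 0.5) is asymptotically stable, so trajectories starting nearby settle onto it; this is the kind of equilibrium the paper's stability theorems and search algorithms target.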


Nutrients ◽  
2019 ◽  
Vol 11 (12) ◽  
pp. 3045 ◽  
Author(s):  
Elizabeth L. Chin ◽  
Gabriel Simmons ◽  
Yasmine Y. Bouzid ◽  
Annie Kan ◽  
Dustin J. Burnett ◽  
...  

The Automated Self-Administered 24-Hour Dietary Assessment Tool (ASA24) is a free dietary recall system that outputs fewer nutrients than the Nutrition Data System for Research (NDSR). NDSR uses the Nutrition Coordinating Center (NCC) Food and Nutrient Database; both require a license. Manually looking up ASA24 foods in NDSR is time-consuming but currently the only way to acquire NCC-exclusive nutrients. Using lactose as an example, we evaluated machine learning and database matching methods to estimate this NCC-exclusive nutrient from ASA24 reports. ASA24-reported foods were manually looked up in NDSR to obtain lactose estimates and split into training (n = 378) and test (n = 189) datasets. Nine machine learning models were developed to predict lactose from the nutrients common to ASA24 and the NCC database. Database matching algorithms were developed to match NCC foods to an ASA24 food using nutrients alone (“Nutrient-Only”) or the nutrient and food descriptions (“Nutrient + Text”). For both methods, the lactose values were compared to the manual curation. Among the machine learning models, the XGB-Regressor performed best on held-out test data (R2 = 0.33). For the database matching method, Nutrient + Text matching yielded the best lactose estimates (R2 = 0.76), a vast improvement over the status quo of no estimate. These results suggest that computational methods can successfully estimate an NCC-exclusive nutrient for foods reported in ASA24.
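The "Nutrient + Text" idea can be sketched as ranking candidate database foods by a combination of nutrient-vector similarity and description-word overlap, then carrying over the lactose value of the best match. The tiny inline "database," the weighting, and the similarity choices below are our illustrative assumptions, not the paper's data or exact algorithm.

```python
# Hedged sketch of nutrient-plus-text database matching. Illustrative only.
import math

def cosine(u, v):
    """Cosine similarity of two nutrient vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def text_overlap(a, b):
    """Jaccard overlap of description words."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def best_match(query_desc, query_nutrients, database, w_text=0.5):
    """Return the database entry maximizing the combined similarity."""
    def score(entry):
        desc, nutrients, _ = entry
        return ((1 - w_text) * cosine(query_nutrients, nutrients)
                + w_text * text_overlap(query_desc, desc))
    return max(database, key=score)

# (description, [kcal, protein g, fat g, carb g per 100 g], lactose g) -- toy values
database = [
    ("milk whole fluid", [61, 3.2, 3.3, 4.8], 4.5),
    ("cheddar cheese",   [403, 24.9, 33.1, 1.3], 0.1),
    ("white bread",      [265, 9.0, 3.2, 49.0], 0.0),
]
match = best_match("whole milk", [60, 3.1, 3.2, 4.7], database)
print(match[0], match[2])  # → milk whole fluid 4.5
```

The matched entry's lactose value (4.5 g here) would then stand in for the manual NDSR lookup.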


2020 ◽  
Vol 2 (1) ◽  
pp. 3-6
Author(s):  
Eric Holloway

Imagination Sampling is the use of a person as an oracle for generating or improving machine learning models. Previous work demonstrated a general system for using Imagination Sampling to obtain multibox models. Here, the possibility of importing such models as the starting point for further automatic enhancement is explored.


2021 ◽  
Author(s):  
Norberto Sánchez-Cruz ◽  
Jose L. Medina-Franco

<p>Epigenetic targets are a significant focus of drug discovery research, as demonstrated by the eight epigenetic drugs approved for the treatment of cancer and the increasing availability of chemogenomic data related to epigenetics. These data represent a large body of structure–activity relationships that has not yet been exploited for the development of predictive models to support medicinal chemistry efforts. Herein, we report the first large-scale study of 26,318 compounds with a quantitative measure of biological activity for 55 protein targets with epigenetic activity. Through a systematic comparison of machine learning models trained on molecular fingerprints of different designs, we built predictive models with high accuracy for the epigenetic target profiling of small molecules. The models were thoroughly validated, showing mean precisions of up to 0.952 for the epigenetic target prediction task. Our results indicate that the models reported herein have considerable potential to identify small molecules with epigenetic activity. Accordingly, the models were implemented as a freely accessible and easy-to-use web application.</p>
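Molecular fingerprints of the kind compared in this study are bit vectors, and the standard similarity measure for them is the Tanimoto coefficient. As a minimal, hedged illustration of fingerprint-based activity prediction (not the paper's models), the sketch below makes a nearest-neighbor call against known actives; the fingerprints are tiny hand-made bit sets, not real molecular descriptors.

```python
# Hedged sketch: Tanimoto-similarity nearest-neighbor activity prediction
# on bit-vector fingerprints represented as sets of on-bit indices.

def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto similarity of two fingerprints given as sets of on-bits."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def predict_active(query: set, actives: list, threshold: float = 0.5) -> bool:
    """Call the query active if any known active is similar enough."""
    return max(tanimoto(query, fp) for fp in actives) >= threshold

known_actives = [{1, 4, 7, 9}, {2, 4, 8, 9}]  # toy fingerprints of actives
print(predict_active({1, 4, 7, 10}, known_actives))  # → True (3/5 = 0.6)
print(predict_active({3, 5, 6}, known_actives))      # → False (no overlap)
```

The paper's models go well beyond this single-neighbor rule, but the fingerprint representation and Tanimoto comparison are the shared foundation.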

