Towards machine learning for architectural fabrication in the age of industry 4.0

2020 ◽  
Vol 18 (4) ◽  
pp. 335-352
Author(s):  
Mette Ramsgaard Thomsen ◽  
Paul Nicholas ◽  
Martin Tamke ◽  
Sebastian Gatz ◽  
Yuliya Sinke ◽  
...  

Machine Learning (ML) is opening new perspectives for architectural fabrication, as it holds the potential for the profession to shortcut the currently tedious and costly setup of digitally integrated design-to-fabrication workflows and to make these more adaptable. The ability to establish and alter these workflows rapidly becomes a main concern with the advent of Industry 4.0 in the building industry. In this article we present two projects that show how ML can lead to radical changes in the generation of fabrication data and link it directly to design intent. We investigate two different moments of implementation: linking performance to the generation of fabrication data (KnitCone) and integrating the ability to adapt fabrication data in real time in response to fabrication processes (Neural-Network Steered Robotic Fabrication). Together they examine how models can employ design information as training data and be trained to bypass steps within the digital chain. We detail the advantages and limitations of each experiment and reflect on core questions and perspectives of ML for architectural fabrication: the nature of the data to be used, the capacity of these algorithms to encode complexity and generalize results, their task-specificity versus their adaptability, and the trade-offs of using them compared with conventional explicit analytical modelling.
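
Below is a minimal, hedged sketch of the general idea of training a surrogate model on design information so that it can emit fabrication data directly; the design and machine parameters are hypothetical stand-ins, not data from KnitCone or the robotic fabrication project.

    # Sketch only: a surrogate model mapping hypothetical design parameters to
    # hypothetical fabrication (machine) parameters; not the projects' code.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(42)
    design_params = rng.uniform(size=(500, 6))                  # e.g. geometry/performance targets
    machine_params = design_params @ rng.uniform(size=(6, 3))   # e.g. toolpath or knit settings

    surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000)
    surrogate.fit(design_params, machine_params)

    # Once trained, new design intent maps to fabrication data without rebuilding
    # the explicit analytical model for every variation.
    new_design = rng.uniform(size=(1, 6))
    print(surrogate.predict(new_design))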

Animals ◽  
2020 ◽  
Vol 10 (5) ◽  
pp. 771
Author(s):  
Toshiya Arakawa

Mammalian behavior is typically monitored by observation. However, direct observation requires a substantial amount of effort and time if the number of mammals to be observed is sufficiently large or if the observation is conducted for a prolonged period. In this study, machine learning methods such as hidden Markov models (HMMs), random forests, support vector machines (SVMs), and neural networks were applied to detect and estimate whether a goat is in estrus based on the goat’s behavior, and the adequacy of each method was verified. Goat tracking data were obtained using a video tracking system and used to estimate whether goats in “estrus” or “non-estrus” were in either of two states: “approaching the male” or “standing near the male”. Overall, the percentage concordance (PC) of the random forest appears to be the highest. However, the PC value for goats other than those whose data were used in the training data sets is relatively low, which suggests that the random forest tends to overfit the training data. Apart from the random forest, the PCs of the HMMs and SVMs are also high. However, considering the calculation time and the HMM’s advantage of being a time-series model, the HMM is the better method. The PC of the neural network is low overall; however, if more goat data were acquired, the neural network could become an adequate method for estimation.
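
As a rough illustration of the comparison described above (not the study's code), the sketch below trains a random forest, an SVM, and a small neural network on placeholder tracking-derived features and scores them with cross-validation; the feature names and synthetic data are assumptions, and the HMM is only noted in a comment.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Hypothetical per-interval features: distance to male, time spent near male,
    # movement speed; label 1 = estrus, 0 = non-estrus.
    X = rng.normal(size=(300, 3))
    y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=300) > 0).astype(int)

    models = {
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "svm": SVC(kernel="rbf", C=1.0),
        "neural_network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)   # rough proxy for percentage concordance
        print(f"{name}: mean accuracy = {scores.mean():.3f}")
    # An HMM (e.g. hmmlearn's GaussianHMM) could model the same data as a time
    # series of behavioural states, which is the advantage noted in the abstract.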


Water ◽  
2021 ◽  
Vol 13 (22) ◽  
pp. 3294
Author(s):  
Chentao He ◽  
Jiangfeng Wei ◽  
Yuanyuan Song ◽  
Jing-Jia Luo

The middle and lower reaches of the Yangtze River valley (YRV), which are among the most densely populated regions in China, are subject to frequent flooding. In this study, the predictor importance analysis model was used to sort and select predictors, and five methods (multiple linear regression (MLR), decision tree (DT), random forest (RF), backpropagation neural network (BPNN), and convolutional neural network (CNN)) were used to predict the interannual variation of summer precipitation over the middle and lower reaches of the YRV. Predictions from eight climate models were used for comparison. Of the five tested methods, RF demonstrated the best predictive skill. Starting the RF prediction in December, when its prediction skill was highest, the 70-year correlation coefficient from cross-validation of average predictions was 0.473. Using the same five predictors in December 2019, the RF model successfully predicted the YRV wet anomaly in summer 2020, although with a weaker amplitude. It was found that the enhanced warm pool area in the Indian Ocean was the most important causal factor. The BPNN and CNN methods demonstrated the poorest performance. The RF, DT, and climate models all showed higher prediction skills when the predictions started in winter rather than in early spring, and the RF, DT, and MLR methods all showed better prediction skills than the numerical climate models. Lack of training data was a factor that limited the performance of the machine learning methods. Future studies should use deep learning methods to take full advantage of the potential of ocean, land, sea ice, and other factors for more accurate climate predictions.
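
A minimal sketch of the cross-validated random forest prediction described above, under assumed placeholder predictors; the correlation of out-of-sample predictions with observations stands in for the reported prediction skill.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(1)
    n_years, n_predictors = 70, 5          # e.g. 70 summers, 5 December predictors
    X = rng.normal(size=(n_years, n_predictors))
    y = X @ rng.normal(size=n_predictors) + rng.normal(scale=1.0, size=n_years)

    preds = np.empty(n_years)
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = RandomForestRegressor(n_estimators=300, random_state=0)
        model.fit(X[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X[test_idx])

    skill = np.corrcoef(preds, y)[0, 1]    # analogous to the reported skill of 0.473
    print(f"cross-validated correlation: {skill:.3f}")
    # model.feature_importances_ gives the kind of predictor ranking used to select inputs.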


2020 ◽  
pp. 808-817
Author(s):  
Vinh Pham ◽  
Eunil Seo ◽  
Tai-Myoung Chung

Identifying threats contained within encrypted network traffic poses a great challenge to Intrusion Detection Systems (IDS). Because traditional approaches such as deep packet inspection cannot operate on encrypted network traffic, machine learning-based IDS is a promising solution. However, machine learning-based IDS requires enormous amounts of statistical data on network traffic flows as input, demands high computing power for processing, and is slow in detecting intrusions. We propose a lightweight IDS that transforms raw network traffic into representation images. We begin by inspecting the characteristics of malicious network traffic in the CSE-CIC-IDS2018 dataset. We then adapt methods for effectively representing those characteristics as image data. A Convolutional Neural Network (CNN) based detection model is used to identify malicious traffic hidden within the image data. To demonstrate the feasibility of the proposed lightweight IDS, we conduct three simulations on two datasets that contain encrypted traffic with current network attack scenarios. The experimental results show that our proposed IDS is capable of achieving 95% accuracy with a reasonable detection time while requiring relatively little training data.
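
The sketch below illustrates, under assumptions, the idea of mapping per-flow statistics to small grayscale images and classifying them with a CNN; the 8x8 image size, feature scaling, and random data are illustrative, not the paper's actual representation method.

    import numpy as np
    import tensorflow as tf

    def flow_to_image(features, size=8):
        """Min-max scale a flow feature vector and pack it into a size x size image."""
        v = np.zeros(size * size, dtype=np.float32)
        f = np.asarray(features, dtype=np.float32)
        f = (f - f.min()) / (f.max() - f.min() + 1e-8)
        n = min(len(f), size * size)
        v[:n] = f[:n]
        return v.reshape(size, size, 1)

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(8, 8, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # benign vs. malicious
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Placeholder data standing in for CSE-CIC-IDS2018 flow statistics.
    X = np.stack([flow_to_image(np.random.rand(40)) for _ in range(256)])
    y = np.random.randint(0, 2, size=256)
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)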


When the pancreas fails to secrete sufficient insulin in the human body, the blood glucose level becomes either too high or too low. This fluctuation in glucose level affects different body organs such as the kidneys, brain, and eyes. When complications start appearing in the eyes due to Diabetes Mellitus (DM), the condition is called Diabetic Retinopathy (DR). DR can be categorized into several classes based on severity and lesion type: Microaneurysms (ME), Haemorrhages (HE), and Hard and Soft Exudates (EX and SE). DR is a slowly progressing condition that starts with very mild symptoms, becomes moderate over time, and results in complete vision loss if not detected in time. Early-stage detection can greatly help in preventing vision loss. However, it is impossible to detect the symptoms of DR with the naked eye. Ophthalmologists therefore turn to several approaches and algorithms that make use of different Machine Learning (ML) methods and classifiers to address this disease. The growing prominence of Convolutional Neural Networks (CNNs) and their advances in extracting features from fundus images have motivated many researchers to work with them. Transfer Learning (TL) techniques make it possible to use a pre-trained CNN on a dataset with limited training data, which is especially relevant in developing countries. In this work, we propose several CNN architectures along with distinct classifiers that segregate the different lesions (ME and EX) in DR images with very promising accuracies.
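
A minimal transfer-learning sketch of the kind of pipeline described, assuming a generic pre-trained backbone (MobileNetV2 here) rather than the specific architectures proposed in the work; the frozen CNN acts as a feature extractor and a small head separates lesion classes such as ME and EX.

    import tensorflow as tf

    num_classes = 2                       # e.g. microaneurysms (ME) vs. exudates (EX)
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False                # keep pre-trained features fixed (TL)

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_images, train_labels, ...) would then be run on the
    # (typically small) labelled fundus dataset.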


2022 ◽  
pp. 1559-1575
Author(s):  
Mário Pereira Véstias

Machine learning is the study of algorithms and models for computing systems to do tasks based on pattern identification and inference. When it is difficult or infeasible to develop an algorithm to do a particular task, machine learning algorithms can provide an output based on previous training data. A well-known machine learning model is deep learning. The most recent deep learning models are based on artificial neural networks (ANN). There exist several types of artificial neural networks including the feedforward neural network, the Kohonen self-organizing neural network, the recurrent neural network, the convolutional neural network, the modular neural network, among others. This article focuses on convolutional neural networks with a description of the model, the training and inference processes, and their applicability. It will also give an overview of the most used CNN models and what to expect from the next generation of CNN models.
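
For illustration, a generic small CNN (not taken from the article) showing the building blocks discussed: convolution, pooling, and fully connected layers, followed by training and inference on placeholder images.

    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),                 # spatial down-sampling
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),  # fully connected classifier
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    X = np.random.rand(64, 28, 28, 1).astype("float32")   # placeholder images
    y = np.random.randint(0, 10, size=64)
    model.fit(X, y, epochs=1, verbose=0)                   # training
    probs = model.predict(X[:1], verbose=0)                # inference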


2020 ◽  
Vol 36 (3) ◽  
pp. 1166-1187 ◽  
Author(s):  
Shohei Naito ◽  
Hiromitsu Tomozawa ◽  
Yuji Mori ◽  
Takeshi Nagata ◽  
Naokazu Monma ◽  
...  

This article presents a method for detecting damaged buildings in the event of an earthquake using machine learning models and aerial photographs. We initially created training data for the machine learning models using aerial photographs captured around the town of Mashiki immediately after the main shock of the 2016 Kumamoto earthquake. All buildings were classified into one of four damage levels by visual interpretation. Subsequently, two damage discrimination models were developed: a bag-of-visual-words model and a model based on a convolutional neural network. The results were compared and validated in terms of accuracy, revealing that the latter model is preferable. Moreover, for the convolutional neural network model, the target areas were expanded and the recalls of damage classification at the four levels ranged approximately from 66% to 81%.
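
As an illustration only (assumed layout, not the study's model), the sketch below classifies aerial-photo patches into four damage levels and reports per-class recall, the metric quoted above.

    import numpy as np
    import tensorflow as tf
    from sklearn.metrics import recall_score

    num_levels = 4
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_levels, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # Placeholder aerial-photo patches and visually interpreted damage labels.
    X = np.random.rand(128, 64, 64, 3).astype("float32")
    y = np.random.randint(0, num_levels, size=128)
    model.fit(X, y, epochs=1, verbose=0)

    pred = model.predict(X, verbose=0).argmax(axis=1)
    print(recall_score(y, pred, average=None))   # recall for each damage level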


2018 ◽  
Vol 8 (12) ◽  
pp. 2663 ◽  
Author(s):  
Davy Preuveneers ◽  
Vera Rimmer ◽  
Ilias Tsingenopoulos ◽  
Jan Spooren ◽  
Wouter Joosen ◽  
...  

The adoption of machine learning and deep learning is on the rise in the cybersecurity domain where these AI methods help strengthen traditional system monitoring and threat detection solutions. However, adversaries too are becoming more effective in concealing malicious behavior amongst large amounts of benign behavior data. To address the increasing time-to-detection of these stealthy attacks, interconnected and federated learning systems can improve the detection of malicious behavior by joining forces and pooling together monitoring data. The major challenge that we address in this work is that in a federated learning setup, an adversary has many more opportunities to poison one of the local machine learning models with malicious training samples, thereby influencing the outcome of the federated learning and evading detection. We present a solution where contributing parties in federated learning can be held accountable and have their model updates audited. We describe a permissioned blockchain-based federated learning method where incremental updates to an anomaly detection machine learning model are chained together on the distributed ledger. By integrating federated learning with blockchain technology, our solution supports the auditing of machine learning models without the necessity to centralize the training data. Experiments with a realistic intrusion detection use case and an autoencoder for anomaly detection illustrate that the increased complexity caused by blockchain technology has a limited performance impact on the federated learning, varying between 5 and 15%, while providing full transparency over the distributed training process of the neural network. Furthermore, our blockchain-based federated learning solution can be generalized and applied to more sophisticated neural network architectures and other use cases.
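
The toy sketch below, built on simplified assumptions rather than the paper's permissioned-blockchain implementation, shows the core idea: clients contribute incremental model updates for federated averaging, and each accepted update is hash-chained so the distributed training history can be audited.

    import hashlib
    import numpy as np

    def chain_entry(prev_hash, client_id, update):
        # Hash the previous entry together with this client's update.
        payload = prev_hash.encode() + client_id.encode() + update.tobytes()
        return hashlib.sha256(payload).hexdigest()

    global_model = np.zeros(4)            # stand-in for anomaly-detector weights
    ledger = [{"hash": "genesis"}]

    for rnd in range(3):                  # federated rounds
        updates = []
        for client_id in ("siteA", "siteB", "siteC"):
            local_update = np.random.normal(scale=0.1, size=4)   # local training delta
            entry_hash = chain_entry(ledger[-1]["hash"], client_id, local_update)
            ledger.append({"round": rnd, "client": client_id, "hash": entry_hash})
            updates.append(local_update)
        global_model += np.mean(updates, axis=0)                 # federated averaging

    # An auditor can replay the updates and recompute the hash chain to verify
    # which party contributed which model update.
    print(global_model, ledger[-1]["hash"][:12])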


Processes ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. 235 ◽  
Author(s):  
Diego Ceballos ◽  
Diana López-Álvarez ◽  
Gustavo Isaza ◽  
Reinel Tabares-Soto ◽  
Simón Orozco-Arias ◽  
...  

Bacterial infections are a major global concern, since they can lead to public health problems. To address this issue, bioinformatics contributes extensively to the analysis and interpretation of in silico data by enabling the genetic characterization of different individuals/strains, such as bacteria. However, the growing volume of metagenomic data requires new infrastructure, technologies, and methodologies that support the analysis and prediction of this information from a clinical point of view, as intended in this work. On the other hand, distributed computational environments allow the management of these large volumes of data, owing to significant advances in processing architectures such as multicore CPUs (Central Processing Units) and GPGPUs (General Purpose Graphics Processing Units). For this purpose, we developed a bioinformatics workflow based on metagenomic data filtered with the Duk tool. Data formatting was done with the Emboss software and a workflow prototype, and a machine learning-based pipeline was designed and implemented as a bash script. Further, the Python 3 programming language was used to normalize the training data of the artificial neural network, which was implemented in the TensorFlow framework, and its behavior was visualized in TensorBoard. Finally, the values from the initial bioinformatics process and the data generated during the parameterization and optimization of the Artificial Neural Network are presented and validated based on the optimal result for the identification of the CTX-M gene group.
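
A minimal sketch of the final two steps described, with placeholder data in place of the processed metagenomic features: normalizing the training data in Python and training a small TensorFlow network whose behavior can be inspected in TensorBoard.

    import numpy as np
    import tensorflow as tf

    # Placeholder feature matrix standing in for the processed metagenomic data;
    # in this sketch the labels indicate presence/absence of the CTX-M gene group.
    X = np.random.rand(200, 30).astype("float32")
    y = np.random.randint(0, 2, size=200)

    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)   # feature-wise normalization

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(30,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    tb = tf.keras.callbacks.TensorBoard(log_dir="logs")  # inspect behaviour in TensorBoard
    model.fit(X, y, epochs=5, validation_split=0.2, callbacks=[tb], verbose=0)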


2021 ◽  
Vol 7 ◽  
pp. e629
Author(s):  
Mohammad reza Rezaei ◽  
Mahmoud Houshmand ◽  
Omid Fatahi Valilai

Additive manufacturing, artificial intelligence, and cloud manufacturing are three pillars of the emerging digitized industrial revolution considered in Industry 4.0. The literature shows that in Industry 4.0, intelligent cloud-based additive manufacturing plays a crucial role. Despite this, few studies have accomplished an integration of the intelligent additive manufacturing and service-oriented manufacturing paradigms. This is due to the lack of the prerequisite frameworks to enable this integration. These frameworks should create an autonomous platform for cloud-based service composition for additive manufacturing based on customer demands. One of the most important requirements for customer processing in autonomous manufacturing platforms is the interpretation of the product shape; as a result, accurate and automated shape interpretation plays an important role in this integration. Unfortunately, accurate shape interpretation has not been a subject of research in additive manufacturing, except for limited studies aimed at machine-level production processes. This paper proposes a framework to interpret shapes, or their informative two-dimensional pictures, automatically by decomposing them into simpler shapes that can be categorized easily based on the provided training data. To do this, two algorithms, which apply a Recurrent Neural Network and a two-dimensional Convolutional Neural Network as decomposition and recognition tools respectively, are proposed. These two algorithms are integrated, and case studies are designed to demonstrate the capabilities of the proposed platform. The results suggest that, for complex objects that can be decomposed by planes perpendicular to one axis of the Cartesian coordinate system and parallel to the other two, the decomposition algorithm can even produce results using an informative 2D image of the object.
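
The sketch below is one structural interpretation (an assumption, not the paper's implementation) of the two cooperating models: a recurrent network scans a sequence of 2D slices along one axis to propose decomposition cuts, and a 2D CNN classifies each resulting simple shape; all sizes and inputs are placeholders.

    import numpy as np
    import tensorflow as tf

    slice_size = 32          # each slice rendered as a 32x32 image
    num_slices = 20          # slices along one Cartesian axis
    num_primitives = 5       # categories of simple shapes

    # RNN: per-slice cut probability from the flattened slice sequence.
    decomposer = tf.keras.Sequential([
        tf.keras.layers.LSTM(64, return_sequences=True,
                             input_shape=(num_slices, slice_size * slice_size)),
        tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(1, activation="sigmoid")),
    ])

    # CNN: classify a single 2D slice (or decomposed segment) into a primitive type.
    recognizer = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu",
                               input_shape=(slice_size, slice_size, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(num_primitives, activation="softmax"),
    ])

    slices = np.random.rand(1, num_slices, slice_size * slice_size).astype("float32")
    cut_probs = decomposer.predict(slices, verbose=0)                          # where to cut
    labels = recognizer.predict(
        slices.reshape(num_slices, slice_size, slice_size, 1), verbose=0)      # what each piece is
    print(cut_probs.shape, labels.shape)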


2021 ◽  
Author(s):  
Prageeth R. Wijewardhane ◽  
Krupal P. Jethava ◽  
Jonathan A Fine ◽  
Gaurav Chopra

The Programmed Cell Death Protein 1/Programmed Death-Ligand 1 (PD-1/PD-L1) interaction is an immune checkpoint utilized by cancer cells to enhance immune suppression. There is a huge need to develop small molecule drugs that are fast acting, cost effective, and readily bioavailable compared to antibodies. Unfortunately, synthesizing and validating large libraries of small molecules to inhibit the PD-1/PD-L1 interaction in a blind manner is both time-consuming and expensive. To improve this drug discovery pipeline, we have developed a machine learning methodology trained on patent data to identify, synthesize, and validate PD-1/PD-L1 small molecule inhibitors. Our model incorporates two kinds of features: docking scores representing the energy of binding (E) as a global feature, and sub-graph features of molecular topology, obtained through a graph neural network (GNN), as local features. This interaction energy-based Graph Neural Network (EGNN) model outperforms traditional machine learning methods and a simple GNN, with an F1 score of 0.9524 and a Cohen’s kappa score of 0.8861 on the hold-out test set, suggesting that the topology of the small molecule, the structural interaction in the binding pocket, and the chemical diversity of the training data are all important considerations for enhancing model performance. A bootstrapped EGNN model was used to select compounds with predicted high and low potency to inhibit the PD-1/PD-L1 interaction for synthesis and experimental validation. The potent inhibitor, (4-((3-(2,3-dihydrobenzo[b][1,4]dioxin-6-yl)-2-methylbenzyl)oxy)-2,6-dimethoxybenzyl)-D-serine, is a hybrid of two known bioactive scaffolds, with an IC50 of 339.9 nM that is comparatively better than that of the known bioactive compound. We conclude that our bootstrapped EGNN model will be useful for identifying target-specific, high-potency molecules designed by scaffold hopping, a well-known medicinal chemistry technique.
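
A conceptual sketch (not the authors' EGNN) of combining a global binding-energy feature such as a docking score with local sub-graph features learned by simple message passing over the molecular graph; the layer sizes, pooling, and random molecule below are illustrative assumptions.

    import torch
    import torch.nn as nn

    class SimpleEGNN(nn.Module):
        def __init__(self, atom_dim=8, hidden=32):
            super().__init__()
            self.msg = nn.Linear(atom_dim, hidden)        # one message-passing step
            self.readout = nn.Linear(hidden, hidden)
            self.head = nn.Sequential(                    # combine graph + energy features
                nn.Linear(hidden + 1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

        def forward(self, atom_feats, adjacency, docking_score):
            # Aggregate neighbour features, then pool atoms into a graph embedding.
            h = torch.relu(self.msg(adjacency @ atom_feats))
            graph_emb = torch.relu(self.readout(h.mean(dim=0, keepdim=True)))
            x = torch.cat([graph_emb, docking_score.view(1, 1)], dim=1)
            return torch.sigmoid(self.head(x))            # probability of being an inhibitor

    # Placeholder molecule: 6 atoms, random features and connectivity.
    atoms = torch.rand(6, 8)
    adj = (torch.rand(6, 6) > 0.6).float()
    score = torch.tensor(-7.5)                            # hypothetical docking energy
    print(SimpleEGNN()(atoms, adj, score))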

