Deriving Metamodels to Relate Machine Learning Quality to Design Repository Characteristics in the Context of Additive Manufacturing

Author(s):  
Glen Williams ◽  
Nicholas A. Meisel ◽  
Timothy W. Simpson ◽  
Christopher McComb

Abstract The widespread growth of additive manufacturing, a field with a complex informatic “digital thread”, has helped fuel the creation of design repositories, where multiple users can upload, distribute, and download a variety of candidate designs for a range of situations. Additionally, advancements in additive manufacturing process development, design frameworks, and simulation are expanding what is possible to fabricate with AM, further growing the richness of such repositories. Machine learning offers new opportunities to combine these repositories’ rich geometric data with the associated process and performance data to train predictive models capable of automatically assessing build metrics related to AM part manufacturability. Although design repositories that can be used to train these machine learning constructs are expanding, our understanding of what makes a particular design repository useful as a machine learning training dataset is minimal. In this study, we use a metamodel to predict the extent to which individual design repositories can train accurate convolutional neural networks. To facilitate the creation and refinement of this metamodel, we constructed a large artificial design repository and subsequently split it into sub-repositories. We then analyzed metadata on the size, complexity, and diversity of the sub-repositories for use as independent variables predicting the accuracy of, and the computational effort required to train, convolutional neural networks. The networks each predict one of three additive manufacturing build metrics: (1) part mass, (2) support material mass, and (3) build time. Our results suggest that metamodels predicting the convolutional neural network coefficient of determination, as opposed to computational effort, were most accurate. Moreover, the size of a design repository, the average complexity of its constituent designs, and the average and spread of design spatial diversity were the best predictors of convolutional neural network accuracy.
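
To make the metamodeling idea concrete, the sketch below fits a regression model that maps per-repository metadata (size, mean design complexity, diversity statistics) to an observed CNN coefficient of determination. The feature set, placeholder data, and the choice of a random-forest regressor are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: a metamodel predicting CNN accuracy (R^2) from
# design-repository metadata. Features and data are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# One row per sub-repository: [size, mean complexity, mean diversity, diversity spread]
X = rng.random((40, 4))
# Observed R^2 of the CNN trained on each sub-repository (placeholder values)
y = rng.random(40)

metamodel = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(metamodel, X, y, cv=5, scoring="r2")
print("Cross-validated R^2 of the metamodel:", scores.mean())
```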

2019 ◽  
Vol 24 (3-4) ◽  
pp. 107-113
Author(s):  
Kondratiuk S.S. ◽  

A technology implemented with cross-platform tools is proposed for modeling gesture units of sign language and animating transitions between states of gesture units when gestures are combined into words. The implemented technology simulates sequences of gestures using a virtual spatial hand model and recognizes dactyl items from camera input using a convolutional neural network trained on a collected training dataset. Thanks to the cross-platform tools, the technology can run on multiple platforms without being re-implemented for each platform.


2019 ◽  
Vol 24 (1-2) ◽  
pp. 94-100
Author(s):  
Kondratiuk S.S. ◽  

A technology implemented with cross-platform tools is proposed for modeling gesture units of sign language and animating transitions between states of gesture units when gestures are combined into words. The implemented technology simulates sequences of gestures using a virtual spatial hand model and recognizes dactyl items from camera input using a convolutional neural network trained on a collected training dataset, based on the MobileNetv3 architecture with an optimal configuration of layers and network parameters. On the collected test dataset, an accuracy of over 98% is achieved.
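
A minimal sketch of such a classifier is shown below, assuming TensorFlow/Keras, 224x224 RGB camera frames, and a placeholder number of dactyl classes; it is not the authors' configuration of layers and parameters.

```python
# Minimal sketch of a MobileNetV3-based dactyl-sign classifier (assumptions:
# Keras, ImageNet weights, 224x224 inputs, illustrative class count).
import tensorflow as tf

NUM_CLASSES = 33  # e.g., one class per dactyl item; illustrative value

base = tf.keras.applications.MobileNetV3Small(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg"
)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
model = tf.keras.Model(inputs=base.input, outputs=outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```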


Author(s):  
S.D. Pogorilyy ◽  
A.A. Kramov ◽  
P.V. Biletskyi

The estimation of text coherence is one of the most relevant tasks of computational linguistics. Analysis of text coherence is widely used for writing and selecting documents, since it allows an author's idea to be conveyed clearly to a reader. The importance of this task is confirmed by the number of current works dedicated to solving it. Different automated methods for the estimation of text coherence are based on machine learning: such methods rely on a formal representation of the text and the subsequent detection of regularities to generate an output result. The purpose of this work is to perform an analytic review of different automated methods for the estimation of text coherence; to justify the selection of a method and adapt it to the features of the Ukrainian language; and to perform an experimental verification of the effectiveness of the suggested method on a Ukrainian corpus. In this paper, a comparative analysis of machine learning methods for estimating the coherence of English texts has been performed. The expediency of applying methods based on trained universal models for the formalized representation of text components has been justified. Neural networks with different architectures can serve as such models, namely recurrent and convolutional networks. These types of networks are widely used for text processing because they can process input data with an unfixed structure, such as sentences or words. Although recurrent neural networks can take previous data into account (a behavior similar to how a reader perceives text), the convolutional neural network has been chosen for the experimental research because of its ability to detect relations between entities regardless of the distance between them. In this paper, the principle of the method based on the convolutional neural network and the corresponding architecture has been described. A program application for verifying the effectiveness of the suggested method has been created. The formalized representation of text elements has been performed using a previously trained model for the semantic representation of words; this model was trained on a corpus of Ukrainian scientific abstracts. The formed networks have then been trained using the pre-trained model. Experimental verification of the method's effectiveness on the document discrimination task and the insertion task has been carried out on a set of scientific articles. The results obtained may indicate that the method using convolutional neural networks can be used for further estimation of the coherence of Ukrainian texts.
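
As a rough illustration of the convolutional approach described above, the sketch below scores coherence from pre-trained word embeddings with a small 1D CNN. The embedding dimensionality, sequence length, and binary coherent/incoherent framing are assumptions, not the authors' exact architecture.

```python
# Illustrative sketch (not the authors' implementation): a small convolutional
# network scoring text-fragment coherence from pre-trained word embeddings.
import tensorflow as tf

SEQ_LEN, EMB_DIM = 60, 300  # assumed: fixed-length fragment, 300-d embeddings

inputs = tf.keras.Input(shape=(SEQ_LEN, EMB_DIM))
x = tf.keras.layers.Conv1D(128, kernel_size=3, activation="relu")(inputs)
x = tf.keras.layers.GlobalMaxPooling1D()(x)      # relations regardless of distance
x = tf.keras.layers.Dense(64, activation="relu")(x)
score = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # coherent vs. incoherent

model = tf.keras.Model(inputs, score)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```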


Author(s):  
A. A. Artemyev ◽  
E. A. Kazachkov ◽  
S. N. Matyugin ◽  
V. V. Sharonov

This paper considers the problem of classifying surface water objects, e.g. ships of different classes, in visible-spectrum images using convolutional neural networks. A technique for forming a database of images of surface water objects and a dedicated training dataset for building a classifier is presented. A method for constructing and training the convolutional neural network is described. The dependence of the probability of correct recognition on the number of classes of surface water objects and on how those classes are selected is analysed. The results of recognizing different sets of classes are presented.
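
The sketch below illustrates only the evaluation step implied by the abstract: measuring the probability of correct recognition for different subsets of object classes from a classifier's predictions. The class names and prediction arrays are placeholders, not the study's data.

```python
# Hypothetical sketch: correct-recognition probability for different class subsets.
import numpy as np
from sklearn.metrics import accuracy_score

classes = np.array(["cargo", "tanker", "passenger", "tug", "fishing"])  # assumed
y_true = np.random.default_rng(1).integers(0, 5, size=500)
y_pred = y_true.copy()
y_pred[::7] = (y_pred[::7] + 1) % 5  # inject a few confusions for illustration

for subset in ([0, 1], [0, 1, 2], [0, 1, 2, 3, 4]):
    mask = np.isin(y_true, subset)
    p = accuracy_score(y_true[mask], y_pred[mask])
    print(f"{len(subset)} classes ({', '.join(classes[subset])}): P(correct) = {p:.3f}")
```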


2021 ◽  
Author(s):  
Blessy Babu ◽  
Hari V Sreeniva

Abstract This paper summarizes the intelligent detection of the modulation scheme of an incoming signal, built on a convolutional neural network (CNN). It describes the creation of the training dataset, the realization of the CNN, and its testing and validation. The raw modulated signals are converted into 2D representations and fed to the network for training. The resulting prototype is then used for detection. The results show that the proposed approach yields better predictions for identifying the modulated signal without the need for any selective feature extraction. The system's performance under noise is also evaluated and modelled.
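
A minimal sketch of the raw-signal-to-2D pipeline is given below, under assumptions: the 1D signal is reshaped into a 2D grid and classified by a small CNN. The image size, modulation classes, and signal generation are placeholders, not the paper's method.

```python
# Sketch: reshape a raw modulated signal into a 2D array and classify with a CNN.
import numpy as np
import tensorflow as tf

def signal_to_image(signal, rows=64, cols=64):
    """Reshape the first rows*cols samples of a 1D signal into a 2D grid."""
    return signal[: rows * cols].reshape(rows, cols, 1)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),  # e.g., BPSK/QPSK/8PSK/QAM16 (assumed)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

example = signal_to_image(np.sin(np.linspace(0, 200 * np.pi, 64 * 64)))
print(model.predict(example[np.newaxis, ...]).shape)  # (1, 4)
```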


2019 ◽  
Vol 141 (11) ◽  
Author(s):  
Glen Williams ◽  
Nicholas A. Meisel ◽  
Timothy W. Simpson ◽  
Christopher McComb

Abstract Machine learning can be used to automate common or time-consuming engineering tasks for which sufficient data already exist. For instance, design repositories can be used to train deep learning algorithms to assess component manufacturability; however, methods to determine the suitability of a design repository for use with machine learning do not exist. We provide an initial investigation toward identifying such a method using “artificial” design repositories to experimentally test the extent to which altering properties of the dataset impacts the assessment precision and generalizability of neural networks trained on the data. For this experiment, we use a 3D convolutional neural network to estimate quantitative manufacturing metrics directly from voxel-based component geometries. Additive manufacturing (AM) is used as a case study because of the recent growth of AM-focused design repositories such as GrabCAD and Thingiverse that are readily accessible online. In this study, we focus only on material extrusion, the dominant consumer AM process, and investigate three AM build metrics: (1) part mass, (2) support material mass, and (3) build time. Additionally, we compare the convolutional neural network accuracy to that of a baseline multiple linear regression model. Our results suggest that training on design repositories with less standardized orientation and position resulted in more accurate trained neural networks and that orientation-dependent metrics were harder to estimate than orientation-independent metrics. Furthermore, the convolutional neural network was more accurate than the baseline linear regression model for all build metrics.
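
To make the two model families being compared concrete, the sketch below pairs a 3D CNN that regresses a build metric from voxelized geometry with a multiple linear regression baseline on a simple summary feature. The voxel resolution, layer sizes, and synthetic data are assumptions, not the authors' network.

```python
# Hedged sketch: 3D CNN regression on voxel grids vs. a linear-regression baseline.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LinearRegression

VOXEL = 32  # assumed voxel grid resolution

cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(VOXEL, VOXEL, VOXEL, 1)),
    tf.keras.layers.Conv3D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling3D(),
    tf.keras.layers.Conv3D(16, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dense(1),  # e.g., part mass, support mass, or build time
])
cnn.compile(optimizer="adam", loss="mse")

# Baseline: linear regression on a coarse feature such as the filled-voxel count.
rng = np.random.default_rng(0)
voxels = rng.integers(0, 2, size=(100, VOXEL, VOXEL, VOXEL, 1)).astype("float32")
metric = voxels.sum(axis=(1, 2, 3, 4)) * 0.01          # placeholder target
features = voxels.reshape(100, -1).sum(axis=1, keepdims=True)
baseline = LinearRegression().fit(features, metric)
print("Baseline R^2:", baseline.score(features, metric))
```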


2020 ◽  
Vol 7 (4) ◽  
pp. 787
Author(s):  
Nurmi Hidayasari ◽  
Imam Riadi ◽  
Yudi Prayudi

Abstract Steganalysis is used to detect the presence or absence of steganography files. One category of steganalysis is blind steganalysis, which detects hidden files without knowing which steganography method was used. A previous study proposed that a Convolutional Neural Network (CNN) method, CNN Yedroudj-net, can detect steganographic files produced by recent embedding methods with a lower error probability than other methods. As the latest machine learning steganalysis method, Yedroudj-net needs to be tested to find out whether it can serve as a steganalysis tool for the output of commonly used steganography tools. Knowing the performance of CNN Yedroudj-net is important for measuring its steganalysis capability against several tools, especially since the performance of machine learning in blind steganalysis is still in doubt and previous research has focused only on specific methods to prove the performance of the proposed technique, including Yedroudj-net. This research uses five tools that perform reasonably well for steganography, namely Hide In Picture (HIP), OpenStego, SilentEye, Steg, and S-Tools, for which the exact steganography methods used are not known. The Yedroudj-net method is applied to the steganographic files output by the five tools, and the results are then compared with another steganalysis tool, StegSpy. The results show that Yedroudj-net can detect the presence of steganographic files; however, compared with StegSpy, its proportion of undetected images is higher.
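
A small sketch of the comparison step is shown below: tallying, per steganography tool, how many stego images each detector flags. The tool names come from the abstract; the prediction arrays are placeholders, not the study's measurements.

```python
# Illustrative tally of per-tool detection rates for two steganalysis detectors.
import numpy as np

tools = ["HIP", "OpenStego", "SilentEye", "Steg", "S-Tools"]
rng = np.random.default_rng(3)
detected_yedroudj = {t: rng.integers(0, 2, size=20) for t in tools}  # 1 = flagged
detected_stegspy = {t: rng.integers(0, 2, size=20) for t in tools}

for tool in tools:
    print(f"{tool:10s}  Yedroudj-net: {detected_yedroudj[tool].mean():.0%}"
          f"  StegSpy: {detected_stegspy[tool].mean():.0%}")
```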


2018 ◽  
Vol 8 (9) ◽  
pp. 1573 ◽  
Author(s):  
Vladimir Kulyukin ◽  
Sarbajit Mukherjee ◽  
Prakhar Amlathe

Electronic beehive monitoring extracts critical information on colony behavior and phenology without invasive beehive inspections and transportation costs. As an integral component of electronic beehive monitoring, audio beehive monitoring has the potential to automate the identification of various stressors for honeybee colonies from beehive audio samples. In this investigation, we designed several convolutional neural networks and compared their performance with four standard machine learning methods (logistic regression, k-nearest neighbors, support vector machines, and random forests) in classifying audio samples from microphones deployed above landing pads of Langstroth beehives. On a dataset of 10,260 audio samples where the training and testing samples were separated from the validation samples by beehive and location, a shallower raw audio convolutional neural network with a custom layer outperformed three deeper raw audio convolutional neural networks without custom layers and performed on par with the four machine learning methods trained to classify feature vectors extracted from raw audio samples. On a more challenging dataset of 12,914 audio samples where the training and testing samples were separated from the validation samples by beehive, location, time, and bee race, all raw audio convolutional neural networks performed better than the four machine learning methods and a convolutional neural network trained to classify spectrogram images of audio samples. A trained raw audio convolutional neural network was successfully tested in situ on a low voltage Raspberry Pi computer, which indicates that convolutional neural networks can be added to a repertoire of in situ audio classification algorithms for electronic beehive monitoring. The main trade-off between deep learning and standard machine learning is between feature engineering and training time: while the convolutional neural networks required no feature engineering and generalized better on the second, more challenging dataset, they took considerably more time to train than the machine learning methods. To ensure the replicability of our findings and to provide performance benchmarks for interested research and citizen science communities, we have made public our source code and our curated datasets.
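
The sketch below contrasts the two approaches the study compares, under placeholder data: a 1D convolutional network operating on raw audio samples versus a standard classifier (here a random forest) on hand-engineered feature vectors. Sample rate, clip length, class labels, and features are assumptions.

```python
# Hedged sketch: raw-audio 1D CNN vs. a feature-based machine learning baseline.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier

SAMPLES = 22050  # assumed ~1 s of audio at 22.05 kHz

raw_cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SAMPLES, 1)),
    tf.keras.layers.Conv1D(16, 64, strides=8, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 16, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g., bee / cricket / noise (assumed)
])
raw_cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Feature-based baseline: classify simple statistics extracted from each clip.
rng = np.random.default_rng(0)
audio = rng.standard_normal((200, SAMPLES))
labels = rng.integers(0, 3, size=200)
features = np.column_stack([audio.mean(1), audio.std(1), np.abs(audio).max(1)])
forest = RandomForestClassifier(n_estimators=100).fit(features, labels)
print("Baseline training accuracy:", forest.score(features, labels))
```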


Author(s):  
E. Yu. Shchetinin

The recognition of human emotions is one of the most relevant and dynamically developing areas of modern speech technologies, and the recognition of emotions in speech (RER) is the most in-demand part of them. In this paper, we propose a computer model of emotion recognition based on an ensemble of a bidirectional recurrent neural network with LSTM memory cells and the deep convolutional neural network ResNet18. Computer studies are carried out on the RAVDESS database, which contains emotional human speech. RAVDESS is a dataset containing 7356 files; the entries carry the following emotion labels: 0 – neutral, 1 – calm, 2 – happiness, 3 – sadness, 4 – anger, 5 – fear, 6 – disgust, 7 – surprise. In total, the database contains 16 classes (8 emotions, split by male and female speakers) for a total of 1440 samples (speech only). To train machine learning algorithms and deep neural networks to recognize emotions, the existing audio recordings must be pre-processed so as to extract the main characteristic features of particular emotions. This was done using Mel-frequency cepstral coefficients, chroma coefficients, and characteristics of the frequency spectrum of the audio recordings. In this paper, computer studies of various neural network models for emotion recognition are carried out on the data described above, and machine learning algorithms are used for comparative analysis. Thus, the following models were trained during the experiments: logistic regression (LR), a support vector machine classifier (SVM), a decision tree (DT), a random forest (RF), gradient boosting over trees (XGBoost), a convolutional neural network (CNN), a recurrent neural network RNN (ResNet18), as well as an ensemble of convolutional and recurrent networks, Stacked CNN-RNN. The results show that the neural networks achieved much higher accuracy in recognizing and classifying emotions than the machine learning algorithms used. Of the three neural network models presented, the CNN + BLSTM ensemble showed the highest accuracy.
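
The sketch below illustrates the preprocessing and one stacked CNN-RNN variant of the kind described above: MFCC and chroma features extracted with librosa feed a convolutional front end followed by a bidirectional LSTM. File paths, feature dimensions, and the 8-emotion label set are assumptions, not the paper's exact configuration.

```python
# Hedged sketch: MFCC/chroma feature extraction plus a stacked CNN-BLSTM classifier.
import numpy as np
import librosa
import tensorflow as tf

def extract_features(path, sr=22050, n_mfcc=40):
    """Return a (frames, n_mfcc + 12) matrix of MFCC + chroma features."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    return np.vstack([mfcc, chroma]).T

NUM_EMOTIONS = 8
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 52)),                   # variable-length sequences
    tf.keras.layers.Conv1D(64, 5, activation="relu"),          # CNN front end
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),   # BLSTM back end
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```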


2020 ◽  
Vol 10 (6) ◽  
pp. 2104
Author(s):  
Michał Tomaszewski ◽  
Paweł Michalski ◽  
Jakub Osuchowski

This article presents an analysis of the effectiveness of object detection in digital images with the application of a limited quantity of input data. The possibility of using a limited set of learning data was achieved by developing a detailed scenario of the task, which strictly defined the conditions of detector operation in the considered case of a convolutional neural network. The described solution utilizes known architectures of deep neural networks in the process of learning and object detection. The article presents comparisons of results from detecting with the most popular deep neural networks while maintaining a limited training set composed of a specific number of selected images from diagnostic video. The analyzed input material was recorded during an inspection flight conducted along high-voltage lines. The object detector was built for a power insulator. The main contribution of the presented paper is the evidence that a limited training set (in our case, just 60 training frames) could be used for object detection, assuming an outdoor scenario with low variability of environmental conditions. The decision of which network will generate the best result for such a limited training set is not a trivial task. The conducted research suggests that deep neural networks will achieve different levels of effectiveness depending on the amount of training data. The most beneficial results were obtained for two convolutional neural networks: the faster region-convolutional neural network (faster R-CNN) and the region-based fully convolutional network (R-FCN). Faster R-CNN reached the highest AP (average precision), at a level of 0.8 for 60 frames. The R-FCN model gained a worse AP result; however, it can be noted that the relationship between the number of input samples and the obtained results has a significantly lower influence than in the case of other CNN models, which, in the authors' assessment, is a desired feature in the case of a limited training set.
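
As a rough illustration of fine-tuning a detector of this kind on a small, single-class dataset, the sketch below adapts a pretrained Faster R-CNN from torchvision to an "insulator" class. A recent torchvision (>= 0.13) is assumed, and the dataset, dataloader, and training loop are placeholders rather than the authors' setup.

```python
# Hedged sketch: fine-tuning a pretrained Faster R-CNN for one object class.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # background + insulator

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=0.005, momentum=0.9
)

# Training loop sketch (a train_loader over ~60 annotated frames is assumed):
# for images, targets in train_loader:
#     losses = model(images, targets)        # dict of detection losses
#     loss = sum(losses.values())
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```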

