Component-based Assembling Tool and Runtime Engine for the Machine Learning Process

Author(s):  
Guo Hongqing ◽  
Su Peiyong ◽  
Guo Wenzhong ◽  
Guo Kun
Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 39
Author(s):  
Carlos Lassance ◽  
Vincent Gripon ◽  
Antonio Ortega

Deep Learning (DL) has attracted a lot of attention for its ability to reach state-of-the-art performance in many machine learning tasks. The core principle of DL methods consists of training composite architectures in an end-to-end fashion, where inputs are associated with outputs trained to optimize an objective function. Because of their compositional nature, DL architectures naturally exhibit several intermediate representations of the inputs, which belong to so-called latent spaces. When treated individually, these intermediate representations are usually left unconstrained during the learning process, as it is unclear which properties should be favored. However, when processing a batch of inputs concurrently, the corresponding set of intermediate representations exhibits relations (what we call a geometry) on which desired properties can be sought. In this work, we show that it is possible to introduce constraints on these latent geometries to address various problems. In more detail, we propose to represent geometries by constructing similarity graphs from the intermediate representations obtained when processing a batch of inputs. By constraining these Latent Geometry Graphs (LGGs), we address the following three problems: (i) reproducing the behavior of a teacher architecture is achieved by mimicking its geometry, (ii) designing efficient embeddings for classification is achieved by targeting specific geometries, and (iii) robustness to deviations in the inputs is achieved by enforcing smooth variation of geometry between consecutive latent spaces. Using standard vision benchmarks, we demonstrate the ability of the proposed geometry-based methods to solve the considered problems.
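As an illustration of the graph construction described above, the following minimal sketch (our own, not the authors' code) builds a k-nearest-neighbour cosine-similarity graph from the intermediate representations of one batch and uses it in a distillation-style loss; the function names and the choice of k are assumptions.

```python
# Minimal sketch (not the authors' implementation): build a similarity graph
# from the intermediate representations of one batch, as a Latent Geometry Graph.
import torch
import torch.nn.functional as F

def latent_geometry_graph(features: torch.Tensor, k: int = 5) -> torch.Tensor:
    """features: (batch, dim) activations from one intermediate layer.
    Returns a (batch, batch) row-normalized k-NN cosine-similarity adjacency."""
    f = F.normalize(features, dim=1)            # unit-norm rows
    sim = f @ f.t()                             # pairwise cosine similarities
    sim.fill_diagonal_(0.0)                     # no self-loops
    topk = sim.topk(k, dim=1).indices           # keep the k strongest neighbours
    adj = torch.zeros_like(sim).scatter_(1, topk, sim.gather(1, topk))
    return adj / adj.sum(dim=1, keepdim=True).clamp(min=1e-8)

# Distillation-style use: penalize the mismatch between student and teacher graphs.
def lgg_distillation_loss(student_feat, teacher_feat, k=5):
    return F.mse_loss(latent_geometry_graph(student_feat, k),
                      latent_geometry_graph(teacher_feat, k))
```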


2021 ◽  
Vol 18 ◽  
pp. 100346
Author(s):  
Farah Abdel Khalek ◽  
Marc Hartley ◽  
Eric Benoit ◽  
Stephane Perrin ◽  
Luc Marechal ◽  
...  

2020 ◽  
Author(s):  
Castro Mayleen Dorcas Bondoc ◽  
Tumibay Gilbert Malawit

Today, many schools, universities, and institutions recognize the necessity and importance of using Learning Management Systems (LMS) as part of their educational services. This research work applied an LMS in the teaching and learning process of the Bulacan State University (BulSU) Graduate School (GS) Program, enhancing face-to-face instruction with online components. The researchers used an LMS that provides educators with a platform that can motivate and engage students in a new educational environment through managed online classes. The LMS allows educators to distribute information and to manage learning materials, assignments, quizzes, and communications. Beyond the basic functions of the LMS, the researchers applied a Machine Learning (ML) algorithm, the Support Vector Machine (SVM), to classify and identify the best related videos per topic. SVM is a supervised machine learning algorithm that analyzes data for classification and regression analysis, as described by Maity [1]. The results of this study showed that the integration of video tutorials in the LMS can significantly contribute to the knowledge and skills students gain in the learning process.
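A hedged sketch of the kind of SVM classification step described above (not the BulSU system itself): a linear SVM that assigns tutorial videos to course topics from their text descriptions. The training examples and topic labels are hypothetical.

```python
# Illustrative sketch only: classify tutorial videos into course topics
# from their titles/descriptions with a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical training data: video descriptions labelled with course topics.
descriptions = [
    "Introduction to normalization of relational databases",
    "Writing SQL joins step by step",
    "Gradient descent explained with examples",
    "Backpropagation walkthrough for neural networks",
]
topics = ["databases", "databases", "machine_learning", "machine_learning"]

model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(descriptions, topics)

# Predict the best-matching topic for a new video before attaching it to a lesson.
print(model.predict(["Tutorial: indexing and query optimization in SQL"]))
```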


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-10
Author(s):  
Qi Zhu ◽  
Ning Yuan ◽  
Donghai Guan

In recent years, self-paced learning (SPL) has attracted much attention due to the improvements it brings to machine learning algorithms based on nonconvex optimization. As a methodology inspired by human learning, SPL dynamically evaluates the learning difficulty of each sample and provides a weighted learning model that mitigates the negative effects of hard-to-learn samples. In this study, we propose a cognitively driven SPL method, retrospective robust self-paced learning (R2SPL), inspired by two observations about human learning: misclassified samples leave a stronger impression on subsequent learning, and the model from a later learning phase, built on a large number of samples, can be used to reduce the risk of poor generalization in the initial learning phase. We simultaneously estimate the degree of learning difficulty and of misclassification at each step of SPL and propose a framework for constructing a multilevel SPL that improves the robustness of the initial learning phase. The proposed method can be viewed as a multilayer model in which the output of the previous layer guides the construction of a robust initialization model for the next layer. Experimental results show that R2SPL outperforms conventional self-paced learning models on classification tasks.
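For readers unfamiliar with the baseline, the following is a minimal sketch of a classic hard-threshold SPL loop (not the R2SPL method; the retrospective and misclassification terms are not reproduced), assuming a scikit-learn style regressor and synthetic data.

```python
# Minimal sketch of classic (hard-threshold) self-paced learning: train on easy
# samples first, then gradually admit harder ones as the threshold grows.
import numpy as np
from sklearn.linear_model import Ridge

def self_paced_fit(X, y, lam=1.0, mu=1.3, rounds=5):
    model = Ridge().fit(X, y)                       # warm start on all samples
    for _ in range(rounds):
        losses = (model.predict(X) - y) ** 2        # per-sample loss
        v = (losses < lam).astype(float)            # hard SPL weights: easy samples only
        if v.sum() > 0:
            model = Ridge().fit(X, y, sample_weight=v)  # re-train on the selected subset
        lam *= mu                                   # "age" the threshold: admit harder samples
    return model

# Toy usage with synthetic data containing a few hard (noisy) samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5)
y[:5] += 10 * rng.normal(size=5)                    # hard-to-learn outliers
self_paced_fit(X, y)
```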


Electronics ◽  
2019 ◽  
Vol 8 (11) ◽  
pp. 1267 ◽  
Author(s):  
Yonghoon Kim ◽  
and Mokdong Chung

In machine learning, performance is of great value, but each learning process requires much time and effort in setting its parameters. A critical problem in machine learning is determining the hyperparameters, such as the learning rate, mini-batch size, and regularization coefficient. In particular, we focus on the learning rate, which is directly related to learning efficiency and performance. Bayesian optimization using a Gaussian Process is commonly applied for this purpose. In this paper, building on Bayesian optimization, we attempt to optimize the hyperparameters automatically by utilizing a Gamma distribution, instead of a Gaussian distribution, to improve the training performance of an image discrimination model. As a result, our proposed method proves to be more reasonable and efficient in estimating the learning rate during training, and can be useful in machine learning.
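For context, here is a generic sketch of Gaussian-Process-based Bayesian optimization of the learning rate using scikit-optimize; it does not reproduce the paper's Gamma-distribution variant, and the objective function is a placeholder.

```python
# Generic sketch: Bayesian optimization of the learning rate with a GP surrogate.
from skopt import gp_minimize
from skopt.space import Real

def validation_loss(params):
    (learning_rate,) = params
    # Placeholder objective: train the model with this learning rate and return
    # the validation loss. Replace with the actual training routine.
    return (learning_rate - 3e-3) ** 2

search_space = [Real(1e-5, 1e-1, prior="log-uniform", name="learning_rate")]
result = gp_minimize(validation_loss, search_space, n_calls=20, random_state=0)
print("best learning rate:", result.x[0])
```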


Author(s):  
Rasoul Hejazi ◽  
Andrew Grime ◽  
Mark Randolph ◽  
Mike Efthymiou

In-service integrity management (IM) of steel lazy wave risers (SLWRs) can benefit significantly from quantitative assessment of the overall risk of system failure, as it can provide an effective tool for decision making. SLWRs are prone to fatigue failure within their touchdown zone (TDZ). This failure mode needs to be evaluated rigorously in riser IM processes because fatigue is an ongoing degradation mechanism threatening the structural integrity of risers throughout their service life. However, accurately evaluating the probability of fatigue failure for riser systems within a useful time frame is challenging due to the need to run a large number of nonlinear, dynamic numerical time domain simulations. Applying the Bayesian framework for machine learning, through the use of Gaussian Processes (GP) for regression, offers an attractive solution to overcome the burden of prohibitive simulation run times. GPs are stochastic, data-driven predictive models which incorporate the underlying physics of the problem in the learning process, and facilitate rapid probabilistic assessments with limited loss in accuracy. This paper proposes an efficient framework for practical implementation of a GP to create predictive models for the estimation of fatigue responses at SLWR hotspots. Such models are able to perform stochastic response prediction within a few milliseconds, thus enabling rapid prediction of the probability of SLWR fatigue failure. A realistic North West Shelf (NWS) case study is used to demonstrate the framework, comprising a 20” SLWR connected to a representative floating facility located in 950 m water depth. A full hindcast metocean dataset with associated statistical distributions is used for the riser long-term fatigue loading conditions. Numerical simulation and sampling techniques are adopted to generate a simulation-based dataset for training the data-driven model. In addition, a recently developed dimensionality reduction technique is employed to improve efficiency and reduce the complexity of the learning process. The results show that the stochastic predictive models developed by the suggested framework can predict the long-term TDZ fatigue damage of SLWRs due to vessel motions with an acceptable level of accuracy for practical purposes.
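A hedged sketch of the surrogate-modelling idea described above (not the authors' framework): a Gaussian Process regressor mapping reduced metocean loading features to a fatigue-damage response at a hotspot, returning a probabilistic (mean and standard deviation) prediction. The input features and response values here are stand-ins.

```python
# Illustrative GP surrogate for fatigue response prediction at a riser hotspot.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training set: rows are reduced metocean load cases (e.g. wave
# height, period, heading after dimensionality reduction); targets stand in for
# fatigue damage obtained from time-domain simulations.
rng = np.random.default_rng(0)
X_train = rng.uniform(size=(200, 3))
y_train = np.sin(3 * X_train[:, 0]) + 0.1 * rng.normal(size=200)

kernel = RBF(length_scale=[1.0, 1.0, 1.0]) + WhiteKernel(noise_level=1e-2)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# Probabilistic prediction per load case: mean and standard deviation, enabling
# rapid Monte Carlo estimation of long-term fatigue damage.
mean, std = gp.predict(rng.uniform(size=(5, 3)), return_std=True)
```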


Author(s):  
SANDA M. HARABAGIU

This paper presents a novel methodology for disambiguating prepositional phrase attachments. We create patterns of attachments by classifying a collection of prepositional relations derived from Treebank parses. As a by-product, the arguments of every prepositional relation are semantically disambiguated. Attachment decisions are generated as the result of a learning process that builds upon some of the most popular current statistical and machine learning techniques. We have tested this methodology on (1) Wall Street Journal articles, (2) textual definitions of concepts from a dictionary, and (3) an ad hoc corpus of Web documents, used for conceptual indexing and information extraction.
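An illustrative sketch (not the paper's exact system) of learning attachment decisions from the classic (verb, noun1, preposition, noun2) tuple, using a one-hot feature encoding and a linear classifier; the training tuples and labels are hypothetical.

```python
# Illustrative sketch: classify prepositional phrase attachment (verb vs. noun)
# from (verb, noun1, prep, noun2) tuples with a simple linear model.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical Treebank-style training tuples with attachment labels.
samples = [
    {"verb": "ate", "noun1": "pizza", "prep": "with", "noun2": "fork"},       # verb attachment
    {"verb": "ate", "noun1": "pizza", "prep": "with", "noun2": "anchovies"},  # noun attachment
    {"verb": "saw", "noun1": "man", "prep": "with", "noun2": "telescope"},    # verb attachment
    {"verb": "bought", "noun1": "shares", "prep": "in", "noun2": "company"},  # noun attachment
]
labels = ["V", "N", "V", "N"]

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(samples, labels)
print(model.predict([{"verb": "met", "noun1": "friend", "prep": "from", "noun2": "school"}]))
```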


2019 ◽  
Vol 1 (1) ◽  
pp. 912-920
Author(s):  
Małgorzata Suchacka ◽  
Nicole Horáková

The main goal of the study will be to draw attention to the technologization of the learning process and its social dimensions in the context of artificial intelligence. The reflection will mainly cover selected theories of learning and knowledge management in the organization and its broadly understood environment. Considering the sociological dimensions of these phenomena is intended to emphasize the importance of security in the human-organization-device relationship. Due to the interdisciplinary nature of the issue, the article will include references to the concepts of artificial intelligence and machine learning. Difficult questions arising around these ideas form the conclusion of the considerations.


2021 ◽  
Vol 248 ◽  
pp. 01012
Author(s):  
Anton Starodub ◽  
Natalia Eliseeva ◽  
Milen Georgiev

The research conducted in this paper is in the field of machine learning. The main object of the research is the learning process of an artificial neural network, with the aim of increasing its efficiency. The algorithm is based on the analysis of retrospective learning data. The dynamics of changes in the values of the weights of an artificial neural network during training is an important indicator of training efficiency. The algorithm proposed in this work is based on analyzing changes in the values of the weight gradients. Tracking these changes makes it possible to understand how actively the network weights change during training. This knowledge helps to diagnose the training process and to adjust the training parameters. The results of the algorithm can be used when training an artificial neural network to determine the set of measures (actions) needed to optimize the learning process.
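A minimal sketch of the kind of retrospective gradient logging described above, assuming a PyTorch training loop (not the authors' algorithm): per-epoch norms of the weight gradients are recorded so their dynamics can be inspected after training.

```python
# Minimal sketch: record per-epoch gradient norms for each parameter group so
# the dynamics of weight changes can be analyzed retrospectively.
import torch

def train_and_log(model, loader, loss_fn, optimizer, epochs=10):
    history = []                                    # one dict of gradient norms per epoch
    for epoch in range(epochs):
        grad_norms = {name: 0.0 for name, _ in model.named_parameters()}
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            for name, p in model.named_parameters():
                if p.grad is not None:
                    grad_norms[name] += p.grad.norm().item()
            optimizer.step()
        history.append(grad_norms)
    return history  # inspect how actively each layer's weights change over training
```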

