MACHINE LEARNING TECHNOLOGIES IN BUSINESS BASED ON NEURAL NETWORKS

Author(s):  
Э. Д. Алисултанова ◽  
У. Р. Тасуев ◽  
Н. А. Моисеенко

This paper discusses machine learning algorithms that build a mathematical model from sample data, known as "training data," in order to make predictive decisions without an explicitly specified algorithm for the task at hand. Complex marketing problems are addressed with machine learning technologies, with primary attention to individual customer support and new product development. The proposed intelligent-system solutions to the most complex business tasks will make it possible to predict likely variations in customer behavior. In this setting, machine learning algorithms are applied in business projects to problems for which it is difficult or impossible to design a traditional algorithm that performs the task effectively. The applied machine learning technologies help to systematize and extract information from huge sets of raw data.
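
As a minimal illustration of the idea, the sketch below fits a predictive model to a labeled training sample and scores held-out customers. The dataset, the churn-style target, and the choice of gradient boosting are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: a model learned from "training data" makes predictive
# decisions without a hand-written rule set. All data here are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Hypothetical customer features: age, monthly spend, support tickets.
X = rng.normal(size=(1000, 3))
# Hypothetical target: 1 if the customer churns, 0 otherwise.
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)  # learns from the sample
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```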

2018 ◽  
Vol 6 (2) ◽  
pp. 283-286
Author(s):  
M. Samba Siva Rao ◽  
M. Yaswanth ◽  
K. Raghavendra Swamy ◽  
...  

2021 ◽  
Vol 99 (Supplement_3) ◽  
pp. 264-265
Author(s):  
Duy Ngoc Do ◽  
Guoyu Hu ◽  
Younes Miar

Abstract American mink (Neovison vison) is the major source of fur for the fur industry worldwide, and Aleutian disease (AD) is causing severe financial losses to the mink industry. Different methods have been used to diagnose AD in mink, but a combination of several methods can be the most appropriate approach for the selection of AD-resilient mink. The iodine agglutination test (IAT) and counterimmunoelectrophoresis (CIEP) methods are commonly employed in the test-and-remove strategy, while the enzyme-linked immunosorbent assay (ELISA) and packed-cell volume (PCV) methods are complementary. However, using multiple methods is expensive and therefore hinders the correct use of AD tests in selection. This research presents an assessment of AD classification based on machine learning algorithms. Aleutian disease was tested on 1,830 individuals using these tests on an AD-positive mink farm (Canadian Centre for Fur Animal Research, NS, Canada). The accuracy of classification for CIEP was evaluated based on sex information and the IAT, ELISA, and PCV test results, implemented in seven machine learning classification algorithms (Random Forest, Artificial Neural Networks, C50Tree, Naive Bayes, Generalized Linear Models, Boost, and Linear Discriminant Analysis) using the caret package in R. The accuracy of prediction varied among the methods. Overall, Random Forest was the best-performing algorithm for the current dataset, with an accuracy of 0.89 on the training data and 0.94 on the testing data. Our work demonstrates the utility and relative ease of using machine learning algorithms to assess the CIEP information, consequently reducing the cost of AD tests. However, further work requires the inclusion of production and reproduction information in the models and an extension of phenotypic collection to increase the accuracy of the current methods.
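
The study ran its comparison in R with the caret package; the sketch below reproduces the shape of that workflow in Python with scikit-learn on synthetic stand-in data, comparing a subset of the listed algorithms (Random Forest, Naive Bayes, a GLM-style logistic regression, and LDA). The feature distributions and the synthetic CIEP label are assumptions.

```python
# Python analog of the study's classifier comparison (original used R caret);
# sex, IAT, ELISA, and PCV features are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 1830  # matches the number of tested mink in the abstract
X = np.column_stack([
    rng.integers(0, 2, n),      # sex (0/1)
    rng.integers(0, 2, n),      # IAT result
    rng.normal(1.0, 0.5, n),    # ELISA titre (arbitrary units)
    rng.normal(45, 5, n),       # PCV (%)
])
y = (X[:, 1] + (X[:, 2] > 1.2)).clip(0, 1).astype(int)  # synthetic CIEP label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
models = {
    "RandomForest": RandomForestClassifier(random_state=1),
    "NaiveBayes": GaussianNB(),
    "GLM": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(name, round(accuracy_score(y_te, m.predict(X_te)), 3))
```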


Author(s):  
Namrata Dhanda ◽  
Stuti Shukla Datta ◽  
Mudrika Dhanda

Human intelligence is deeply involved in creating efficient and faster systems that can work independently. The creation of such smart systems requires efficient training algorithms. The aim of this chapter is therefore to introduce readers to the concept of machine learning and the commonly employed learning algorithms for developing efficient and intelligent systems. The chapter gives a clear distinction between supervised and unsupervised learning methods. Each algorithm is explained with the help of a suitable example to give insight into the learning process.
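
A minimal sketch of the distinction the chapter draws: the same point cloud is first learned with labels (supervised) and then clustered without them (unsupervised). The dataset and the choice of k-nearest neighbours and k-means are illustrative.

```python
# Supervised learning fits labeled examples; unsupervised learning finds
# structure without labels. Synthetic blobs make the contrast visible.
from sklearn.datasets import make_blobs
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised: the labels y guide the learner.
clf = KNeighborsClassifier().fit(X, y)
print("predicted class:", clf.predict(X[:1]))

# Unsupervised: the same points, no labels; groups are inferred.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("inferred cluster:", km.labels_[:1])
```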


Diagnostics ◽  
2019 ◽  
Vol 9 (3) ◽  
pp. 104 ◽  
Author(s):  
Ahmed ◽  
Yigit ◽  
Isik ◽  
Alpkocak

Leukemia is a fatal cancer and has two main types: acute and chronic. Each type has two subtypes: lymphoid and myeloid. Hence, in total, there are four subtypes of leukemia. This study proposes a new approach for the diagnosis of all subtypes of leukemia from microscopic blood cell images using convolutional neural networks (CNN), which require a large training data set; therefore, we also investigated the effect of synthetically increasing the number of training samples through data augmentation. We used two publicly available leukemia data sources: ALL-IDB and ASH Image Bank. Next, we applied seven different image transformation techniques as data augmentation. We designed a CNN architecture capable of recognizing all subtypes of leukemia. In addition, we explored other well-known machine learning algorithms such as naive Bayes, support vector machine, k-nearest neighbor, and decision tree. To evaluate our approach, we set up a series of experiments and used 5-fold cross-validation. The experimental results showed that our CNN model achieves 88.25% and 81.74% accuracy in leukemia-versus-healthy and multiclass classification of all subtypes, respectively. Finally, we also showed that the CNN model performs better than the other well-known machine learning algorithms.
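
A hedged sketch of the pipeline the paper describes: a small CNN over blood-cell images with augmentation applied during training. The input size, the two augmentation layers, and the architecture are illustrative assumptions; the paper's own network and its seven augmentation transforms are not reproduced here.

```python
# Illustrative CNN with on-the-fly augmentation and a four-way softmax
# head (one output per leukemia subtype). Sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),  # two of many possible transforms
    layers.RandomRotation(0.1),
])

model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    augment,                                   # active during training only
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(4, activation="softmax"),     # four leukemia subtypes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```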


2020 ◽  
Author(s):  
Hanna Meyer ◽  
Edzer Pebesma

Spatial mapping is an important task in environmental science to reveal spatial patterns and changes of the environment. In this context, predictive modelling using flexible machine learning algorithms has become very popular. However, looking at the diversity of modelled (global) maps of environmental variables, one might increasingly get the impression that machine learning is a magic tool to map everything. Recently, the reliability of such maps has been increasingly questioned, calling for a reliable quantification of uncertainties.

Though spatial (cross-)validation gives a general error estimate for the predictions, models are usually applied to make predictions for a much larger area, or might even be transferred to make predictions for an area they were not trained on. But when making predictions on heterogeneous landscapes, there will be areas featuring environmental properties that have not been observed in the training data and hence not learned by the algorithm. This is problematic, as most machine learning algorithms are weak at extrapolation and can only make reliable predictions for environments with conditions the model has knowledge about. Hence, predictions for environmental conditions that differ significantly from the training data have to be considered uncertain.

To approach this problem, we suggest a measure of uncertainty that allows identifying locations where predictions should be regarded with care. The proposed uncertainty measure is based on distances to the training data in the multidimensional predictor variable space. However, distances are not equally relevant within the feature space: some variables are more important than others in the machine learning model and hence are mainly responsible for prediction patterns. Therefore, we weight the distances by the model-derived importance of the predictors.

As a case study we use a simulated area-wide response variable for Europe, bio-climatic variables as predictors, as well as simulated field samples. Random Forest is applied as the algorithm to predict the simulated response. The model is then used to make predictions for the whole of Europe. We then calculate the corresponding uncertainty and compare it to the area-wide true prediction error. The results show that the uncertainty map reflects the patterns in the true error very well and considerably outperforms ensemble-based standard deviations of predictions as an indicator of uncertainty.

The resulting map of uncertainty gives valuable insights into spatial patterns of prediction uncertainty, which is important when the predictions are used as a baseline for decision making or subsequent environmental modelling. Hence, we suggest that a map of distance-based uncertainty should be given in addition to prediction maps.
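
A sketch of the proposed measure under stated assumptions: for each prediction location, compute the distance to the nearest training sample in standardised predictor space, with each predictor dimension weighted by its model-derived importance. The variable names, the scaling, and the use of Random Forest feature importances are illustrative; the authors' exact formulation may differ.

```python
# Importance-weighted distance to the nearest training point as an
# uncertainty indicator. Data are synthetic stand-ins for bio-climatic
# predictors at training and prediction locations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))                 # predictor values at samples
y_train = X_train[:, 0] * 2 + X_train[:, 1] + rng.normal(scale=0.1, size=200)
X_new = rng.normal(scale=2.0, size=(500, 4))        # prediction locations

rf = RandomForestRegressor(random_state=0).fit(X_train, y_train)
w = rf.feature_importances_                         # model-derived weights

# Standardise with training statistics, then weight each dimension.
mu, sd = X_train.mean(axis=0), X_train.std(axis=0)
T = (X_train - mu) / sd * w
N = (X_new - mu) / sd * w

# Uncertainty = weighted distance to the nearest training sample.
d = np.linalg.norm(N[:, None, :] - T[None, :, :], axis=2).min(axis=1)
print("high-uncertainty locations:", (d > np.quantile(d, 0.9)).sum())
```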


Author(s):  
E. C. Giovannini ◽  
A. Tomalini ◽  
E. Pristeri ◽  
L. Bergamasco ◽  
M. Lo Turco

Abstract. The paper presents DECAI (DEcay Classification using Artificial Intelligence), a novel study using machine learning algorithms to identify materials, degradations, or surface gaps of an architectural artefact in a semi-automatic way. Customised software has been developed that allows the operator to choose which categories of materials to classify and to select sample data from an orthophoto of the artefact to train the machine learning algorithms. Thanks to Visual Programming Language algorithms, the classification results are directly imported into the H-BIM environment and used to enrich the H-BIM model of the artefact. To date, the developed tool is dedicated to research use only; future developments will improve the graphical interface to make the tool accessible to a wider public.
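
An illustrative sketch of the semi-automatic workflow: the operator picks a few sample pixels per category on the orthophoto, a classifier is trained on those samples, and the whole image is then labelled pixel by pixel. The categories, the random stand-in image, and the Random Forest classifier are assumptions, not DECAI's actual implementation.

```python
# Train on operator-selected sample pixels, then classify every pixel.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in for an orthophoto: H x W x 3 RGB array.
rng = np.random.default_rng(0)
ortho = rng.integers(0, 256, size=(100, 100, 3), dtype=np.uint8)

# Operator-selected sample pixels (row, col) per hypothetical category:
# 0 = intact material, 1 = decay, 2 = surface gap.
samples = {0: [(10, 10), (12, 15)], 1: [(50, 50), (52, 48)], 2: [(80, 80)]}
X = np.array([ortho[r, c] for pts in samples.values() for (r, c) in pts])
y = np.array([lab for lab, pts in samples.items() for _ in pts])

clf = RandomForestClassifier(random_state=0).fit(X, y)
labels = clf.predict(ortho.reshape(-1, 3)).reshape(100, 100)  # per-pixel classes
print(np.bincount(labels.ravel()))
```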


Author(s):  
James A. Tallman ◽  
Michal Osusky ◽  
Nick Magina ◽  
Evan Sewall

Abstract This paper provides an assessment of three different machine learning techniques for accurately reproducing a distributed temperature prediction of a high-pressure turbine airfoil. A three-dimensional Finite Element Analysis (FEA) thermal model of a cooled turbine airfoil was solved repeatedly (200 instances) for various operating-point settings of the corresponding gas turbine engine. The response surface created by the repeated solutions was fed into three machine learning algorithms, and surrogate model representations of the FEA model's response were generated. The machine learning algorithms investigated were a Gaussian Process, a Boosted Decision Tree, and an Artificial Neural Network. Additionally, a simple Linear Regression surrogate model was created for comparative purposes. The Artificial Neural Network model proved to be the most successful at reproducing the FEA model over the range of operating points. The mean and standard deviation differences between the FEA and the Neural Network models were 15% and 14% of a desired accuracy threshold, respectively. The Digital Thread for Design (DT4D) was used to expedite all model execution and machine learning training. A description of DT4D is also provided.
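
The sketch below mirrors the surrogate-modelling setup at toy scale: 200 samples of a synthetic response stand in for the FEA solutions, and a Gaussian Process, a boosted tree, a neural network, and a linear regression are fitted and compared. The test function and the scikit-learn model choices are assumptions.

```python
# Fit several surrogates to a sampled response surface and compare errors.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))                           # operating-point settings
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.5 * X[:, 2]   # stand-in "FEA" response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
surrogates = {
    "GaussianProcess": GaussianProcessRegressor(),
    "BoostedTree": GradientBoostingRegressor(random_state=0),
    "NeuralNetwork": MLPRegressor(hidden_layer_sizes=(64, 64),
                                  max_iter=5000, random_state=0),
    "LinearRegression": LinearRegression(),
}
for name, m in surrogates.items():
    m.fit(X_tr, y_tr)
    print(name, round(mean_absolute_error(y_te, m.predict(X_te)), 4))
```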


Author(s):  
Stylianos Chatzidakis ◽  
Miltiadis Alamaniotis ◽  
Lefteri H. Tsoukalas

Creep rupture is increasingly becoming one of the most important problems affecting the behavior and performance of power production systems operating in high-temperature environments, potentially under irradiation, as is the case in nuclear reactors. Creep rupture forecasting and estimation of the useful life are required to avoid unanticipated component failure and cost-ineffective operation. Despite rigorous investigations of creep mechanisms and their effect on component lifetime, experimental data are sparse, rendering time-to-rupture prediction a rather difficult problem. An approach for performing creep rupture forecasting that exploits the unique characteristics of machine learning algorithms is proposed herein. The approach seeks to introduce a mechanism that synergistically combines recent findings in creep rupture with the state-of-the-art computational paradigm of machine learning. In this study, three machine learning algorithms, namely General Regression Neural Networks, Artificial Neural Networks, and Gaussian Processes, were employed to capture the underlying trends and provide creep rupture forecasting. The current implementation is demonstrated and evaluated on actual experimental creep rupture data. Results show that the Gaussian process model based on the Matérn kernel achieved the best overall prediction performance (56.38%). Significant dependencies exist on the number of training data, neural network size, kernel selection, and whether interpolation or extrapolation is performed.
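
A sketch of the best-performing configuration the abstract reports, a Gaussian process with a Matérn kernel, fitted here to synthetic creep-style data. The input variables (temperature, stress), their scaling, and the nu=1.5 smoothness are assumptions; the study's experimental data and exact kernel settings are not reproduced.

```python
# Gaussian process regression with a Matérn kernel forecasting a
# synthetic log time-to-rupture, with predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

rng = np.random.default_rng(0)
# Hypothetical standardised inputs: temperature and applied stress.
X = np.column_stack([rng.uniform(-1, 1, 60), rng.uniform(-1, 1, 60)])
# Synthetic log time-to-rupture: decreases with temperature and stress.
y = -1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=60)

kernel = Matern(nu=1.5) + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
mean, std = gp.predict(X[:5], return_std=True)  # forecast with uncertainty
print(mean.round(2), std.round(2))
```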


Geophysics ◽  
2019 ◽  
Vol 84 (1) ◽  
pp. V67-V79 ◽  
Author(s):  
Yazeed Alaudah ◽  
Motaz Alfarraj ◽  
Ghassan AlRegib

Recently, there has been significant interest in various supervised machine learning techniques that can help reduce the time and effort consumed by manual interpretation workflows. However, most successful supervised machine learning algorithms require huge amounts of annotated training data. Obtaining these labels for large seismic volumes is a very time-consuming and laborious task. We have addressed this problem by presenting a weakly supervised approach for predicting the labels of various seismic structures. By having an interpreter select a very small number of exemplar images for every class of subsurface structures, we use a novel similarity-based retrieval technique to extract thousands of images that contain similar subsurface structures from the seismic volume. By assuming that similar images belong to the same class, we obtain thousands of image-level labels for these images; we validate this assumption. We have evaluated a novel weakly supervised algorithm for mapping these rough image-level labels into more accurate pixel-level labels that localize the different subsurface structures within the image. This approach dramatically simplifies the process of obtaining labeled data for training supervised machine learning algorithms on seismic interpretation tasks. Using our method, we generate thousands of automatically labeled images from the Netherlands Offshore F3 block with reasonably accurate pixel-level labels. We believe that this work will allow for more advances in machine learning-enabled seismic interpretation.
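
A much-simplified sketch of the retrieval idea: every patch cut from the volume receives the label of its most similar exemplar. Flattened pixels stand in for the paper's similarity-based retrieval features, so treat the feature space, patch size, and class names as assumptions.

```python
# Weak labeling by nearest-exemplar retrieval: each patch inherits the
# image-level label of its closest exemplar in feature space.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
exemplars = rng.normal(size=(4, 32, 32))          # one exemplar per structure class
exemplar_labels = np.array([0, 1, 2, 3])          # hypothetical subsurface classes
volume_patches = rng.normal(size=(1000, 32, 32))  # patches cut from the volume

nn = NearestNeighbors(n_neighbors=1).fit(exemplars.reshape(4, -1))
_, idx = nn.kneighbors(volume_patches.reshape(1000, -1))
image_labels = exemplar_labels[idx.ravel()]       # weak, image-level labels
print(np.bincount(image_labels))
```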


Author(s):  
Werner Kurschl ◽  
Stefan Mitsch ◽  
Johannes Schoenboeck

Pervasive healthcare applications aim at improving habitability by assisting individuals in living autonomously. To achieve this goal, data on an individual's behavior and his or her environment (often collected with wireless sensors) are interpreted by machine learning algorithms; their decision finally leads to the initiation of appropriate actions, e.g., turning on the light. Developers of pervasive healthcare applications therefore face complexity stemming, among other things, from different types of environmental and vital parameters, heterogeneous sensor platforms, unreliable network connections, and different programming languages. Moreover, developing such applications often includes extensive prototyping work to collect the large amounts of training data needed to optimize the machine learning algorithms. In this chapter the authors present a model-driven prototyping approach for the development of pervasive healthcare applications that manages the complexity incurred in developing prototypes and applications. They support the approach with a development environment that simplifies application development with graphical editors, code generators, and pre-defined components.
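
An illustrative sketch of the sense-decide-act loop described above: a classifier interprets sensor readings, and its decision triggers an action such as turning on the light. The sensor features, training labels, and decision tree are assumptions for illustration only.

```python
# Sense-decide-act: a classifier over sensor readings drives an action.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# Hypothetical features: ambient light (lux) and motion detected (0/1).
X = np.column_stack([rng.uniform(0, 500, 200), rng.integers(0, 2, 200)])
y = ((X[:, 0] < 100) & (X[:, 1] == 1)).astype(int)  # 1 = "turn on the light"

model = DecisionTreeClassifier(random_state=0).fit(X, y)

def on_sensor_reading(lux: float, motion: int) -> None:
    # The model's decision initiates the appropriate action.
    if model.predict([[lux, motion]])[0] == 1:
        print("action: turning on the light")

on_sensor_reading(40.0, 1)
```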

