Towards Machine Learning Models as a Key Mean to Train and Optimize Multi-view Web Services Proxy Security Layer

Author(s):  
Anass Misbah ◽  
Ahmed Ettalbi

Multi-view Web services have brought many advantages regarding the early abstraction of end users' needs and constraints. Security has therefore been positively impacted by this paradigm, particularly within the Web services application area, and consequently within Multi-view Web services.

In our previous work, we introduced the concept of Multi-view Web services into an Internet of Things architecture within a Cloud infrastructure by proposing a Proxy Security Layer, which consists of Multi-view Web services that identify and categorize all interacting IoT objects and applications so as to increase the level of security and improve the control of transactions.

Besides, Artificial Intelligence, and especially Machine Learning, is growing fast and making it possible to simulate human intelligence in many domains; consequently, it is increasingly possible to process large amounts of data automatically in order to make decisions, bring new insights, or even detect threats and opportunities that could not be detected before by simple human means.

In this work, we bring together the power of Machine Learning models and the Multi-view Web services Proxy Security Layer so as to permanently verify the consistency of the access rules, detect suspicious intrusions, update the policy, and optimize the Multi-view Web services for better performance of the whole Internet of Things architecture.
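A minimal sketch of the kind of component such a layer could rely on (an illustration under stated assumptions, not the authors' implementation): an unsupervised anomaly detector trained on ordinary IoT transaction features so that the Proxy Security Layer can flag suspicious interactions for policy review. The feature set, values, and contamination rate below are hypothetical.

```python
# Hedged sketch: anomaly detection over hypothetical IoT transaction features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-transaction features: request rate, payload size, failed-auth count
normal_traffic = rng.normal(loc=[10, 512, 0], scale=[2, 50, 0.5], size=(500, 3))
suspicious = rng.normal(loc=[80, 4096, 6], scale=[10, 300, 1.0], size=(10, 3))

detector = IsolationForest(contamination=0.02, random_state=0)
detector.fit(normal_traffic)

# -1 marks a transaction the proxy would hold back for further policy checks
flags = detector.predict(np.vstack([normal_traffic[:5], suspicious]))
print(flags)
```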

2022 ◽  
pp. 146-164
Author(s):  
Duygu Bagci Das ◽  
Derya Birant

Explainable artificial intelligence (XAI) is a concept that has emerged and become popular in recent years, and the interpretation of machine learning models has likewise been drawing attention. Human activity classification (HAC) systems still lack interpretable approaches. In this study, an approach called eXplainable HAC (XHAC) was proposed, in which the data exploration, model structure explanation, and prediction explanation of ML classifiers for HAC are examined to improve the explainability of HAC model components such as sensor types and their locations. For this purpose, various Internet of Things (IoT) sensors were considered individually, including the accelerometer, gyroscope, and magnetometer. The locations of these sensors (i.e., ankle, arm, and chest) were also taken into account. The most important features were explored. In addition, the effect of the window size on the classification performance was investigated. According to the obtained results, the proposed approach makes HAC processes more explainable compared to black-box ML techniques.
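A minimal sketch of the underlying idea, assuming synthetic data and hypothetical per-sensor window features (not the authors' code or dataset): train a classifier on windowed sensor features and inspect which sensor/location combinations the model leans on.

```python
# Hedged sketch: feature importance over hypothetical windowed sensor features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
sensors = ["accelerometer", "gyroscope", "magnetometer"]
locations = ["ankle", "arm", "chest"]
feature_names = [f"{s}_{l}_mean" for s in sensors for l in locations]

X = rng.normal(size=(600, len(feature_names)))   # one row per sliding window
y = rng.integers(0, 4, size=600)                 # 4 hypothetical activities

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank features: with real data this reveals which sensors and body
# locations drive the activity predictions.
for name, score in sorted(zip(feature_names, clf.feature_importances_),
                          key=lambda p: p[1], reverse=True)[:5]:
    print(f"{name}: {score:.3f}")
```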


Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 18
Author(s):  
Pantelis Linardatos ◽  
Vasilis Papastefanopoulos ◽  
Sotiris Kotsiantis

Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into "black box" approaches and causing uncertainty regarding the way they operate and, ultimately, the way they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains where their value could be immense, such as healthcare. As a result, scientific interest in the field of Explainable Artificial Intelligence (XAI), a field concerned with the development of new methods that explain and interpret machine learning models, has been tremendously reignited over recent years. This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, as well as links to their programming implementations, in the hope that this survey will serve as a reference point for both theorists and practitioners.
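As a concrete illustration of the kind of model-agnostic, post-hoc method such a taxonomy typically covers, the sketch below applies permutation feature importance to a gradient boosting classifier. It is a generic example, not one of the specific implementations linked in the paper.

```python
# Hedged sketch: permutation importance as a model-agnostic interpretability method.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in test accuracy; larger drops
# indicate features the "black box" depends on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```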


2021 ◽  
Author(s):  
Ramy Abdallah ◽  
Clare E. Bond ◽  
Robert W.H. Butler

Machine learning is being presented as a new solution for a wide range of geoscience problems. Primarily, machine learning has been used for 3D seismic data processing, seismic facies analysis, and well log data correlation. The rapid development in technology, with open-source artificial intelligence libraries and the accessibility of affordable computer graphics processing units (GPUs), makes the application of machine learning in geosciences increasingly tractable. However, the application of artificial intelligence in structural interpretation workflows of subsurface datasets is still ambiguous. This study aims to use machine learning techniques to classify images of folds and fold-thrust structures. Here we show that convolutional neural networks (CNNs), as supervised deep learning techniques, provide excellent algorithms to discriminate between geological image datasets. Four different image datasets have been used to train and test the machine learning models: a seismic character dataset with five classes (faults, folds, salt, flat layers, and basement), fold types with three classes (buckle, chevron, and conjugate), fault types with three classes (normal, reverse, and thrust), and fold-thrust geometries with three classes (fault bend fold, fault propagation fold, and detachment fold). These image datasets are used to investigate three machine learning models: one feedforward linear neural network model and two convolutional neural network models (a Convolution 2D layer sequential model and a Residual block model, i.e., ResNet with 9, 34, and 50 layers). The validation and testing datasets form a critical part of assessing the models' performance accuracy. The ResNet model records the highest accuracy score of the machine learning models tested. Our CNN image classification model analysis provides a framework for applying machine learning to increase structural interpretation efficiency and shows that CNN classification models can be applied effectively to geoscience problems. The study provides a starting point for applying unsupervised machine learning approaches to subsurface structural interpretation workflows.
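For readers unfamiliar with the workflow, the sketch below shows the bare bones of training a ResNet-34 image classifier on three classes, with dummy tensors standing in for the fold-thrust image dataset. It is an assumption-laden illustration, not the authors' exact architecture or training setup.

```python
# Hedged sketch: a ResNet-34 classifier for three hypothetical fold-thrust classes
# (fault bend fold, fault propagation fold, detachment fold), one training step.
import torch
import torch.nn as nn
from torchvision.models import resnet34

model = resnet34(num_classes=3)          # supervised CNN, randomly initialised
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-in for a labelled image dataset: 8 RGB images of 224x224 pixels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 3, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss after one step: {loss.item():.3f}")
```

In practice the dummy tensors would be replaced by a labelled image loader, and held-out validation and test splits would be used to report the accuracy scores discussed above.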


Author(s):  
Amandeep Singh Bhatia ◽  
Renata Wong

Quantum computing is an exciting new field that can be exploited to bring great speed and innovation to machine learning and artificial intelligence. Quantum machine learning sits at the crossroads of the two disciplines, which supplement each other to create new models and to accelerate existing machine learning models towards better and more accurate classifications. The main purpose is to explore methods, concepts, theories, and algorithms that utilize quantum computing features such as superposition and entanglement to make machine learning computations enormously faster. It is a natural goal to study present and future quantum technologies together with machine learning, as they can enhance existing classical algorithms. The objective of this chapter is to help the reader grasp the key components involved in the field, understand the essentials of the subject, and thus be able to compare quantum computations with their counterpart classical machine learning algorithms.
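As a toy illustration of the two quantum features named above, the following sketch simulates superposition and entanglement with plain linear algebra (a Bell state built from a Hadamard gate and a CNOT); no particular quantum SDK or hardware is assumed.

```python
# Hedged sketch: superposition and entanglement via state-vector simulation.
import numpy as np

ket0 = np.array([1.0, 0.0])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate: creates superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                   # entangles the two qubits

# |00> -> (H on the first qubit) -> CNOT gives the Bell state (|00> + |11>) / sqrt(2)
state = CNOT @ np.kron(H @ ket0, ket0)
print(np.round(state, 3))   # amplitudes on |00>, |01>, |10>, |11>
```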


2021 ◽  
pp. 164-184
Author(s):  
Saiph Savage ◽  
Carlos Toxtli ◽  
Eber Betanzos-Torres

The artificial intelligence (AI) industry has created new jobs that are essential to the real-world deployment of intelligent systems. Part of this work focuses on labelling data for machine learning models or having workers complete tasks that AI alone cannot do. These workers are usually known as 'crowd workers': they are part of a large distributed crowd that is jointly (but separately) working on tasks, yet they are often invisible to end-users, which leads to workers often being paid below minimum wage and having limited career growth. In this chapter, we draw upon the field of human-computer interaction to provide research methods for studying and empowering crowd workers. We present our Computational Worker Leagues, which enable workers to work towards their desired professional goals and also supply quantitative information about crowdsourcing markets. This chapter demonstrates the benefits of this approach and highlights important factors to consider when researching the experiences of crowd workers.


2019 ◽  
Vol 6 (1) ◽  
pp. 205395171881956 ◽  
Author(s):  
Anja Bechmann ◽  
Geoffrey C Bowker

Artificial Intelligence (AI), in the form of different machine learning models, is applied to Big Data as a way to turn data into valuable knowledge. The rhetoric is that the ensuing predictions work well, with a high degree of autonomy and automation. We argue that we need to analyze the process of applying machine learning in depth and highlight at what point human knowledge production takes place in seemingly autonomous work. This article reintroduces classification theory as an important framework for understanding such seemingly invisible knowledge production in machine learning development and design processes. We suggest a framework for studying such classification closely tied to different steps in the work process and exemplify the framework with two experiments with machine learning applied to Facebook data from one of our labs. By doing so, we demonstrate ways in which classification and potential discrimination take place in even seemingly unsupervised and autonomous models. Moving away from concepts of non-supervision and autonomy enables us to understand the underlying classificatory dispositifs in the work process; this form of analysis constitutes a first step towards the governance of artificial intelligence.


Minerals ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 1128
Author(s):  
Sebeom Park ◽  
Dahee Jung ◽  
Hoang Nguyen ◽  
Yosoon Choi

This study proposes a method for diagnosing problems in truck ore transport operations in underground mines using four machine learning models (i.e., Gaussian naïve Bayes (GNB), k-nearest neighbor (kNN), support vector machine (SVM), and classification and regression tree (CART)) and data collected by an Internet of Things system. An underground limestone mine with an applied mine production management system (using a tablet computer and a Bluetooth beacon) is selected as the research area, and log data related to truck travel time are collected. The machine learning models are trained and verified using the collected data, and a grid search with 5-fold cross-validation is performed to improve the prediction accuracy of the models. The accuracy of CART is highest (94.1%) when the parameters leaf and split are set to 1 and 4, respectively. In the validation of the machine learning models performed using the validation dataset (1500), the accuracy of CART was 94.6%, and the precision and recall were 93.5% and 95.7%, respectively. In addition, the F1 score reached 94.6%. Through field application and analysis, it is confirmed that the proposed CART model can be utilized as a tool for monitoring and diagnosing the status of truck ore transport operations.
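The tuning step described above can be sketched as follows, assuming a synthetic dataset in place of the mine's IoT logs; the scikit-learn parameters min_samples_leaf and min_samples_split stand in for the study's leaf and split parameters.

```python
# Hedged sketch: CART tuned by grid search with 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

param_grid = {"min_samples_leaf": [1, 2, 4, 8],
              "min_samples_split": [2, 4, 8, 16]}
search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                      param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)

print("best parameters:", search.best_params_)
# Precision, recall, and F1 on the held-out validation set, as reported in the study
print(classification_report(y_val, search.predict(X_val)))
```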


As Artificial Intelligence penetrates all aspects of human life, more and more questions about ethical practices and fair use arise, which has motivated the research community to look inside these Artificial Intelligence/Machine Learning models and develop methods to interpret them. This concept of interpretability can not only help with the ethical questions but also provide various insights into the working of these machine learning models, which is crucial for building trust and understanding how a model makes decisions. Furthermore, in many machine learning applications, interpretability is the primary value that they offer. However, in practice, many developers select models based on the accuracy score and disregard the level of interpretability of that model, which can be problematic because the predictions of many high-accuracy models are not easily explainable. In this paper, we introduce the concepts of Machine Learning Model Interpretability and Interpretable Machine Learning, and the methods used for interpretation and explanations.
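As a small, hypothetical illustration of intrinsic interpretability (not an example from the paper), the sketch below trains a shallow decision tree whose complete decision logic can be printed and audited, something an accuracy score alone does not convey.

```python
# Hedged sketch: an intrinsically interpretable model whose rules can be inspected.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

print(f"training accuracy: {tree.score(X, y):.2f}")
# The full decision logic of the model, readable by a human reviewer
print(export_text(tree, feature_names=feature_names))
```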

