Artificial Intelligence Based Machine Learning Models Predict Sperm Parameter Upgrading after Varicocele Repair: A Multi-Institutional Analysis

2022 ◽  
Vol 40 ◽  
Author(s):  
Jesse Ory ◽  
Michael B. Tradewell ◽  
Udi Blankstein ◽  
Thiago F. Lima ◽  
Sirpi Nackeeran ◽  
...  
Entropy ◽  
2020 ◽  
Vol 23 (1) ◽  
pp. 18
Author(s):  
Pantelis Linardatos ◽  
Vasilis Papastefanopoulos ◽  
Sotiris Kotsiantis

Recent advances in artificial intelligence (AI) have led to its widespread industrial adoption, with machine learning systems demonstrating superhuman performance in a significant number of tasks. However, this surge in performance has often been achieved through increased model complexity, turning such systems into “black box” approaches and causing uncertainty regarding the way they operate and, ultimately, the way they come to decisions. This ambiguity has made it problematic for machine learning systems to be adopted in sensitive yet critical domains where their value could be immense, such as healthcare. As a result, scientific interest in the field of Explainable Artificial Intelligence (XAI), which is concerned with the development of new methods that explain and interpret machine learning models, has been tremendously reignited over recent years. This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, as well as links to their programming implementations, in the hope that this survey will serve as a reference point for both theorists and practitioners.
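As one concrete illustration of the kind of post-hoc, model-agnostic interpretability method such a survey catalogues, the sketch below computes permutation feature importance for an opaque classifier with scikit-learn; the dataset and model choices are illustrative assumptions, not taken from the paper.

```python
# Sketch: post-hoc, model-agnostic interpretability via permutation importance.
# The dataset and model are illustrative choices, not those used in the survey.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque ("black box") model.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Explain it by measuring how much shuffling each feature degrades held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: t[1], reverse=True)[:5]:
    print(f"{name}: {score:.4f}")
```

Shuffling one feature at a time and measuring the drop in held-out accuracy gives a model-agnostic estimate of how much the predictions rely on that feature, regardless of the underlying model family.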


2021 ◽  
Author(s):  
Ramy Abdallah ◽  
Clare E. Bond ◽  
Robert W.H. Butler

Machine learning is being presented as a new solution for a wide range of geoscience problems. It has primarily been used for 3D seismic data processing, seismic facies analysis and well log data correlation. The rapid development in technology, with open-source artificial intelligence libraries and the accessibility of affordable computer graphics processing units (GPUs), makes the application of machine learning in geosciences increasingly tractable. However, the application of artificial intelligence in structural interpretation workflows of subsurface datasets is still ambiguous. This study aims to use machine learning techniques to classify images of folds and fold-thrust structures. Here we show that convolutional neural networks (CNNs), as supervised deep learning techniques, provide excellent algorithms to discriminate between geological image datasets. Four different image datasets have been used to train and test the machine learning models: a seismic character dataset with five classes (faults, folds, salt, flat layers and basement), fold types with three classes (buckle, chevron and conjugate), fault types with three classes (normal, reverse and thrust) and fold-thrust geometries with three classes (fault bend fold, fault propagation fold and detachment fold). These image datasets are used to investigate three machine learning models: one feedforward linear neural network model and two convolutional neural network models (a sequential model of 2D convolution layers, and a residual-block model, ResNet, with 9, 34 and 50 layers). Validation and testing datasets form a critical part of assessing each model's performance accuracy. The ResNet model records the highest performance accuracy score of the machine learning models tested. Our CNN image classification model analysis provides a framework for applying machine learning to increase structural interpretation efficiency, and shows that CNN classification models can be applied effectively to geoscience problems. The study provides a starting point for applying unsupervised machine learning approaches to sub-surface structural interpretation workflows.
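For readers unfamiliar with the sequential convolutional model mentioned above, here is a minimal PyTorch sketch of such an image classifier, assuming grayscale 128×128 inputs and the three fold classes (buckle, chevron, conjugate); the layer widths and other hyperparameters are illustrative assumptions rather than those used in the study.

```python
# Sketch of a sequential 2D-convolution image classifier of the kind described above.
# The class count (3: buckle, chevron, conjugate folds) follows the abstract; image
# size, channel widths, and other hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

n_classes = 3  # e.g., buckle, chevron, conjugate fold types

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
    nn.Linear(128, n_classes),
)

# Forward and backward pass on a dummy batch of 128x128 grayscale images.
dummy = torch.randn(8, 1, 128, 128)
logits = model(dummy)                              # shape: (8, n_classes)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, n_classes, (8,)))
loss.backward()                                    # gradients ready for an optimizer step
print(logits.shape, float(loss))
```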


Author(s):  
Amandeep Singh Bhatia ◽  
Renata Wong

Quantum computing is an exciting new field that can be exploited for great speed and innovation in machine learning and artificial intelligence. Quantum machine learning, at this crossroads, explores the interaction between quantum computing and machine learning, with each field supplementing the other to create new models and to accelerate existing machine learning models so that they produce better, more accurate classifications. The main purpose is to explore methods, concepts, theories, and algorithms that focus on and utilize quantum computing features such as superposition and entanglement to make machine learning computations enormously faster. It is a natural goal to study how present and future quantum technologies, combined with machine learning, can enhance existing classical algorithms. The objective of this chapter is to help the reader grasp the key components of the field, understand the essentials of the subject, and thus be able to compare quantum computations with their counterpart classical machine learning algorithms.
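To make the two quantum features named above concrete, the following sketch simulates superposition and entanglement classically with NumPy state vectors; it is purely illustrative and not drawn from the chapter.

```python
# Sketch: superposition and entanglement, the two quantum features named above,
# simulated classically with NumPy state vectors (purely illustrative).
import numpy as np

# Single-qubit superposition: a Hadamard gate maps |0> to (|0> + |1>) / sqrt(2).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
zero = np.array([1.0, 0.0])
plus = H @ zero
print("superposition amplitudes:", plus)          # approx [0.707, 0.707]

# Two-qubit entanglement: Hadamard on qubit 0, then CNOT, gives a Bell state.
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state = CNOT @ np.kron(H, I) @ np.kron(zero, zero)
print("Bell state amplitudes:", state)            # (|00> + |11>) / sqrt(2)
```

The Bell state cannot be written as a product of two single-qubit states, which is exactly the entanglement property that quantum machine learning algorithms seek to exploit.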


2021 ◽  
pp. 164-184
Author(s):  
Saiph Savage ◽  
Carlos Toxtli ◽  
Eber Betanzos-Torres

The artificial intelligence (AI) industry has created new jobs that are essential to the real-world deployment of intelligent systems. Part of this work focuses on labelling data for machine learning models or having workers complete tasks that AI alone cannot do. These workers are usually known as ‘crowd workers’: they are part of a large distributed crowd that is jointly (but separately) working on the tasks, yet they are often invisible to end-users, which contributes to workers often being paid below minimum wage and having limited career growth. In this chapter, we draw upon the field of human–computer interaction to provide research methods for studying and empowering crowd workers. We present our Computational Worker Leagues, which enable workers to work towards their desired professional goals and also supply quantitative information about crowdsourcing markets. This chapter demonstrates the benefits of this approach and highlights important factors to consider when researching the experiences of crowd workers.


2019 ◽  
Vol 6 (1) ◽  
pp. 205395171881956 ◽  
Author(s):  
Anja Bechmann ◽  
Geoffrey C Bowker

Artificial Intelligence (AI), in the form of different machine learning models, is applied to Big Data as a way to turn data into valuable knowledge. The rhetoric is that the ensuing predictions work well, with a high degree of autonomy and automation. We argue that we need to analyze the process of applying machine learning in depth and highlight at what point human knowledge production takes place in seemingly autonomous work. This article reintroduces classification theory as an important framework for understanding such seemingly invisible knowledge production in machine learning development and design processes. We suggest a framework for studying such classification closely tied to different steps in the work process, and exemplify the framework with two experiments with machine learning applied to Facebook data from one of our labs. By doing so, we demonstrate ways in which classification and potential discrimination take place in even seemingly unsupervised and autonomous models. Moving away from concepts of non-supervision and autonomy enables us to understand the underlying classificatory dispositifs in the work process; this form of analysis constitutes a first step towards governance of artificial intelligence.


As Artificial Intelligence penetrates all aspects of human life, more and more questions about ethical practices and fair uses arise, which has motivated the research community to look inside these Artificial Intelligence/Machine Learning models and develop methods to interpret them. Interpretability can not only help with the ethical questions but can also provide various insights into the workings of these machine learning models, which is crucial for building trust and understanding how a model makes decisions. Furthermore, in many machine learning applications, interpretability is the primary value they offer. However, in practice, many developers select models based on the accuracy score alone, disregarding the level of interpretability of the model; this can be problematic because the predictions of many high-accuracy models are not easily explainable. In this paper, we introduce the concepts of Machine Learning Model Interpretability and Interpretable Machine Learning, along with the methods used for interpretation and explanation.
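A minimal sketch of the accuracy-versus-interpretability trade-off described above, comparing a transparent linear model with a less interpretable boosted ensemble in scikit-learn; the dataset and models are illustrative assumptions, not those of the paper.

```python
# Sketch: accuracy vs. interpretability. A transparent linear model is compared
# with a less interpretable boosted ensemble. Dataset and model choices are
# illustrative assumptions, not taken from the paper.
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

interpretable = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
black_box = GradientBoostingClassifier(random_state=0)

for name, clf in [("logistic regression (interpretable)", interpretable),
                  ("gradient boosting (harder to explain)", black_box)]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {clf.score(X_te, y_te):.3f}")

# The linear model's coefficients are directly readable as feature effects;
# the boosted ensemble offers no comparably direct per-feature explanation.
coefs = interpretable.named_steps["logisticregression"].coef_
print("logistic regression coefficient matrix shape:", coefs.shape)
```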


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Basim Mahbooba ◽  
Mohan Timilsina ◽  
Radhya Sahal ◽  
Martin Serrano

Despite the growing popularity of machine learning models in cyber-security applications (e.g., intrusion detection systems (IDS)), most of these models are perceived as black boxes. eXplainable Artificial Intelligence (XAI) has become increasingly important for interpreting machine learning models and enhancing trust management by allowing human experts to understand the underlying data evidence and causal reasoning. In an IDS, the critical role of trust management is to understand the impact of malicious data in order to detect any intrusion in the system. Previous studies focused more on the accuracy of various classification algorithms for trust in IDS; they seldom provide insights into the behavior and reasoning of the sophisticated algorithms. Therefore, in this paper, we address the XAI concept to enhance trust management by exploring the decision tree model in the area of IDS. We use simple decision tree algorithms that can be easily read and that even resemble a human approach to decision-making by splitting a choice into many small sub-choices. We experimented with this approach by extracting rules from the widely used KDD benchmark dataset. We also compared the accuracy of the decision tree approach with other state-of-the-art algorithms.
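As a rough sketch of the approach described, the snippet below fits a shallow decision tree on scikit-learn's bundled 10% KDD'99 subset and prints its rules; the simplified column handling (dropping the categorical features), the binary normal-versus-attack target, and the hyperparameters are assumptions for illustration, not the paper's exact setup.

```python
# Sketch: an easily readable decision tree for intrusion detection on KDD'99.
# Data handling is deliberately simplified (categorical columns dropped,
# binary normal-vs-attack label); adjust as needed for a real experiment.
from sklearn.datasets import fetch_kddcup99
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

kdd = fetch_kddcup99(percent10=True)               # downloads on first use
num_cols = [i for i in range(kdd.data.shape[1]) if i not in (1, 2, 3)]
X = kdd.data[:, num_cols].astype(float)            # keep numeric features only
y = (kdd.target == b"normal.").astype(int)         # 1 = normal traffic, 0 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

print("test accuracy:", round(tree.score(X_te, y_te), 3))
print(export_text(tree))                            # human-readable if/else rules
```

A shallow tree like this trades some accuracy for rules a human analyst can audit line by line, which is the trust-management argument the abstract makes.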


Author(s):  
S. Sasikala ◽  
S. J. Subhashini ◽  
P. Alli ◽  
J. Jane Rubel Angelina

Machine learning is a technique of parsing data, learning from that data, and then applying what has been learned to make informed decisions. Deep learning is actually a subset of machine learning: it technically is machine learning and functions in the same way, but it has different capabilities. The main difference between deep learning and machine learning is that machine learning models improve progressively but still need some guidance. If a machine learning model returns an inaccurate prediction, the programmer needs to fix that problem explicitly, but in the case of deep learning the model corrects it by itself. An automatic car driving system is a good example of deep learning. On the other hand, artificial intelligence is distinct from machine learning and deep learning; deep learning and machine learning are both subsets of AI.
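A small sketch of the distinction drawn above: a classical machine learning model works on features a programmer chose by hand, whereas a (shallow) deep model learns its own representation from raw pixels. The dataset, the hand-crafted features, and the architectures are illustrative assumptions.

```python
# Sketch: hand-engineered features + classical model vs. raw input + neural network.
# Everything here is an illustrative assumption, not taken from the chapter.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()
X_raw, y = digits.data, digits.target                  # raw 8x8 pixel intensities

# "Machine learning" route: summary features chosen explicitly by the programmer.
X_feats = np.column_stack([X_raw.mean(axis=1), X_raw.std(axis=1),
                           (X_raw > 8).sum(axis=1)])

Xr_tr, Xr_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_raw, X_feats, y, random_state=0)

classic = LogisticRegression(max_iter=2000).fit(Xf_tr, y_tr)
deep = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                     random_state=0).fit(Xr_tr, y_tr)  # learns features itself

print("hand-crafted features + logistic regression:", classic.score(Xf_te, y_te))
print("raw pixels + multilayer network:            ", deep.score(Xr_te, y_te))
```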

